Mar 17 17:25:15.190582 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 17 17:25:15.190625 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Mar 17 16:05:23 -00 2025
Mar 17 17:25:15.190649 kernel: KASLR disabled due to lack of seed
Mar 17 17:25:15.190665 kernel: efi: EFI v2.7 by EDK II
Mar 17 17:25:15.190701 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98
Mar 17 17:25:15.190717 kernel: secureboot: Secure boot disabled
Mar 17 17:25:15.190735 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:25:15.190750 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 17 17:25:15.190766 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 17 17:25:15.190781 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 17 17:25:15.190802 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Mar 17 17:25:15.190818 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 17 17:25:15.190833 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 17 17:25:15.190848 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 17 17:25:15.190866 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 17 17:25:15.190886 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 17 17:25:15.190903 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 17 17:25:15.190919 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 17 17:25:15.190935 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 17 17:25:15.190951 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 17 17:25:15.190967 kernel: printk: bootconsole [uart0] enabled
Mar 17 17:25:15.190983 kernel: NUMA: Failed to initialise from firmware
Mar 17 17:25:15.190999 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 17 17:25:15.191016 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Mar 17 17:25:15.191031 kernel: Zone ranges:
Mar 17 17:25:15.191048 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 17 17:25:15.191067 kernel: DMA32 empty
Mar 17 17:25:15.191084 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 17 17:25:15.191100 kernel: Movable zone start for each node
Mar 17 17:25:15.191117 kernel: Early memory node ranges
Mar 17 17:25:15.191132 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 17 17:25:15.191148 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 17 17:25:15.191164 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 17 17:25:15.191203 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 17 17:25:15.191219 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 17 17:25:15.191235 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 17 17:25:15.191251 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 17 17:25:15.191267 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 17 17:25:15.191289 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 17 17:25:15.191306 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 17 17:25:15.191329 kernel: psci: probing for conduit method from ACPI.
Mar 17 17:25:15.191345 kernel: psci: PSCIv1.0 detected in firmware.
Mar 17 17:25:15.191362 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 17:25:15.191383 kernel: psci: Trusted OS migration not required
Mar 17 17:25:15.191400 kernel: psci: SMC Calling Convention v1.1
Mar 17 17:25:15.191417 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 17 17:25:15.191433 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 17 17:25:15.191451 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 17:25:15.191468 kernel: Detected PIPT I-cache on CPU0
Mar 17 17:25:15.191484 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 17:25:15.191501 kernel: CPU features: detected: Spectre-v2
Mar 17 17:25:15.191518 kernel: CPU features: detected: Spectre-v3a
Mar 17 17:25:15.191534 kernel: CPU features: detected: Spectre-BHB
Mar 17 17:25:15.191551 kernel: CPU features: detected: ARM erratum 1742098
Mar 17 17:25:15.191568 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 17 17:25:15.191589 kernel: alternatives: applying boot alternatives
Mar 17 17:25:15.191607 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:25:15.191626 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:25:15.191643 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:25:15.191660 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:25:15.191677 kernel: Fallback order for Node 0: 0
Mar 17 17:25:15.191693 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Mar 17 17:25:15.191710 kernel: Policy zone: Normal
Mar 17 17:25:15.191727 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:25:15.191744 kernel: software IO TLB: area num 2.
Mar 17 17:25:15.191766 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Mar 17 17:25:15.191783 kernel: Memory: 3819896K/4030464K available (10240K kernel code, 2186K rwdata, 8100K rodata, 39744K init, 897K bss, 210568K reserved, 0K cma-reserved)
Mar 17 17:25:15.191800 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:25:15.191817 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:25:15.191834 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:25:15.191852 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:25:15.191869 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:25:15.191886 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:25:15.191903 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:25:15.191920 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:25:15.191936 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 17:25:15.191957 kernel: GICv3: 96 SPIs implemented
Mar 17 17:25:15.191974 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 17:25:15.191991 kernel: Root IRQ handler: gic_handle_irq
Mar 17 17:25:15.192007 kernel: GICv3: GICv3 features: 16 PPIs
Mar 17 17:25:15.192024 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 17 17:25:15.192041 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 17 17:25:15.192058 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 17:25:15.192075 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 17:25:15.192091 kernel: GICv3: using LPI property table @0x00000004000d0000
Mar 17 17:25:15.192108 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 17 17:25:15.192125 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Mar 17 17:25:15.192142 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:25:15.192163 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 17 17:25:15.192208 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 17 17:25:15.192226 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 17 17:25:15.192243 kernel: Console: colour dummy device 80x25
Mar 17 17:25:15.192261 kernel: printk: console [tty1] enabled
Mar 17 17:25:15.192278 kernel: ACPI: Core revision 20230628
Mar 17 17:25:15.192296 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 17 17:25:15.192313 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:25:15.192331 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:25:15.192348 kernel: landlock: Up and running.
Mar 17 17:25:15.192371 kernel: SELinux: Initializing.
Mar 17 17:25:15.192389 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:25:15.192406 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:25:15.192423 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:25:15.192441 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:25:15.192458 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:25:15.192476 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:25:15.192493 kernel: Platform MSI: ITS@0x10080000 domain created
Mar 17 17:25:15.192514 kernel: PCI/MSI: ITS@0x10080000 domain created
Mar 17 17:25:15.192531 kernel: Remapping and enabling EFI services.
Mar 17 17:25:15.192548 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:25:15.192565 kernel: Detected PIPT I-cache on CPU1
Mar 17 17:25:15.192582 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 17 17:25:15.192600 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Mar 17 17:25:15.192617 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 17 17:25:15.192634 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:25:15.192651 kernel: SMP: Total of 2 processors activated.
Mar 17 17:25:15.192669 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 17:25:15.192690 kernel: CPU features: detected: 32-bit EL1 Support
Mar 17 17:25:15.192707 kernel: CPU features: detected: CRC32 instructions
Mar 17 17:25:15.192735 kernel: CPU: All CPU(s) started at EL1
Mar 17 17:25:15.192757 kernel: alternatives: applying system-wide alternatives
Mar 17 17:25:15.192775 kernel: devtmpfs: initialized
Mar 17 17:25:15.192793 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:25:15.192811 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:25:15.192829 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:25:15.192847 kernel: SMBIOS 3.0.0 present.
Mar 17 17:25:15.192869 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 17 17:25:15.192887 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:25:15.192905 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 17:25:15.192923 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 17:25:15.192941 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 17:25:15.192959 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:25:15.192977 kernel: audit: type=2000 audit(0.219:1): state=initialized audit_enabled=0 res=1
Mar 17 17:25:15.192999 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:25:15.193017 kernel: cpuidle: using governor menu
Mar 17 17:25:15.193035 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 17:25:15.193053 kernel: ASID allocator initialised with 65536 entries
Mar 17 17:25:15.193072 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:25:15.193091 kernel: Serial: AMBA PL011 UART driver
Mar 17 17:25:15.193108 kernel: Modules: 17424 pages in range for non-PLT usage
Mar 17 17:25:15.193126 kernel: Modules: 508944 pages in range for PLT usage
Mar 17 17:25:15.193144 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:25:15.193166 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:25:15.193239 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 17:25:15.193259 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 17 17:25:15.193277 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:25:15.193296 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:25:15.193314 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 17:25:15.193332 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 17 17:25:15.193350 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:25:15.193369 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:25:15.193393 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:25:15.193412 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:25:15.193430 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:25:15.193448 kernel: ACPI: Interpreter enabled
Mar 17 17:25:15.193465 kernel: ACPI: Using GIC for interrupt routing
Mar 17 17:25:15.193483 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 17:25:15.193502 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Mar 17 17:25:15.193785 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:25:15.194006 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 17:25:15.194240 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 17:25:15.194451 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Mar 17 17:25:15.194658 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Mar 17 17:25:15.194702 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 17 17:25:15.194722 kernel: acpiphp: Slot [1] registered
Mar 17 17:25:15.194740 kernel: acpiphp: Slot [2] registered
Mar 17 17:25:15.194758 kernel: acpiphp: Slot [3] registered
Mar 17 17:25:15.194784 kernel: acpiphp: Slot [4] registered
Mar 17 17:25:15.194803 kernel: acpiphp: Slot [5] registered
Mar 17 17:25:15.194821 kernel: acpiphp: Slot [6] registered
Mar 17 17:25:15.194839 kernel: acpiphp: Slot [7] registered
Mar 17 17:25:15.194857 kernel: acpiphp: Slot [8] registered
Mar 17 17:25:15.194875 kernel: acpiphp: Slot [9] registered
Mar 17 17:25:15.194893 kernel: acpiphp: Slot [10] registered
Mar 17 17:25:15.194910 kernel: acpiphp: Slot [11] registered
Mar 17 17:25:15.194929 kernel: acpiphp: Slot [12] registered
Mar 17 17:25:15.194947 kernel: acpiphp: Slot [13] registered
Mar 17 17:25:15.194969 kernel: acpiphp: Slot [14] registered
Mar 17 17:25:15.194987 kernel: acpiphp: Slot [15] registered
Mar 17 17:25:15.195005 kernel: acpiphp: Slot [16] registered
Mar 17 17:25:15.195023 kernel: acpiphp: Slot [17] registered
Mar 17 17:25:15.195042 kernel: acpiphp: Slot [18] registered
Mar 17 17:25:15.195060 kernel: acpiphp: Slot [19] registered
Mar 17 17:25:15.195078 kernel: acpiphp: Slot [20] registered
Mar 17 17:25:15.195096 kernel: acpiphp: Slot [21] registered
Mar 17 17:25:15.195114 kernel: acpiphp: Slot [22] registered
Mar 17 17:25:15.195136 kernel: acpiphp: Slot [23] registered
Mar 17 17:25:15.195154 kernel: acpiphp: Slot [24] registered
Mar 17 17:25:15.195214 kernel: acpiphp: Slot [25] registered
Mar 17 17:25:15.195235 kernel: acpiphp: Slot [26] registered
Mar 17 17:25:15.195253 kernel: acpiphp: Slot [27] registered
Mar 17 17:25:15.195271 kernel: acpiphp: Slot [28] registered
Mar 17 17:25:15.195289 kernel: acpiphp: Slot [29] registered
Mar 17 17:25:15.195308 kernel: acpiphp: Slot [30] registered
Mar 17 17:25:15.195326 kernel: acpiphp: Slot [31] registered
Mar 17 17:25:15.195344 kernel: PCI host bridge to bus 0000:00
Mar 17 17:25:15.195579 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 17 17:25:15.195781 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 17:25:15.195966 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 17 17:25:15.196149 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Mar 17 17:25:15.198508 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Mar 17 17:25:15.198802 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Mar 17 17:25:15.199049 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Mar 17 17:25:15.199339 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 17 17:25:15.199554 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Mar 17 17:25:15.199762 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 17 17:25:15.199981 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 17 17:25:15.202312 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Mar 17 17:25:15.202587 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Mar 17 17:25:15.202826 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Mar 17 17:25:15.203033 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 17 17:25:15.205373 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Mar 17 17:25:15.205618 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Mar 17 17:25:15.205833 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Mar 17 17:25:15.206038 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Mar 17 17:25:15.208402 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Mar 17 17:25:15.208619 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 17 17:25:15.208802 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 17:25:15.208989 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 17 17:25:15.209014 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 17:25:15.209033 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 17:25:15.209052 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 17:25:15.209071 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 17:25:15.209089 kernel: iommu: Default domain type: Translated
Mar 17 17:25:15.209113 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 17:25:15.209131 kernel: efivars: Registered efivars operations
Mar 17 17:25:15.209149 kernel: vgaarb: loaded
Mar 17 17:25:15.209182 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 17:25:15.209206 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:25:15.209226 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:25:15.209244 kernel: pnp: PnP ACPI init
Mar 17 17:25:15.209461 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 17 17:25:15.209496 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 17:25:15.209516 kernel: NET: Registered PF_INET protocol family
Mar 17 17:25:15.209536 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:25:15.209554 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:25:15.209573 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:25:15.209591 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:25:15.209610 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:25:15.209628 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:25:15.209646 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:25:15.209668 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:25:15.209687 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:25:15.209705 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:25:15.209723 kernel: kvm [1]: HYP mode not available
Mar 17 17:25:15.209741 kernel: Initialise system trusted keyrings
Mar 17 17:25:15.209759 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:25:15.209777 kernel: Key type asymmetric registered
Mar 17 17:25:15.209796 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:25:15.209814 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 17 17:25:15.209836 kernel: io scheduler mq-deadline registered
Mar 17 17:25:15.209854 kernel: io scheduler kyber registered
Mar 17 17:25:15.209872 kernel: io scheduler bfq registered
Mar 17 17:25:15.210100 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 17 17:25:15.210127 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 17 17:25:15.210146 kernel: ACPI: button: Power Button [PWRB]
Mar 17 17:25:15.210164 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 17 17:25:15.211287 kernel: ACPI: button: Sleep Button [SLPB]
Mar 17 17:25:15.211318 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:25:15.211339 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 17 17:25:15.211609 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 17 17:25:15.211636 kernel: printk: console [ttyS0] disabled
Mar 17 17:25:15.211655 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 17 17:25:15.211674 kernel: printk: console [ttyS0] enabled
Mar 17 17:25:15.211693 kernel: printk: bootconsole [uart0] disabled
Mar 17 17:25:15.211712 kernel: thunder_xcv, ver 1.0
Mar 17 17:25:15.211730 kernel: thunder_bgx, ver 1.0
Mar 17 17:25:15.211754 kernel: nicpf, ver 1.0
Mar 17 17:25:15.211789 kernel: nicvf, ver 1.0
Mar 17 17:25:15.212012 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 17:25:15.212257 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:25:14 UTC (1742232314)
Mar 17 17:25:15.212284 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 17:25:15.212303 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Mar 17 17:25:15.212321 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 17 17:25:15.212340 kernel: watchdog: Hard watchdog permanently disabled
Mar 17 17:25:15.212364 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:25:15.212382 kernel: Segment Routing with IPv6
Mar 17 17:25:15.212400 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:25:15.212418 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:25:15.212436 kernel: Key type dns_resolver registered
Mar 17 17:25:15.212454 kernel: registered taskstats version 1
Mar 17 17:25:15.212472 kernel: Loading compiled-in X.509 certificates
Mar 17 17:25:15.212491 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 74c9b4f5dfad711856d7363c976664fc02c1e24c'
Mar 17 17:25:15.212509 kernel: Key type .fscrypt registered
Mar 17 17:25:15.212527 kernel: Key type fscrypt-provisioning registered
Mar 17 17:25:15.212550 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:25:15.212568 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:25:15.212586 kernel: ima: No architecture policies found
Mar 17 17:25:15.212604 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 17:25:15.212622 kernel: clk: Disabling unused clocks
Mar 17 17:25:15.212640 kernel: Freeing unused kernel memory: 39744K
Mar 17 17:25:15.212658 kernel: Run /init as init process
Mar 17 17:25:15.212676 kernel: with arguments:
Mar 17 17:25:15.212693 kernel: /init
Mar 17 17:25:15.212715 kernel: with environment:
Mar 17 17:25:15.212733 kernel: HOME=/
Mar 17 17:25:15.212751 kernel: TERM=linux
Mar 17 17:25:15.212768 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:25:15.212791 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:25:15.212814 systemd[1]: Detected virtualization amazon.
Mar 17 17:25:15.212834 systemd[1]: Detected architecture arm64.
Mar 17 17:25:15.212858 systemd[1]: Running in initrd.
Mar 17 17:25:15.212878 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:25:15.212897 systemd[1]: Hostname set to .
Mar 17 17:25:15.212917 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:25:15.212936 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:25:15.212956 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:25:15.212975 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:25:15.212995 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:25:15.213022 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:25:15.213043 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:25:15.213063 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:25:15.213086 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:25:15.213106 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:25:15.213126 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:25:15.213146 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:25:15.215651 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:25:15.215689 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:25:15.215711 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:25:15.215738 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:25:15.215759 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:25:15.215779 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:25:15.215799 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:25:15.215819 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 17 17:25:15.215839 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:25:15.215867 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:25:15.215887 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:25:15.215906 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:25:15.215926 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:25:15.215945 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:25:15.215965 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:25:15.215984 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:25:15.216004 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:25:15.216028 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:25:15.216047 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:25:15.216067 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:25:15.216087 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:25:15.216107 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:25:15.216128 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:25:15.216152 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:25:15.216264 systemd-journald[252]: Collecting audit messages is disabled.
Mar 17 17:25:15.216313 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:25:15.216341 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:25:15.216360 systemd-journald[252]: Journal started
Mar 17 17:25:15.216397 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2c1f06b49162997c4adda518e6484d) is 8.0M, max 75.3M, 67.3M free.
Mar 17 17:25:15.173784 systemd-modules-load[253]: Inserted module 'overlay'
Mar 17 17:25:15.223314 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:25:15.223363 kernel: Bridge firewalling registered
Mar 17 17:25:15.219340 systemd-modules-load[253]: Inserted module 'br_netfilter'
Mar 17 17:25:15.224503 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:25:15.233387 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:25:15.257441 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:25:15.264681 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:25:15.279467 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:25:15.286941 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:25:15.295566 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:25:15.307615 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:25:15.313934 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:25:15.337284 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:25:15.360611 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:25:15.366439 dracut-cmdline[283]: dracut-dracut-053
Mar 17 17:25:15.373570 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:25:15.441949 systemd-resolved[292]: Positive Trust Anchors:
Mar 17 17:25:15.442009 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:25:15.442071 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:25:15.542215 kernel: SCSI subsystem initialized
Mar 17 17:25:15.549207 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:25:15.562223 kernel: iscsi: registered transport (tcp)
Mar 17 17:25:15.583859 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:25:15.583965 kernel: QLogic iSCSI HBA Driver
Mar 17 17:25:15.659213 kernel: random: crng init done
Mar 17 17:25:15.659567 systemd-resolved[292]: Defaulting to hostname 'linux'.
Mar 17 17:25:15.663113 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:25:15.667071 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:25:15.690908 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:25:15.700473 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:25:15.734053 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:25:15.734166 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:25:15.734218 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:25:15.800216 kernel: raid6: neonx8 gen() 6717 MB/s
Mar 17 17:25:15.817202 kernel: raid6: neonx4 gen() 6541 MB/s
Mar 17 17:25:15.834203 kernel: raid6: neonx2 gen() 5460 MB/s
Mar 17 17:25:15.851202 kernel: raid6: neonx1 gen() 3947 MB/s
Mar 17 17:25:15.868203 kernel: raid6: int64x8 gen() 3805 MB/s
Mar 17 17:25:15.885203 kernel: raid6: int64x4 gen() 3723 MB/s
Mar 17 17:25:15.902202 kernel: raid6: int64x2 gen() 3606 MB/s
Mar 17 17:25:15.919996 kernel: raid6: int64x1 gen() 2774 MB/s
Mar 17 17:25:15.920033 kernel: raid6: using algorithm neonx8 gen() 6717 MB/s
Mar 17 17:25:15.937972 kernel: raid6: .... xor() 4880 MB/s, rmw enabled
Mar 17 17:25:15.938010 kernel: raid6: using neon recovery algorithm
Mar 17 17:25:15.946357 kernel: xor: measuring software checksum speed
Mar 17 17:25:15.946407 kernel: 8regs : 10966 MB/sec
Mar 17 17:25:15.947463 kernel: 32regs : 11944 MB/sec
Mar 17 17:25:15.948637 kernel: arm64_neon : 9509 MB/sec
Mar 17 17:25:15.948668 kernel: xor: using function: 32regs (11944 MB/sec)
Mar 17 17:25:16.032216 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:25:16.050471 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:25:16.060481 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:25:16.099817 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Mar 17 17:25:16.108338 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:25:16.129064 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:25:16.171533 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation
Mar 17 17:25:16.227360 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:25:16.237480 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:25:16.360711 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:25:16.378456 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:25:16.427244 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:25:16.438157 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:25:16.444389 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:25:16.447839 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:25:16.461505 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:25:16.503995 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:25:16.548919 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 17 17:25:16.548987 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Mar 17 17:25:16.600236 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 17 17:25:16.600522 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 17 17:25:16.600751 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 17 17:25:16.600778 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 17 17:25:16.601015 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:05:43:41:c3:bd
Mar 17 17:25:16.601274 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 17 17:25:16.565226 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:25:16.565347 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:25:16.569546 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:25:16.571754 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:25:16.571905 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:25:16.574144 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:25:16.621680 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:25:16.621714 kernel: GPT:9289727 != 16777215
Mar 17 17:25:16.621739 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:25:16.621764 kernel: GPT:9289727 != 16777215
Mar 17 17:25:16.621788 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:25:16.621812 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:25:16.593460 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:25:16.627422 (udev-worker)[522]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:25:16.654944 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:25:16.664706 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:25:16.706596 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:25:16.751554 kernel: BTRFS: device fsid c0c482e3-6885-4a4e-b31c-6bc8f8c403e7 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (523)
Mar 17 17:25:16.751623 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (531)
Mar 17 17:25:16.828283 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 17 17:25:16.862616 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 17 17:25:16.878080 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 17 17:25:16.891981 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 17 17:25:16.892131 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 17 17:25:16.915534 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:25:16.927980 disk-uuid[661]: Primary Header is updated.
Mar 17 17:25:16.927980 disk-uuid[661]: Secondary Entries is updated.
Mar 17 17:25:16.927980 disk-uuid[661]: Secondary Header is updated.
Mar 17 17:25:16.941519 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:25:16.948212 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:25:17.951215 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:25:17.952098 disk-uuid[662]: The operation has completed successfully.
Mar 17 17:25:18.131369 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:25:18.131566 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:25:18.178484 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:25:18.186388 sh[922]: Success
Mar 17 17:25:18.204204 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 17:25:18.301922 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:25:18.312388 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:25:18.328893 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:25:18.356210 kernel: BTRFS info (device dm-0): first mount of filesystem c0c482e3-6885-4a4e-b31c-6bc8f8c403e7
Mar 17 17:25:18.356272 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:25:18.356298 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:25:18.357891 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:25:18.357924 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:25:18.442223 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 17 17:25:18.476809 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:25:18.480660 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:25:18.490499 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:25:18.496497 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:25:18.525383 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:25:18.525466 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:25:18.525497 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:25:18.535272 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:25:18.552005 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:25:18.556219 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:25:18.566960 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:25:18.577611 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:25:18.684334 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:25:18.703676 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:25:18.756472 systemd-networkd[1115]: lo: Link UP
Mar 17 17:25:18.756493 systemd-networkd[1115]: lo: Gained carrier
Mar 17 17:25:18.759871 systemd-networkd[1115]: Enumeration completed
Mar 17 17:25:18.761992 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:25:18.762059 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:25:18.762066 systemd-networkd[1115]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:25:18.773978 systemd[1]: Reached target network.target - Network.
Mar 17 17:25:18.774742 systemd-networkd[1115]: eth0: Link UP
Mar 17 17:25:18.774750 systemd-networkd[1115]: eth0: Gained carrier
Mar 17 17:25:18.774767 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:25:18.808267 systemd-networkd[1115]: eth0: DHCPv4 address 172.31.28.142/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 17 17:25:18.938444 ignition[1025]: Ignition 2.20.0
Mar 17 17:25:18.938482 ignition[1025]: Stage: fetch-offline
Mar 17 17:25:18.938935 ignition[1025]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:18.938959 ignition[1025]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:25:18.940459 ignition[1025]: Ignition finished successfully
Mar 17 17:25:18.948768 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:25:18.958485 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 17:25:18.990936 ignition[1126]: Ignition 2.20.0
Mar 17 17:25:18.990964 ignition[1126]: Stage: fetch
Mar 17 17:25:18.992033 ignition[1126]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:18.992058 ignition[1126]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:25:18.992313 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:25:19.013072 ignition[1126]: PUT result: OK
Mar 17 17:25:19.017351 ignition[1126]: parsed url from cmdline: ""
Mar 17 17:25:19.017477 ignition[1126]: no config URL provided
Mar 17 17:25:19.017496 ignition[1126]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:25:19.017522 ignition[1126]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:25:19.018634 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:25:19.022837 ignition[1126]: PUT result: OK
Mar 17 17:25:19.022921 ignition[1126]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 17 17:25:19.028357 ignition[1126]: GET result: OK
Mar 17 17:25:19.028445 ignition[1126]: parsing config with SHA512: b30a8687a2a9f4debfb5185bab4ed4ac91cdf81b409c70b1f915077bf16647d111c9d41f85902335a73295f613fae57c8cb3e8d2773b75d21a9ccf611cf94146
Mar 17 17:25:19.035458 unknown[1126]: fetched base config from "system"
Mar 17 17:25:19.035687 unknown[1126]: fetched base config from "system"
Mar 17 17:25:19.036125 ignition[1126]: fetch: fetch complete
Mar 17 17:25:19.035702 unknown[1126]: fetched user config from "aws"
Mar 17 17:25:19.036138 ignition[1126]: fetch: fetch passed
Mar 17 17:25:19.043461 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:25:19.036242 ignition[1126]: Ignition finished successfully
Mar 17 17:25:19.057154 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:25:19.081727 ignition[1132]: Ignition 2.20.0
Mar 17 17:25:19.081753 ignition[1132]: Stage: kargs
Mar 17 17:25:19.082616 ignition[1132]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:19.082643 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:25:19.082895 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:25:19.084860 ignition[1132]: PUT result: OK
Mar 17 17:25:19.094581 ignition[1132]: kargs: kargs passed
Mar 17 17:25:19.094701 ignition[1132]: Ignition finished successfully
Mar 17 17:25:19.099231 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:25:19.109541 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:25:19.137101 ignition[1138]: Ignition 2.20.0
Mar 17 17:25:19.137130 ignition[1138]: Stage: disks
Mar 17 17:25:19.137965 ignition[1138]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:19.137991 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:25:19.138151 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:25:19.140373 ignition[1138]: PUT result: OK
Mar 17 17:25:19.149244 ignition[1138]: disks: disks passed
Mar 17 17:25:19.149335 ignition[1138]: Ignition finished successfully
Mar 17 17:25:19.154231 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:25:19.157326 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:25:19.162324 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:25:19.164877 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:25:19.166850 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:25:19.171148 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:25:19.188568 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:25:19.231309 systemd-fsck[1146]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:25:19.235432 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:25:19.247503 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:25:19.345204 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 6b579bf2-7716-4d59-98eb-b92ea668693e r/w with ordered data mode. Quota mode: none.
Mar 17 17:25:19.346838 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:25:19.350713 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:25:19.371536 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:25:19.377125 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:25:19.381520 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 17:25:19.384929 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:25:19.384992 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:25:19.401211 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1165)
Mar 17 17:25:19.407445 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:25:19.407502 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:25:19.407528 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:25:19.408399 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:25:19.418543 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:25:19.427224 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:25:19.430522 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:25:19.883853 initrd-setup-root[1189]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:25:19.903203 initrd-setup-root[1196]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:25:19.911858 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:25:19.920269 initrd-setup-root[1210]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:25:20.225909 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:25:20.230460 systemd-networkd[1115]: eth0: Gained IPv6LL
Mar 17 17:25:20.235341 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:25:20.245567 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:25:20.265460 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:25:20.269196 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:25:20.310136 ignition[1278]: INFO : Ignition 2.20.0
Mar 17 17:25:20.310136 ignition[1278]: INFO : Stage: mount
Mar 17 17:25:20.309923 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:25:20.317027 ignition[1278]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:20.317027 ignition[1278]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:25:20.317027 ignition[1278]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:25:20.317027 ignition[1278]: INFO : PUT result: OK
Mar 17 17:25:20.327014 ignition[1278]: INFO : mount: mount passed
Mar 17 17:25:20.328583 ignition[1278]: INFO : Ignition finished successfully
Mar 17 17:25:20.332237 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:25:20.340365 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:25:20.361530 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:25:20.382212 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1289)
Mar 17 17:25:20.385738 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:25:20.385783 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:25:20.385808 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:25:20.392212 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:25:20.395280 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:25:20.428264 ignition[1306]: INFO : Ignition 2.20.0
Mar 17 17:25:20.428264 ignition[1306]: INFO : Stage: files
Mar 17 17:25:20.431474 ignition[1306]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:20.431474 ignition[1306]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:25:20.431474 ignition[1306]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:25:20.438253 ignition[1306]: INFO : PUT result: OK
Mar 17 17:25:20.442584 ignition[1306]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:25:20.444953 ignition[1306]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:25:20.444953 ignition[1306]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:25:20.474406 ignition[1306]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:25:20.477233 ignition[1306]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:25:20.480276 unknown[1306]: wrote ssh authorized keys file for user: core
Mar 17 17:25:20.484375 ignition[1306]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:25:20.488712 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:25:20.488712 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:25:20.488712 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:25:20.488712 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:25:20.488712 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:25:20.488712 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:25:20.488712 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:25:20.488712 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Mar 17 17:25:20.834843 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Mar 17 17:25:21.193883 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:25:21.197975 ignition[1306]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:25:21.197975 ignition[1306]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:25:21.197975 ignition[1306]: INFO : files: files passed
Mar 17 17:25:21.197975 ignition[1306]: INFO : Ignition finished successfully
Mar 17 17:25:21.210244 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:25:21.219547 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:25:21.224548 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:25:21.247033 initrd-setup-root-after-ignition[1332]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:25:21.247033 initrd-setup-root-after-ignition[1332]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:25:21.257106 initrd-setup-root-after-ignition[1336]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:25:21.259908 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:25:21.265990 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:25:21.284553 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:25:21.285774 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:25:21.285954 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:25:21.340980 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:25:21.341862 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:25:21.347893 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:25:21.350053 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:25:21.353663 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:25:21.369584 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:25:21.396461 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:25:21.410543 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:25:21.435751 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:25:21.440158 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:25:21.444628 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:25:21.446488 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:25:21.446735 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:25:21.449460 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:25:21.458193 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:25:21.460027 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:25:21.462188 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:25:21.464504 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:25:21.466754 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:25:21.468850 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:25:21.485501 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:25:21.489157 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:25:21.491416 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:25:21.494806 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:25:21.495026 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:25:21.501057 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:25:21.503270 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:25:21.505983 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:25:21.510206 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:25:21.512700 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:25:21.513037 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:25:21.534097 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:25:21.534549 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:25:21.541837 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:25:21.542790 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:25:21.558164 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:25:21.566527 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:25:21.572102 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:25:21.574771 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:25:21.579618 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:25:21.582279 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:25:21.602693 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:25:21.604676 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:25:21.611683 ignition[1358]: INFO : Ignition 2.20.0
Mar 17 17:25:21.611683 ignition[1358]: INFO : Stage: umount
Mar 17 17:25:21.611683 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:21.611683 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:25:21.611683 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:25:21.621264 ignition[1358]: INFO : PUT result: OK
Mar 17 17:25:21.624311 ignition[1358]: INFO : umount: umount passed
Mar 17 17:25:21.625960 ignition[1358]: INFO : Ignition finished successfully
Mar 17 17:25:21.630162 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:25:21.631080 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:25:21.637920 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:25:21.638031 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:25:21.640049 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:25:21.640133 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:25:21.640410 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:25:21.640486 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:25:21.640642 systemd[1]: Stopped target network.target - Network.
Mar 17 17:25:21.640923 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:25:21.640997 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:25:21.641630 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:25:21.641887 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:25:21.648771 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:25:21.648891 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:25:21.648946 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:25:21.649057 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:25:21.649131 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:25:21.649269 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:25:21.649338 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:25:21.649429 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:25:21.649507 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:25:21.649609 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:25:21.649684 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:25:21.650001 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:25:21.650241 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:25:21.685277 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:25:21.685578 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:25:21.687272 systemd-networkd[1115]: eth0: DHCPv6 lease lost
Mar 17 17:25:21.705799 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:25:21.706894 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:25:21.707450 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:25:21.714034 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:25:21.714149 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:25:21.750018 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:25:21.752401 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:25:21.752524 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:25:21.755676 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:25:21.756698 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:25:21.758994 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:25:21.759082 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:25:21.766157 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:25:21.766267 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:25:21.766523 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:25:21.811260 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:25:21.811729 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:25:21.819703 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:25:21.819812 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:25:21.822104 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:25:21.822194 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:25:21.823494 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:25:21.823581 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:25:21.829340 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:25:21.829435 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:25:21.831453 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:25:21.831533 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:25:21.855527 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:25:21.858337 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:25:21.858459 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:25:21.860896 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 17 17:25:21.860994 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:25:21.865989 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:25:21.866088 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:25:21.871960 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:25:21.872047 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 17 17:25:21.890035 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:25:21.890633 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:25:21.904607 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:25:21.904891 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:25:21.917548 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:25:21.917808 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:25:21.927673 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:25:21.930305 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:25:21.930617 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:25:21.948414 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:25:21.965793 systemd[1]: Switching root. Mar 17 17:25:22.020030 systemd-journald[252]: Journal stopped Mar 17 17:25:24.347707 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Mar 17 17:25:24.347848 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:25:24.347891 kernel: SELinux: policy capability open_perms=1 Mar 17 17:25:24.347923 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:25:24.347953 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:25:24.347982 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:25:24.348012 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:25:24.348051 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:25:24.348080 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:25:24.348114 kernel: audit: type=1403 audit(1742232322.501:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:25:24.348153 systemd[1]: Successfully loaded SELinux policy in 68.456ms. 
Mar 17 17:25:24.348220 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.998ms. Mar 17 17:25:24.348258 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:25:24.348291 systemd[1]: Detected virtualization amazon. Mar 17 17:25:24.348323 systemd[1]: Detected architecture arm64. Mar 17 17:25:24.348354 systemd[1]: Detected first boot. Mar 17 17:25:24.348388 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:25:24.348425 zram_generator::config[1401]: No configuration found. Mar 17 17:25:24.348459 systemd[1]: Populated /etc with preset unit settings. Mar 17 17:25:24.348490 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 17:25:24.348522 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 17 17:25:24.348553 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 17:25:24.348585 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:25:24.348620 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 17:25:24.348654 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:25:24.348686 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:25:24.348717 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:25:24.348749 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:25:24.348797 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:25:24.348847 systemd[1]: Created slice user.slice - User and Session Slice. 
Mar 17 17:25:24.348883 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:25:24.348913 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:25:24.348949 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:25:24.348981 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:25:24.349012 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 17:25:24.349042 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:25:24.349077 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 17 17:25:24.349109 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:25:24.349142 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 17 17:25:24.354235 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 17 17:25:24.354318 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 17 17:25:24.354361 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:25:24.354392 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:25:24.354424 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:25:24.354454 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:25:24.354485 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:25:24.354546 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:25:24.354587 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:25:24.354623 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Mar 17 17:25:24.354673 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:25:24.354717 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:25:24.354751 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:25:24.354780 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:25:24.354811 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:25:24.354841 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:25:24.354870 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:25:24.354901 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:25:24.354933 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 17:25:24.354971 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:25:24.355001 systemd[1]: Reached target machines.target - Containers. Mar 17 17:25:24.355030 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:25:24.355059 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:25:24.355101 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:25:24.355133 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:25:24.355164 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:25:24.355976 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:25:24.356017 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Mar 17 17:25:24.356047 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 17:25:24.356077 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:25:24.356108 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:25:24.356138 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 17:25:24.356187 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 17 17:25:24.356237 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 17:25:24.356286 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 17:25:24.358288 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:25:24.358321 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:25:24.358350 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:25:24.358382 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:25:24.358411 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:25:24.358442 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 17:25:24.358473 systemd[1]: Stopped verity-setup.service. Mar 17 17:25:24.358503 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 17:25:24.358532 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:25:24.358566 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:25:24.358594 kernel: loop: module loaded Mar 17 17:25:24.358623 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:25:24.358671 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:25:24.358701 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Mar 17 17:25:24.358737 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:25:24.358766 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:25:24.358797 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:25:24.358826 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:25:24.358854 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:25:24.358885 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:25:24.358915 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:25:24.358949 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:25:24.358978 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:25:24.359011 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:25:24.359044 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:25:24.359074 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:25:24.359105 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:25:24.359147 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 17:25:24.361554 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:25:24.361603 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:25:24.361636 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 17 17:25:24.361667 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Mar 17 17:25:24.361695 kernel: fuse: init (API version 7.39) Mar 17 17:25:24.361724 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 17:25:24.361754 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:25:24.361830 systemd-journald[1480]: Collecting audit messages is disabled. Mar 17 17:25:24.361889 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:25:24.361920 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:25:24.361949 kernel: ACPI: bus type drm_connector registered Mar 17 17:25:24.361976 systemd-journald[1480]: Journal started Mar 17 17:25:24.362027 systemd-journald[1480]: Runtime Journal (/run/log/journal/ec2c1f06b49162997c4adda518e6484d) is 8.0M, max 75.3M, 67.3M free. Mar 17 17:25:24.368293 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:25:24.368366 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:25:23.713576 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:25:23.764439 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Mar 17 17:25:23.765219 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 17:25:24.381012 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:25:24.389797 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:25:24.403226 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:25:24.410161 systemd[1]: Started systemd-journald.service - Journal Service. 
Mar 17 17:25:24.413361 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:25:24.415256 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:25:24.418015 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 17:25:24.420253 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 17:25:24.422750 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 17:25:24.426008 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:25:24.482291 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:25:24.498266 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:25:24.511394 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:25:24.532675 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:25:24.543520 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 17 17:25:24.549894 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:25:24.553265 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:25:24.555786 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:25:24.573337 systemd-journald[1480]: Time spent on flushing to /var/log/journal/ec2c1f06b49162997c4adda518e6484d is 33.687ms for 896 entries. Mar 17 17:25:24.573337 systemd-journald[1480]: System Journal (/var/log/journal/ec2c1f06b49162997c4adda518e6484d) is 8.0M, max 195.6M, 187.6M free. Mar 17 17:25:24.621246 systemd-journald[1480]: Received client request to flush runtime journal. Mar 17 17:25:24.621337 kernel: loop0: detected capacity change from 0 to 116808 Mar 17 17:25:24.630341 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Mar 17 17:25:24.646585 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:25:24.651467 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 17 17:25:24.655625 systemd-tmpfiles[1509]: ACLs are not supported, ignoring. Mar 17 17:25:24.655664 systemd-tmpfiles[1509]: ACLs are not supported, ignoring. Mar 17 17:25:24.674859 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:25:24.690587 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:25:24.699467 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 17:25:24.736205 kernel: loop1: detected capacity change from 0 to 53784 Mar 17 17:25:24.739935 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:25:24.749650 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:25:24.799281 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:25:24.814948 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:25:24.817755 udevadm[1551]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 17 17:25:24.853280 kernel: loop2: detected capacity change from 0 to 113536 Mar 17 17:25:24.887447 systemd-tmpfiles[1553]: ACLs are not supported, ignoring. Mar 17 17:25:24.887988 systemd-tmpfiles[1553]: ACLs are not supported, ignoring. Mar 17 17:25:24.902277 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 17 17:25:24.987224 kernel: loop3: detected capacity change from 0 to 194096 Mar 17 17:25:25.100315 kernel: loop4: detected capacity change from 0 to 116808 Mar 17 17:25:25.126320 kernel: loop5: detected capacity change from 0 to 53784 Mar 17 17:25:25.136199 kernel: loop6: detected capacity change from 0 to 113536 Mar 17 17:25:25.152222 kernel: loop7: detected capacity change from 0 to 194096 Mar 17 17:25:25.178397 (sd-merge)[1559]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Mar 17 17:25:25.179425 (sd-merge)[1559]: Merged extensions into '/usr'. Mar 17 17:25:25.187083 systemd[1]: Reloading requested from client PID 1508 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 17:25:25.187109 systemd[1]: Reloading... Mar 17 17:25:25.318274 zram_generator::config[1581]: No configuration found. Mar 17 17:25:25.686379 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:25:25.793263 systemd[1]: Reloading finished in 605 ms. Mar 17 17:25:25.836957 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 17:25:25.840938 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:25:25.859145 systemd[1]: Starting ensure-sysext.service... Mar 17 17:25:25.871429 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:25:25.883551 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:25:25.893408 systemd[1]: Reloading requested from client PID 1637 ('systemctl') (unit ensure-sysext.service)... Mar 17 17:25:25.893443 systemd[1]: Reloading... Mar 17 17:25:26.001444 systemd-udevd[1639]: Using default interface naming scheme 'v255'. 
Mar 17 17:25:26.011900 systemd-tmpfiles[1638]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 17:25:26.012591 systemd-tmpfiles[1638]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 17:25:26.018474 systemd-tmpfiles[1638]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 17:25:26.019042 systemd-tmpfiles[1638]: ACLs are not supported, ignoring. Mar 17 17:25:26.023271 systemd-tmpfiles[1638]: ACLs are not supported, ignoring. Mar 17 17:25:26.037376 systemd-tmpfiles[1638]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:25:26.037403 systemd-tmpfiles[1638]: Skipping /boot Mar 17 17:25:26.052152 ldconfig[1497]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:25:26.111539 systemd-tmpfiles[1638]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:25:26.111568 systemd-tmpfiles[1638]: Skipping /boot Mar 17 17:25:26.138241 zram_generator::config[1665]: No configuration found. Mar 17 17:25:26.236458 (udev-worker)[1686]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:25:26.537415 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:25:26.559247 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1669) Mar 17 17:25:26.682463 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 17 17:25:26.683107 systemd[1]: Reloading finished in 789 ms. Mar 17 17:25:26.713094 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:25:26.717379 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Mar 17 17:25:26.728275 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:25:26.803442 systemd[1]: Finished ensure-sysext.service. Mar 17 17:25:26.809602 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:25:26.820852 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 17 17:25:26.833602 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:25:26.845663 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:25:26.848248 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:25:26.852711 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:25:26.863522 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:25:26.873527 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:25:26.884824 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:25:26.891521 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:25:26.893692 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:25:26.904668 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:25:26.910503 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:25:26.918504 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:25:26.927690 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:25:26.929743 systemd[1]: Reached target time-set.target - System Time Set. 
Mar 17 17:25:26.938421 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:25:26.946548 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:25:26.951203 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:25:26.952271 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:25:26.955797 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:25:26.976539 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:25:26.979449 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:25:26.980336 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:25:26.995526 lvm[1836]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:25:27.012010 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:25:27.012381 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:25:27.024648 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:25:27.027626 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:25:27.030377 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:25:27.071675 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:25:27.102307 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:25:27.115648 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:25:27.120205 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Mar 17 17:25:27.140048 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:25:27.140990 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:25:27.145523 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:25:27.176103 lvm[1877]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:25:27.148429 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:25:27.149810 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:25:27.204710 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:25:27.218204 augenrules[1883]: No rules Mar 17 17:25:27.219066 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:25:27.221264 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:25:27.226267 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:25:27.249288 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:25:27.251911 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:25:27.365705 systemd-resolved[1850]: Positive Trust Anchors: Mar 17 17:25:27.365767 systemd-resolved[1850]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:25:27.365831 systemd-resolved[1850]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:25:27.370364 systemd-networkd[1849]: lo: Link UP Mar 17 17:25:27.370868 systemd-networkd[1849]: lo: Gained carrier Mar 17 17:25:27.373406 systemd-resolved[1850]: Defaulting to hostname 'linux'. Mar 17 17:25:27.373875 systemd-networkd[1849]: Enumeration completed Mar 17 17:25:27.374193 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:25:27.376552 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:25:27.378762 systemd[1]: Reached target network.target - Network. Mar 17 17:25:27.380458 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:25:27.382923 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:25:27.385150 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:25:27.388015 systemd-networkd[1849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:25:27.388036 systemd-networkd[1849]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:25:27.388404 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Mar 17 17:25:27.391647 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:25:27.393938 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:25:27.396241 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:25:27.398508 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:25:27.398549 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:25:27.400227 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:25:27.403053 systemd-networkd[1849]: eth0: Link UP Mar 17 17:25:27.403408 systemd-networkd[1849]: eth0: Gained carrier Mar 17 17:25:27.403443 systemd-networkd[1849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:25:27.403754 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:25:27.408727 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:25:27.417518 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:25:27.421263 systemd-networkd[1849]: eth0: DHCPv4 address 172.31.28.142/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 17 17:25:27.423411 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:25:27.426509 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:25:27.428762 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:25:27.430689 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:25:27.432512 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:25:27.432561 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Mar 17 17:25:27.444597 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 17:25:27.459218 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 17 17:25:27.468624 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 17:25:27.478447 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 17:25:27.485971 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 17:25:27.489381 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 17:25:27.491439 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 17:25:27.504223 jq[1905]: false
Mar 17 17:25:27.505548 systemd[1]: Started ntpd.service - Network Time Service.
Mar 17 17:25:27.510643 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 17 17:25:27.516458 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 17:25:27.522873 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 17:25:27.534186 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 17:25:27.536980 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 17:25:27.539084 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 17:25:27.543577 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 17:25:27.547941 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 17:25:27.553084 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 17:25:27.554297 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 17:25:27.584806 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 17:25:27.588336 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 17 17:25:27.592591 jq[1914]: true
Mar 17 17:25:27.642217 jq[1923]: true
Mar 17 17:25:27.693995 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 17:25:27.696269 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 17:25:27.707963 (ntainerd)[1939]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 17 17:25:27.710882 dbus-daemon[1904]: [system] SELinux support is enabled
Mar 17 17:25:27.711159 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 17:25:27.717337 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 17:25:27.726510 dbus-daemon[1904]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1849 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 17 17:25:27.717397 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 17:25:27.719892 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 17:25:27.719927 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 17:25:27.748542 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 17 17:25:27.761578 extend-filesystems[1906]: Found loop4
Mar 17 17:25:27.761578 extend-filesystems[1906]: Found loop5
Mar 17 17:25:27.761578 extend-filesystems[1906]: Found loop6
Mar 17 17:25:27.761578 extend-filesystems[1906]: Found loop7
Mar 17 17:25:27.761578 extend-filesystems[1906]: Found nvme0n1
Mar 17 17:25:27.761578 extend-filesystems[1906]: Found nvme0n1p1
Mar 17 17:25:27.761578 extend-filesystems[1906]: Found nvme0n1p2
Mar 17 17:25:27.761578 extend-filesystems[1906]: Found nvme0n1p3
Mar 17 17:25:27.761578 extend-filesystems[1906]: Found usr
Mar 17 17:25:27.761578 extend-filesystems[1906]: Found nvme0n1p4
Mar 17 17:25:27.761578 extend-filesystems[1906]: Found nvme0n1p6
Mar 17 17:25:27.761578 extend-filesystems[1906]: Found nvme0n1p7
Mar 17 17:25:27.761578 extend-filesystems[1906]: Found nvme0n1p9
Mar 17 17:25:27.761578 extend-filesystems[1906]: Checking size of /dev/nvme0n1p9
Mar 17 17:25:27.798834 update_engine[1913]: I20250317 17:25:27.768121 1913 main.cc:92] Flatcar Update Engine starting
Mar 17 17:25:27.811491 systemd[1]: Started update-engine.service - Update Engine.
Mar 17 17:25:27.814657 update_engine[1913]: I20250317 17:25:27.814363 1913 update_check_scheduler.cc:74] Next update check in 6m36s
Mar 17 17:25:27.825818 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 17 17:25:27.843320 ntpd[1908]: ntpd 4.2.8p17@1.4004-o Mon Mar 17 15:34:53 UTC 2025 (1): Starting
Mar 17 17:25:27.844041 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: ntpd 4.2.8p17@1.4004-o Mon Mar 17 15:34:53 UTC 2025 (1): Starting
Mar 17 17:25:27.844041 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 17 17:25:27.844041 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: ----------------------------------------------------
Mar 17 17:25:27.844041 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: ntp-4 is maintained by Network Time Foundation,
Mar 17 17:25:27.844041 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 17 17:25:27.844041 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: corporation. Support and training for ntp-4 are
Mar 17 17:25:27.844041 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: available at https://www.nwtime.org/support
Mar 17 17:25:27.844041 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: ----------------------------------------------------
Mar 17 17:25:27.843383 ntpd[1908]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 17 17:25:27.843402 ntpd[1908]: ----------------------------------------------------
Mar 17 17:25:27.843421 ntpd[1908]: ntp-4 is maintained by Network Time Foundation,
Mar 17 17:25:27.843443 ntpd[1908]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 17 17:25:27.843461 ntpd[1908]: corporation. Support and training for ntp-4 are
Mar 17 17:25:27.843479 ntpd[1908]: available at https://www.nwtime.org/support
Mar 17 17:25:27.843497 ntpd[1908]: ----------------------------------------------------
Mar 17 17:25:27.860246 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 17 17:25:27.865722 ntpd[1908]: proto: precision = 0.096 usec (-23)
Mar 17 17:25:27.865912 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: proto: precision = 0.096 usec (-23)
Mar 17 17:25:27.866158 ntpd[1908]: basedate set to 2025-03-05
Mar 17 17:25:27.866210 ntpd[1908]: gps base set to 2025-03-09 (week 2357)
Mar 17 17:25:27.866306 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: basedate set to 2025-03-05
Mar 17 17:25:27.866306 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: gps base set to 2025-03-09 (week 2357)
Mar 17 17:25:27.881922 ntpd[1908]: Listen and drop on 0 v6wildcard [::]:123
Mar 17 17:25:27.884341 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: Listen and drop on 0 v6wildcard [::]:123
Mar 17 17:25:27.884341 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 17 17:25:27.882012 ntpd[1908]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 17 17:25:27.893359 ntpd[1908]: Listen normally on 2 lo 127.0.0.1:123
Mar 17 17:25:27.895426 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: Listen normally on 2 lo 127.0.0.1:123
Mar 17 17:25:27.895426 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: Listen normally on 3 eth0 172.31.28.142:123
Mar 17 17:25:27.895426 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: Listen normally on 4 lo [::1]:123
Mar 17 17:25:27.895426 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: bind(21) AF_INET6 fe80::405:43ff:fe41:c3bd%2#123 flags 0x11 failed: Cannot assign requested address
Mar 17 17:25:27.895426 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: unable to create socket on eth0 (5) for fe80::405:43ff:fe41:c3bd%2#123
Mar 17 17:25:27.895426 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: failed to init interface for address fe80::405:43ff:fe41:c3bd%2
Mar 17 17:25:27.895426 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: Listening on routing socket on fd #21 for interface updates
Mar 17 17:25:27.893445 ntpd[1908]: Listen normally on 3 eth0 172.31.28.142:123
Mar 17 17:25:27.893510 ntpd[1908]: Listen normally on 4 lo [::1]:123
Mar 17 17:25:27.893593 ntpd[1908]: bind(21) AF_INET6 fe80::405:43ff:fe41:c3bd%2#123 flags 0x11 failed: Cannot assign requested address
Mar 17 17:25:27.893630 ntpd[1908]: unable to create socket on eth0 (5) for fe80::405:43ff:fe41:c3bd%2#123
Mar 17 17:25:27.893658 ntpd[1908]: failed to init interface for address fe80::405:43ff:fe41:c3bd%2
Mar 17 17:25:27.893714 ntpd[1908]: Listening on routing socket on fd #21 for interface updates
Mar 17 17:25:27.913243 extend-filesystems[1906]: Resized partition /dev/nvme0n1p9
Mar 17 17:25:27.915301 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 17 17:25:27.915301 ntpd[1908]: 17 Mar 17:25:27 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 17 17:25:27.910710 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 17 17:25:27.910761 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 17 17:25:27.924642 extend-filesystems[1969]: resize2fs 1.47.1 (20-May-2024)
Mar 17 17:25:27.944688 systemd-logind[1912]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 17 17:25:27.944724 systemd-logind[1912]: Watching system buttons on /dev/input/event1 (Sleep Button)
Mar 17 17:25:27.947578 systemd-logind[1912]: New seat seat0.
Mar 17 17:25:27.951252 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Mar 17 17:25:27.952503 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 17 17:25:27.965277 bash[1968]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:25:28.001940 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 17 17:25:28.022925 systemd[1]: Starting sshkeys.service...
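The repeated ntpd failure ("Cannot assign requested address" for `fe80::405:43ff:fe41:c3bd%2`) is characteristic of IPv6 link-local addresses: they can only be bound together with a scope (interface) id, and only once the address has finished Duplicate Address Detection; eth0 only reports "Gained IPv6LL" about a second later. A minimal reproduction of the bind failure, reusing the address from the log (the exact errno depends on whether a scope id is supplied and whether the kernel holds the address):

```python
import socket

# Binding a link-local address the kernel does not (yet) hold fails with
# an OSError, just as ntpd's bind(21) does above.
s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
try:
    # Scope id 0 means "no interface", which a link-local bind requires.
    s.bind(("fe80::405:43ff:fe41:c3bd", 0, 0, 0))
except OSError as e:
    print("bind failed:", e.strerror)
finally:
    s.close()
```

Once the address leaves the tentative (DAD) state, ntpd's routing-socket listener notices the interface update and retries the bind.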
Mar 17 17:25:28.033377 coreos-metadata[1903]: Mar 17 17:25:28.033 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 17 17:25:28.037466 coreos-metadata[1903]: Mar 17 17:25:28.037 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Mar 17 17:25:28.040770 coreos-metadata[1903]: Mar 17 17:25:28.040 INFO Fetch successful
Mar 17 17:25:28.040770 coreos-metadata[1903]: Mar 17 17:25:28.040 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Mar 17 17:25:28.042585 coreos-metadata[1903]: Mar 17 17:25:28.042 INFO Fetch successful
Mar 17 17:25:28.042585 coreos-metadata[1903]: Mar 17 17:25:28.042 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Mar 17 17:25:28.049208 coreos-metadata[1903]: Mar 17 17:25:28.047 INFO Fetch successful
Mar 17 17:25:28.049208 coreos-metadata[1903]: Mar 17 17:25:28.047 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Mar 17 17:25:28.050496 coreos-metadata[1903]: Mar 17 17:25:28.050 INFO Fetch successful
Mar 17 17:25:28.050496 coreos-metadata[1903]: Mar 17 17:25:28.050 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Mar 17 17:25:28.051632 coreos-metadata[1903]: Mar 17 17:25:28.051 INFO Fetch failed with 404: resource not found
Mar 17 17:25:28.051632 coreos-metadata[1903]: Mar 17 17:25:28.051 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Mar 17 17:25:28.053194 coreos-metadata[1903]: Mar 17 17:25:28.052 INFO Fetch successful
Mar 17 17:25:28.053194 coreos-metadata[1903]: Mar 17 17:25:28.052 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Mar 17 17:25:28.055463 coreos-metadata[1903]: Mar 17 17:25:28.055 INFO Fetch successful
Mar 17 17:25:28.055893 coreos-metadata[1903]: Mar 17 17:25:28.055 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Mar 17 17:25:28.056487 coreos-metadata[1903]: Mar 17 17:25:28.056 INFO Fetch successful
Mar 17 17:25:28.056487 coreos-metadata[1903]: Mar 17 17:25:28.056 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Mar 17 17:25:28.057351 coreos-metadata[1903]: Mar 17 17:25:28.057 INFO Fetch successful
Mar 17 17:25:28.057764 coreos-metadata[1903]: Mar 17 17:25:28.057 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Mar 17 17:25:28.061274 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Mar 17 17:25:28.099565 coreos-metadata[1903]: Mar 17 17:25:28.058 INFO Fetch successful
Mar 17 17:25:28.118013 extend-filesystems[1969]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Mar 17 17:25:28.118013 extend-filesystems[1969]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 17 17:25:28.118013 extend-filesystems[1969]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Mar 17 17:25:28.110800 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 17:25:28.125894 extend-filesystems[1906]: Resized filesystem in /dev/nvme0n1p9
Mar 17 17:25:28.114281 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 17:25:28.152539 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 17 17:25:28.165824 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 17 17:25:28.198016 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1686)
Mar 17 17:25:28.212845 locksmithd[1952]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 17:25:28.222781 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 17 17:25:28.227253 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
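The EXT4-fs and resize2fs lines record an online grow of the root partition from 553472 to 1489915 blocks at the ext4 block size of 4 KiB ("(4k) blocks"). In bytes that is roughly a 2.1 GiB image expanded to fill a ~5.7 GiB partition:

```python
BLOCK = 4096  # ext4 block size, from "(4k) blocks" in the resize2fs line

# Block counts from the EXT4-fs kernel messages above.
old_blocks, new_blocks = 553_472, 1_489_915
old_gib = old_blocks * BLOCK / 2**30
new_gib = new_blocks * BLOCK / 2**30
print(f"{old_gib:.2f} GiB -> {new_gib:.2f} GiB")
```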
Mar 17 17:25:28.257209 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 17 17:25:28.263434 dbus-daemon[1904]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 17 17:25:28.263705 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 17 17:25:28.276496 dbus-daemon[1904]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1943 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 17 17:25:28.287103 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 17 17:25:28.369871 polkitd[2033]: Started polkitd version 121
Mar 17 17:25:28.406486 polkitd[2033]: Loading rules from directory /etc/polkit-1/rules.d
Mar 17 17:25:28.406628 polkitd[2033]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 17 17:25:28.414704 coreos-metadata[1992]: Mar 17 17:25:28.414 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 17 17:25:28.415602 polkitd[2033]: Finished loading, compiling and executing 2 rules
Mar 17 17:25:28.419646 coreos-metadata[1992]: Mar 17 17:25:28.416 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Mar 17 17:25:28.417002 systemd[1]: Started polkit.service - Authorization Manager.
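The "Putting http://169.254.169.254/latest/api/token" lines are the IMDSv2 handshake: the agent first PUTs to the token endpoint to obtain a short-lived session token, then sends that token as a header on every metadata GET (instance-id, local-ipv4, public-keys, and so on). A sketch of the two request shapes with `urllib` (only meaningful on an EC2 instance; the header names are the documented IMDSv2 ones, the helper name is ours):

```python
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: PUT to the token endpoint, requesting a 6-hour session token.
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)

# Step 2: GET a metadata path with the token attached, as the Flatcar
# metadata agent does for each path logged above.
def metadata_request(path: str, token: str) -> urllib.request.Request:
    return urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
```

On-instance, `urllib.request.urlopen(token_req)` returns the token body, which is then passed to `metadata_request`; the 404 seen above for `meta-data/ipv6` simply means the instance has no IPv6 address assigned.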
Mar 17 17:25:28.416748 dbus-daemon[1904]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 17 17:25:28.422424 coreos-metadata[1992]: Mar 17 17:25:28.421 INFO Fetch successful
Mar 17 17:25:28.422424 coreos-metadata[1992]: Mar 17 17:25:28.422 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 17 17:25:28.422279 polkitd[2033]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 17 17:25:28.423023 coreos-metadata[1992]: Mar 17 17:25:28.422 INFO Fetch successful
Mar 17 17:25:28.429709 unknown[1992]: wrote ssh authorized keys file for user: core
Mar 17 17:25:28.479231 update-ssh-keys[2070]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:25:28.482227 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 17 17:25:28.491907 systemd[1]: Finished sshkeys.service.
Mar 17 17:25:28.511456 systemd-hostnamed[1943]: Hostname set to (transient)
Mar 17 17:25:28.511608 systemd-resolved[1850]: System hostname changed to 'ip-172-31-28-142'.
Mar 17 17:25:28.641314 containerd[1939]: time="2025-03-17T17:25:28.640013566Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 17:25:28.700118 containerd[1939]: time="2025-03-17T17:25:28.700002298Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:25:28.703215 containerd[1939]: time="2025-03-17T17:25:28.702735766Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:25:28.703215 containerd[1939]: time="2025-03-17T17:25:28.702801250Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 17:25:28.703215 containerd[1939]: time="2025-03-17T17:25:28.702836398Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 17:25:28.703215 containerd[1939]: time="2025-03-17T17:25:28.703127794Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 17:25:28.703215 containerd[1939]: time="2025-03-17T17:25:28.703162246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 17:25:28.703494 containerd[1939]: time="2025-03-17T17:25:28.703324258Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:25:28.703494 containerd[1939]: time="2025-03-17T17:25:28.703353970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:25:28.703677 containerd[1939]: time="2025-03-17T17:25:28.703631290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:25:28.703731 containerd[1939]: time="2025-03-17T17:25:28.703671334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 17:25:28.703731 containerd[1939]: time="2025-03-17T17:25:28.703704190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:25:28.703825 containerd[1939]: time="2025-03-17T17:25:28.703727866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 17:25:28.704266 containerd[1939]: time="2025-03-17T17:25:28.703909678Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:25:28.704446 containerd[1939]: time="2025-03-17T17:25:28.704405494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:25:28.705190 containerd[1939]: time="2025-03-17T17:25:28.704618470Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:25:28.705190 containerd[1939]: time="2025-03-17T17:25:28.704658166Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 17:25:28.705190 containerd[1939]: time="2025-03-17T17:25:28.704827090Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:25:28.705190 containerd[1939]: time="2025-03-17T17:25:28.704923966Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 17:25:28.714698 containerd[1939]: time="2025-03-17T17:25:28.714625918Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 17:25:28.714817 containerd[1939]: time="2025-03-17T17:25:28.714708694Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 17:25:28.714817 containerd[1939]: time="2025-03-17T17:25:28.714743890Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 17:25:28.714817 containerd[1939]: time="2025-03-17T17:25:28.714778810Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 17:25:28.714817 containerd[1939]: time="2025-03-17T17:25:28.714811546Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 17:25:28.715268 containerd[1939]: time="2025-03-17T17:25:28.715061374Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 17:25:28.715567 containerd[1939]: time="2025-03-17T17:25:28.715498054Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 17:25:28.715830 containerd[1939]: time="2025-03-17T17:25:28.715691434Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 17:25:28.715830 containerd[1939]: time="2025-03-17T17:25:28.715732690Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 17:25:28.715830 containerd[1939]: time="2025-03-17T17:25:28.715768906Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 17:25:28.715830 containerd[1939]: time="2025-03-17T17:25:28.715800622Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 17:25:28.716369 containerd[1939]: time="2025-03-17T17:25:28.715830538Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 17:25:28.716369 containerd[1939]: time="2025-03-17T17:25:28.715863142Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 17:25:28.716369 containerd[1939]: time="2025-03-17T17:25:28.715893790Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 17:25:28.716369 containerd[1939]: time="2025-03-17T17:25:28.715926706Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 17:25:28.716369 containerd[1939]: time="2025-03-17T17:25:28.715957990Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 17:25:28.716369 containerd[1939]: time="2025-03-17T17:25:28.715986634Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 17:25:28.716369 containerd[1939]: time="2025-03-17T17:25:28.716022382Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 17:25:28.716369 containerd[1939]: time="2025-03-17T17:25:28.716068294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 17:25:28.716369 containerd[1939]: time="2025-03-17T17:25:28.716100850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 17:25:28.716369 containerd[1939]: time="2025-03-17T17:25:28.716130730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 17:25:28.716369 containerd[1939]: time="2025-03-17T17:25:28.716161306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 17:25:28.716369 containerd[1939]: time="2025-03-17T17:25:28.716275330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 17:25:28.716369 containerd[1939]: time="2025-03-17T17:25:28.716308522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 17:25:28.716369 containerd[1939]: time="2025-03-17T17:25:28.716336542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 17:25:28.718166 containerd[1939]: time="2025-03-17T17:25:28.716370706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 17:25:28.718166 containerd[1939]: time="2025-03-17T17:25:28.716403118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 17 17:25:28.718166 containerd[1939]: time="2025-03-17T17:25:28.716436946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 17 17:25:28.718166 containerd[1939]: time="2025-03-17T17:25:28.716465110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 17:25:28.718166 containerd[1939]: time="2025-03-17T17:25:28.716495566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 17 17:25:28.718166 containerd[1939]: time="2025-03-17T17:25:28.716537614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 17:25:28.718166 containerd[1939]: time="2025-03-17T17:25:28.716570290Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 17 17:25:28.718166 containerd[1939]: time="2025-03-17T17:25:28.716611426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 17 17:25:28.718166 containerd[1939]: time="2025-03-17T17:25:28.716650210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 17:25:28.718166 containerd[1939]: time="2025-03-17T17:25:28.716676826Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 17:25:28.718166 containerd[1939]: time="2025-03-17T17:25:28.716823046Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 17:25:28.718166 containerd[1939]: time="2025-03-17T17:25:28.716963350Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 17:25:28.718166 containerd[1939]: time="2025-03-17T17:25:28.716993794Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 17:25:28.719804 containerd[1939]: time="2025-03-17T17:25:28.717025678Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 17 17:25:28.719804 containerd[1939]: time="2025-03-17T17:25:28.717049282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 17:25:28.719804 containerd[1939]: time="2025-03-17T17:25:28.717078274Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 17 17:25:28.719804 containerd[1939]: time="2025-03-17T17:25:28.717101194Z" level=info msg="NRI interface is disabled by configuration."
Mar 17 17:25:28.719804 containerd[1939]: time="2025-03-17T17:25:28.717127726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 17:25:28.720820 containerd[1939]: time="2025-03-17T17:25:28.717647998Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 17:25:28.720820 containerd[1939]: time="2025-03-17T17:25:28.717735046Z" level=info msg="Connect containerd service"
Mar 17 17:25:28.720820 containerd[1939]: time="2025-03-17T17:25:28.717793702Z" level=info msg="using legacy CRI server"
Mar 17 17:25:28.720820 containerd[1939]: time="2025-03-17T17:25:28.717812122Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 17 17:25:28.720820 containerd[1939]: time="2025-03-17T17:25:28.718042126Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 17:25:28.720820 containerd[1939]: time="2025-03-17T17:25:28.719154898Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 17:25:28.720820 containerd[1939]: time="2025-03-17T17:25:28.719378758Z" level=info msg="Start subscribing containerd event"
Mar 17 17:25:28.720820 containerd[1939]: time="2025-03-17T17:25:28.719443342Z" level=info msg="Start recovering state"
Mar 17 17:25:28.720820 containerd[1939]: time="2025-03-17T17:25:28.719552950Z" level=info msg="Start event monitor"
Mar 17 17:25:28.720820 containerd[1939]: time="2025-03-17T17:25:28.719575126Z" level=info msg="Start snapshots syncer"
Mar 17 17:25:28.720820 containerd[1939]: time="2025-03-17T17:25:28.719594674Z" level=info msg="Start cni network conf syncer for default"
Mar 17 17:25:28.720820 containerd[1939]: time="2025-03-17T17:25:28.719613550Z" level=info msg="Start streaming server"
Mar 17 17:25:28.721514 containerd[1939]: time="2025-03-17T17:25:28.721063534Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 17:25:28.725141 containerd[1939]: time="2025-03-17T17:25:28.723787690Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 17:25:28.725141 containerd[1939]: time="2025-03-17T17:25:28.723981190Z" level=info msg="containerd successfully booted in 0.085370s"
Mar 17 17:25:28.724112 systemd[1]: Started containerd.service - containerd container runtime.
Mar 17 17:25:28.844060 ntpd[1908]: bind(24) AF_INET6 fe80::405:43ff:fe41:c3bd%2#123 flags 0x11 failed: Cannot assign requested address
Mar 17 17:25:28.844609 ntpd[1908]: 17 Mar 17:25:28 ntpd[1908]: bind(24) AF_INET6 fe80::405:43ff:fe41:c3bd%2#123 flags 0x11 failed: Cannot assign requested address
Mar 17 17:25:28.844609 ntpd[1908]: 17 Mar 17:25:28 ntpd[1908]: unable to create socket on eth0 (6) for fe80::405:43ff:fe41:c3bd%2#123
Mar 17 17:25:28.844609 ntpd[1908]: 17 Mar 17:25:28 ntpd[1908]: failed to init interface for address fe80::405:43ff:fe41:c3bd%2
Mar 17 17:25:28.844121 ntpd[1908]: unable to create socket on eth0 (6) for fe80::405:43ff:fe41:c3bd%2#123
Mar 17 17:25:28.844151 ntpd[1908]: failed to init interface for address fe80::405:43ff:fe41:c3bd%2
Mar 17 17:25:28.870372 systemd-networkd[1849]: eth0: Gained IPv6LL
Mar 17 17:25:28.875948 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 17 17:25:28.882492 systemd[1]: Reached target network-online.target - Network is Online.
Mar 17 17:25:28.894642 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Mar 17 17:25:28.910559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:25:28.917694 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:25:29.003520 amazon-ssm-agent[2106]: Initializing new seelog logger Mar 17 17:25:29.006853 amazon-ssm-agent[2106]: New Seelog Logger Creation Complete Mar 17 17:25:29.007296 amazon-ssm-agent[2106]: 2025/03/17 17:25:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:25:29.007296 amazon-ssm-agent[2106]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:25:29.007749 amazon-ssm-agent[2106]: 2025/03/17 17:25:29 processing appconfig overrides Mar 17 17:25:29.008292 amazon-ssm-agent[2106]: 2025/03/17 17:25:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:25:29.009756 amazon-ssm-agent[2106]: 2025-03-17 17:25:29 INFO Proxy environment variables: Mar 17 17:25:29.012576 amazon-ssm-agent[2106]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:25:29.012911 amazon-ssm-agent[2106]: 2025/03/17 17:25:29 processing appconfig overrides Mar 17 17:25:29.013078 amazon-ssm-agent[2106]: 2025/03/17 17:25:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:25:29.013078 amazon-ssm-agent[2106]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:25:29.013312 amazon-ssm-agent[2106]: 2025/03/17 17:25:29 processing appconfig overrides Mar 17 17:25:29.022193 amazon-ssm-agent[2106]: 2025/03/17 17:25:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:25:29.022193 amazon-ssm-agent[2106]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:25:29.022193 amazon-ssm-agent[2106]: 2025/03/17 17:25:29 processing appconfig overrides Mar 17 17:25:29.093509 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Mar 17 17:25:29.112424 amazon-ssm-agent[2106]: 2025-03-17 17:25:29 INFO https_proxy: Mar 17 17:25:29.210263 amazon-ssm-agent[2106]: 2025-03-17 17:25:29 INFO http_proxy: Mar 17 17:25:29.308449 amazon-ssm-agent[2106]: 2025-03-17 17:25:29 INFO no_proxy: Mar 17 17:25:29.406702 amazon-ssm-agent[2106]: 2025-03-17 17:25:29 INFO Checking if agent identity type OnPrem can be assumed Mar 17 17:25:29.504938 amazon-ssm-agent[2106]: 2025-03-17 17:25:29 INFO Checking if agent identity type EC2 can be assumed Mar 17 17:25:29.604418 amazon-ssm-agent[2106]: 2025-03-17 17:25:29 INFO Agent will take identity from EC2 Mar 17 17:25:29.705192 amazon-ssm-agent[2106]: 2025-03-17 17:25:29 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:25:29.802715 amazon-ssm-agent[2106]: 2025-03-17 17:25:29 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:25:29.827768 sshd_keygen[1941]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:25:29.874417 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:25:29.888843 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:25:29.900678 systemd[1]: Started sshd@0-172.31.28.142:22-139.178.68.195:44978.service - OpenSSH per-connection server daemon (139.178.68.195:44978). Mar 17 17:25:29.904235 amazon-ssm-agent[2106]: 2025-03-17 17:25:29 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:25:29.930879 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:25:29.932451 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:25:29.946690 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:25:29.997410 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:25:30.002005 amazon-ssm-agent[2106]: 2025-03-17 17:25:29 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 17 17:25:30.009784 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Mar 17 17:25:30.022779 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 17 17:25:30.026678 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:25:30.106656 amazon-ssm-agent[2106]: 2025-03-17 17:25:29 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Mar 17 17:25:30.175087 sshd[2134]: Accepted publickey for core from 139.178.68.195 port 44978 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:25:30.179040 sshd-session[2134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:25:30.196927 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:25:30.207308 amazon-ssm-agent[2106]: 2025-03-17 17:25:29 INFO [amazon-ssm-agent] Starting Core Agent Mar 17 17:25:30.207691 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:25:30.213571 amazon-ssm-agent[2106]: 2025-03-17 17:25:29 INFO [amazon-ssm-agent] registrar detected. Attempting registration Mar 17 17:25:30.213571 amazon-ssm-agent[2106]: 2025-03-17 17:25:29 INFO [Registrar] Starting registrar module Mar 17 17:25:30.213571 amazon-ssm-agent[2106]: 2025-03-17 17:25:29 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 17 17:25:30.213571 amazon-ssm-agent[2106]: 2025-03-17 17:25:30 INFO [EC2Identity] EC2 registration was successful. Mar 17 17:25:30.213571 amazon-ssm-agent[2106]: 2025-03-17 17:25:30 INFO [CredentialRefresher] credentialRefresher has started Mar 17 17:25:30.213571 amazon-ssm-agent[2106]: 2025-03-17 17:25:30 INFO [CredentialRefresher] Starting credentials refresher loop Mar 17 17:25:30.213571 amazon-ssm-agent[2106]: 2025-03-17 17:25:30 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 17 17:25:30.218818 systemd-logind[1912]: New session 1 of user core. Mar 17 17:25:30.244711 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Mar 17 17:25:30.258712 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:25:30.273581 (systemd)[2145]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:25:30.305917 amazon-ssm-agent[2106]: 2025-03-17 17:25:30 INFO [CredentialRefresher] Next credential rotation will be in 31.883280162433334 minutes Mar 17 17:25:30.490588 systemd[2145]: Queued start job for default target default.target. Mar 17 17:25:30.499508 systemd[2145]: Created slice app.slice - User Application Slice. Mar 17 17:25:30.499573 systemd[2145]: Reached target paths.target - Paths. Mar 17 17:25:30.499606 systemd[2145]: Reached target timers.target - Timers. Mar 17 17:25:30.502033 systemd[2145]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:25:30.526514 systemd[2145]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:25:30.526643 systemd[2145]: Reached target sockets.target - Sockets. Mar 17 17:25:30.526677 systemd[2145]: Reached target basic.target - Basic System. Mar 17 17:25:30.526758 systemd[2145]: Reached target default.target - Main User Target. Mar 17 17:25:30.526820 systemd[2145]: Startup finished in 241ms. Mar 17 17:25:30.526993 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:25:30.543504 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:25:30.707890 systemd[1]: Started sshd@1-172.31.28.142:22-139.178.68.195:47668.service - OpenSSH per-connection server daemon (139.178.68.195:47668). Mar 17 17:25:30.902586 sshd[2156]: Accepted publickey for core from 139.178.68.195 port 47668 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:25:30.905635 sshd-session[2156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:25:30.916161 systemd-logind[1912]: New session 2 of user core. Mar 17 17:25:30.927431 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 17 17:25:31.004442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:31.005884 (kubelet)[2164]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:25:31.008674 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:25:31.011640 systemd[1]: Startup finished in 1.077s (kernel) + 7.686s (initrd) + 8.576s (userspace) = 17.340s. Mar 17 17:25:31.066210 sshd[2158]: Connection closed by 139.178.68.195 port 47668 Mar 17 17:25:31.066126 sshd-session[2156]: pam_unix(sshd:session): session closed for user core Mar 17 17:25:31.072161 systemd[1]: sshd@1-172.31.28.142:22-139.178.68.195:47668.service: Deactivated successfully. Mar 17 17:25:31.076276 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:25:31.080540 systemd-logind[1912]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:25:31.082541 systemd-logind[1912]: Removed session 2. Mar 17 17:25:31.102164 systemd[1]: Started sshd@2-172.31.28.142:22-139.178.68.195:47676.service - OpenSSH per-connection server daemon (139.178.68.195:47676). Mar 17 17:25:31.272410 amazon-ssm-agent[2106]: 2025-03-17 17:25:31 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 17 17:25:31.295209 sshd[2173]: Accepted publickey for core from 139.178.68.195 port 47676 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:25:31.298377 sshd-session[2173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:25:31.309687 systemd-logind[1912]: New session 3 of user core. Mar 17 17:25:31.314479 systemd[1]: Started session-3.scope - Session 3 of User core. 
Mar 17 17:25:31.374848 amazon-ssm-agent[2106]: 2025-03-17 17:25:31 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2180) started Mar 17 17:25:31.444225 sshd[2184]: Connection closed by 139.178.68.195 port 47676 Mar 17 17:25:31.444032 sshd-session[2173]: pam_unix(sshd:session): session closed for user core Mar 17 17:25:31.451835 systemd-logind[1912]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:25:31.454017 systemd[1]: sshd@2-172.31.28.142:22-139.178.68.195:47676.service: Deactivated successfully. Mar 17 17:25:31.460744 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:25:31.462917 systemd-logind[1912]: Removed session 3. Mar 17 17:25:31.477208 amazon-ssm-agent[2106]: 2025-03-17 17:25:31 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 17 17:25:31.492527 systemd[1]: Started sshd@3-172.31.28.142:22-139.178.68.195:47678.service - OpenSSH per-connection server daemon (139.178.68.195:47678). Mar 17 17:25:31.705209 sshd[2191]: Accepted publickey for core from 139.178.68.195 port 47678 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:25:31.707877 sshd-session[2191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:25:31.718275 systemd-logind[1912]: New session 4 of user core. Mar 17 17:25:31.728480 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:25:31.844100 ntpd[1908]: Listen normally on 7 eth0 [fe80::405:43ff:fe41:c3bd%2]:123 Mar 17 17:25:31.845029 ntpd[1908]: 17 Mar 17:25:31 ntpd[1908]: Listen normally on 7 eth0 [fe80::405:43ff:fe41:c3bd%2]:123 Mar 17 17:25:31.859223 sshd[2196]: Connection closed by 139.178.68.195 port 47678 Mar 17 17:25:31.860065 sshd-session[2191]: pam_unix(sshd:session): session closed for user core Mar 17 17:25:31.868136 systemd[1]: sshd@3-172.31.28.142:22-139.178.68.195:47678.service: Deactivated successfully. 
Mar 17 17:25:31.871451 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:25:31.873496 systemd-logind[1912]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:25:31.876334 systemd-logind[1912]: Removed session 4. Mar 17 17:25:31.899767 systemd[1]: Started sshd@4-172.31.28.142:22-139.178.68.195:47680.service - OpenSSH per-connection server daemon (139.178.68.195:47680). Mar 17 17:25:32.075129 kubelet[2164]: E0317 17:25:32.074918 2164 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:25:32.079978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:25:32.080346 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:25:32.081058 systemd[1]: kubelet.service: Consumed 1.294s CPU time. Mar 17 17:25:32.093230 sshd[2203]: Accepted publickey for core from 139.178.68.195 port 47680 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:25:32.095677 sshd-session[2203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:25:32.102946 systemd-logind[1912]: New session 5 of user core. Mar 17 17:25:32.115533 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 17 17:25:32.232218 sudo[2208]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:25:32.232847 sudo[2208]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:25:32.250718 sudo[2208]: pam_unix(sudo:session): session closed for user root Mar 17 17:25:32.274214 sshd[2207]: Connection closed by 139.178.68.195 port 47680 Mar 17 17:25:32.274565 sshd-session[2203]: pam_unix(sshd:session): session closed for user core Mar 17 17:25:32.280985 systemd[1]: sshd@4-172.31.28.142:22-139.178.68.195:47680.service: Deactivated successfully. Mar 17 17:25:32.284867 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:25:32.286141 systemd-logind[1912]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:25:32.288158 systemd-logind[1912]: Removed session 5. Mar 17 17:25:32.310736 systemd[1]: Started sshd@5-172.31.28.142:22-139.178.68.195:47688.service - OpenSSH per-connection server daemon (139.178.68.195:47688). Mar 17 17:25:32.504247 sshd[2213]: Accepted publickey for core from 139.178.68.195 port 47688 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:25:32.506680 sshd-session[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:25:32.514990 systemd-logind[1912]: New session 6 of user core. Mar 17 17:25:32.524474 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 17 17:25:32.629472 sudo[2217]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:25:32.630095 sudo[2217]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:25:32.636092 sudo[2217]: pam_unix(sudo:session): session closed for user root Mar 17 17:25:32.645703 sudo[2216]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:25:32.646331 sudo[2216]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:25:32.669814 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:25:32.717524 augenrules[2239]: No rules Mar 17 17:25:32.719797 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:25:32.720383 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:25:32.725047 sudo[2216]: pam_unix(sudo:session): session closed for user root Mar 17 17:25:32.748889 sshd[2215]: Connection closed by 139.178.68.195 port 47688 Mar 17 17:25:32.749695 sshd-session[2213]: pam_unix(sshd:session): session closed for user core Mar 17 17:25:32.754805 systemd[1]: sshd@5-172.31.28.142:22-139.178.68.195:47688.service: Deactivated successfully. Mar 17 17:25:32.758294 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:25:32.761117 systemd-logind[1912]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:25:32.763416 systemd-logind[1912]: Removed session 6. Mar 17 17:25:32.792701 systemd[1]: Started sshd@6-172.31.28.142:22-139.178.68.195:47702.service - OpenSSH per-connection server daemon (139.178.68.195:47702). 
Mar 17 17:25:32.971483 sshd[2247]: Accepted publickey for core from 139.178.68.195 port 47702 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:25:32.973819 sshd-session[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:25:32.980905 systemd-logind[1912]: New session 7 of user core. Mar 17 17:25:32.990409 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:25:33.093555 sudo[2250]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:25:33.094954 sudo[2250]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:25:34.333705 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:34.334605 systemd[1]: kubelet.service: Consumed 1.294s CPU time. Mar 17 17:25:34.341730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:25:34.388042 systemd[1]: Reloading requested from client PID 2288 ('systemctl') (unit session-7.scope)... Mar 17 17:25:34.388259 systemd[1]: Reloading... Mar 17 17:25:34.627216 zram_generator::config[2331]: No configuration found. Mar 17 17:25:35.309529 systemd-resolved[1850]: Clock change detected. Flushing caches. Mar 17 17:25:35.315421 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:25:35.476738 systemd[1]: Reloading finished in 622 ms. Mar 17 17:25:35.559611 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 17:25:35.559784 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 17:25:35.560657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:35.569056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 17 17:25:35.862733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:35.870967 (kubelet)[2392]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:25:35.944284 kubelet[2392]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:25:35.944284 kubelet[2392]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:25:35.944284 kubelet[2392]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:25:35.946159 kubelet[2392]: I0317 17:25:35.946081 2392 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:25:36.791976 kubelet[2392]: I0317 17:25:36.791911 2392 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:25:36.791976 kubelet[2392]: I0317 17:25:36.791961 2392 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:25:36.792372 kubelet[2392]: I0317 17:25:36.792342 2392 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:25:36.824053 kubelet[2392]: I0317 17:25:36.823798 2392 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:25:36.847340 kubelet[2392]: I0317 17:25:36.847289 2392 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:25:36.847863 kubelet[2392]: I0317 17:25:36.847799 2392 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:25:36.848153 kubelet[2392]: I0317 17:25:36.847864 2392 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.28.142","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:25:36.848337 kubelet[2392]: I0317 17:25:36.848182 2392 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 
17:25:36.848337 kubelet[2392]: I0317 17:25:36.848204 2392 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:25:36.848510 kubelet[2392]: I0317 17:25:36.848472 2392 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:25:36.850084 kubelet[2392]: I0317 17:25:36.850032 2392 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:25:36.850084 kubelet[2392]: I0317 17:25:36.850072 2392 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:25:36.850323 kubelet[2392]: I0317 17:25:36.850185 2392 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:25:36.850323 kubelet[2392]: I0317 17:25:36.850259 2392 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:25:36.851340 kubelet[2392]: E0317 17:25:36.851028 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:36.851340 kubelet[2392]: E0317 17:25:36.851087 2392 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:36.852577 kubelet[2392]: I0317 17:25:36.852255 2392 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:25:36.852875 kubelet[2392]: I0317 17:25:36.852825 2392 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:25:36.852958 kubelet[2392]: W0317 17:25:36.852919 2392 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 17 17:25:36.854093 kubelet[2392]: I0317 17:25:36.854050 2392 server.go:1264] "Started kubelet" Mar 17 17:25:36.858762 kubelet[2392]: I0317 17:25:36.858669 2392 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:25:36.860469 kubelet[2392]: I0317 17:25:36.859419 2392 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:25:36.860469 kubelet[2392]: I0317 17:25:36.859583 2392 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:25:36.860469 kubelet[2392]: I0317 17:25:36.860092 2392 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:25:36.861594 kubelet[2392]: I0317 17:25:36.861563 2392 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:25:36.870040 kubelet[2392]: I0317 17:25:36.869982 2392 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:25:36.870418 kubelet[2392]: I0317 17:25:36.870392 2392 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:25:36.872470 kubelet[2392]: I0317 17:25:36.872418 2392 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:25:36.875304 kubelet[2392]: E0317 17:25:36.875260 2392 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:25:36.875847 kubelet[2392]: I0317 17:25:36.875808 2392 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:25:36.876179 kubelet[2392]: I0317 17:25:36.876145 2392 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:25:36.878460 kubelet[2392]: I0317 17:25:36.878404 2392 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:25:36.907258 kubelet[2392]: I0317 17:25:36.906561 2392 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:25:36.907258 kubelet[2392]: I0317 17:25:36.906593 2392 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:25:36.907258 kubelet[2392]: I0317 17:25:36.906628 2392 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:25:36.909622 kubelet[2392]: I0317 17:25:36.909579 2392 policy_none.go:49] "None policy: Start" Mar 17 17:25:36.911559 kubelet[2392]: I0317 17:25:36.910948 2392 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:25:36.911559 kubelet[2392]: I0317 17:25:36.910995 2392 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:25:36.924335 kubelet[2392]: W0317 17:25:36.922198 2392 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 17 17:25:36.924335 kubelet[2392]: E0317 17:25:36.922253 2392 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 17 
17:25:36.924335 kubelet[2392]: E0317 17:25:36.922336 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.28.142\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Mar 17 17:25:36.924335 kubelet[2392]: W0317 17:25:36.922568 2392 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.28.142" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 17 17:25:36.924335 kubelet[2392]: E0317 17:25:36.922595 2392 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.28.142" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 17 17:25:36.924335 kubelet[2392]: W0317 17:25:36.922655 2392 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 17 17:25:36.924335 kubelet[2392]: E0317 17:25:36.922678 2392 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 17 17:25:36.924068 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Mar 17 17:25:36.925215 kubelet[2392]: E0317 17:25:36.922833 2392 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.28.142.182da7146261a30f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.28.142,UID:172.31.28.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.28.142,},FirstTimestamp:2025-03-17 17:25:36.854000399 +0000 UTC m=+0.976681830,LastTimestamp:2025-03-17 17:25:36.854000399 +0000 UTC m=+0.976681830,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.28.142,}" Mar 17 17:25:36.939549 kubelet[2392]: E0317 17:25:36.939385 2392 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.28.142.182da71463a5aa8f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.28.142,UID:172.31.28.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.28.142,},FirstTimestamp:2025-03-17 17:25:36.875235983 +0000 UTC m=+0.997917462,LastTimestamp:2025-03-17 17:25:36.875235983 +0000 UTC m=+0.997917462,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.28.142,}" Mar 17 17:25:36.941742 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Mar 17 17:25:36.952314 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 17 17:25:36.958848 kubelet[2392]: E0317 17:25:36.958180 2392 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.28.142.182da7146564d6e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.28.142,UID:172.31.28.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.28.142 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.28.142,},FirstTimestamp:2025-03-17 17:25:36.904541927 +0000 UTC m=+1.027223358,LastTimestamp:2025-03-17 17:25:36.904541927 +0000 UTC m=+1.027223358,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.28.142,}"
Mar 17 17:25:36.962828 kubelet[2392]: E0317 17:25:36.962528 2392 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.28.142.182da71465650867 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.28.142,UID:172.31.28.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 172.31.28.142 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:172.31.28.142,},FirstTimestamp:2025-03-17 17:25:36.904554599 +0000 UTC m=+1.027236030,LastTimestamp:2025-03-17 17:25:36.904554599 +0000 UTC m=+1.027236030,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.28.142,}"
Mar 17 17:25:36.967157 kubelet[2392]: I0317 17:25:36.966535 2392 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 17:25:36.967157 kubelet[2392]: I0317 17:25:36.966859 2392 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 17:25:36.967157 kubelet[2392]: I0317 17:25:36.967052 2392 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:25:36.976029 kubelet[2392]: I0317 17:25:36.974485 2392 kubelet_node_status.go:73] "Attempting to register node" node="172.31.28.142"
Mar 17 17:25:36.977727 kubelet[2392]: E0317 17:25:36.977577 2392 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.28.142\" not found"
Mar 17 17:25:36.992869 kubelet[2392]: E0317 17:25:36.992462 2392 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.28.142.182da71465651cd7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.28.142,UID:172.31.28.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 172.31.28.142 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:172.31.28.142,},FirstTimestamp:2025-03-17 17:25:36.904559831 +0000 UTC m=+1.027241262,LastTimestamp:2025-03-17 17:25:36.904559831 +0000 UTC m=+1.027241262,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.28.142,}"
Mar 17 17:25:36.993083 kubelet[2392]: E0317 17:25:36.992893 2392 kubelet_node_status.go:96] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.28.142"
Mar 17 17:25:37.015354 kubelet[2392]: E0317 17:25:37.015218 2392 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.28.142.182da714697144f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.28.142,UID:172.31.28.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:172.31.28.142,},FirstTimestamp:2025-03-17 17:25:36.972465396 +0000 UTC m=+1.095146827,LastTimestamp:2025-03-17 17:25:36.972465396 +0000 UTC m=+1.095146827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.28.142,}"
Mar 17 17:25:37.026690 kubelet[2392]: I0317 17:25:37.026616 2392 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:25:37.031653 kubelet[2392]: I0317 17:25:37.031612 2392 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:25:37.031999 kubelet[2392]: I0317 17:25:37.031891 2392 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 17:25:37.031999 kubelet[2392]: I0317 17:25:37.031951 2392 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 17:25:37.032206 kubelet[2392]: E0317 17:25:37.032158 2392 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 17 17:25:37.135375 kubelet[2392]: E0317 17:25:37.134653 2392 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.28.142\" not found" node="172.31.28.142"
Mar 17 17:25:37.194135 kubelet[2392]: I0317 17:25:37.194091 2392 kubelet_node_status.go:73] "Attempting to register node" node="172.31.28.142"
Mar 17 17:25:37.217389 kubelet[2392]: I0317 17:25:37.217354 2392 kubelet_node_status.go:76] "Successfully registered node" node="172.31.28.142"
Mar 17 17:25:37.254459 kubelet[2392]: E0317 17:25:37.254381 2392 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.142\" not found"
Mar 17 17:25:37.354741 kubelet[2392]: E0317 17:25:37.354700 2392 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.142\" not found"
Mar 17 17:25:37.455362 kubelet[2392]: E0317 17:25:37.455314 2392 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.142\" not found"
Mar 17 17:25:37.556006 kubelet[2392]: E0317 17:25:37.555972 2392 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.142\" not found"
Mar 17 17:25:37.656671 kubelet[2392]: E0317 17:25:37.656613 2392 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.142\" not found"
Mar 17 17:25:37.757253 kubelet[2392]: E0317 17:25:37.757160 2392 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.142\" not found"
Mar 17 17:25:37.795878 kubelet[2392]: I0317 17:25:37.795782 2392 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Mar 17 17:25:37.796040 kubelet[2392]: W0317 17:25:37.795996 2392 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Mar 17 17:25:37.851233 kubelet[2392]: E0317 17:25:37.851169 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:25:37.857425 kubelet[2392]: E0317 17:25:37.857391 2392 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.142\" not found"
Mar 17 17:25:37.904945 sudo[2250]: pam_unix(sudo:session): session closed for user root
Mar 17 17:25:37.927758 sshd[2249]: Connection closed by 139.178.68.195 port 47702
Mar 17 17:25:37.928601 sshd-session[2247]: pam_unix(sshd:session): session closed for user core
Mar 17 17:25:37.934038 systemd-logind[1912]: Session 7 logged out. Waiting for processes to exit.
Mar 17 17:25:37.937089 systemd[1]: sshd@6-172.31.28.142:22-139.178.68.195:47702.service: Deactivated successfully.
Mar 17 17:25:37.940896 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 17:25:37.942934 systemd-logind[1912]: Removed session 7.
Mar 17 17:25:37.957762 kubelet[2392]: E0317 17:25:37.957694 2392 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.142\" not found"
Mar 17 17:25:38.058381 kubelet[2392]: E0317 17:25:38.058248 2392 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.142\" not found"
Mar 17 17:25:38.158953 kubelet[2392]: E0317 17:25:38.158911 2392 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.142\" not found"
Mar 17 17:25:38.259959 kubelet[2392]: E0317 17:25:38.259899 2392 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.142\" not found"
Mar 17 17:25:38.361476 kubelet[2392]: I0317 17:25:38.361092 2392 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Mar 17 17:25:38.361924 containerd[1939]: time="2025-03-17T17:25:38.361564007Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 17:25:38.363026 kubelet[2392]: I0317 17:25:38.362176 2392 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Mar 17 17:25:38.851623 kubelet[2392]: E0317 17:25:38.851520 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:25:38.851623 kubelet[2392]: I0317 17:25:38.851528 2392 apiserver.go:52] "Watching apiserver"
Mar 17 17:25:38.866468 kubelet[2392]: I0317 17:25:38.866042 2392 topology_manager.go:215] "Topology Admit Handler" podUID="9a0fa576-dbd1-4a42-9dcd-41e781c08c82" podNamespace="calico-system" podName="csi-node-driver-xjw9g"
Mar 17 17:25:38.866468 kubelet[2392]: I0317 17:25:38.866162 2392 topology_manager.go:215] "Topology Admit Handler" podUID="7b45ae6c-5806-46cb-9726-ab226b3e4be2" podNamespace="kube-system" podName="kube-proxy-n5lzr"
Mar 17 17:25:38.866468 kubelet[2392]: I0317 17:25:38.866266 2392 topology_manager.go:215] "Topology Admit Handler" podUID="bc23c539-cdc8-497e-84dd-e43f83128719" podNamespace="calico-system" podName="calico-node-hftv6"
Mar 17 17:25:38.866702 kubelet[2392]: E0317 17:25:38.866637 2392 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xjw9g" podUID="9a0fa576-dbd1-4a42-9dcd-41e781c08c82"
Mar 17 17:25:38.871637 kubelet[2392]: I0317 17:25:38.871391 2392 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 17:25:38.879797 systemd[1]: Created slice kubepods-besteffort-podbc23c539_cdc8_497e_84dd_e43f83128719.slice - libcontainer container kubepods-besteffort-podbc23c539_cdc8_497e_84dd_e43f83128719.slice.
Mar 17 17:25:38.885902 kubelet[2392]: I0317 17:25:38.885852 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc23c539-cdc8-497e-84dd-e43f83128719-tigera-ca-bundle\") pod \"calico-node-hftv6\" (UID: \"bc23c539-cdc8-497e-84dd-e43f83128719\") " pod="calico-system/calico-node-hftv6"
Mar 17 17:25:38.886110 kubelet[2392]: I0317 17:25:38.886066 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bc23c539-cdc8-497e-84dd-e43f83128719-flexvol-driver-host\") pod \"calico-node-hftv6\" (UID: \"bc23c539-cdc8-497e-84dd-e43f83128719\") " pod="calico-system/calico-node-hftv6"
Mar 17 17:25:38.886274 kubelet[2392]: I0317 17:25:38.886247 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bc23c539-cdc8-497e-84dd-e43f83128719-cni-bin-dir\") pod \"calico-node-hftv6\" (UID: \"bc23c539-cdc8-497e-84dd-e43f83128719\") " pod="calico-system/calico-node-hftv6"
Mar 17 17:25:38.887563 kubelet[2392]: I0317 17:25:38.887523 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bc23c539-cdc8-497e-84dd-e43f83128719-cni-log-dir\") pod \"calico-node-hftv6\" (UID: \"bc23c539-cdc8-497e-84dd-e43f83128719\") " pod="calico-system/calico-node-hftv6"
Mar 17 17:25:38.887780 kubelet[2392]: I0317 17:25:38.887752 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqn9p\" (UniqueName: \"kubernetes.io/projected/bc23c539-cdc8-497e-84dd-e43f83128719-kube-api-access-bqn9p\") pod \"calico-node-hftv6\" (UID: \"bc23c539-cdc8-497e-84dd-e43f83128719\") " pod="calico-system/calico-node-hftv6"
Mar 17 17:25:38.887939 kubelet[2392]: I0317 17:25:38.887913 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9a0fa576-dbd1-4a42-9dcd-41e781c08c82-varrun\") pod \"csi-node-driver-xjw9g\" (UID: \"9a0fa576-dbd1-4a42-9dcd-41e781c08c82\") " pod="calico-system/csi-node-driver-xjw9g"
Mar 17 17:25:38.888073 kubelet[2392]: I0317 17:25:38.888048 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a0fa576-dbd1-4a42-9dcd-41e781c08c82-kubelet-dir\") pod \"csi-node-driver-xjw9g\" (UID: \"9a0fa576-dbd1-4a42-9dcd-41e781c08c82\") " pod="calico-system/csi-node-driver-xjw9g"
Mar 17 17:25:38.888215 kubelet[2392]: I0317 17:25:38.888188 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpghq\" (UniqueName: \"kubernetes.io/projected/7b45ae6c-5806-46cb-9726-ab226b3e4be2-kube-api-access-gpghq\") pod \"kube-proxy-n5lzr\" (UID: \"7b45ae6c-5806-46cb-9726-ab226b3e4be2\") " pod="kube-system/kube-proxy-n5lzr"
Mar 17 17:25:38.888346 kubelet[2392]: I0317 17:25:38.888320 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bc23c539-cdc8-497e-84dd-e43f83128719-policysync\") pod \"calico-node-hftv6\" (UID: \"bc23c539-cdc8-497e-84dd-e43f83128719\") " pod="calico-system/calico-node-hftv6"
Mar 17 17:25:38.888512 kubelet[2392]: I0317 17:25:38.888484 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bc23c539-cdc8-497e-84dd-e43f83128719-var-run-calico\") pod \"calico-node-hftv6\" (UID: \"bc23c539-cdc8-497e-84dd-e43f83128719\") " pod="calico-system/calico-node-hftv6"
Mar 17 17:25:38.888716 kubelet[2392]: I0317 17:25:38.888657 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9a0fa576-dbd1-4a42-9dcd-41e781c08c82-registration-dir\") pod \"csi-node-driver-xjw9g\" (UID: \"9a0fa576-dbd1-4a42-9dcd-41e781c08c82\") " pod="calico-system/csi-node-driver-xjw9g"
Mar 17 17:25:38.889317 kubelet[2392]: I0317 17:25:38.889284 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b45ae6c-5806-46cb-9726-ab226b3e4be2-lib-modules\") pod \"kube-proxy-n5lzr\" (UID: \"7b45ae6c-5806-46cb-9726-ab226b3e4be2\") " pod="kube-system/kube-proxy-n5lzr"
Mar 17 17:25:38.890153 kubelet[2392]: I0317 17:25:38.889503 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc23c539-cdc8-497e-84dd-e43f83128719-xtables-lock\") pod \"calico-node-hftv6\" (UID: \"bc23c539-cdc8-497e-84dd-e43f83128719\") " pod="calico-system/calico-node-hftv6"
Mar 17 17:25:38.890153 kubelet[2392]: I0317 17:25:38.889602 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bc23c539-cdc8-497e-84dd-e43f83128719-node-certs\") pod \"calico-node-hftv6\" (UID: \"bc23c539-cdc8-497e-84dd-e43f83128719\") " pod="calico-system/calico-node-hftv6"
Mar 17 17:25:38.890153 kubelet[2392]: I0317 17:25:38.889646 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bc23c539-cdc8-497e-84dd-e43f83128719-cni-net-dir\") pod \"calico-node-hftv6\" (UID: \"bc23c539-cdc8-497e-84dd-e43f83128719\") " pod="calico-system/calico-node-hftv6"
Mar 17 17:25:38.890153 kubelet[2392]: I0317 17:25:38.889682 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bc23c539-cdc8-497e-84dd-e43f83128719-var-lib-calico\") pod \"calico-node-hftv6\" (UID: \"bc23c539-cdc8-497e-84dd-e43f83128719\") " pod="calico-system/calico-node-hftv6"
Mar 17 17:25:38.890153 kubelet[2392]: I0317 17:25:38.889731 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9a0fa576-dbd1-4a42-9dcd-41e781c08c82-socket-dir\") pod \"csi-node-driver-xjw9g\" (UID: \"9a0fa576-dbd1-4a42-9dcd-41e781c08c82\") " pod="calico-system/csi-node-driver-xjw9g"
Mar 17 17:25:38.890546 kubelet[2392]: I0317 17:25:38.889802 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddgb4\" (UniqueName: \"kubernetes.io/projected/9a0fa576-dbd1-4a42-9dcd-41e781c08c82-kube-api-access-ddgb4\") pod \"csi-node-driver-xjw9g\" (UID: \"9a0fa576-dbd1-4a42-9dcd-41e781c08c82\") " pod="calico-system/csi-node-driver-xjw9g"
Mar 17 17:25:38.890546 kubelet[2392]: I0317 17:25:38.889842 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7b45ae6c-5806-46cb-9726-ab226b3e4be2-kube-proxy\") pod \"kube-proxy-n5lzr\" (UID: \"7b45ae6c-5806-46cb-9726-ab226b3e4be2\") " pod="kube-system/kube-proxy-n5lzr"
Mar 17 17:25:38.890546 kubelet[2392]: I0317 17:25:38.889879 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b45ae6c-5806-46cb-9726-ab226b3e4be2-xtables-lock\") pod \"kube-proxy-n5lzr\" (UID: \"7b45ae6c-5806-46cb-9726-ab226b3e4be2\") " pod="kube-system/kube-proxy-n5lzr"
Mar 17 17:25:38.890546 kubelet[2392]: I0317 17:25:38.889931 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc23c539-cdc8-497e-84dd-e43f83128719-lib-modules\") pod \"calico-node-hftv6\" (UID: \"bc23c539-cdc8-497e-84dd-e43f83128719\") " pod="calico-system/calico-node-hftv6"
Mar 17 17:25:38.897298 systemd[1]: Created slice kubepods-besteffort-pod7b45ae6c_5806_46cb_9726_ab226b3e4be2.slice - libcontainer container kubepods-besteffort-pod7b45ae6c_5806_46cb_9726_ab226b3e4be2.slice.
Mar 17 17:25:38.995853 kubelet[2392]: E0317 17:25:38.995594 2392 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:25:38.995853 kubelet[2392]: W0317 17:25:38.995633 2392 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:25:38.995853 kubelet[2392]: E0317 17:25:38.995687 2392 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:25:38.996541 kubelet[2392]: E0317 17:25:38.996390 2392 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:25:38.996541 kubelet[2392]: W0317 17:25:38.996418 2392 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:25:38.996541 kubelet[2392]: E0317 17:25:38.996475 2392 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:25:39.047694 kubelet[2392]: E0317 17:25:39.047634 2392 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:25:39.047694 kubelet[2392]: W0317 17:25:39.047671 2392 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:25:39.048572 kubelet[2392]: E0317 17:25:39.047703 2392 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:25:39.195124 containerd[1939]: time="2025-03-17T17:25:39.195029807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hftv6,Uid:bc23c539-cdc8-497e-84dd-e43f83128719,Namespace:calico-system,Attempt:0,}" Mar 17 17:25:39.205802 containerd[1939]: time="2025-03-17T17:25:39.205312079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n5lzr,Uid:7b45ae6c-5806-46cb-9726-ab226b3e4be2,Namespace:kube-system,Attempt:0,}" Mar 17 17:25:39.765508 containerd[1939]: time="2025-03-17T17:25:39.764915810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:25:39.768421 containerd[1939]: time="2025-03-17T17:25:39.768355370Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 17 17:25:39.770369 containerd[1939]: time="2025-03-17T17:25:39.770315126Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:25:39.772223 containerd[1939]: time="2025-03-17T17:25:39.771883154Z" level=info 
msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:25:39.772223 containerd[1939]: time="2025-03-17T17:25:39.772172786Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:25:39.779515 containerd[1939]: time="2025-03-17T17:25:39.779452406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:25:39.781601 containerd[1939]: time="2025-03-17T17:25:39.781546610Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 586.399287ms" Mar 17 17:25:39.784627 containerd[1939]: time="2025-03-17T17:25:39.784573262Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 579.154035ms" Mar 17 17:25:39.852562 kubelet[2392]: E0317 17:25:39.852505 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:40.003457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22100748.mount: Deactivated successfully. 
Mar 17 17:25:40.033579 kubelet[2392]: E0317 17:25:40.033363 2392 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xjw9g" podUID="9a0fa576-dbd1-4a42-9dcd-41e781c08c82" Mar 17 17:25:40.125559 containerd[1939]: time="2025-03-17T17:25:40.125224259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:40.125685 containerd[1939]: time="2025-03-17T17:25:40.125613539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:40.125761 containerd[1939]: time="2025-03-17T17:25:40.125704955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:40.126204 containerd[1939]: time="2025-03-17T17:25:40.126037091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:40.132239 containerd[1939]: time="2025-03-17T17:25:40.132062963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:40.132239 containerd[1939]: time="2025-03-17T17:25:40.132183659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:40.132964 containerd[1939]: time="2025-03-17T17:25:40.132735695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:40.147667 containerd[1939]: time="2025-03-17T17:25:40.147524448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:40.277757 systemd[1]: Started cri-containerd-a7ff313d753d537b9e1db4bc03cb32a8a763ff699edea63c8e5b591ff8e9adb2.scope - libcontainer container a7ff313d753d537b9e1db4bc03cb32a8a763ff699edea63c8e5b591ff8e9adb2. Mar 17 17:25:40.294109 systemd[1]: Started cri-containerd-cc7ac7a4691cf640a2c66dec36f4bf68514f8a4b90b6c58400d73b9d92786788.scope - libcontainer container cc7ac7a4691cf640a2c66dec36f4bf68514f8a4b90b6c58400d73b9d92786788. Mar 17 17:25:40.352618 containerd[1939]: time="2025-03-17T17:25:40.352494553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hftv6,Uid:bc23c539-cdc8-497e-84dd-e43f83128719,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc7ac7a4691cf640a2c66dec36f4bf68514f8a4b90b6c58400d73b9d92786788\"" Mar 17 17:25:40.359061 containerd[1939]: time="2025-03-17T17:25:40.358871329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 17 17:25:40.361698 containerd[1939]: time="2025-03-17T17:25:40.361587445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n5lzr,Uid:7b45ae6c-5806-46cb-9726-ab226b3e4be2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7ff313d753d537b9e1db4bc03cb32a8a763ff699edea63c8e5b591ff8e9adb2\"" Mar 17 17:25:40.852967 kubelet[2392]: E0317 17:25:40.852902 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:41.602691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2054810330.mount: Deactivated successfully. 
Mar 17 17:25:41.853616 kubelet[2392]: E0317 17:25:41.853487 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:41.913074 containerd[1939]: time="2025-03-17T17:25:41.913015912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:41.915495 containerd[1939]: time="2025-03-17T17:25:41.915386584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=6490047" Mar 17 17:25:41.916972 containerd[1939]: time="2025-03-17T17:25:41.916902316Z" level=info msg="ImageCreate event name:\"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:41.920580 containerd[1939]: time="2025-03-17T17:25:41.920485648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:41.922020 containerd[1939]: time="2025-03-17T17:25:41.921763696Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6489869\" in 1.562828287s" Mar 17 17:25:41.922020 containerd[1939]: time="2025-03-17T17:25:41.921817624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\"" Mar 17 17:25:41.925304 containerd[1939]: time="2025-03-17T17:25:41.924989668Z" level=info 
msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 17:25:41.928300 containerd[1939]: time="2025-03-17T17:25:41.928017436Z" level=info msg="CreateContainer within sandbox \"cc7ac7a4691cf640a2c66dec36f4bf68514f8a4b90b6c58400d73b9d92786788\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 17:25:41.951140 containerd[1939]: time="2025-03-17T17:25:41.951083692Z" level=info msg="CreateContainer within sandbox \"cc7ac7a4691cf640a2c66dec36f4bf68514f8a4b90b6c58400d73b9d92786788\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b6efbc0b719c634feb5891a402dd7b33acdb484c4a236855aafa2a02b0381a58\"" Mar 17 17:25:41.952475 containerd[1939]: time="2025-03-17T17:25:41.952331464Z" level=info msg="StartContainer for \"b6efbc0b719c634feb5891a402dd7b33acdb484c4a236855aafa2a02b0381a58\"" Mar 17 17:25:42.006747 systemd[1]: Started cri-containerd-b6efbc0b719c634feb5891a402dd7b33acdb484c4a236855aafa2a02b0381a58.scope - libcontainer container b6efbc0b719c634feb5891a402dd7b33acdb484c4a236855aafa2a02b0381a58. Mar 17 17:25:42.032937 kubelet[2392]: E0317 17:25:42.032505 2392 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xjw9g" podUID="9a0fa576-dbd1-4a42-9dcd-41e781c08c82" Mar 17 17:25:42.070977 containerd[1939]: time="2025-03-17T17:25:42.070882429Z" level=info msg="StartContainer for \"b6efbc0b719c634feb5891a402dd7b33acdb484c4a236855aafa2a02b0381a58\" returns successfully" Mar 17 17:25:42.092752 systemd[1]: cri-containerd-b6efbc0b719c634feb5891a402dd7b33acdb484c4a236855aafa2a02b0381a58.scope: Deactivated successfully. 
Mar 17 17:25:42.182963 containerd[1939]: time="2025-03-17T17:25:42.182709686Z" level=info msg="shim disconnected" id=b6efbc0b719c634feb5891a402dd7b33acdb484c4a236855aafa2a02b0381a58 namespace=k8s.io Mar 17 17:25:42.182963 containerd[1939]: time="2025-03-17T17:25:42.182822438Z" level=warning msg="cleaning up after shim disconnected" id=b6efbc0b719c634feb5891a402dd7b33acdb484c4a236855aafa2a02b0381a58 namespace=k8s.io Mar 17 17:25:42.182963 containerd[1939]: time="2025-03-17T17:25:42.182842466Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:25:42.557943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6efbc0b719c634feb5891a402dd7b33acdb484c4a236855aafa2a02b0381a58-rootfs.mount: Deactivated successfully. Mar 17 17:25:42.855735 kubelet[2392]: E0317 17:25:42.855564 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:43.217911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount811690537.mount: Deactivated successfully. 
Mar 17 17:25:43.690563 containerd[1939]: time="2025-03-17T17:25:43.690500681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:43.691939 containerd[1939]: time="2025-03-17T17:25:43.691874417Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=25771848" Mar 17 17:25:43.692966 containerd[1939]: time="2025-03-17T17:25:43.692878841Z" level=info msg="ImageCreate event name:\"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:43.696500 containerd[1939]: time="2025-03-17T17:25:43.696387305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:43.698105 containerd[1939]: time="2025-03-17T17:25:43.697923005Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"25770867\" in 1.772877009s" Mar 17 17:25:43.698105 containerd[1939]: time="2025-03-17T17:25:43.697972553Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\"" Mar 17 17:25:43.700775 containerd[1939]: time="2025-03-17T17:25:43.700727933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 17 17:25:43.702260 containerd[1939]: time="2025-03-17T17:25:43.702188921Z" level=info msg="CreateContainer within sandbox \"a7ff313d753d537b9e1db4bc03cb32a8a763ff699edea63c8e5b591ff8e9adb2\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:25:43.738529 containerd[1939]: time="2025-03-17T17:25:43.738457865Z" level=info msg="CreateContainer within sandbox \"a7ff313d753d537b9e1db4bc03cb32a8a763ff699edea63c8e5b591ff8e9adb2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c781eb31f3af3b7a861730139ef77b3daeaa2d7d300988b58ab27f89c69ab674\"" Mar 17 17:25:43.739542 containerd[1939]: time="2025-03-17T17:25:43.739410005Z" level=info msg="StartContainer for \"c781eb31f3af3b7a861730139ef77b3daeaa2d7d300988b58ab27f89c69ab674\"" Mar 17 17:25:43.799777 systemd[1]: Started cri-containerd-c781eb31f3af3b7a861730139ef77b3daeaa2d7d300988b58ab27f89c69ab674.scope - libcontainer container c781eb31f3af3b7a861730139ef77b3daeaa2d7d300988b58ab27f89c69ab674. Mar 17 17:25:43.855815 kubelet[2392]: E0317 17:25:43.855721 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:43.857544 containerd[1939]: time="2025-03-17T17:25:43.857475030Z" level=info msg="StartContainer for \"c781eb31f3af3b7a861730139ef77b3daeaa2d7d300988b58ab27f89c69ab674\" returns successfully" Mar 17 17:25:44.033248 kubelet[2392]: E0317 17:25:44.032683 2392 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xjw9g" podUID="9a0fa576-dbd1-4a42-9dcd-41e781c08c82" Mar 17 17:25:44.856126 kubelet[2392]: E0317 17:25:44.856066 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:45.856395 kubelet[2392]: E0317 17:25:45.856327 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:46.033150 kubelet[2392]: E0317 17:25:46.033089 2392 
pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xjw9g" podUID="9a0fa576-dbd1-4a42-9dcd-41e781c08c82" Mar 17 17:25:46.857004 kubelet[2392]: E0317 17:25:46.856948 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:47.263099 containerd[1939]: time="2025-03-17T17:25:47.263044339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:47.265079 containerd[1939]: time="2025-03-17T17:25:47.265008679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=91227396" Mar 17 17:25:47.265354 containerd[1939]: time="2025-03-17T17:25:47.265212979Z" level=info msg="ImageCreate event name:\"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:47.268868 containerd[1939]: time="2025-03-17T17:25:47.268778443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:47.273220 containerd[1939]: time="2025-03-17T17:25:47.273152779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"92597153\" in 3.571816362s" Mar 17 17:25:47.275470 containerd[1939]: time="2025-03-17T17:25:47.273386443Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\"" Mar 17 17:25:47.287803 containerd[1939]: time="2025-03-17T17:25:47.287732131Z" level=info msg="CreateContainer within sandbox \"cc7ac7a4691cf640a2c66dec36f4bf68514f8a4b90b6c58400d73b9d92786788\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 17:25:47.310682 containerd[1939]: time="2025-03-17T17:25:47.310627459Z" level=info msg="CreateContainer within sandbox \"cc7ac7a4691cf640a2c66dec36f4bf68514f8a4b90b6c58400d73b9d92786788\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"820eecbc244110b849af057a95fbbd6b341fced7b4165a81734a01ac31384f93\"" Mar 17 17:25:47.311546 containerd[1939]: time="2025-03-17T17:25:47.311500663Z" level=info msg="StartContainer for \"820eecbc244110b849af057a95fbbd6b341fced7b4165a81734a01ac31384f93\"" Mar 17 17:25:47.366740 systemd[1]: Started cri-containerd-820eecbc244110b849af057a95fbbd6b341fced7b4165a81734a01ac31384f93.scope - libcontainer container 820eecbc244110b849af057a95fbbd6b341fced7b4165a81734a01ac31384f93. 
Mar 17 17:25:47.429767 containerd[1939]: time="2025-03-17T17:25:47.429700580Z" level=info msg="StartContainer for \"820eecbc244110b849af057a95fbbd6b341fced7b4165a81734a01ac31384f93\" returns successfully" Mar 17 17:25:47.858295 kubelet[2392]: E0317 17:25:47.858246 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:48.032498 kubelet[2392]: E0317 17:25:48.032392 2392 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xjw9g" podUID="9a0fa576-dbd1-4a42-9dcd-41e781c08c82" Mar 17 17:25:48.141305 kubelet[2392]: I0317 17:25:48.141097 2392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n5lzr" podStartSLOduration=7.806931055 podStartE2EDuration="11.141075583s" podCreationTimestamp="2025-03-17 17:25:37 +0000 UTC" firstStartedPulling="2025-03-17 17:25:40.365563081 +0000 UTC m=+4.488244512" lastFinishedPulling="2025-03-17 17:25:43.699707525 +0000 UTC m=+7.822389040" observedRunningTime="2025-03-17 17:25:44.089565231 +0000 UTC m=+8.212246650" watchObservedRunningTime="2025-03-17 17:25:48.141075583 +0000 UTC m=+12.263757014" Mar 17 17:25:48.383847 containerd[1939]: time="2025-03-17T17:25:48.383721284Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:25:48.387548 systemd[1]: cri-containerd-820eecbc244110b849af057a95fbbd6b341fced7b4165a81734a01ac31384f93.scope: Deactivated successfully. 
Mar 17 17:25:48.422613 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-820eecbc244110b849af057a95fbbd6b341fced7b4165a81734a01ac31384f93-rootfs.mount: Deactivated successfully. Mar 17 17:25:48.455477 kubelet[2392]: I0317 17:25:48.455390 2392 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 17:25:48.859527 kubelet[2392]: E0317 17:25:48.859335 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:49.838363 containerd[1939]: time="2025-03-17T17:25:49.838268664Z" level=info msg="shim disconnected" id=820eecbc244110b849af057a95fbbd6b341fced7b4165a81734a01ac31384f93 namespace=k8s.io Mar 17 17:25:49.838363 containerd[1939]: time="2025-03-17T17:25:49.838359828Z" level=warning msg="cleaning up after shim disconnected" id=820eecbc244110b849af057a95fbbd6b341fced7b4165a81734a01ac31384f93 namespace=k8s.io Mar 17 17:25:49.839005 containerd[1939]: time="2025-03-17T17:25:49.838380756Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:25:49.860011 kubelet[2392]: E0317 17:25:49.859943 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:50.043159 systemd[1]: Created slice kubepods-besteffort-pod9a0fa576_dbd1_4a42_9dcd_41e781c08c82.slice - libcontainer container kubepods-besteffort-pod9a0fa576_dbd1_4a42_9dcd_41e781c08c82.slice. 
Mar 17 17:25:50.047305 containerd[1939]: time="2025-03-17T17:25:50.047231649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xjw9g,Uid:9a0fa576-dbd1-4a42-9dcd-41e781c08c82,Namespace:calico-system,Attempt:0,}" Mar 17 17:25:50.094481 containerd[1939]: time="2025-03-17T17:25:50.093335217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 17 17:25:50.160168 containerd[1939]: time="2025-03-17T17:25:50.160095633Z" level=error msg="Failed to destroy network for sandbox \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:50.161223 containerd[1939]: time="2025-03-17T17:25:50.161156985Z" level=error msg="encountered an error cleaning up failed sandbox \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:50.161332 containerd[1939]: time="2025-03-17T17:25:50.161269677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xjw9g,Uid:9a0fa576-dbd1-4a42-9dcd-41e781c08c82,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:50.163501 kubelet[2392]: E0317 17:25:50.161662 2392 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:50.163501 kubelet[2392]: E0317 17:25:50.161756 2392 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xjw9g" Mar 17 17:25:50.163501 kubelet[2392]: E0317 17:25:50.161790 2392 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xjw9g" Mar 17 17:25:50.163792 kubelet[2392]: E0317 17:25:50.161857 2392 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xjw9g_calico-system(9a0fa576-dbd1-4a42-9dcd-41e781c08c82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xjw9g_calico-system(9a0fa576-dbd1-4a42-9dcd-41e781c08c82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xjw9g" 
podUID="9a0fa576-dbd1-4a42-9dcd-41e781c08c82" Mar 17 17:25:50.164590 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a-shm.mount: Deactivated successfully. Mar 17 17:25:50.860180 kubelet[2392]: E0317 17:25:50.860105 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:50.911830 kubelet[2392]: I0317 17:25:50.911197 2392 topology_manager.go:215] "Topology Admit Handler" podUID="45d1b902-7741-4329-b5f7-6883992bfb94" podNamespace="default" podName="nginx-deployment-85f456d6dd-l46w9" Mar 17 17:25:50.921729 systemd[1]: Created slice kubepods-besteffort-pod45d1b902_7741_4329_b5f7_6883992bfb94.slice - libcontainer container kubepods-besteffort-pod45d1b902_7741_4329_b5f7_6883992bfb94.slice. Mar 17 17:25:51.024402 kubelet[2392]: I0317 17:25:51.024332 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tftpd\" (UniqueName: \"kubernetes.io/projected/45d1b902-7741-4329-b5f7-6883992bfb94-kube-api-access-tftpd\") pod \"nginx-deployment-85f456d6dd-l46w9\" (UID: \"45d1b902-7741-4329-b5f7-6883992bfb94\") " pod="default/nginx-deployment-85f456d6dd-l46w9" Mar 17 17:25:51.093601 kubelet[2392]: I0317 17:25:51.093459 2392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a" Mar 17 17:25:51.094624 containerd[1939]: time="2025-03-17T17:25:51.094309822Z" level=info msg="StopPodSandbox for \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\"" Mar 17 17:25:51.094624 containerd[1939]: time="2025-03-17T17:25:51.094616734Z" level=info msg="Ensure that sandbox 6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a in task-service has been cleanup successfully" Mar 17 17:25:51.095827 containerd[1939]: time="2025-03-17T17:25:51.095200474Z" level=info 
msg="TearDown network for sandbox \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\" successfully" Mar 17 17:25:51.095827 containerd[1939]: time="2025-03-17T17:25:51.095288830Z" level=info msg="StopPodSandbox for \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\" returns successfully" Mar 17 17:25:51.098417 containerd[1939]: time="2025-03-17T17:25:51.096288118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xjw9g,Uid:9a0fa576-dbd1-4a42-9dcd-41e781c08c82,Namespace:calico-system,Attempt:1,}" Mar 17 17:25:51.099532 systemd[1]: run-netns-cni\x2d0672c0b1\x2d5d91\x2d2026\x2d2433\x2d2349f9c17a17.mount: Deactivated successfully. Mar 17 17:25:51.217139 containerd[1939]: time="2025-03-17T17:25:51.216926831Z" level=error msg="Failed to destroy network for sandbox \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:51.219470 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c-shm.mount: Deactivated successfully. 
Mar 17 17:25:51.220816 containerd[1939]: time="2025-03-17T17:25:51.220525451Z" level=error msg="encountered an error cleaning up failed sandbox \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:51.220816 containerd[1939]: time="2025-03-17T17:25:51.220751975Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xjw9g,Uid:9a0fa576-dbd1-4a42-9dcd-41e781c08c82,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:51.221915 kubelet[2392]: E0317 17:25:51.221703 2392 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:51.221915 kubelet[2392]: E0317 17:25:51.221785 2392 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xjw9g" Mar 17 17:25:51.221915 kubelet[2392]: E0317 17:25:51.221820 2392 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xjw9g" Mar 17 17:25:51.222393 kubelet[2392]: E0317 17:25:51.221897 2392 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xjw9g_calico-system(9a0fa576-dbd1-4a42-9dcd-41e781c08c82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xjw9g_calico-system(9a0fa576-dbd1-4a42-9dcd-41e781c08c82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xjw9g" podUID="9a0fa576-dbd1-4a42-9dcd-41e781c08c82" Mar 17 17:25:51.228259 containerd[1939]: time="2025-03-17T17:25:51.227741903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-l46w9,Uid:45d1b902-7741-4329-b5f7-6883992bfb94,Namespace:default,Attempt:0,}" Mar 17 17:25:51.337814 containerd[1939]: time="2025-03-17T17:25:51.337740251Z" level=error msg="Failed to destroy network for sandbox \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:51.339260 containerd[1939]: time="2025-03-17T17:25:51.338336807Z" level=error msg="encountered an error cleaning up failed sandbox 
\"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:51.339260 containerd[1939]: time="2025-03-17T17:25:51.338460875Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-l46w9,Uid:45d1b902-7741-4329-b5f7-6883992bfb94,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:51.339571 kubelet[2392]: E0317 17:25:51.338738 2392 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:51.339571 kubelet[2392]: E0317 17:25:51.338816 2392 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-l46w9" Mar 17 17:25:51.339571 kubelet[2392]: E0317 17:25:51.338849 2392 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-l46w9" Mar 17 17:25:51.339751 kubelet[2392]: E0317 17:25:51.338911 2392 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-l46w9_default(45d1b902-7741-4329-b5f7-6883992bfb94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-l46w9_default(45d1b902-7741-4329-b5f7-6883992bfb94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-l46w9" podUID="45d1b902-7741-4329-b5f7-6883992bfb94" Mar 17 17:25:51.860774 kubelet[2392]: E0317 17:25:51.860700 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:52.115293 kubelet[2392]: I0317 17:25:52.113056 2392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c" Mar 17 17:25:52.115457 containerd[1939]: time="2025-03-17T17:25:52.114657671Z" level=info msg="StopPodSandbox for \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\"" Mar 17 17:25:52.115457 containerd[1939]: time="2025-03-17T17:25:52.114949907Z" level=info msg="Ensure that sandbox a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c in task-service has been cleanup successfully" Mar 17 17:25:52.119126 systemd[1]: 
run-netns-cni\x2dd81ac7c1\x2d3263\x2ddb60\x2dd9b5\x2d650c7aac7e9c.mount: Deactivated successfully. Mar 17 17:25:52.122410 containerd[1939]: time="2025-03-17T17:25:52.121655351Z" level=info msg="TearDown network for sandbox \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\" successfully" Mar 17 17:25:52.122410 containerd[1939]: time="2025-03-17T17:25:52.121713203Z" level=info msg="StopPodSandbox for \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\" returns successfully" Mar 17 17:25:52.123137 kubelet[2392]: I0317 17:25:52.122935 2392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508" Mar 17 17:25:52.125470 containerd[1939]: time="2025-03-17T17:25:52.125007851Z" level=info msg="StopPodSandbox for \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\"" Mar 17 17:25:52.125470 containerd[1939]: time="2025-03-17T17:25:52.125361635Z" level=info msg="Ensure that sandbox e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508 in task-service has been cleanup successfully" Mar 17 17:25:52.128465 containerd[1939]: time="2025-03-17T17:25:52.126033695Z" level=info msg="StopPodSandbox for \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\"" Mar 17 17:25:52.128465 containerd[1939]: time="2025-03-17T17:25:52.126206795Z" level=info msg="TearDown network for sandbox \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\" successfully" Mar 17 17:25:52.128465 containerd[1939]: time="2025-03-17T17:25:52.126229523Z" level=info msg="StopPodSandbox for \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\" returns successfully" Mar 17 17:25:52.129329 containerd[1939]: time="2025-03-17T17:25:52.129272987Z" level=info msg="TearDown network for sandbox \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\" successfully" Mar 17 17:25:52.129329 containerd[1939]: 
time="2025-03-17T17:25:52.129322811Z" level=info msg="StopPodSandbox for \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\" returns successfully" Mar 17 17:25:52.130256 containerd[1939]: time="2025-03-17T17:25:52.130053887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xjw9g,Uid:9a0fa576-dbd1-4a42-9dcd-41e781c08c82,Namespace:calico-system,Attempt:2,}" Mar 17 17:25:52.130696 systemd[1]: run-netns-cni\x2d0c4ff734\x2d7e81\x2d2d39\x2d612e\x2dd00e5a29a799.mount: Deactivated successfully. Mar 17 17:25:52.135325 containerd[1939]: time="2025-03-17T17:25:52.134603843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-l46w9,Uid:45d1b902-7741-4329-b5f7-6883992bfb94,Namespace:default,Attempt:1,}" Mar 17 17:25:52.352779 containerd[1939]: time="2025-03-17T17:25:52.352138668Z" level=error msg="Failed to destroy network for sandbox \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:52.354808 containerd[1939]: time="2025-03-17T17:25:52.354731988Z" level=error msg="encountered an error cleaning up failed sandbox \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:52.354970 containerd[1939]: time="2025-03-17T17:25:52.354849504Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xjw9g,Uid:9a0fa576-dbd1-4a42-9dcd-41e781c08c82,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:52.355177 kubelet[2392]: E0317 17:25:52.355119 2392 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:52.355288 kubelet[2392]: E0317 17:25:52.355206 2392 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xjw9g" Mar 17 17:25:52.355288 kubelet[2392]: E0317 17:25:52.355244 2392 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xjw9g" Mar 17 17:25:52.355408 kubelet[2392]: E0317 17:25:52.355321 2392 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xjw9g_calico-system(9a0fa576-dbd1-4a42-9dcd-41e781c08c82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xjw9g_calico-system(9a0fa576-dbd1-4a42-9dcd-41e781c08c82)\\\": rpc error: code 
= Unknown desc = failed to setup network for sandbox \\\"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xjw9g" podUID="9a0fa576-dbd1-4a42-9dcd-41e781c08c82" Mar 17 17:25:52.378783 containerd[1939]: time="2025-03-17T17:25:52.378483468Z" level=error msg="Failed to destroy network for sandbox \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:52.379473 containerd[1939]: time="2025-03-17T17:25:52.379287036Z" level=error msg="encountered an error cleaning up failed sandbox \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:52.379473 containerd[1939]: time="2025-03-17T17:25:52.379392156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-l46w9,Uid:45d1b902-7741-4329-b5f7-6883992bfb94,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:52.380741 kubelet[2392]: E0317 17:25:52.380231 2392 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:52.380741 kubelet[2392]: E0317 17:25:52.380308 2392 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-l46w9" Mar 17 17:25:52.380741 kubelet[2392]: E0317 17:25:52.380342 2392 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-l46w9" Mar 17 17:25:52.381206 kubelet[2392]: E0317 17:25:52.380405 2392 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-l46w9_default(45d1b902-7741-4329-b5f7-6883992bfb94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-l46w9_default(45d1b902-7741-4329-b5f7-6883992bfb94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="default/nginx-deployment-85f456d6dd-l46w9" podUID="45d1b902-7741-4329-b5f7-6883992bfb94" Mar 17 17:25:52.861798 kubelet[2392]: E0317 17:25:52.861681 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:53.098833 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae-shm.mount: Deactivated successfully. Mar 17 17:25:53.099235 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241-shm.mount: Deactivated successfully. Mar 17 17:25:53.131736 kubelet[2392]: I0317 17:25:53.131496 2392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae" Mar 17 17:25:53.134468 containerd[1939]: time="2025-03-17T17:25:53.134052588Z" level=info msg="StopPodSandbox for \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\"" Mar 17 17:25:53.134468 containerd[1939]: time="2025-03-17T17:25:53.134330616Z" level=info msg="Ensure that sandbox a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae in task-service has been cleanup successfully" Mar 17 17:25:53.138668 containerd[1939]: time="2025-03-17T17:25:53.136996488Z" level=info msg="TearDown network for sandbox \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\" successfully" Mar 17 17:25:53.138668 containerd[1939]: time="2025-03-17T17:25:53.137072316Z" level=info msg="StopPodSandbox for \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\" returns successfully" Mar 17 17:25:53.141561 containerd[1939]: time="2025-03-17T17:25:53.139032912Z" level=info msg="StopPodSandbox for \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\"" Mar 17 17:25:53.141561 containerd[1939]: time="2025-03-17T17:25:53.139198476Z" level=info msg="TearDown network for sandbox 
\"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\" successfully" Mar 17 17:25:53.141561 containerd[1939]: time="2025-03-17T17:25:53.139221108Z" level=info msg="StopPodSandbox for \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\" returns successfully" Mar 17 17:25:53.139946 systemd[1]: run-netns-cni\x2d065f6afe\x2def37\x2dcfbf\x2db63b\x2d9b30825eaadb.mount: Deactivated successfully. Mar 17 17:25:53.142160 containerd[1939]: time="2025-03-17T17:25:53.141677352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-l46w9,Uid:45d1b902-7741-4329-b5f7-6883992bfb94,Namespace:default,Attempt:2,}" Mar 17 17:25:53.145131 kubelet[2392]: I0317 17:25:53.145085 2392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241" Mar 17 17:25:53.146074 containerd[1939]: time="2025-03-17T17:25:53.145879956Z" level=info msg="StopPodSandbox for \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\"" Mar 17 17:25:53.146299 containerd[1939]: time="2025-03-17T17:25:53.146219364Z" level=info msg="Ensure that sandbox bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241 in task-service has been cleanup successfully" Mar 17 17:25:53.147241 containerd[1939]: time="2025-03-17T17:25:53.147184536Z" level=info msg="TearDown network for sandbox \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\" successfully" Mar 17 17:25:53.147241 containerd[1939]: time="2025-03-17T17:25:53.147231648Z" level=info msg="StopPodSandbox for \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\" returns successfully" Mar 17 17:25:53.151159 systemd[1]: run-netns-cni\x2d568fbd28\x2d660d\x2dd8d4\x2d31e5\x2d9f34e4171e97.mount: Deactivated successfully. 
Mar 17 17:25:53.153119 containerd[1939]: time="2025-03-17T17:25:53.152154648Z" level=info msg="StopPodSandbox for \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\"" Mar 17 17:25:53.153819 containerd[1939]: time="2025-03-17T17:25:53.153768660Z" level=info msg="TearDown network for sandbox \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\" successfully" Mar 17 17:25:53.153986 containerd[1939]: time="2025-03-17T17:25:53.153816312Z" level=info msg="StopPodSandbox for \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\" returns successfully" Mar 17 17:25:53.156078 containerd[1939]: time="2025-03-17T17:25:53.156017748Z" level=info msg="StopPodSandbox for \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\"" Mar 17 17:25:53.157457 containerd[1939]: time="2025-03-17T17:25:53.156599520Z" level=info msg="TearDown network for sandbox \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\" successfully" Mar 17 17:25:53.157457 containerd[1939]: time="2025-03-17T17:25:53.156630096Z" level=info msg="StopPodSandbox for \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\" returns successfully" Mar 17 17:25:53.157982 containerd[1939]: time="2025-03-17T17:25:53.157882824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xjw9g,Uid:9a0fa576-dbd1-4a42-9dcd-41e781c08c82,Namespace:calico-system,Attempt:3,}" Mar 17 17:25:53.382878 containerd[1939]: time="2025-03-17T17:25:53.382644469Z" level=error msg="Failed to destroy network for sandbox \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:53.385456 containerd[1939]: time="2025-03-17T17:25:53.385250101Z" level=error msg="encountered an error cleaning up failed sandbox 
\"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:53.385456 containerd[1939]: time="2025-03-17T17:25:53.385364209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-l46w9,Uid:45d1b902-7741-4329-b5f7-6883992bfb94,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:53.388178 kubelet[2392]: E0317 17:25:53.387583 2392 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:53.388178 kubelet[2392]: E0317 17:25:53.387665 2392 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-l46w9" Mar 17 17:25:53.388178 kubelet[2392]: E0317 17:25:53.387699 2392 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-l46w9" Mar 17 17:25:53.388471 kubelet[2392]: E0317 17:25:53.387786 2392 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-l46w9_default(45d1b902-7741-4329-b5f7-6883992bfb94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-l46w9_default(45d1b902-7741-4329-b5f7-6883992bfb94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-l46w9" podUID="45d1b902-7741-4329-b5f7-6883992bfb94" Mar 17 17:25:53.410925 containerd[1939]: time="2025-03-17T17:25:53.410837785Z" level=error msg="Failed to destroy network for sandbox \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:53.413882 containerd[1939]: time="2025-03-17T17:25:53.413805373Z" level=error msg="encountered an error cleaning up failed sandbox \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:53.414036 containerd[1939]: 
time="2025-03-17T17:25:53.413957569Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xjw9g,Uid:9a0fa576-dbd1-4a42-9dcd-41e781c08c82,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:53.414536 kubelet[2392]: E0317 17:25:53.414245 2392 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:53.414536 kubelet[2392]: E0317 17:25:53.414322 2392 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xjw9g" Mar 17 17:25:53.414536 kubelet[2392]: E0317 17:25:53.414365 2392 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xjw9g" Mar 17 17:25:53.414810 kubelet[2392]: E0317 17:25:53.414468 
2392 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xjw9g_calico-system(9a0fa576-dbd1-4a42-9dcd-41e781c08c82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xjw9g_calico-system(9a0fa576-dbd1-4a42-9dcd-41e781c08c82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xjw9g" podUID="9a0fa576-dbd1-4a42-9dcd-41e781c08c82" Mar 17 17:25:53.862264 kubelet[2392]: E0317 17:25:53.862125 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:54.103000 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144-shm.mount: Deactivated successfully. Mar 17 17:25:54.103197 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0-shm.mount: Deactivated successfully. 
Mar 17 17:25:54.156675 kubelet[2392]: I0317 17:25:54.156629 2392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144" Mar 17 17:25:54.158181 containerd[1939]: time="2025-03-17T17:25:54.158007601Z" level=info msg="StopPodSandbox for \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\"" Mar 17 17:25:54.158758 containerd[1939]: time="2025-03-17T17:25:54.158328757Z" level=info msg="Ensure that sandbox b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144 in task-service has been cleanup successfully" Mar 17 17:25:54.168121 systemd[1]: run-netns-cni\x2da0741a28\x2db889\x2d705b\x2def13\x2de69d8c07f225.mount: Deactivated successfully. Mar 17 17:25:54.169903 containerd[1939]: time="2025-03-17T17:25:54.168501997Z" level=info msg="TearDown network for sandbox \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\" successfully" Mar 17 17:25:54.169903 containerd[1939]: time="2025-03-17T17:25:54.168545989Z" level=info msg="StopPodSandbox for \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\" returns successfully" Mar 17 17:25:54.171745 containerd[1939]: time="2025-03-17T17:25:54.171696529Z" level=info msg="StopPodSandbox for \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\"" Mar 17 17:25:54.172110 containerd[1939]: time="2025-03-17T17:25:54.172076881Z" level=info msg="TearDown network for sandbox \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\" successfully" Mar 17 17:25:54.172243 containerd[1939]: time="2025-03-17T17:25:54.172215229Z" level=info msg="StopPodSandbox for \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\" returns successfully" Mar 17 17:25:54.172562 kubelet[2392]: I0317 17:25:54.172319 2392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0" Mar 17 
17:25:54.173580 containerd[1939]: time="2025-03-17T17:25:54.173289253Z" level=info msg="StopPodSandbox for \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\"" Mar 17 17:25:54.174778 containerd[1939]: time="2025-03-17T17:25:54.174376861Z" level=info msg="TearDown network for sandbox \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\" successfully" Mar 17 17:25:54.174778 containerd[1939]: time="2025-03-17T17:25:54.174420673Z" level=info msg="StopPodSandbox for \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\" returns successfully" Mar 17 17:25:54.176328 containerd[1939]: time="2025-03-17T17:25:54.175730809Z" level=info msg="StopPodSandbox for \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\"" Mar 17 17:25:54.176328 containerd[1939]: time="2025-03-17T17:25:54.176058721Z" level=info msg="Ensure that sandbox 7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0 in task-service has been cleanup successfully" Mar 17 17:25:54.176942 containerd[1939]: time="2025-03-17T17:25:54.176806981Z" level=info msg="StopPodSandbox for \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\"" Mar 17 17:25:54.177143 containerd[1939]: time="2025-03-17T17:25:54.177070105Z" level=info msg="TearDown network for sandbox \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\" successfully" Mar 17 17:25:54.177143 containerd[1939]: time="2025-03-17T17:25:54.177101293Z" level=info msg="StopPodSandbox for \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\" returns successfully" Mar 17 17:25:54.180165 containerd[1939]: time="2025-03-17T17:25:54.179865061Z" level=info msg="TearDown network for sandbox \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\" successfully" Mar 17 17:25:54.180165 containerd[1939]: time="2025-03-17T17:25:54.180048385Z" level=info msg="StopPodSandbox for \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\" returns 
successfully" Mar 17 17:25:54.181787 containerd[1939]: time="2025-03-17T17:25:54.181727557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xjw9g,Uid:9a0fa576-dbd1-4a42-9dcd-41e781c08c82,Namespace:calico-system,Attempt:4,}" Mar 17 17:25:54.183346 containerd[1939]: time="2025-03-17T17:25:54.182746381Z" level=info msg="StopPodSandbox for \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\"" Mar 17 17:25:54.183346 containerd[1939]: time="2025-03-17T17:25:54.182911453Z" level=info msg="TearDown network for sandbox \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\" successfully" Mar 17 17:25:54.183346 containerd[1939]: time="2025-03-17T17:25:54.182933641Z" level=info msg="StopPodSandbox for \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\" returns successfully" Mar 17 17:25:54.182816 systemd[1]: run-netns-cni\x2da7b418fc\x2d39ac\x2d6f4e\x2dad5c\x2dffbd9ce70a63.mount: Deactivated successfully. Mar 17 17:25:54.186461 containerd[1939]: time="2025-03-17T17:25:54.186190441Z" level=info msg="StopPodSandbox for \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\"" Mar 17 17:25:54.188135 containerd[1939]: time="2025-03-17T17:25:54.188072917Z" level=info msg="TearDown network for sandbox \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\" successfully" Mar 17 17:25:54.188135 containerd[1939]: time="2025-03-17T17:25:54.188117473Z" level=info msg="StopPodSandbox for \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\" returns successfully" Mar 17 17:25:54.189848 containerd[1939]: time="2025-03-17T17:25:54.189657313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-l46w9,Uid:45d1b902-7741-4329-b5f7-6883992bfb94,Namespace:default,Attempt:3,}" Mar 17 17:25:54.404114 containerd[1939]: time="2025-03-17T17:25:54.403954574Z" level=error msg="Failed to destroy network for sandbox 
\"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:54.406465 containerd[1939]: time="2025-03-17T17:25:54.406265714Z" level=error msg="encountered an error cleaning up failed sandbox \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:54.408225 containerd[1939]: time="2025-03-17T17:25:54.407963702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-l46w9,Uid:45d1b902-7741-4329-b5f7-6883992bfb94,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:54.409663 kubelet[2392]: E0317 17:25:54.408988 2392 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:54.409663 kubelet[2392]: E0317 17:25:54.409068 2392 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-l46w9" Mar 17 17:25:54.409663 kubelet[2392]: E0317 17:25:54.409102 2392 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-l46w9" Mar 17 17:25:54.409939 kubelet[2392]: E0317 17:25:54.409175 2392 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-l46w9_default(45d1b902-7741-4329-b5f7-6883992bfb94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-l46w9_default(45d1b902-7741-4329-b5f7-6883992bfb94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-l46w9" podUID="45d1b902-7741-4329-b5f7-6883992bfb94" Mar 17 17:25:54.425166 containerd[1939]: time="2025-03-17T17:25:54.425092322Z" level=error msg="Failed to destroy network for sandbox \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:54.426598 containerd[1939]: time="2025-03-17T17:25:54.426299726Z" level=error 
msg="encountered an error cleaning up failed sandbox \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:54.426598 containerd[1939]: time="2025-03-17T17:25:54.426422258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xjw9g,Uid:9a0fa576-dbd1-4a42-9dcd-41e781c08c82,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:54.426839 kubelet[2392]: E0317 17:25:54.426801 2392 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:54.426913 kubelet[2392]: E0317 17:25:54.426872 2392 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xjw9g" Mar 17 17:25:54.426968 kubelet[2392]: E0317 17:25:54.426907 2392 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xjw9g" Mar 17 17:25:54.427028 kubelet[2392]: E0317 17:25:54.426968 2392 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xjw9g_calico-system(9a0fa576-dbd1-4a42-9dcd-41e781c08c82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xjw9g_calico-system(9a0fa576-dbd1-4a42-9dcd-41e781c08c82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xjw9g" podUID="9a0fa576-dbd1-4a42-9dcd-41e781c08c82" Mar 17 17:25:54.863830 kubelet[2392]: E0317 17:25:54.863343 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:55.101664 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55-shm.mount: Deactivated successfully. 
Mar 17 17:25:55.182997 kubelet[2392]: I0317 17:25:55.181955 2392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55" Mar 17 17:25:55.183171 containerd[1939]: time="2025-03-17T17:25:55.183031550Z" level=info msg="StopPodSandbox for \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\"" Mar 17 17:25:55.185747 containerd[1939]: time="2025-03-17T17:25:55.183357002Z" level=info msg="Ensure that sandbox dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55 in task-service has been cleanup successfully" Mar 17 17:25:55.185747 containerd[1939]: time="2025-03-17T17:25:55.185085182Z" level=info msg="TearDown network for sandbox \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\" successfully" Mar 17 17:25:55.185747 containerd[1939]: time="2025-03-17T17:25:55.185129630Z" level=info msg="StopPodSandbox for \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\" returns successfully" Mar 17 17:25:55.187899 containerd[1939]: time="2025-03-17T17:25:55.187781498Z" level=info msg="StopPodSandbox for \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\"" Mar 17 17:25:55.188041 containerd[1939]: time="2025-03-17T17:25:55.187960118Z" level=info msg="TearDown network for sandbox \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\" successfully" Mar 17 17:25:55.188041 containerd[1939]: time="2025-03-17T17:25:55.187985822Z" level=info msg="StopPodSandbox for \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\" returns successfully" Mar 17 17:25:55.189672 containerd[1939]: time="2025-03-17T17:25:55.189159758Z" level=info msg="StopPodSandbox for \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\"" Mar 17 17:25:55.190059 systemd[1]: run-netns-cni\x2d5ddd174b\x2d8a5a\x2d7687\x2da25b\x2dab9e2725e9ae.mount: Deactivated successfully. 
Mar 17 17:25:55.191797 containerd[1939]: time="2025-03-17T17:25:55.191050082Z" level=info msg="TearDown network for sandbox \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\" successfully" Mar 17 17:25:55.191797 containerd[1939]: time="2025-03-17T17:25:55.191367818Z" level=info msg="StopPodSandbox for \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\" returns successfully" Mar 17 17:25:55.194187 containerd[1939]: time="2025-03-17T17:25:55.193996370Z" level=info msg="StopPodSandbox for \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\"" Mar 17 17:25:55.194187 containerd[1939]: time="2025-03-17T17:25:55.194176346Z" level=info msg="TearDown network for sandbox \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\" successfully" Mar 17 17:25:55.194374 containerd[1939]: time="2025-03-17T17:25:55.194199410Z" level=info msg="StopPodSandbox for \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\" returns successfully" Mar 17 17:25:55.195273 containerd[1939]: time="2025-03-17T17:25:55.194999690Z" level=info msg="StopPodSandbox for \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\"" Mar 17 17:25:55.195273 containerd[1939]: time="2025-03-17T17:25:55.195164642Z" level=info msg="TearDown network for sandbox \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\" successfully" Mar 17 17:25:55.195273 containerd[1939]: time="2025-03-17T17:25:55.195190358Z" level=info msg="StopPodSandbox for \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\" returns successfully" Mar 17 17:25:55.197740 containerd[1939]: time="2025-03-17T17:25:55.197303918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xjw9g,Uid:9a0fa576-dbd1-4a42-9dcd-41e781c08c82,Namespace:calico-system,Attempt:5,}" Mar 17 17:25:55.199233 kubelet[2392]: I0317 17:25:55.199195 2392 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36" Mar 17 17:25:55.200423 containerd[1939]: time="2025-03-17T17:25:55.200057450Z" level=info msg="StopPodSandbox for \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\"" Mar 17 17:25:55.200423 containerd[1939]: time="2025-03-17T17:25:55.200344982Z" level=info msg="Ensure that sandbox afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36 in task-service has been cleanup successfully" Mar 17 17:25:55.205421 containerd[1939]: time="2025-03-17T17:25:55.205177850Z" level=info msg="TearDown network for sandbox \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\" successfully" Mar 17 17:25:55.205421 containerd[1939]: time="2025-03-17T17:25:55.205228262Z" level=info msg="StopPodSandbox for \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\" returns successfully" Mar 17 17:25:55.207289 systemd[1]: run-netns-cni\x2d3eaeee78\x2d6081\x2d2f23\x2d08f9\x2d3a4e1f5e4edf.mount: Deactivated successfully. 
Mar 17 17:25:55.208594 containerd[1939]: time="2025-03-17T17:25:55.208311254Z" level=info msg="StopPodSandbox for \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\"" Mar 17 17:25:55.208594 containerd[1939]: time="2025-03-17T17:25:55.208498202Z" level=info msg="TearDown network for sandbox \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\" successfully" Mar 17 17:25:55.208594 containerd[1939]: time="2025-03-17T17:25:55.208521134Z" level=info msg="StopPodSandbox for \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\" returns successfully" Mar 17 17:25:55.212783 containerd[1939]: time="2025-03-17T17:25:55.211655942Z" level=info msg="StopPodSandbox for \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\"" Mar 17 17:25:55.212933 containerd[1939]: time="2025-03-17T17:25:55.212909198Z" level=info msg="TearDown network for sandbox \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\" successfully" Mar 17 17:25:55.213222 containerd[1939]: time="2025-03-17T17:25:55.212936702Z" level=info msg="StopPodSandbox for \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\" returns successfully" Mar 17 17:25:55.213810 containerd[1939]: time="2025-03-17T17:25:55.213535646Z" level=info msg="StopPodSandbox for \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\"" Mar 17 17:25:55.213810 containerd[1939]: time="2025-03-17T17:25:55.213712826Z" level=info msg="TearDown network for sandbox \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\" successfully" Mar 17 17:25:55.213810 containerd[1939]: time="2025-03-17T17:25:55.213735386Z" level=info msg="StopPodSandbox for \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\" returns successfully" Mar 17 17:25:55.215332 containerd[1939]: time="2025-03-17T17:25:55.215265962Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-l46w9,Uid:45d1b902-7741-4329-b5f7-6883992bfb94,Namespace:default,Attempt:4,}" Mar 17 17:25:55.387016 containerd[1939]: time="2025-03-17T17:25:55.386586651Z" level=error msg="Failed to destroy network for sandbox \"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:55.388502 containerd[1939]: time="2025-03-17T17:25:55.388228851Z" level=error msg="encountered an error cleaning up failed sandbox \"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:55.388502 containerd[1939]: time="2025-03-17T17:25:55.388340775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-l46w9,Uid:45d1b902-7741-4329-b5f7-6883992bfb94,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:55.389243 kubelet[2392]: E0317 17:25:55.389092 2392 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:55.389243 
kubelet[2392]: E0317 17:25:55.389181 2392 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-l46w9" Mar 17 17:25:55.389243 kubelet[2392]: E0317 17:25:55.389219 2392 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-l46w9" Mar 17 17:25:55.389508 kubelet[2392]: E0317 17:25:55.389290 2392 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-l46w9_default(45d1b902-7741-4329-b5f7-6883992bfb94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-l46w9_default(45d1b902-7741-4329-b5f7-6883992bfb94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-l46w9" podUID="45d1b902-7741-4329-b5f7-6883992bfb94" Mar 17 17:25:55.428534 containerd[1939]: time="2025-03-17T17:25:55.427677411Z" level=error msg="Failed to destroy network for sandbox \"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:55.428534 containerd[1939]: time="2025-03-17T17:25:55.428279691Z" level=error msg="encountered an error cleaning up failed sandbox \"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:55.428534 containerd[1939]: time="2025-03-17T17:25:55.428376831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xjw9g,Uid:9a0fa576-dbd1-4a42-9dcd-41e781c08c82,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:55.429083 kubelet[2392]: E0317 17:25:55.429017 2392 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:55.429194 kubelet[2392]: E0317 17:25:55.429109 2392 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xjw9g" Mar 17 17:25:55.429194 kubelet[2392]: E0317 17:25:55.429147 2392 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xjw9g" Mar 17 17:25:55.429316 kubelet[2392]: E0317 17:25:55.429215 2392 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xjw9g_calico-system(9a0fa576-dbd1-4a42-9dcd-41e781c08c82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xjw9g_calico-system(9a0fa576-dbd1-4a42-9dcd-41e781c08c82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xjw9g" podUID="9a0fa576-dbd1-4a42-9dcd-41e781c08c82" Mar 17 17:25:55.864128 kubelet[2392]: E0317 17:25:55.864081 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:56.104066 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14-shm.mount: Deactivated successfully. Mar 17 17:25:56.104252 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e-shm.mount: Deactivated successfully. 
Mar 17 17:25:56.214463 kubelet[2392]: I0317 17:25:56.210613 2392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e" Mar 17 17:25:56.214622 containerd[1939]: time="2025-03-17T17:25:56.211865931Z" level=info msg="StopPodSandbox for \"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\"" Mar 17 17:25:56.214622 containerd[1939]: time="2025-03-17T17:25:56.212141895Z" level=info msg="Ensure that sandbox f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e in task-service has been cleanup successfully" Mar 17 17:25:56.215837 containerd[1939]: time="2025-03-17T17:25:56.215779839Z" level=info msg="TearDown network for sandbox \"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\" successfully" Mar 17 17:25:56.215837 containerd[1939]: time="2025-03-17T17:25:56.215829303Z" level=info msg="StopPodSandbox for \"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\" returns successfully" Mar 17 17:25:56.217208 systemd[1]: run-netns-cni\x2d08b4c8f5\x2d74a6\x2ddf88\x2dddc0\x2d7d483577472e.mount: Deactivated successfully. 
Mar 17 17:25:56.217688 containerd[1939]: time="2025-03-17T17:25:56.217237527Z" level=info msg="StopPodSandbox for \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\"" Mar 17 17:25:56.217688 containerd[1939]: time="2025-03-17T17:25:56.217400847Z" level=info msg="TearDown network for sandbox \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\" successfully" Mar 17 17:25:56.217688 containerd[1939]: time="2025-03-17T17:25:56.217424727Z" level=info msg="StopPodSandbox for \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\" returns successfully" Mar 17 17:25:56.219217 containerd[1939]: time="2025-03-17T17:25:56.218691543Z" level=info msg="StopPodSandbox for \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\"" Mar 17 17:25:56.219217 containerd[1939]: time="2025-03-17T17:25:56.218861919Z" level=info msg="TearDown network for sandbox \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\" successfully" Mar 17 17:25:56.219217 containerd[1939]: time="2025-03-17T17:25:56.218884827Z" level=info msg="StopPodSandbox for \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\" returns successfully" Mar 17 17:25:56.221197 containerd[1939]: time="2025-03-17T17:25:56.220752291Z" level=info msg="StopPodSandbox for \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\"" Mar 17 17:25:56.221197 containerd[1939]: time="2025-03-17T17:25:56.220933215Z" level=info msg="TearDown network for sandbox \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\" successfully" Mar 17 17:25:56.221197 containerd[1939]: time="2025-03-17T17:25:56.220956051Z" level=info msg="StopPodSandbox for \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\" returns successfully" Mar 17 17:25:56.222919 containerd[1939]: time="2025-03-17T17:25:56.222861891Z" level=info msg="StopPodSandbox for \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\"" Mar 17 17:25:56.223046 
containerd[1939]: time="2025-03-17T17:25:56.223026591Z" level=info msg="TearDown network for sandbox \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\" successfully" Mar 17 17:25:56.223105 containerd[1939]: time="2025-03-17T17:25:56.223049523Z" level=info msg="StopPodSandbox for \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\" returns successfully" Mar 17 17:25:56.224218 containerd[1939]: time="2025-03-17T17:25:56.223953315Z" level=info msg="StopPodSandbox for \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\"" Mar 17 17:25:56.224218 containerd[1939]: time="2025-03-17T17:25:56.224114475Z" level=info msg="TearDown network for sandbox \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\" successfully" Mar 17 17:25:56.224218 containerd[1939]: time="2025-03-17T17:25:56.224136855Z" level=info msg="StopPodSandbox for \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\" returns successfully" Mar 17 17:25:56.225185 kubelet[2392]: I0317 17:25:56.224812 2392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14" Mar 17 17:25:56.226689 containerd[1939]: time="2025-03-17T17:25:56.226631571Z" level=info msg="StopPodSandbox for \"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\"" Mar 17 17:25:56.227270 containerd[1939]: time="2025-03-17T17:25:56.226901979Z" level=info msg="Ensure that sandbox 40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14 in task-service has been cleanup successfully" Mar 17 17:25:56.227378 containerd[1939]: time="2025-03-17T17:25:56.227302923Z" level=info msg="TearDown network for sandbox \"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\" successfully" Mar 17 17:25:56.227378 containerd[1939]: time="2025-03-17T17:25:56.227328699Z" level=info msg="StopPodSandbox for 
\"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\" returns successfully" Mar 17 17:25:56.230530 containerd[1939]: time="2025-03-17T17:25:56.229793187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xjw9g,Uid:9a0fa576-dbd1-4a42-9dcd-41e781c08c82,Namespace:calico-system,Attempt:6,}" Mar 17 17:25:56.231485 containerd[1939]: time="2025-03-17T17:25:56.231376203Z" level=info msg="StopPodSandbox for \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\"" Mar 17 17:25:56.232481 containerd[1939]: time="2025-03-17T17:25:56.231608247Z" level=info msg="TearDown network for sandbox \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\" successfully" Mar 17 17:25:56.232481 containerd[1939]: time="2025-03-17T17:25:56.231641271Z" level=info msg="StopPodSandbox for \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\" returns successfully" Mar 17 17:25:56.233051 systemd[1]: run-netns-cni\x2df920fab9\x2db0c3\x2da318\x2d8f47\x2dc7d55693c400.mount: Deactivated successfully. 
Mar 17 17:25:56.234709 containerd[1939]: time="2025-03-17T17:25:56.234151935Z" level=info msg="StopPodSandbox for \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\"" Mar 17 17:25:56.236402 containerd[1939]: time="2025-03-17T17:25:56.235635315Z" level=info msg="TearDown network for sandbox \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\" successfully" Mar 17 17:25:56.236402 containerd[1939]: time="2025-03-17T17:25:56.235681143Z" level=info msg="StopPodSandbox for \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\" returns successfully" Mar 17 17:25:56.237638 containerd[1939]: time="2025-03-17T17:25:56.237412719Z" level=info msg="StopPodSandbox for \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\"" Mar 17 17:25:56.239288 containerd[1939]: time="2025-03-17T17:25:56.239234043Z" level=info msg="TearDown network for sandbox \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\" successfully" Mar 17 17:25:56.240928 containerd[1939]: time="2025-03-17T17:25:56.239937759Z" level=info msg="StopPodSandbox for \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\" returns successfully" Mar 17 17:25:56.240928 containerd[1939]: time="2025-03-17T17:25:56.240753483Z" level=info msg="StopPodSandbox for \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\"" Mar 17 17:25:56.241603 containerd[1939]: time="2025-03-17T17:25:56.240926787Z" level=info msg="TearDown network for sandbox \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\" successfully" Mar 17 17:25:56.241603 containerd[1939]: time="2025-03-17T17:25:56.240951111Z" level=info msg="StopPodSandbox for \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\" returns successfully" Mar 17 17:25:56.242937 containerd[1939]: time="2025-03-17T17:25:56.242785947Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-l46w9,Uid:45d1b902-7741-4329-b5f7-6883992bfb94,Namespace:default,Attempt:5,}" Mar 17 17:25:56.379942 containerd[1939]: time="2025-03-17T17:25:56.379713028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:56.383075 containerd[1939]: time="2025-03-17T17:25:56.382945168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=137086024" Mar 17 17:25:56.385404 containerd[1939]: time="2025-03-17T17:25:56.385270384Z" level=info msg="ImageCreate event name:\"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:56.393076 containerd[1939]: time="2025-03-17T17:25:56.392985808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:56.394602 containerd[1939]: time="2025-03-17T17:25:56.394534660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"137085886\" in 6.301130551s" Mar 17 17:25:56.394602 containerd[1939]: time="2025-03-17T17:25:56.394598080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\"" Mar 17 17:25:56.413025 containerd[1939]: time="2025-03-17T17:25:56.412951024Z" level=info msg="CreateContainer within sandbox \"cc7ac7a4691cf640a2c66dec36f4bf68514f8a4b90b6c58400d73b9d92786788\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 17:25:56.439073 containerd[1939]: time="2025-03-17T17:25:56.438991972Z" level=info msg="CreateContainer within sandbox \"cc7ac7a4691cf640a2c66dec36f4bf68514f8a4b90b6c58400d73b9d92786788\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8b1b1d1ca497f7537b33cad67e21a9f99cfb55ba415b7a3960a2c77ad2a04949\"" Mar 17 17:25:56.440111 containerd[1939]: time="2025-03-17T17:25:56.440044060Z" level=info msg="StartContainer for \"8b1b1d1ca497f7537b33cad67e21a9f99cfb55ba415b7a3960a2c77ad2a04949\"" Mar 17 17:25:56.448882 containerd[1939]: time="2025-03-17T17:25:56.448799021Z" level=error msg="Failed to destroy network for sandbox \"87ea72bd1ff374715a8fa67ab9e7db7b719d8bbb60d96fb0e1d346a28b1a7759\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:56.451143 containerd[1939]: time="2025-03-17T17:25:56.451039493Z" level=error msg="encountered an error cleaning up failed sandbox \"87ea72bd1ff374715a8fa67ab9e7db7b719d8bbb60d96fb0e1d346a28b1a7759\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:56.451273 containerd[1939]: time="2025-03-17T17:25:56.451171133Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-l46w9,Uid:45d1b902-7741-4329-b5f7-6883992bfb94,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"87ea72bd1ff374715a8fa67ab9e7db7b719d8bbb60d96fb0e1d346a28b1a7759\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:56.451880 containerd[1939]: 
time="2025-03-17T17:25:56.451584137Z" level=error msg="Failed to destroy network for sandbox \"eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:56.452527 kubelet[2392]: E0317 17:25:56.451995 2392 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87ea72bd1ff374715a8fa67ab9e7db7b719d8bbb60d96fb0e1d346a28b1a7759\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:56.452527 kubelet[2392]: E0317 17:25:56.452070 2392 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87ea72bd1ff374715a8fa67ab9e7db7b719d8bbb60d96fb0e1d346a28b1a7759\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-l46w9" Mar 17 17:25:56.452527 kubelet[2392]: E0317 17:25:56.452103 2392 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87ea72bd1ff374715a8fa67ab9e7db7b719d8bbb60d96fb0e1d346a28b1a7759\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-l46w9" Mar 17 17:25:56.452729 kubelet[2392]: E0317 17:25:56.452181 2392 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"nginx-deployment-85f456d6dd-l46w9_default(45d1b902-7741-4329-b5f7-6883992bfb94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-l46w9_default(45d1b902-7741-4329-b5f7-6883992bfb94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87ea72bd1ff374715a8fa67ab9e7db7b719d8bbb60d96fb0e1d346a28b1a7759\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-l46w9" podUID="45d1b902-7741-4329-b5f7-6883992bfb94" Mar 17 17:25:56.453590 containerd[1939]: time="2025-03-17T17:25:56.453281453Z" level=error msg="encountered an error cleaning up failed sandbox \"eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:56.453590 containerd[1939]: time="2025-03-17T17:25:56.453379733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xjw9g,Uid:9a0fa576-dbd1-4a42-9dcd-41e781c08c82,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:56.454910 kubelet[2392]: E0317 17:25:56.454673 2392 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:25:56.454910 kubelet[2392]: E0317 17:25:56.454746 2392 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xjw9g" Mar 17 17:25:56.454910 kubelet[2392]: E0317 17:25:56.454781 2392 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xjw9g" Mar 17 17:25:56.455161 kubelet[2392]: E0317 17:25:56.454838 2392 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xjw9g_calico-system(9a0fa576-dbd1-4a42-9dcd-41e781c08c82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xjw9g_calico-system(9a0fa576-dbd1-4a42-9dcd-41e781c08c82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xjw9g" podUID="9a0fa576-dbd1-4a42-9dcd-41e781c08c82" Mar 17 17:25:56.493762 systemd[1]: Started cri-containerd-8b1b1d1ca497f7537b33cad67e21a9f99cfb55ba415b7a3960a2c77ad2a04949.scope 
- libcontainer container 8b1b1d1ca497f7537b33cad67e21a9f99cfb55ba415b7a3960a2c77ad2a04949. Mar 17 17:25:56.556572 containerd[1939]: time="2025-03-17T17:25:56.556490921Z" level=info msg="StartContainer for \"8b1b1d1ca497f7537b33cad67e21a9f99cfb55ba415b7a3960a2c77ad2a04949\" returns successfully" Mar 17 17:25:56.668611 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 17 17:25:56.668954 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 17 17:25:56.851072 kubelet[2392]: E0317 17:25:56.850556 2392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:56.865268 kubelet[2392]: E0317 17:25:56.865164 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:57.105487 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188-shm.mount: Deactivated successfully. Mar 17 17:25:57.105675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4259020985.mount: Deactivated successfully. 
Mar 17 17:25:57.248759 kubelet[2392]: I0317 17:25:57.248720 2392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188" Mar 17 17:25:57.252035 containerd[1939]: time="2025-03-17T17:25:57.251529388Z" level=info msg="StopPodSandbox for \"eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188\"" Mar 17 17:25:57.253477 containerd[1939]: time="2025-03-17T17:25:57.252931324Z" level=info msg="Ensure that sandbox eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188 in task-service has been cleanup successfully" Mar 17 17:25:57.253878 containerd[1939]: time="2025-03-17T17:25:57.253653916Z" level=info msg="TearDown network for sandbox \"eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188\" successfully" Mar 17 17:25:57.253878 containerd[1939]: time="2025-03-17T17:25:57.253695076Z" level=info msg="StopPodSandbox for \"eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188\" returns successfully" Mar 17 17:25:57.255113 containerd[1939]: time="2025-03-17T17:25:57.254840093Z" level=info msg="StopPodSandbox for \"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\"" Mar 17 17:25:57.255113 containerd[1939]: time="2025-03-17T17:25:57.254998097Z" level=info msg="TearDown network for sandbox \"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\" successfully" Mar 17 17:25:57.255113 containerd[1939]: time="2025-03-17T17:25:57.255023189Z" level=info msg="StopPodSandbox for \"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\" returns successfully" Mar 17 17:25:57.257366 containerd[1939]: time="2025-03-17T17:25:57.256198961Z" level=info msg="StopPodSandbox for \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\"" Mar 17 17:25:57.257366 containerd[1939]: time="2025-03-17T17:25:57.256339985Z" level=info msg="TearDown network for sandbox 
\"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\" successfully" Mar 17 17:25:57.257366 containerd[1939]: time="2025-03-17T17:25:57.256361213Z" level=info msg="StopPodSandbox for \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\" returns successfully" Mar 17 17:25:57.258702 containerd[1939]: time="2025-03-17T17:25:57.258420401Z" level=info msg="StopPodSandbox for \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\"" Mar 17 17:25:57.259074 containerd[1939]: time="2025-03-17T17:25:57.258621221Z" level=info msg="TearDown network for sandbox \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\" successfully" Mar 17 17:25:57.260103 kubelet[2392]: I0317 17:25:57.259500 2392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87ea72bd1ff374715a8fa67ab9e7db7b719d8bbb60d96fb0e1d346a28b1a7759" Mar 17 17:25:57.260376 containerd[1939]: time="2025-03-17T17:25:57.260300561Z" level=info msg="StopPodSandbox for \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\" returns successfully" Mar 17 17:25:57.261868 systemd[1]: run-netns-cni\x2d56e8003e\x2d4937\x2db03c\x2d758f\x2d1cb4330a44d3.mount: Deactivated successfully. 
Mar 17 17:25:57.262726 containerd[1939]: time="2025-03-17T17:25:57.262661201Z" level=info msg="StopPodSandbox for \"87ea72bd1ff374715a8fa67ab9e7db7b719d8bbb60d96fb0e1d346a28b1a7759\"" Mar 17 17:25:57.265227 containerd[1939]: time="2025-03-17T17:25:57.264530885Z" level=info msg="Ensure that sandbox 87ea72bd1ff374715a8fa67ab9e7db7b719d8bbb60d96fb0e1d346a28b1a7759 in task-service has been cleanup successfully" Mar 17 17:25:57.265227 containerd[1939]: time="2025-03-17T17:25:57.264945269Z" level=info msg="StopPodSandbox for \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\"" Mar 17 17:25:57.265227 containerd[1939]: time="2025-03-17T17:25:57.265083413Z" level=info msg="TearDown network for sandbox \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\" successfully" Mar 17 17:25:57.265227 containerd[1939]: time="2025-03-17T17:25:57.265105949Z" level=info msg="StopPodSandbox for \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\" returns successfully" Mar 17 17:25:57.268027 containerd[1939]: time="2025-03-17T17:25:57.267645197Z" level=info msg="TearDown network for sandbox \"87ea72bd1ff374715a8fa67ab9e7db7b719d8bbb60d96fb0e1d346a28b1a7759\" successfully" Mar 17 17:25:57.268027 containerd[1939]: time="2025-03-17T17:25:57.267696161Z" level=info msg="StopPodSandbox for \"87ea72bd1ff374715a8fa67ab9e7db7b719d8bbb60d96fb0e1d346a28b1a7759\" returns successfully" Mar 17 17:25:57.269977 containerd[1939]: time="2025-03-17T17:25:57.268855337Z" level=info msg="StopPodSandbox for \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\"" Mar 17 17:25:57.269977 containerd[1939]: time="2025-03-17T17:25:57.269025437Z" level=info msg="TearDown network for sandbox \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\" successfully" Mar 17 17:25:57.269977 containerd[1939]: time="2025-03-17T17:25:57.269048441Z" level=info msg="StopPodSandbox for \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\" 
returns successfully" Mar 17 17:25:57.268351 systemd[1]: run-netns-cni\x2dd692759a\x2d963b\x2d345d\x2d0af9\x2dbca0a5aec6d8.mount: Deactivated successfully. Mar 17 17:25:57.272192 containerd[1939]: time="2025-03-17T17:25:57.271886645Z" level=info msg="StopPodSandbox for \"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\"" Mar 17 17:25:57.272192 containerd[1939]: time="2025-03-17T17:25:57.272059541Z" level=info msg="TearDown network for sandbox \"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\" successfully" Mar 17 17:25:57.272192 containerd[1939]: time="2025-03-17T17:25:57.272082929Z" level=info msg="StopPodSandbox for \"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\" returns successfully" Mar 17 17:25:57.272509 containerd[1939]: time="2025-03-17T17:25:57.272205593Z" level=info msg="StopPodSandbox for \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\"" Mar 17 17:25:57.272509 containerd[1939]: time="2025-03-17T17:25:57.272325281Z" level=info msg="TearDown network for sandbox \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\" successfully" Mar 17 17:25:57.272509 containerd[1939]: time="2025-03-17T17:25:57.272346437Z" level=info msg="StopPodSandbox for \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\" returns successfully" Mar 17 17:25:57.273747 containerd[1939]: time="2025-03-17T17:25:57.273693317Z" level=info msg="StopPodSandbox for \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\"" Mar 17 17:25:57.274109 containerd[1939]: time="2025-03-17T17:25:57.274078397Z" level=info msg="TearDown network for sandbox \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\" successfully" Mar 17 17:25:57.274278 containerd[1939]: time="2025-03-17T17:25:57.274249277Z" level=info msg="StopPodSandbox for \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\" returns successfully" Mar 17 17:25:57.275191 containerd[1939]: 
time="2025-03-17T17:25:57.275135789Z" level=info msg="StopPodSandbox for \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\"" Mar 17 17:25:57.275878 containerd[1939]: time="2025-03-17T17:25:57.275623529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xjw9g,Uid:9a0fa576-dbd1-4a42-9dcd-41e781c08c82,Namespace:calico-system,Attempt:7,}" Mar 17 17:25:57.276446 containerd[1939]: time="2025-03-17T17:25:57.276304925Z" level=info msg="TearDown network for sandbox \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\" successfully" Mar 17 17:25:57.276446 containerd[1939]: time="2025-03-17T17:25:57.276379097Z" level=info msg="StopPodSandbox for \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\" returns successfully" Mar 17 17:25:57.277381 containerd[1939]: time="2025-03-17T17:25:57.277313453Z" level=info msg="StopPodSandbox for \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\"" Mar 17 17:25:57.277777 containerd[1939]: time="2025-03-17T17:25:57.277741841Z" level=info msg="TearDown network for sandbox \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\" successfully" Mar 17 17:25:57.277961 containerd[1939]: time="2025-03-17T17:25:57.277886309Z" level=info msg="StopPodSandbox for \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\" returns successfully" Mar 17 17:25:57.279029 containerd[1939]: time="2025-03-17T17:25:57.278983013Z" level=info msg="StopPodSandbox for \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\"" Mar 17 17:25:57.279752 containerd[1939]: time="2025-03-17T17:25:57.279599585Z" level=info msg="TearDown network for sandbox \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\" successfully" Mar 17 17:25:57.279752 containerd[1939]: time="2025-03-17T17:25:57.279660653Z" level=info msg="StopPodSandbox for \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\" returns successfully" Mar 17 
17:25:57.281687 containerd[1939]: time="2025-03-17T17:25:57.281637425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-l46w9,Uid:45d1b902-7741-4329-b5f7-6883992bfb94,Namespace:default,Attempt:6,}" Mar 17 17:25:57.304380 systemd[1]: run-containerd-runc-k8s.io-8b1b1d1ca497f7537b33cad67e21a9f99cfb55ba415b7a3960a2c77ad2a04949-runc.eshcIx.mount: Deactivated successfully. Mar 17 17:25:57.330061 kubelet[2392]: I0317 17:25:57.329977 2392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hftv6" podStartSLOduration=4.290161646 podStartE2EDuration="20.329932781s" podCreationTimestamp="2025-03-17 17:25:37 +0000 UTC" firstStartedPulling="2025-03-17 17:25:40.357294865 +0000 UTC m=+4.479976284" lastFinishedPulling="2025-03-17 17:25:56.397065988 +0000 UTC m=+20.519747419" observedRunningTime="2025-03-17 17:25:57.327993077 +0000 UTC m=+21.450674520" watchObservedRunningTime="2025-03-17 17:25:57.329932781 +0000 UTC m=+21.452614236" Mar 17 17:25:57.727620 (udev-worker)[3305]: Network interface NamePolicy= disabled on kernel command line. 
Mar 17 17:25:57.729128 systemd-networkd[1849]: cali58906009a0c: Link UP Mar 17 17:25:57.730769 systemd-networkd[1849]: cali58906009a0c: Gained carrier Mar 17 17:25:57.761558 containerd[1939]: 2025-03-17 17:25:57.389 [INFO][3335] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:25:57.761558 containerd[1939]: 2025-03-17 17:25:57.496 [INFO][3335] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.28.142-k8s-csi--node--driver--xjw9g-eth0 csi-node-driver- calico-system 9a0fa576-dbd1-4a42-9dcd-41e781c08c82 877 0 2025-03-17 17:25:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:69ddf5d45d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.28.142 csi-node-driver-xjw9g eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali58906009a0c [] []}} ContainerID="c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b" Namespace="calico-system" Pod="csi-node-driver-xjw9g" WorkloadEndpoint="172.31.28.142-k8s-csi--node--driver--xjw9g-" Mar 17 17:25:57.761558 containerd[1939]: 2025-03-17 17:25:57.497 [INFO][3335] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b" Namespace="calico-system" Pod="csi-node-driver-xjw9g" WorkloadEndpoint="172.31.28.142-k8s-csi--node--driver--xjw9g-eth0" Mar 17 17:25:57.761558 containerd[1939]: 2025-03-17 17:25:57.578 [INFO][3366] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b" HandleID="k8s-pod-network.c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b" Workload="172.31.28.142-k8s-csi--node--driver--xjw9g-eth0" Mar 17 17:25:57.761558 
containerd[1939]: 2025-03-17 17:25:57.611 [INFO][3366] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b" HandleID="k8s-pod-network.c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b" Workload="172.31.28.142-k8s-csi--node--driver--xjw9g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003199c0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.28.142", "pod":"csi-node-driver-xjw9g", "timestamp":"2025-03-17 17:25:57.578186574 +0000 UTC"}, Hostname:"172.31.28.142", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:25:57.761558 containerd[1939]: 2025-03-17 17:25:57.611 [INFO][3366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:25:57.761558 containerd[1939]: 2025-03-17 17:25:57.611 [INFO][3366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:25:57.761558 containerd[1939]: 2025-03-17 17:25:57.611 [INFO][3366] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.28.142' Mar 17 17:25:57.761558 containerd[1939]: 2025-03-17 17:25:57.617 [INFO][3366] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b" host="172.31.28.142" Mar 17 17:25:57.761558 containerd[1939]: 2025-03-17 17:25:57.641 [INFO][3366] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.28.142" Mar 17 17:25:57.761558 containerd[1939]: 2025-03-17 17:25:57.653 [INFO][3366] ipam/ipam.go 489: Trying affinity for 192.168.86.128/26 host="172.31.28.142" Mar 17 17:25:57.761558 containerd[1939]: 2025-03-17 17:25:57.657 [INFO][3366] ipam/ipam.go 155: Attempting to load block cidr=192.168.86.128/26 host="172.31.28.142" Mar 17 17:25:57.761558 containerd[1939]: 2025-03-17 17:25:57.662 [INFO][3366] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="172.31.28.142" Mar 17 17:25:57.761558 containerd[1939]: 2025-03-17 17:25:57.662 [INFO][3366] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b" host="172.31.28.142" Mar 17 17:25:57.761558 containerd[1939]: 2025-03-17 17:25:57.666 [INFO][3366] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b Mar 17 17:25:57.761558 containerd[1939]: 2025-03-17 17:25:57.685 [INFO][3366] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b" host="172.31.28.142" Mar 17 17:25:57.761558 containerd[1939]: 2025-03-17 17:25:57.707 [INFO][3366] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.86.129/26] block=192.168.86.128/26 
handle="k8s-pod-network.c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b" host="172.31.28.142" Mar 17 17:25:57.761558 containerd[1939]: 2025-03-17 17:25:57.707 [INFO][3366] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.129/26] handle="k8s-pod-network.c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b" host="172.31.28.142" Mar 17 17:25:57.761558 containerd[1939]: 2025-03-17 17:25:57.707 [INFO][3366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:25:57.761558 containerd[1939]: 2025-03-17 17:25:57.707 [INFO][3366] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.129/26] IPv6=[] ContainerID="c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b" HandleID="k8s-pod-network.c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b" Workload="172.31.28.142-k8s-csi--node--driver--xjw9g-eth0" Mar 17 17:25:57.762776 containerd[1939]: 2025-03-17 17:25:57.713 [INFO][3335] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b" Namespace="calico-system" Pod="csi-node-driver-xjw9g" WorkloadEndpoint="172.31.28.142-k8s-csi--node--driver--xjw9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.142-k8s-csi--node--driver--xjw9g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9a0fa576-dbd1-4a42-9dcd-41e781c08c82", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 25, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.142", ContainerID:"", Pod:"csi-node-driver-xjw9g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali58906009a0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:25:57.762776 containerd[1939]: 2025-03-17 17:25:57.713 [INFO][3335] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.86.129/32] ContainerID="c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b" Namespace="calico-system" Pod="csi-node-driver-xjw9g" WorkloadEndpoint="172.31.28.142-k8s-csi--node--driver--xjw9g-eth0" Mar 17 17:25:57.762776 containerd[1939]: 2025-03-17 17:25:57.714 [INFO][3335] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58906009a0c ContainerID="c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b" Namespace="calico-system" Pod="csi-node-driver-xjw9g" WorkloadEndpoint="172.31.28.142-k8s-csi--node--driver--xjw9g-eth0" Mar 17 17:25:57.762776 containerd[1939]: 2025-03-17 17:25:57.730 [INFO][3335] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b" Namespace="calico-system" Pod="csi-node-driver-xjw9g" WorkloadEndpoint="172.31.28.142-k8s-csi--node--driver--xjw9g-eth0" Mar 17 17:25:57.762776 containerd[1939]: 2025-03-17 17:25:57.732 [INFO][3335] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b" Namespace="calico-system" 
Pod="csi-node-driver-xjw9g" WorkloadEndpoint="172.31.28.142-k8s-csi--node--driver--xjw9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.142-k8s-csi--node--driver--xjw9g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9a0fa576-dbd1-4a42-9dcd-41e781c08c82", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 25, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.142", ContainerID:"c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b", Pod:"csi-node-driver-xjw9g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali58906009a0c", MAC:"fa:d4:61:5b:9e:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:25:57.762776 containerd[1939]: 2025-03-17 17:25:57.758 [INFO][3335] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b" Namespace="calico-system" Pod="csi-node-driver-xjw9g" WorkloadEndpoint="172.31.28.142-k8s-csi--node--driver--xjw9g-eth0" Mar 17 17:25:57.797108 
containerd[1939]: time="2025-03-17T17:25:57.796943935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:57.797673 containerd[1939]: time="2025-03-17T17:25:57.797518411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:57.797773 containerd[1939]: time="2025-03-17T17:25:57.797728999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:57.798861 containerd[1939]: time="2025-03-17T17:25:57.798669619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:57.819151 systemd-networkd[1849]: cali127ad2a7329: Link UP Mar 17 17:25:57.819562 systemd-networkd[1849]: cali127ad2a7329: Gained carrier Mar 17 17:25:57.840046 systemd[1]: Started cri-containerd-c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b.scope - libcontainer container c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b. 
Mar 17 17:25:57.848935 containerd[1939]: 2025-03-17 17:25:57.418 [INFO][3345] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:25:57.848935 containerd[1939]: 2025-03-17 17:25:57.502 [INFO][3345] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.28.142-k8s-nginx--deployment--85f456d6dd--l46w9-eth0 nginx-deployment-85f456d6dd- default 45d1b902-7741-4329-b5f7-6883992bfb94 1054 0 2025-03-17 17:25:50 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.28.142 nginx-deployment-85f456d6dd-l46w9 eth0 default [] [] [kns.default ksa.default.default] cali127ad2a7329 [] []}} ContainerID="58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31" Namespace="default" Pod="nginx-deployment-85f456d6dd-l46w9" WorkloadEndpoint="172.31.28.142-k8s-nginx--deployment--85f456d6dd--l46w9-" Mar 17 17:25:57.848935 containerd[1939]: 2025-03-17 17:25:57.503 [INFO][3345] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31" Namespace="default" Pod="nginx-deployment-85f456d6dd-l46w9" WorkloadEndpoint="172.31.28.142-k8s-nginx--deployment--85f456d6dd--l46w9-eth0" Mar 17 17:25:57.848935 containerd[1939]: 2025-03-17 17:25:57.600 [INFO][3374] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31" HandleID="k8s-pod-network.58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31" Workload="172.31.28.142-k8s-nginx--deployment--85f456d6dd--l46w9-eth0" Mar 17 17:25:57.848935 containerd[1939]: 2025-03-17 17:25:57.642 [INFO][3374] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31" 
HandleID="k8s-pod-network.58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31" Workload="172.31.28.142-k8s-nginx--deployment--85f456d6dd--l46w9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000313d70), Attrs:map[string]string{"namespace":"default", "node":"172.31.28.142", "pod":"nginx-deployment-85f456d6dd-l46w9", "timestamp":"2025-03-17 17:25:57.60055365 +0000 UTC"}, Hostname:"172.31.28.142", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:25:57.848935 containerd[1939]: 2025-03-17 17:25:57.642 [INFO][3374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:25:57.848935 containerd[1939]: 2025-03-17 17:25:57.707 [INFO][3374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:25:57.848935 containerd[1939]: 2025-03-17 17:25:57.707 [INFO][3374] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.28.142' Mar 17 17:25:57.848935 containerd[1939]: 2025-03-17 17:25:57.717 [INFO][3374] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31" host="172.31.28.142" Mar 17 17:25:57.848935 containerd[1939]: 2025-03-17 17:25:57.728 [INFO][3374] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.28.142" Mar 17 17:25:57.848935 containerd[1939]: 2025-03-17 17:25:57.745 [INFO][3374] ipam/ipam.go 489: Trying affinity for 192.168.86.128/26 host="172.31.28.142" Mar 17 17:25:57.848935 containerd[1939]: 2025-03-17 17:25:57.751 [INFO][3374] ipam/ipam.go 155: Attempting to load block cidr=192.168.86.128/26 host="172.31.28.142" Mar 17 17:25:57.848935 containerd[1939]: 2025-03-17 17:25:57.757 [INFO][3374] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="172.31.28.142" Mar 17 
17:25:57.848935 containerd[1939]: 2025-03-17 17:25:57.757 [INFO][3374] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31" host="172.31.28.142" Mar 17 17:25:57.848935 containerd[1939]: 2025-03-17 17:25:57.776 [INFO][3374] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31 Mar 17 17:25:57.848935 containerd[1939]: 2025-03-17 17:25:57.789 [INFO][3374] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31" host="172.31.28.142" Mar 17 17:25:57.848935 containerd[1939]: 2025-03-17 17:25:57.807 [INFO][3374] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.86.130/26] block=192.168.86.128/26 handle="k8s-pod-network.58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31" host="172.31.28.142" Mar 17 17:25:57.848935 containerd[1939]: 2025-03-17 17:25:57.807 [INFO][3374] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.130/26] handle="k8s-pod-network.58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31" host="172.31.28.142" Mar 17 17:25:57.848935 containerd[1939]: 2025-03-17 17:25:57.807 [INFO][3374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:25:57.848935 containerd[1939]: 2025-03-17 17:25:57.807 [INFO][3374] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.130/26] IPv6=[] ContainerID="58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31" HandleID="k8s-pod-network.58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31" Workload="172.31.28.142-k8s-nginx--deployment--85f456d6dd--l46w9-eth0" Mar 17 17:25:57.850155 containerd[1939]: 2025-03-17 17:25:57.812 [INFO][3345] cni-plugin/k8s.go 386: Populated endpoint ContainerID="58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31" Namespace="default" Pod="nginx-deployment-85f456d6dd-l46w9" WorkloadEndpoint="172.31.28.142-k8s-nginx--deployment--85f456d6dd--l46w9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.142-k8s-nginx--deployment--85f456d6dd--l46w9-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"45d1b902-7741-4329-b5f7-6883992bfb94", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.142", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-l46w9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.86.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali127ad2a7329", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:25:57.850155 containerd[1939]: 2025-03-17 17:25:57.813 [INFO][3345] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.86.130/32] ContainerID="58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31" Namespace="default" Pod="nginx-deployment-85f456d6dd-l46w9" WorkloadEndpoint="172.31.28.142-k8s-nginx--deployment--85f456d6dd--l46w9-eth0" Mar 17 17:25:57.850155 containerd[1939]: 2025-03-17 17:25:57.813 [INFO][3345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali127ad2a7329 ContainerID="58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31" Namespace="default" Pod="nginx-deployment-85f456d6dd-l46w9" WorkloadEndpoint="172.31.28.142-k8s-nginx--deployment--85f456d6dd--l46w9-eth0" Mar 17 17:25:57.850155 containerd[1939]: 2025-03-17 17:25:57.819 [INFO][3345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31" Namespace="default" Pod="nginx-deployment-85f456d6dd-l46w9" WorkloadEndpoint="172.31.28.142-k8s-nginx--deployment--85f456d6dd--l46w9-eth0" Mar 17 17:25:57.850155 containerd[1939]: 2025-03-17 17:25:57.820 [INFO][3345] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31" Namespace="default" Pod="nginx-deployment-85f456d6dd-l46w9" WorkloadEndpoint="172.31.28.142-k8s-nginx--deployment--85f456d6dd--l46w9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.142-k8s-nginx--deployment--85f456d6dd--l46w9-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"45d1b902-7741-4329-b5f7-6883992bfb94", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 25, 50, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.142", ContainerID:"58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31", Pod:"nginx-deployment-85f456d6dd-l46w9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.86.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali127ad2a7329", MAC:"32:88:21:b2:35:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:25:57.850155 containerd[1939]: 2025-03-17 17:25:57.840 [INFO][3345] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31" Namespace="default" Pod="nginx-deployment-85f456d6dd-l46w9" WorkloadEndpoint="172.31.28.142-k8s-nginx--deployment--85f456d6dd--l46w9-eth0" Mar 17 17:25:57.865925 kubelet[2392]: E0317 17:25:57.865832 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:57.914332 containerd[1939]: time="2025-03-17T17:25:57.914272304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xjw9g,Uid:9a0fa576-dbd1-4a42-9dcd-41e781c08c82,Namespace:calico-system,Attempt:7,} returns sandbox id \"c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b\"" Mar 17 17:25:57.918019 containerd[1939]: time="2025-03-17T17:25:57.917843060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 17 
17:25:57.926302 containerd[1939]: time="2025-03-17T17:25:57.925926464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:57.926302 containerd[1939]: time="2025-03-17T17:25:57.926021936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:57.926302 containerd[1939]: time="2025-03-17T17:25:57.926065220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:57.926302 containerd[1939]: time="2025-03-17T17:25:57.926221400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:57.958775 systemd[1]: Started cri-containerd-58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31.scope - libcontainer container 58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31. Mar 17 17:25:58.021834 containerd[1939]: time="2025-03-17T17:25:58.021778960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-l46w9,Uid:45d1b902-7741-4329-b5f7-6883992bfb94,Namespace:default,Attempt:6,} returns sandbox id \"58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31\"" Mar 17 17:25:58.866895 kubelet[2392]: E0317 17:25:58.866818 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:25:59.021650 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Mar 17 17:25:59.096145 systemd-networkd[1849]: cali58906009a0c: Gained IPv6LL Mar 17 17:25:59.113589 kernel: bpftool[3639]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 17 17:25:59.479934 systemd-networkd[1849]: cali127ad2a7329: Gained IPv6LL Mar 17 17:25:59.502212 (udev-worker)[3306]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:25:59.503478 systemd-networkd[1849]: vxlan.calico: Link UP Mar 17 17:25:59.503498 systemd-networkd[1849]: vxlan.calico: Gained carrier Mar 17 17:25:59.630381 containerd[1939]: time="2025-03-17T17:25:59.630285644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:59.632924 containerd[1939]: time="2025-03-17T17:25:59.632803112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7473801" Mar 17 17:25:59.635994 containerd[1939]: time="2025-03-17T17:25:59.635883776Z" level=info msg="ImageCreate event name:\"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:59.643471 containerd[1939]: time="2025-03-17T17:25:59.641766644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:59.643471 containerd[1939]: time="2025-03-17T17:25:59.643223456Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"8843558\" in 1.725317468s" Mar 17 17:25:59.643471 containerd[1939]: time="2025-03-17T17:25:59.643261556Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\"" Mar 17 17:25:59.647804 containerd[1939]: time="2025-03-17T17:25:59.647707100Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 17:25:59.652219 containerd[1939]: time="2025-03-17T17:25:59.652167740Z" level=info msg="CreateContainer within sandbox \"c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 17 17:25:59.678050 containerd[1939]: time="2025-03-17T17:25:59.676908489Z" level=info msg="CreateContainer within sandbox \"c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f68107bdaffd1a858757de98cffafd6f7a0a3ac8e72e1f2b2984ab13dc2810e7\"" Mar 17 17:25:59.685584 containerd[1939]: time="2025-03-17T17:25:59.685535037Z" level=info msg="StartContainer for \"f68107bdaffd1a858757de98cffafd6f7a0a3ac8e72e1f2b2984ab13dc2810e7\"" Mar 17 17:25:59.745762 systemd[1]: Started cri-containerd-f68107bdaffd1a858757de98cffafd6f7a0a3ac8e72e1f2b2984ab13dc2810e7.scope - libcontainer container f68107bdaffd1a858757de98cffafd6f7a0a3ac8e72e1f2b2984ab13dc2810e7. 
Mar 17 17:25:59.815575 containerd[1939]: time="2025-03-17T17:25:59.815414109Z" level=info msg="StartContainer for \"f68107bdaffd1a858757de98cffafd6f7a0a3ac8e72e1f2b2984ab13dc2810e7\" returns successfully" Mar 17 17:25:59.867561 kubelet[2392]: E0317 17:25:59.867151 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:00.760072 systemd-networkd[1849]: vxlan.calico: Gained IPv6LL Mar 17 17:26:00.867563 kubelet[2392]: E0317 17:26:00.867482 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:01.868590 kubelet[2392]: E0317 17:26:01.868520 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:02.868833 kubelet[2392]: E0317 17:26:02.868710 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:03.281666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount452360012.mount: Deactivated successfully. 
Mar 17 17:26:03.309529 ntpd[1908]: Listen normally on 8 vxlan.calico 192.168.86.128:123 Mar 17 17:26:03.312465 ntpd[1908]: 17 Mar 17:26:03 ntpd[1908]: Listen normally on 8 vxlan.calico 192.168.86.128:123 Mar 17 17:26:03.312465 ntpd[1908]: 17 Mar 17:26:03 ntpd[1908]: Listen normally on 9 cali58906009a0c [fe80::ecee:eeff:feee:eeee%3]:123 Mar 17 17:26:03.312465 ntpd[1908]: 17 Mar 17:26:03 ntpd[1908]: Listen normally on 10 cali127ad2a7329 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 17 17:26:03.312465 ntpd[1908]: 17 Mar 17:26:03 ntpd[1908]: Listen normally on 11 vxlan.calico [fe80::64c4:90ff:fe40:8644%5]:123 Mar 17 17:26:03.310680 ntpd[1908]: Listen normally on 9 cali58906009a0c [fe80::ecee:eeff:feee:eeee%3]:123 Mar 17 17:26:03.310771 ntpd[1908]: Listen normally on 10 cali127ad2a7329 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 17 17:26:03.310839 ntpd[1908]: Listen normally on 11 vxlan.calico [fe80::64c4:90ff:fe40:8644%5]:123 Mar 17 17:26:03.869082 kubelet[2392]: E0317 17:26:03.869037 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:04.677768 containerd[1939]: time="2025-03-17T17:26:04.677698345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:04.679743 containerd[1939]: time="2025-03-17T17:26:04.679652797Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69703867" Mar 17 17:26:04.680858 containerd[1939]: time="2025-03-17T17:26:04.680758345Z" level=info msg="ImageCreate event name:\"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:04.686392 containerd[1939]: time="2025-03-17T17:26:04.686292373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:04.688954 containerd[1939]: time="2025-03-17T17:26:04.688734769Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"69703745\" in 5.040692113s" Mar 17 17:26:04.688954 containerd[1939]: time="2025-03-17T17:26:04.688804741Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\"" Mar 17 17:26:04.691530 containerd[1939]: time="2025-03-17T17:26:04.691296241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 17 17:26:04.695527 containerd[1939]: time="2025-03-17T17:26:04.695243701Z" level=info msg="CreateContainer within sandbox \"58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Mar 17 17:26:04.716732 containerd[1939]: time="2025-03-17T17:26:04.716593898Z" level=info msg="CreateContainer within sandbox \"58e0620c44b4abedecabd21343a423142f838befcc63bf3a90f41eaf11006b31\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"393f87be7afc14b3b3f1f20c294f3be85e2d3d9240b06a41745e39d6fe348b85\"" Mar 17 17:26:04.717691 containerd[1939]: time="2025-03-17T17:26:04.717643526Z" level=info msg="StartContainer for \"393f87be7afc14b3b3f1f20c294f3be85e2d3d9240b06a41745e39d6fe348b85\"" Mar 17 17:26:04.778777 systemd[1]: Started cri-containerd-393f87be7afc14b3b3f1f20c294f3be85e2d3d9240b06a41745e39d6fe348b85.scope - libcontainer container 393f87be7afc14b3b3f1f20c294f3be85e2d3d9240b06a41745e39d6fe348b85. 
Mar 17 17:26:04.833728 containerd[1939]: time="2025-03-17T17:26:04.832795454Z" level=info msg="StartContainer for \"393f87be7afc14b3b3f1f20c294f3be85e2d3d9240b06a41745e39d6fe348b85\" returns successfully" Mar 17 17:26:04.870213 kubelet[2392]: E0317 17:26:04.870163 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:05.328072 kubelet[2392]: I0317 17:26:05.327953 2392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-l46w9" podStartSLOduration=8.66195164 podStartE2EDuration="15.327931633s" podCreationTimestamp="2025-03-17 17:25:50 +0000 UTC" firstStartedPulling="2025-03-17 17:25:58.02499382 +0000 UTC m=+22.147675251" lastFinishedPulling="2025-03-17 17:26:04.690973765 +0000 UTC m=+28.813655244" observedRunningTime="2025-03-17 17:26:05.326841925 +0000 UTC m=+29.449523392" watchObservedRunningTime="2025-03-17 17:26:05.327931633 +0000 UTC m=+29.450613064" Mar 17 17:26:05.870409 kubelet[2392]: E0317 17:26:05.870329 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:06.446045 containerd[1939]: time="2025-03-17T17:26:06.445608878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:06.447119 containerd[1939]: time="2025-03-17T17:26:06.447020462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13121717" Mar 17 17:26:06.448315 containerd[1939]: time="2025-03-17T17:26:06.448241594Z" level=info msg="ImageCreate event name:\"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:06.451933 containerd[1939]: time="2025-03-17T17:26:06.451863974Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:06.453703 containerd[1939]: time="2025-03-17T17:26:06.453490514Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"14491426\" in 1.762130781s" Mar 17 17:26:06.453703 containerd[1939]: time="2025-03-17T17:26:06.453550058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\"" Mar 17 17:26:06.457650 containerd[1939]: time="2025-03-17T17:26:06.457594550Z" level=info msg="CreateContainer within sandbox \"c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 17 17:26:06.485704 containerd[1939]: time="2025-03-17T17:26:06.485531726Z" level=info msg="CreateContainer within sandbox \"c1ee79fb9c67440bcc06d59f44008c61a4f3691a672085eb01517dd0ca28d79b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4ebe4575ab2fb35fe37220fc72847f8442adb938949fc8c661c92c0adaaced8b\"" Mar 17 17:26:06.486483 containerd[1939]: time="2025-03-17T17:26:06.486354998Z" level=info msg="StartContainer for \"4ebe4575ab2fb35fe37220fc72847f8442adb938949fc8c661c92c0adaaced8b\"" Mar 17 17:26:06.539540 systemd[1]: run-containerd-runc-k8s.io-4ebe4575ab2fb35fe37220fc72847f8442adb938949fc8c661c92c0adaaced8b-runc.uChw4p.mount: Deactivated successfully. 
Mar 17 17:26:06.549785 systemd[1]: Started cri-containerd-4ebe4575ab2fb35fe37220fc72847f8442adb938949fc8c661c92c0adaaced8b.scope - libcontainer container 4ebe4575ab2fb35fe37220fc72847f8442adb938949fc8c661c92c0adaaced8b. Mar 17 17:26:06.604179 containerd[1939]: time="2025-03-17T17:26:06.604085319Z" level=info msg="StartContainer for \"4ebe4575ab2fb35fe37220fc72847f8442adb938949fc8c661c92c0adaaced8b\" returns successfully" Mar 17 17:26:06.871107 kubelet[2392]: E0317 17:26:06.870576 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:06.993592 kubelet[2392]: I0317 17:26:06.993515 2392 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 17 17:26:06.993592 kubelet[2392]: I0317 17:26:06.993562 2392 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 17 17:26:07.354403 kubelet[2392]: I0317 17:26:07.354185 2392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xjw9g" podStartSLOduration=21.815887497 podStartE2EDuration="30.354166791s" podCreationTimestamp="2025-03-17 17:25:37 +0000 UTC" firstStartedPulling="2025-03-17 17:25:57.917055968 +0000 UTC m=+22.039737399" lastFinishedPulling="2025-03-17 17:26:06.455335262 +0000 UTC m=+30.578016693" observedRunningTime="2025-03-17 17:26:07.353851479 +0000 UTC m=+31.476532922" watchObservedRunningTime="2025-03-17 17:26:07.354166791 +0000 UTC m=+31.476848234" Mar 17 17:26:07.871162 kubelet[2392]: E0317 17:26:07.871073 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:08.872158 kubelet[2392]: E0317 17:26:08.872076 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Mar 17 17:26:09.873005 kubelet[2392]: E0317 17:26:09.872917 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:10.873348 kubelet[2392]: E0317 17:26:10.873260 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:11.873617 kubelet[2392]: E0317 17:26:11.873508 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:12.874237 kubelet[2392]: E0317 17:26:12.874162 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:13.142633 update_engine[1913]: I20250317 17:26:13.141908 1913 update_attempter.cc:509] Updating boot flags... Mar 17 17:26:13.230561 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3911) Mar 17 17:26:13.518665 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3911) Mar 17 17:26:13.711586 kubelet[2392]: I0317 17:26:13.709646 2392 topology_manager.go:215] "Topology Admit Handler" podUID="9834b48e-7d08-43a8-8ac5-1d8d738a971f" podNamespace="default" podName="nfs-server-provisioner-0" Mar 17 17:26:13.757213 systemd[1]: Created slice kubepods-besteffort-pod9834b48e_7d08_43a8_8ac5_1d8d738a971f.slice - libcontainer container kubepods-besteffort-pod9834b48e_7d08_43a8_8ac5_1d8d738a971f.slice. 
Mar 17 17:26:13.771711 kubelet[2392]: I0317 17:26:13.771633 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9834b48e-7d08-43a8-8ac5-1d8d738a971f-data\") pod \"nfs-server-provisioner-0\" (UID: \"9834b48e-7d08-43a8-8ac5-1d8d738a971f\") " pod="default/nfs-server-provisioner-0" Mar 17 17:26:13.773089 kubelet[2392]: I0317 17:26:13.772333 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfndh\" (UniqueName: \"kubernetes.io/projected/9834b48e-7d08-43a8-8ac5-1d8d738a971f-kube-api-access-tfndh\") pod \"nfs-server-provisioner-0\" (UID: \"9834b48e-7d08-43a8-8ac5-1d8d738a971f\") " pod="default/nfs-server-provisioner-0" Mar 17 17:26:13.830581 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3911) Mar 17 17:26:13.875174 kubelet[2392]: E0317 17:26:13.875126 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:14.064983 containerd[1939]: time="2025-03-17T17:26:14.064257332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9834b48e-7d08-43a8-8ac5-1d8d738a971f,Namespace:default,Attempt:0,}" Mar 17 17:26:14.289042 (udev-worker)[3902]: Network interface NamePolicy= disabled on kernel command line. 
Mar 17 17:26:14.293228 systemd-networkd[1849]: cali60e51b789ff: Link UP Mar 17 17:26:14.294364 systemd-networkd[1849]: cali60e51b789ff: Gained carrier Mar 17 17:26:14.319527 containerd[1939]: 2025-03-17 17:26:14.152 [INFO][4171] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.28.142-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 9834b48e-7d08-43a8-8ac5-1d8d738a971f 1219 0 2025-03-17 17:26:13 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.28.142 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.142-k8s-nfs--server--provisioner--0-" Mar 17 17:26:14.319527 containerd[1939]: 2025-03-17 17:26:14.152 [INFO][4171] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.142-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:26:14.319527 containerd[1939]: 2025-03-17 17:26:14.203 [INFO][4184] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f" HandleID="k8s-pod-network.6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f" Workload="172.31.28.142-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:26:14.319527 containerd[1939]: 2025-03-17 17:26:14.233 [INFO][4184] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f" HandleID="k8s-pod-network.6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f" Workload="172.31.28.142-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316ae0), Attrs:map[string]string{"namespace":"default", "node":"172.31.28.142", "pod":"nfs-server-provisioner-0", "timestamp":"2025-03-17 17:26:14.203733489 +0000 UTC"}, Hostname:"172.31.28.142", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:26:14.319527 containerd[1939]: 2025-03-17 17:26:14.233 [INFO][4184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:26:14.319527 containerd[1939]: 2025-03-17 17:26:14.233 [INFO][4184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:26:14.319527 containerd[1939]: 2025-03-17 17:26:14.233 [INFO][4184] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.28.142' Mar 17 17:26:14.319527 containerd[1939]: 2025-03-17 17:26:14.237 [INFO][4184] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f" host="172.31.28.142" Mar 17 17:26:14.319527 containerd[1939]: 2025-03-17 17:26:14.244 [INFO][4184] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.28.142" Mar 17 17:26:14.319527 containerd[1939]: 2025-03-17 17:26:14.251 [INFO][4184] ipam/ipam.go 489: Trying affinity for 192.168.86.128/26 host="172.31.28.142" Mar 17 17:26:14.319527 containerd[1939]: 2025-03-17 17:26:14.254 [INFO][4184] ipam/ipam.go 155: Attempting to load block cidr=192.168.86.128/26 host="172.31.28.142" Mar 17 17:26:14.319527 containerd[1939]: 2025-03-17 17:26:14.258 [INFO][4184] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="172.31.28.142" Mar 17 17:26:14.319527 containerd[1939]: 2025-03-17 17:26:14.258 [INFO][4184] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f" host="172.31.28.142" Mar 17 17:26:14.319527 containerd[1939]: 2025-03-17 17:26:14.261 [INFO][4184] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f Mar 17 17:26:14.319527 containerd[1939]: 2025-03-17 17:26:14.270 [INFO][4184] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f" host="172.31.28.142" Mar 17 17:26:14.319527 containerd[1939]: 2025-03-17 17:26:14.281 [INFO][4184] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.86.131/26] block=192.168.86.128/26 
handle="k8s-pod-network.6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f" host="172.31.28.142" Mar 17 17:26:14.319527 containerd[1939]: 2025-03-17 17:26:14.281 [INFO][4184] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.131/26] handle="k8s-pod-network.6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f" host="172.31.28.142" Mar 17 17:26:14.319527 containerd[1939]: 2025-03-17 17:26:14.282 [INFO][4184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:26:14.319527 containerd[1939]: 2025-03-17 17:26:14.282 [INFO][4184] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.131/26] IPv6=[] ContainerID="6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f" HandleID="k8s-pod-network.6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f" Workload="172.31.28.142-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:26:14.321661 containerd[1939]: 2025-03-17 17:26:14.285 [INFO][4171] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.142-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.142-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"9834b48e-7d08-43a8-8ac5-1d8d738a971f", ResourceVersion:"1219", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 26, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.142", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.86.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:26:14.321661 containerd[1939]: 2025-03-17 17:26:14.285 [INFO][4171] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.86.131/32] ContainerID="6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.142-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:26:14.321661 containerd[1939]: 2025-03-17 17:26:14.285 [INFO][4171] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.142-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:26:14.321661 containerd[1939]: 2025-03-17 17:26:14.295 [INFO][4171] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.142-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:26:14.322378 containerd[1939]: 2025-03-17 17:26:14.296 [INFO][4171] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.142-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.142-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"9834b48e-7d08-43a8-8ac5-1d8d738a971f", ResourceVersion:"1219", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 26, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.142", ContainerID:"6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.86.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"92:02:f3:28:b7:6c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:26:14.322378 containerd[1939]: 2025-03-17 17:26:14.316 [INFO][4171] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.142-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:26:14.362295 containerd[1939]: time="2025-03-17T17:26:14.361867341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:26:14.362295 containerd[1939]: time="2025-03-17T17:26:14.361986597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:26:14.362295 containerd[1939]: time="2025-03-17T17:26:14.362023833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:26:14.362295 containerd[1939]: time="2025-03-17T17:26:14.362203305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:26:14.406219 systemd[1]: run-containerd-runc-k8s.io-6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f-runc.U9aLxT.mount: Deactivated successfully. Mar 17 17:26:14.416782 systemd[1]: Started cri-containerd-6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f.scope - libcontainer container 6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f. 
Mar 17 17:26:14.479037 containerd[1939]: time="2025-03-17T17:26:14.478946170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9834b48e-7d08-43a8-8ac5-1d8d738a971f,Namespace:default,Attempt:0,} returns sandbox id \"6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f\"" Mar 17 17:26:14.486368 containerd[1939]: time="2025-03-17T17:26:14.486296122Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 17 17:26:14.876938 kubelet[2392]: E0317 17:26:14.876871 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:15.352386 systemd-networkd[1849]: cali60e51b789ff: Gained IPv6LL Mar 17 17:26:15.877882 kubelet[2392]: E0317 17:26:15.877828 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:16.850690 kubelet[2392]: E0317 17:26:16.850578 2392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:16.880109 kubelet[2392]: E0317 17:26:16.880005 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:17.139682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount699107719.mount: Deactivated successfully. 
Mar 17 17:26:17.880701 kubelet[2392]: E0317 17:26:17.880605 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:18.310920 ntpd[1908]: Listen normally on 12 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Mar 17 17:26:18.311628 ntpd[1908]: 17 Mar 17:26:18 ntpd[1908]: Listen normally on 12 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Mar 17 17:26:18.881699 kubelet[2392]: E0317 17:26:18.881655 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:19.882297 kubelet[2392]: E0317 17:26:19.882214 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:20.011511 containerd[1939]: time="2025-03-17T17:26:20.010200098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:20.013716 containerd[1939]: time="2025-03-17T17:26:20.013126790Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Mar 17 17:26:20.015478 containerd[1939]: time="2025-03-17T17:26:20.015305906Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:20.021789 containerd[1939]: time="2025-03-17T17:26:20.021655934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:20.024969 containerd[1939]: time="2025-03-17T17:26:20.024419978Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id 
\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.538051448s" Mar 17 17:26:20.024969 containerd[1939]: time="2025-03-17T17:26:20.024553022Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Mar 17 17:26:20.030803 containerd[1939]: time="2025-03-17T17:26:20.030488966Z" level=info msg="CreateContainer within sandbox \"6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 17 17:26:20.056377 containerd[1939]: time="2025-03-17T17:26:20.056174894Z" level=info msg="CreateContainer within sandbox \"6b264b5b0bf79002002fb5e349f469d205e51f16e4ad9e016087ee3e69a7800f\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"e9b7b4f6dfa8e23778f96068f2667e594a0edc4c1a337f1daba0ae8059c92b56\"" Mar 17 17:26:20.057513 containerd[1939]: time="2025-03-17T17:26:20.057397130Z" level=info msg="StartContainer for \"e9b7b4f6dfa8e23778f96068f2667e594a0edc4c1a337f1daba0ae8059c92b56\"" Mar 17 17:26:20.122853 systemd[1]: Started cri-containerd-e9b7b4f6dfa8e23778f96068f2667e594a0edc4c1a337f1daba0ae8059c92b56.scope - libcontainer container e9b7b4f6dfa8e23778f96068f2667e594a0edc4c1a337f1daba0ae8059c92b56. 
Mar 17 17:26:20.178312 containerd[1939]: time="2025-03-17T17:26:20.178212866Z" level=info msg="StartContainer for \"e9b7b4f6dfa8e23778f96068f2667e594a0edc4c1a337f1daba0ae8059c92b56\" returns successfully" Mar 17 17:26:20.417216 kubelet[2392]: I0317 17:26:20.417118 2392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.874515156 podStartE2EDuration="7.417075316s" podCreationTimestamp="2025-03-17 17:26:13 +0000 UTC" firstStartedPulling="2025-03-17 17:26:14.48461785 +0000 UTC m=+38.607299281" lastFinishedPulling="2025-03-17 17:26:20.027178022 +0000 UTC m=+44.149859441" observedRunningTime="2025-03-17 17:26:20.414345256 +0000 UTC m=+44.537026699" watchObservedRunningTime="2025-03-17 17:26:20.417075316 +0000 UTC m=+44.539756759" Mar 17 17:26:20.883194 kubelet[2392]: E0317 17:26:20.883114 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:21.883315 kubelet[2392]: E0317 17:26:21.883249 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:22.883857 kubelet[2392]: E0317 17:26:22.883794 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:23.884910 kubelet[2392]: E0317 17:26:23.884824 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:24.886019 kubelet[2392]: E0317 17:26:24.885953 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:25.886639 kubelet[2392]: E0317 17:26:25.886559 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:26.887509 kubelet[2392]: E0317 17:26:26.887417 2392 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:27.887880 kubelet[2392]: E0317 17:26:27.887803 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:28.888611 kubelet[2392]: E0317 17:26:28.888546 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:29.889026 kubelet[2392]: E0317 17:26:29.888954 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:30.889805 kubelet[2392]: E0317 17:26:30.889730 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:31.890961 kubelet[2392]: E0317 17:26:31.890890 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:32.891599 kubelet[2392]: E0317 17:26:32.891511 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:33.891965 kubelet[2392]: E0317 17:26:33.891887 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:34.892834 kubelet[2392]: E0317 17:26:34.892761 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:35.893805 kubelet[2392]: E0317 17:26:35.893730 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:36.850967 kubelet[2392]: E0317 17:26:36.850902 2392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:36.876523 containerd[1939]: time="2025-03-17T17:26:36.876412749Z" level=info msg="StopPodSandbox 
for \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\"" Mar 17 17:26:36.877424 containerd[1939]: time="2025-03-17T17:26:36.876609033Z" level=info msg="TearDown network for sandbox \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\" successfully" Mar 17 17:26:36.877424 containerd[1939]: time="2025-03-17T17:26:36.876631989Z" level=info msg="StopPodSandbox for \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\" returns successfully" Mar 17 17:26:36.877758 containerd[1939]: time="2025-03-17T17:26:36.877707153Z" level=info msg="RemovePodSandbox for \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\"" Mar 17 17:26:36.877843 containerd[1939]: time="2025-03-17T17:26:36.877761981Z" level=info msg="Forcibly stopping sandbox \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\"" Mar 17 17:26:36.878109 containerd[1939]: time="2025-03-17T17:26:36.877891593Z" level=info msg="TearDown network for sandbox \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\" successfully" Mar 17 17:26:36.883414 containerd[1939]: time="2025-03-17T17:26:36.883333677Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:26:36.883934 containerd[1939]: time="2025-03-17T17:26:36.883415301Z" level=info msg="RemovePodSandbox \"6dfbd33e5ebd1620b97b8e1e93e63f2905c1ee88dff7e0dffd37a9db3bbb231a\" returns successfully" Mar 17 17:26:36.884533 containerd[1939]: time="2025-03-17T17:26:36.884211981Z" level=info msg="StopPodSandbox for \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\"" Mar 17 17:26:36.884533 containerd[1939]: time="2025-03-17T17:26:36.884386677Z" level=info msg="TearDown network for sandbox \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\" successfully" Mar 17 17:26:36.884533 containerd[1939]: time="2025-03-17T17:26:36.884408349Z" level=info msg="StopPodSandbox for \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\" returns successfully" Mar 17 17:26:36.885273 containerd[1939]: time="2025-03-17T17:26:36.885239241Z" level=info msg="RemovePodSandbox for \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\"" Mar 17 17:26:36.885549 containerd[1939]: time="2025-03-17T17:26:36.885408561Z" level=info msg="Forcibly stopping sandbox \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\"" Mar 17 17:26:36.885810 containerd[1939]: time="2025-03-17T17:26:36.885692697Z" level=info msg="TearDown network for sandbox \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\" successfully" Mar 17 17:26:36.889678 containerd[1939]: time="2025-03-17T17:26:36.889624197Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:26:36.890071 containerd[1939]: time="2025-03-17T17:26:36.889894245Z" level=info msg="RemovePodSandbox \"a61bf34469f6816ea930c4f93bfa811eff510f0eb70e3f251758002870f82a8c\" returns successfully" Mar 17 17:26:36.890945 containerd[1939]: time="2025-03-17T17:26:36.890671533Z" level=info msg="StopPodSandbox for \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\"" Mar 17 17:26:36.890945 containerd[1939]: time="2025-03-17T17:26:36.890827953Z" level=info msg="TearDown network for sandbox \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\" successfully" Mar 17 17:26:36.890945 containerd[1939]: time="2025-03-17T17:26:36.890851701Z" level=info msg="StopPodSandbox for \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\" returns successfully" Mar 17 17:26:36.891809 containerd[1939]: time="2025-03-17T17:26:36.891684777Z" level=info msg="RemovePodSandbox for \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\"" Mar 17 17:26:36.891809 containerd[1939]: time="2025-03-17T17:26:36.891734373Z" level=info msg="Forcibly stopping sandbox \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\"" Mar 17 17:26:36.892021 containerd[1939]: time="2025-03-17T17:26:36.891867993Z" level=info msg="TearDown network for sandbox \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\" successfully" Mar 17 17:26:36.894082 kubelet[2392]: E0317 17:26:36.894021 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:36.896136 containerd[1939]: time="2025-03-17T17:26:36.896058009Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:26:36.896366 containerd[1939]: time="2025-03-17T17:26:36.896141313Z" level=info msg="RemovePodSandbox \"bcab78d62fab70aa38fcae604ebb313cf993011defc09143a9d2cf55f85a8241\" returns successfully" Mar 17 17:26:36.897023 containerd[1939]: time="2025-03-17T17:26:36.896950305Z" level=info msg="StopPodSandbox for \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\"" Mar 17 17:26:36.897153 containerd[1939]: time="2025-03-17T17:26:36.897106185Z" level=info msg="TearDown network for sandbox \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\" successfully" Mar 17 17:26:36.897153 containerd[1939]: time="2025-03-17T17:26:36.897128013Z" level=info msg="StopPodSandbox for \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\" returns successfully" Mar 17 17:26:36.897691 containerd[1939]: time="2025-03-17T17:26:36.897644889Z" level=info msg="RemovePodSandbox for \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\"" Mar 17 17:26:36.897793 containerd[1939]: time="2025-03-17T17:26:36.897694581Z" level=info msg="Forcibly stopping sandbox \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\"" Mar 17 17:26:36.897856 containerd[1939]: time="2025-03-17T17:26:36.897819765Z" level=info msg="TearDown network for sandbox \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\" successfully" Mar 17 17:26:36.901473 containerd[1939]: time="2025-03-17T17:26:36.901147173Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:26:36.901473 containerd[1939]: time="2025-03-17T17:26:36.901245837Z" level=info msg="RemovePodSandbox \"b8e84e949e47f81a3785fd00b2d48afe1a992f818a58444213bf87467e6f4144\" returns successfully" Mar 17 17:26:36.901924 containerd[1939]: time="2025-03-17T17:26:36.901878669Z" level=info msg="StopPodSandbox for \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\"" Mar 17 17:26:36.904480 containerd[1939]: time="2025-03-17T17:26:36.902046489Z" level=info msg="TearDown network for sandbox \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\" successfully" Mar 17 17:26:36.904480 containerd[1939]: time="2025-03-17T17:26:36.902081049Z" level=info msg="StopPodSandbox for \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\" returns successfully" Mar 17 17:26:36.904480 containerd[1939]: time="2025-03-17T17:26:36.903110505Z" level=info msg="RemovePodSandbox for \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\"" Mar 17 17:26:36.904480 containerd[1939]: time="2025-03-17T17:26:36.903166389Z" level=info msg="Forcibly stopping sandbox \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\"" Mar 17 17:26:36.904480 containerd[1939]: time="2025-03-17T17:26:36.903316689Z" level=info msg="TearDown network for sandbox \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\" successfully" Mar 17 17:26:36.907148 containerd[1939]: time="2025-03-17T17:26:36.907056285Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:26:36.907325 containerd[1939]: time="2025-03-17T17:26:36.907212621Z" level=info msg="RemovePodSandbox \"dc3b60b3e7435844e6aada0a26dda60b46253ef2218a4bf2719bc41e6de6cc55\" returns successfully" Mar 17 17:26:36.908149 containerd[1939]: time="2025-03-17T17:26:36.908091261Z" level=info msg="StopPodSandbox for \"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\"" Mar 17 17:26:36.908313 containerd[1939]: time="2025-03-17T17:26:36.908268081Z" level=info msg="TearDown network for sandbox \"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\" successfully" Mar 17 17:26:36.908313 containerd[1939]: time="2025-03-17T17:26:36.908302233Z" level=info msg="StopPodSandbox for \"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\" returns successfully" Mar 17 17:26:36.909225 containerd[1939]: time="2025-03-17T17:26:36.909071253Z" level=info msg="RemovePodSandbox for \"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\"" Mar 17 17:26:36.909225 containerd[1939]: time="2025-03-17T17:26:36.909122757Z" level=info msg="Forcibly stopping sandbox \"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\"" Mar 17 17:26:36.909564 containerd[1939]: time="2025-03-17T17:26:36.909252273Z" level=info msg="TearDown network for sandbox \"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\" successfully" Mar 17 17:26:36.912590 containerd[1939]: time="2025-03-17T17:26:36.912502461Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:26:36.912716 containerd[1939]: time="2025-03-17T17:26:36.912653157Z" level=info msg="RemovePodSandbox \"f6be87d7828ba20ec27a32c55531f722b2b177076b7db3e210d987a3bb93dd6e\" returns successfully" Mar 17 17:26:36.913682 containerd[1939]: time="2025-03-17T17:26:36.913502181Z" level=info msg="StopPodSandbox for \"eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188\"" Mar 17 17:26:36.913682 containerd[1939]: time="2025-03-17T17:26:36.913675221Z" level=info msg="TearDown network for sandbox \"eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188\" successfully" Mar 17 17:26:36.913910 containerd[1939]: time="2025-03-17T17:26:36.913700589Z" level=info msg="StopPodSandbox for \"eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188\" returns successfully" Mar 17 17:26:36.914795 containerd[1939]: time="2025-03-17T17:26:36.914497845Z" level=info msg="RemovePodSandbox for \"eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188\"" Mar 17 17:26:36.914795 containerd[1939]: time="2025-03-17T17:26:36.914541561Z" level=info msg="Forcibly stopping sandbox \"eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188\"" Mar 17 17:26:36.914795 containerd[1939]: time="2025-03-17T17:26:36.914674833Z" level=info msg="TearDown network for sandbox \"eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188\" successfully" Mar 17 17:26:36.918139 containerd[1939]: time="2025-03-17T17:26:36.918032386Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:26:36.918139 containerd[1939]: time="2025-03-17T17:26:36.918113698Z" level=info msg="RemovePodSandbox \"eb7791f615f9837f1a0b1c53bdec4c7414fb307a2808a2397175dacd54633188\" returns successfully" Mar 17 17:26:36.918969 containerd[1939]: time="2025-03-17T17:26:36.918700906Z" level=info msg="StopPodSandbox for \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\"" Mar 17 17:26:36.918969 containerd[1939]: time="2025-03-17T17:26:36.918851854Z" level=info msg="TearDown network for sandbox \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\" successfully" Mar 17 17:26:36.918969 containerd[1939]: time="2025-03-17T17:26:36.918874750Z" level=info msg="StopPodSandbox for \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\" returns successfully" Mar 17 17:26:36.919720 containerd[1939]: time="2025-03-17T17:26:36.919659922Z" level=info msg="RemovePodSandbox for \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\"" Mar 17 17:26:36.919917 containerd[1939]: time="2025-03-17T17:26:36.919825678Z" level=info msg="Forcibly stopping sandbox \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\"" Mar 17 17:26:36.920230 containerd[1939]: time="2025-03-17T17:26:36.920117026Z" level=info msg="TearDown network for sandbox \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\" successfully" Mar 17 17:26:36.924091 containerd[1939]: time="2025-03-17T17:26:36.924042514Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:26:36.924456 containerd[1939]: time="2025-03-17T17:26:36.924284626Z" level=info msg="RemovePodSandbox \"e792085b10e3adda4994285427aa2e38fb4e9dbd9a3f9cc71e2d479b34a16508\" returns successfully" Mar 17 17:26:36.925113 containerd[1939]: time="2025-03-17T17:26:36.924857098Z" level=info msg="StopPodSandbox for \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\"" Mar 17 17:26:36.925449 containerd[1939]: time="2025-03-17T17:26:36.925284550Z" level=info msg="TearDown network for sandbox \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\" successfully" Mar 17 17:26:36.925449 containerd[1939]: time="2025-03-17T17:26:36.925318294Z" level=info msg="StopPodSandbox for \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\" returns successfully" Mar 17 17:26:36.925908 containerd[1939]: time="2025-03-17T17:26:36.925851094Z" level=info msg="RemovePodSandbox for \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\"" Mar 17 17:26:36.925976 containerd[1939]: time="2025-03-17T17:26:36.925914622Z" level=info msg="Forcibly stopping sandbox \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\"" Mar 17 17:26:36.926075 containerd[1939]: time="2025-03-17T17:26:36.926044486Z" level=info msg="TearDown network for sandbox \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\" successfully" Mar 17 17:26:36.929359 containerd[1939]: time="2025-03-17T17:26:36.929292106Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:26:36.929507 containerd[1939]: time="2025-03-17T17:26:36.929368210Z" level=info msg="RemovePodSandbox \"a9e5ac01081ecc9135a04d867b5a21cdef1f0089d96471e5c080f8e305412fae\" returns successfully" Mar 17 17:26:36.930492 containerd[1939]: time="2025-03-17T17:26:36.930234766Z" level=info msg="StopPodSandbox for \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\"" Mar 17 17:26:36.930492 containerd[1939]: time="2025-03-17T17:26:36.930384898Z" level=info msg="TearDown network for sandbox \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\" successfully" Mar 17 17:26:36.930492 containerd[1939]: time="2025-03-17T17:26:36.930405970Z" level=info msg="StopPodSandbox for \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\" returns successfully" Mar 17 17:26:36.931208 containerd[1939]: time="2025-03-17T17:26:36.931149970Z" level=info msg="RemovePodSandbox for \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\"" Mar 17 17:26:36.931208 containerd[1939]: time="2025-03-17T17:26:36.931201402Z" level=info msg="Forcibly stopping sandbox \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\"" Mar 17 17:26:36.931382 containerd[1939]: time="2025-03-17T17:26:36.931322314Z" level=info msg="TearDown network for sandbox \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\" successfully" Mar 17 17:26:36.934945 containerd[1939]: time="2025-03-17T17:26:36.934849834Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:26:36.935075 containerd[1939]: time="2025-03-17T17:26:36.934996558Z" level=info msg="RemovePodSandbox \"7eeb429b01603c236fcb26261e64bd3d67b005f1eb8eb3cafda7cb538634bcd0\" returns successfully" Mar 17 17:26:36.936088 containerd[1939]: time="2025-03-17T17:26:36.936034738Z" level=info msg="StopPodSandbox for \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\"" Mar 17 17:26:36.937898 containerd[1939]: time="2025-03-17T17:26:36.936199342Z" level=info msg="TearDown network for sandbox \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\" successfully" Mar 17 17:26:36.937898 containerd[1939]: time="2025-03-17T17:26:36.936231274Z" level=info msg="StopPodSandbox for \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\" returns successfully" Mar 17 17:26:36.937898 containerd[1939]: time="2025-03-17T17:26:36.936788998Z" level=info msg="RemovePodSandbox for \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\"" Mar 17 17:26:36.937898 containerd[1939]: time="2025-03-17T17:26:36.936833614Z" level=info msg="Forcibly stopping sandbox \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\"" Mar 17 17:26:36.937898 containerd[1939]: time="2025-03-17T17:26:36.936953206Z" level=info msg="TearDown network for sandbox \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\" successfully" Mar 17 17:26:36.940355 containerd[1939]: time="2025-03-17T17:26:36.940303222Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:26:36.940583 containerd[1939]: time="2025-03-17T17:26:36.940552054Z" level=info msg="RemovePodSandbox \"afa2d8961a8db4ed2b33d9ec7283d2b22f4e63a77bf52f88836914248b852c36\" returns successfully" Mar 17 17:26:36.941482 containerd[1939]: time="2025-03-17T17:26:36.941413018Z" level=info msg="StopPodSandbox for \"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\"" Mar 17 17:26:36.941771 containerd[1939]: time="2025-03-17T17:26:36.941730946Z" level=info msg="TearDown network for sandbox \"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\" successfully" Mar 17 17:26:36.941878 containerd[1939]: time="2025-03-17T17:26:36.941851966Z" level=info msg="StopPodSandbox for \"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\" returns successfully" Mar 17 17:26:36.942605 containerd[1939]: time="2025-03-17T17:26:36.942568030Z" level=info msg="RemovePodSandbox for \"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\"" Mar 17 17:26:36.942823 containerd[1939]: time="2025-03-17T17:26:36.942767878Z" level=info msg="Forcibly stopping sandbox \"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\"" Mar 17 17:26:36.943072 containerd[1939]: time="2025-03-17T17:26:36.943040770Z" level=info msg="TearDown network for sandbox \"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\" successfully" Mar 17 17:26:36.953583 containerd[1939]: time="2025-03-17T17:26:36.953529382Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:26:36.953828 containerd[1939]: time="2025-03-17T17:26:36.953786350Z" level=info msg="RemovePodSandbox \"40f73f266fe451cb1cba75969283701af8d1483078da8f4459a2477be0ac5d14\" returns successfully" Mar 17 17:26:36.954555 containerd[1939]: time="2025-03-17T17:26:36.954511774Z" level=info msg="StopPodSandbox for \"87ea72bd1ff374715a8fa67ab9e7db7b719d8bbb60d96fb0e1d346a28b1a7759\"" Mar 17 17:26:36.954927 containerd[1939]: time="2025-03-17T17:26:36.954856882Z" level=info msg="TearDown network for sandbox \"87ea72bd1ff374715a8fa67ab9e7db7b719d8bbb60d96fb0e1d346a28b1a7759\" successfully" Mar 17 17:26:36.955043 containerd[1939]: time="2025-03-17T17:26:36.955015774Z" level=info msg="StopPodSandbox for \"87ea72bd1ff374715a8fa67ab9e7db7b719d8bbb60d96fb0e1d346a28b1a7759\" returns successfully" Mar 17 17:26:36.955905 containerd[1939]: time="2025-03-17T17:26:36.955866106Z" level=info msg="RemovePodSandbox for \"87ea72bd1ff374715a8fa67ab9e7db7b719d8bbb60d96fb0e1d346a28b1a7759\"" Mar 17 17:26:36.956297 containerd[1939]: time="2025-03-17T17:26:36.956268358Z" level=info msg="Forcibly stopping sandbox \"87ea72bd1ff374715a8fa67ab9e7db7b719d8bbb60d96fb0e1d346a28b1a7759\"" Mar 17 17:26:36.956571 containerd[1939]: time="2025-03-17T17:26:36.956541238Z" level=info msg="TearDown network for sandbox \"87ea72bd1ff374715a8fa67ab9e7db7b719d8bbb60d96fb0e1d346a28b1a7759\" successfully" Mar 17 17:26:36.959619 containerd[1939]: time="2025-03-17T17:26:36.959571442Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87ea72bd1ff374715a8fa67ab9e7db7b719d8bbb60d96fb0e1d346a28b1a7759\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:26:36.960018 containerd[1939]: time="2025-03-17T17:26:36.959768998Z" level=info msg="RemovePodSandbox \"87ea72bd1ff374715a8fa67ab9e7db7b719d8bbb60d96fb0e1d346a28b1a7759\" returns successfully" Mar 17 17:26:37.894563 kubelet[2392]: E0317 17:26:37.894517 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:38.895576 kubelet[2392]: E0317 17:26:38.895495 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:39.896219 kubelet[2392]: E0317 17:26:39.896154 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:40.896596 kubelet[2392]: E0317 17:26:40.896519 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:41.897396 kubelet[2392]: E0317 17:26:41.897327 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:42.897839 kubelet[2392]: E0317 17:26:42.897778 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:43.898971 kubelet[2392]: E0317 17:26:43.898901 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:44.622167 kubelet[2392]: I0317 17:26:44.622075 2392 topology_manager.go:215] "Topology Admit Handler" podUID="43a9711e-a08a-4e81-a1d7-b5fdccba59ee" podNamespace="default" podName="test-pod-1" Mar 17 17:26:44.633931 systemd[1]: Created slice kubepods-besteffort-pod43a9711e_a08a_4e81_a1d7_b5fdccba59ee.slice - libcontainer container kubepods-besteffort-pod43a9711e_a08a_4e81_a1d7_b5fdccba59ee.slice. 
Mar 17 17:26:44.767390 kubelet[2392]: I0317 17:26:44.767219 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1798cd69-5e38-412b-811b-e9b14f8349b9\" (UniqueName: \"kubernetes.io/nfs/43a9711e-a08a-4e81-a1d7-b5fdccba59ee-pvc-1798cd69-5e38-412b-811b-e9b14f8349b9\") pod \"test-pod-1\" (UID: \"43a9711e-a08a-4e81-a1d7-b5fdccba59ee\") " pod="default/test-pod-1" Mar 17 17:26:44.767390 kubelet[2392]: I0317 17:26:44.767303 2392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4ng8\" (UniqueName: \"kubernetes.io/projected/43a9711e-a08a-4e81-a1d7-b5fdccba59ee-kube-api-access-h4ng8\") pod \"test-pod-1\" (UID: \"43a9711e-a08a-4e81-a1d7-b5fdccba59ee\") " pod="default/test-pod-1" Mar 17 17:26:44.900836 kubelet[2392]: E0317 17:26:44.900613 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:44.908574 kernel: FS-Cache: Loaded Mar 17 17:26:44.954301 kernel: RPC: Registered named UNIX socket transport module. Mar 17 17:26:44.954456 kernel: RPC: Registered udp transport module. Mar 17 17:26:44.954564 kernel: RPC: Registered tcp transport module. Mar 17 17:26:44.957062 kernel: RPC: Registered tcp-with-tls transport module. Mar 17 17:26:44.957149 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Mar 17 17:26:45.299617 kernel: NFS: Registering the id_resolver key type Mar 17 17:26:45.299726 kernel: Key type id_resolver registered Mar 17 17:26:45.300682 kernel: Key type id_legacy registered Mar 17 17:26:45.342495 nfsidmap[4404]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Mar 17 17:26:45.349185 nfsidmap[4405]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Mar 17 17:26:45.541723 containerd[1939]: time="2025-03-17T17:26:45.541624924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:43a9711e-a08a-4e81-a1d7-b5fdccba59ee,Namespace:default,Attempt:0,}" Mar 17 17:26:45.742074 systemd-networkd[1849]: cali5ec59c6bf6e: Link UP Mar 17 17:26:45.742202 (udev-worker)[4392]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:26:45.744033 systemd-networkd[1849]: cali5ec59c6bf6e: Gained carrier Mar 17 17:26:45.765986 containerd[1939]: 2025-03-17 17:26:45.623 [INFO][4406] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.28.142-k8s-test--pod--1-eth0 default 43a9711e-a08a-4e81-a1d7-b5fdccba59ee 1330 0 2025-03-17 17:26:14 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.28.142 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.142-k8s-test--pod--1-" Mar 17 17:26:45.765986 containerd[1939]: 2025-03-17 17:26:45.623 [INFO][4406] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142" Namespace="default" Pod="test-pod-1" 
WorkloadEndpoint="172.31.28.142-k8s-test--pod--1-eth0" Mar 17 17:26:45.765986 containerd[1939]: 2025-03-17 17:26:45.673 [INFO][4418] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142" HandleID="k8s-pod-network.8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142" Workload="172.31.28.142-k8s-test--pod--1-eth0" Mar 17 17:26:45.765986 containerd[1939]: 2025-03-17 17:26:45.690 [INFO][4418] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142" HandleID="k8s-pod-network.8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142" Workload="172.31.28.142-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002915e0), Attrs:map[string]string{"namespace":"default", "node":"172.31.28.142", "pod":"test-pod-1", "timestamp":"2025-03-17 17:26:45.673072925 +0000 UTC"}, Hostname:"172.31.28.142", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:26:45.765986 containerd[1939]: 2025-03-17 17:26:45.690 [INFO][4418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:26:45.765986 containerd[1939]: 2025-03-17 17:26:45.690 [INFO][4418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:26:45.765986 containerd[1939]: 2025-03-17 17:26:45.690 [INFO][4418] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.28.142' Mar 17 17:26:45.765986 containerd[1939]: 2025-03-17 17:26:45.693 [INFO][4418] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142" host="172.31.28.142" Mar 17 17:26:45.765986 containerd[1939]: 2025-03-17 17:26:45.700 [INFO][4418] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.28.142" Mar 17 17:26:45.765986 containerd[1939]: 2025-03-17 17:26:45.706 [INFO][4418] ipam/ipam.go 489: Trying affinity for 192.168.86.128/26 host="172.31.28.142" Mar 17 17:26:45.765986 containerd[1939]: 2025-03-17 17:26:45.710 [INFO][4418] ipam/ipam.go 155: Attempting to load block cidr=192.168.86.128/26 host="172.31.28.142" Mar 17 17:26:45.765986 containerd[1939]: 2025-03-17 17:26:45.713 [INFO][4418] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="172.31.28.142" Mar 17 17:26:45.765986 containerd[1939]: 2025-03-17 17:26:45.714 [INFO][4418] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142" host="172.31.28.142" Mar 17 17:26:45.765986 containerd[1939]: 2025-03-17 17:26:45.716 [INFO][4418] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142 Mar 17 17:26:45.765986 containerd[1939]: 2025-03-17 17:26:45.724 [INFO][4418] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142" host="172.31.28.142" Mar 17 17:26:45.765986 containerd[1939]: 2025-03-17 17:26:45.733 [INFO][4418] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.86.132/26] block=192.168.86.128/26 
handle="k8s-pod-network.8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142" host="172.31.28.142" Mar 17 17:26:45.765986 containerd[1939]: 2025-03-17 17:26:45.733 [INFO][4418] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.132/26] handle="k8s-pod-network.8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142" host="172.31.28.142" Mar 17 17:26:45.765986 containerd[1939]: 2025-03-17 17:26:45.733 [INFO][4418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:26:45.765986 containerd[1939]: 2025-03-17 17:26:45.733 [INFO][4418] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.132/26] IPv6=[] ContainerID="8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142" HandleID="k8s-pod-network.8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142" Workload="172.31.28.142-k8s-test--pod--1-eth0" Mar 17 17:26:45.765986 containerd[1939]: 2025-03-17 17:26:45.736 [INFO][4406] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.142-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.142-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"43a9711e-a08a-4e81-a1d7-b5fdccba59ee", ResourceVersion:"1330", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"172.31.28.142", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.86.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:26:45.768487 containerd[1939]: 2025-03-17 17:26:45.736 [INFO][4406] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.86.132/32] ContainerID="8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.142-k8s-test--pod--1-eth0" Mar 17 17:26:45.768487 containerd[1939]: 2025-03-17 17:26:45.736 [INFO][4406] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.142-k8s-test--pod--1-eth0" Mar 17 17:26:45.768487 containerd[1939]: 2025-03-17 17:26:45.740 [INFO][4406] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.142-k8s-test--pod--1-eth0" Mar 17 17:26:45.768487 containerd[1939]: 2025-03-17 17:26:45.741 [INFO][4406] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.142-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.142-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"43a9711e-a08a-4e81-a1d7-b5fdccba59ee", ResourceVersion:"1330", Generation:0, 
CreationTimestamp:time.Date(2025, time.March, 17, 17, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.142", ContainerID:"8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.86.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"1a:30:1d:b2:e0:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:26:45.768487 containerd[1939]: 2025-03-17 17:26:45.756 [INFO][4406] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.142-k8s-test--pod--1-eth0" Mar 17 17:26:45.808221 containerd[1939]: time="2025-03-17T17:26:45.807354654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:26:45.808221 containerd[1939]: time="2025-03-17T17:26:45.807609522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:26:45.808221 containerd[1939]: time="2025-03-17T17:26:45.807680466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:26:45.808221 containerd[1939]: time="2025-03-17T17:26:45.808103778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:26:45.847810 systemd[1]: Started cri-containerd-8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142.scope - libcontainer container 8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142. Mar 17 17:26:45.901019 kubelet[2392]: E0317 17:26:45.900848 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:45.917243 containerd[1939]: time="2025-03-17T17:26:45.917175558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:43a9711e-a08a-4e81-a1d7-b5fdccba59ee,Namespace:default,Attempt:0,} returns sandbox id \"8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142\"" Mar 17 17:26:45.920952 containerd[1939]: time="2025-03-17T17:26:45.920871366Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 17:26:46.302354 containerd[1939]: time="2025-03-17T17:26:46.302022352Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:46.303053 containerd[1939]: time="2025-03-17T17:26:46.302969488Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Mar 17 17:26:46.310227 containerd[1939]: time="2025-03-17T17:26:46.310073428Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"69703745\" in 389.12471ms" Mar 17 17:26:46.310227 containerd[1939]: time="2025-03-17T17:26:46.310138120Z" 
level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\"" Mar 17 17:26:46.314064 containerd[1939]: time="2025-03-17T17:26:46.314005648Z" level=info msg="CreateContainer within sandbox \"8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142\" for container &ContainerMetadata{Name:test,Attempt:0,}" Mar 17 17:26:46.334943 containerd[1939]: time="2025-03-17T17:26:46.334862032Z" level=info msg="CreateContainer within sandbox \"8d6ad4e8c7148ec1652b3da1778fe4c8f9816dc807b2c0cb144fa916c9a81142\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"0fdc9068725a5922a43b4027c1a96682ef0cf6d021eff90159a65ac2ea4f8486\"" Mar 17 17:26:46.336292 containerd[1939]: time="2025-03-17T17:26:46.335759704Z" level=info msg="StartContainer for \"0fdc9068725a5922a43b4027c1a96682ef0cf6d021eff90159a65ac2ea4f8486\"" Mar 17 17:26:46.403821 systemd[1]: Started cri-containerd-0fdc9068725a5922a43b4027c1a96682ef0cf6d021eff90159a65ac2ea4f8486.scope - libcontainer container 0fdc9068725a5922a43b4027c1a96682ef0cf6d021eff90159a65ac2ea4f8486. 
Mar 17 17:26:46.457468 containerd[1939]: time="2025-03-17T17:26:46.456985733Z" level=info msg="StartContainer for \"0fdc9068725a5922a43b4027c1a96682ef0cf6d021eff90159a65ac2ea4f8486\" returns successfully"
Mar 17 17:26:46.495270 kubelet[2392]: I0317 17:26:46.495129 2392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=32.104184279 podStartE2EDuration="32.495104945s" podCreationTimestamp="2025-03-17 17:26:14 +0000 UTC" firstStartedPulling="2025-03-17 17:26:45.92035641 +0000 UTC m=+70.043037829" lastFinishedPulling="2025-03-17 17:26:46.311277064 +0000 UTC m=+70.433958495" observedRunningTime="2025-03-17 17:26:46.494873225 +0000 UTC m=+70.617554656" watchObservedRunningTime="2025-03-17 17:26:46.495104945 +0000 UTC m=+70.617786388"
Mar 17 17:26:46.890770 systemd[1]: run-containerd-runc-k8s.io-0fdc9068725a5922a43b4027c1a96682ef0cf6d021eff90159a65ac2ea4f8486-runc.O7GPCY.mount: Deactivated successfully.
Mar 17 17:26:46.901418 kubelet[2392]: E0317 17:26:46.901318 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:26:47.224478 systemd-networkd[1849]: cali5ec59c6bf6e: Gained IPv6LL
Mar 17 17:26:47.901869 kubelet[2392]: E0317 17:26:47.901783 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:26:48.902057 kubelet[2392]: E0317 17:26:48.901933 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:26:49.309499 ntpd[1908]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Mar 17 17:26:49.310147 ntpd[1908]: 17 Mar 17:26:49 ntpd[1908]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Mar 17 17:26:49.902873 kubelet[2392]: E0317 17:26:49.902803 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:26:50.903115 kubelet[2392]: E0317 17:26:50.903029 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:26:51.904263 kubelet[2392]: E0317 17:26:51.904194 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:26:52.905107 kubelet[2392]: E0317 17:26:52.905041 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:26:53.905716 kubelet[2392]: E0317 17:26:53.905644 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:26:54.906204 kubelet[2392]: E0317 17:26:54.906062 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:26:55.906572 kubelet[2392]: E0317 17:26:55.906505 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:26:56.850905 kubelet[2392]: E0317 17:26:56.850838 2392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:26:56.907297 kubelet[2392]: E0317 17:26:56.907228 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:26:57.907895 kubelet[2392]: E0317 17:26:57.907813 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:26:58.908563 kubelet[2392]: E0317 17:26:58.908485 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:26:59.909729 kubelet[2392]: E0317 17:26:59.909656 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:00.910751 kubelet[2392]: E0317 17:27:00.910670 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:01.911698 kubelet[2392]: E0317 17:27:01.911633 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:02.911816 kubelet[2392]: E0317 17:27:02.911751 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:03.912227 kubelet[2392]: E0317 17:27:03.912160 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:04.912672 kubelet[2392]: E0317 17:27:04.912607 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:05.913690 kubelet[2392]: E0317 17:27:05.913617 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:06.913988 kubelet[2392]: E0317 17:27:06.913919 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:07.914350 kubelet[2392]: E0317 17:27:07.914278 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:08.710195 kubelet[2392]: E0317 17:27:08.710076 2392 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.142?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 17 17:27:08.915537 kubelet[2392]: E0317 17:27:08.915455 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:09.916412 kubelet[2392]: E0317 17:27:09.916324 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:10.917412 kubelet[2392]: E0317 17:27:10.917333 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:11.918359 kubelet[2392]: E0317 17:27:11.918273 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:12.919400 kubelet[2392]: E0317 17:27:12.919329 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:13.919580 kubelet[2392]: E0317 17:27:13.919512 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:14.920058 kubelet[2392]: E0317 17:27:14.919987 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:15.920975 kubelet[2392]: E0317 17:27:15.920901 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:16.851195 kubelet[2392]: E0317 17:27:16.851135 2392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:16.921616 kubelet[2392]: E0317 17:27:16.921544 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:17.922167 kubelet[2392]: E0317 17:27:17.922092 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:18.711017 kubelet[2392]: E0317 17:27:18.710674 2392 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.142?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 17 17:27:18.922566 kubelet[2392]: E0317 17:27:18.922506 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:19.923686 kubelet[2392]: E0317 17:27:19.923608 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:20.924756 kubelet[2392]: E0317 17:27:20.924691 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:21.925500 kubelet[2392]: E0317 17:27:21.925417 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:22.925897 kubelet[2392]: E0317 17:27:22.925825 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:23.926856 kubelet[2392]: E0317 17:27:23.926781 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:24.927389 kubelet[2392]: E0317 17:27:24.927313 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:25.928321 kubelet[2392]: E0317 17:27:25.928239 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:26.929398 kubelet[2392]: E0317 17:27:26.929332 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:27.930400 kubelet[2392]: E0317 17:27:27.930321 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:28.712324 kubelet[2392]: E0317 17:27:28.712058 2392 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 172.31.28.142)"
Mar 17 17:27:28.931088 kubelet[2392]: E0317 17:27:28.931024 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:29.931729 kubelet[2392]: E0317 17:27:29.931668 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:30.932149 kubelet[2392]: E0317 17:27:30.932087 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:31.932932 kubelet[2392]: E0317 17:27:31.932865 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:32.933911 kubelet[2392]: E0317 17:27:32.933833 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:33.934619 kubelet[2392]: E0317 17:27:33.934554 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:34.935026 kubelet[2392]: E0317 17:27:34.934953 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:35.935626 kubelet[2392]: E0317 17:27:35.935558 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:36.850989 kubelet[2392]: E0317 17:27:36.850919 2392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:36.935981 kubelet[2392]: E0317 17:27:36.935920 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:37.936578 kubelet[2392]: E0317 17:27:37.936510 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:38.713338 kubelet[2392]: E0317 17:27:38.713115 2392 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.142?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 17 17:27:38.748745 kubelet[2392]: E0317 17:27:38.747306 2392 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.142?timeout=10s\": read tcp 172.31.28.142:53650->172.31.23.57:6443: read: connection reset by peer"
Mar 17 17:27:38.748745 kubelet[2392]: I0317 17:27:38.747387 2392 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 17 17:27:38.749751 kubelet[2392]: E0317 17:27:38.749509 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.142?timeout=10s\": dial tcp 172.31.23.57:6443: connect: connection refused" interval="200ms"
Mar 17 17:27:38.936890 kubelet[2392]: E0317 17:27:38.936823 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:38.950801 kubelet[2392]: E0317 17:27:38.950708 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.142?timeout=10s\": dial tcp 172.31.23.57:6443: connect: connection refused" interval="400ms"
Mar 17 17:27:39.351298 kubelet[2392]: E0317 17:27:39.351222 2392 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.28.142\": Get \"https://172.31.23.57:6443/api/v1/nodes/172.31.28.142?resourceVersion=0&timeout=10s\": dial tcp 172.31.23.57:6443: connect: connection refused"
Mar 17 17:27:39.351541 kubelet[2392]: E0317 17:27:39.351212 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.142?timeout=10s\": dial tcp 172.31.23.57:6443: connect: connection refused" interval="800ms"
Mar 17 17:27:39.351988 kubelet[2392]: E0317 17:27:39.351772 2392 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.28.142\": Get \"https://172.31.23.57:6443/api/v1/nodes/172.31.28.142?timeout=10s\": dial tcp 172.31.23.57:6443: connect: connection refused"
Mar 17 17:27:39.352286 kubelet[2392]: E0317 17:27:39.352232 2392 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.28.142\": Get \"https://172.31.23.57:6443/api/v1/nodes/172.31.28.142?timeout=10s\": dial tcp 172.31.23.57:6443: connect: connection refused"
Mar 17 17:27:39.352963 kubelet[2392]: E0317 17:27:39.352791 2392 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.28.142\": Get \"https://172.31.23.57:6443/api/v1/nodes/172.31.28.142?timeout=10s\": dial tcp 172.31.23.57:6443: connect: connection refused"
Mar 17 17:27:39.353214 kubelet[2392]: E0317 17:27:39.353159 2392 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.28.142\": Get \"https://172.31.23.57:6443/api/v1/nodes/172.31.28.142?timeout=10s\": dial tcp 172.31.23.57:6443: connect: connection refused"
Mar 17 17:27:39.353214 kubelet[2392]: E0317 17:27:39.353195 2392 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
Mar 17 17:27:39.938034 kubelet[2392]: E0317 17:27:39.937973 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:40.939119 kubelet[2392]: E0317 17:27:40.939038 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:41.939288 kubelet[2392]: E0317 17:27:41.939203 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:42.940113 kubelet[2392]: E0317 17:27:42.940049 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:43.941042 kubelet[2392]: E0317 17:27:43.940960 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:44.941806 kubelet[2392]: E0317 17:27:44.941740 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:45.942186 kubelet[2392]: E0317 17:27:45.942112 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:46.943407 kubelet[2392]: E0317 17:27:46.943333 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:47.943974 kubelet[2392]: E0317 17:27:47.943897 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:48.944876 kubelet[2392]: E0317 17:27:48.944774 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:49.945086 kubelet[2392]: E0317 17:27:49.945009 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:50.152197 kubelet[2392]: E0317 17:27:50.152100 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.142?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Mar 17 17:27:50.945948 kubelet[2392]: E0317 17:27:50.945864 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:51.946446 kubelet[2392]: E0317 17:27:51.946377 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:52.946684 kubelet[2392]: E0317 17:27:52.946593 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:53.947300 kubelet[2392]: E0317 17:27:53.947225 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"