Feb 13 19:04:49.212434 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 19:04:49.212486 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:46:24 -00 2025
Feb 13 19:04:49.212512 kernel: KASLR disabled due to lack of seed
Feb 13 19:04:49.212530 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:04:49.212546 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98
Feb 13 19:04:49.212561 kernel: secureboot: Secure boot disabled
Feb 13 19:04:49.212578 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:04:49.212594 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 19:04:49.212609 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 19:04:49.212624 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:04:49.212644 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 19:04:49.212661 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:04:49.212677 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 19:04:49.212693 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 19:04:49.212712 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 19:04:49.212733 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:04:49.212751 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 19:04:49.212768 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 19:04:49.212785 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 19:04:49.212802 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 19:04:49.212818 kernel: printk: bootconsole [uart0] enabled
Feb 13 19:04:49.212834 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:04:49.212851 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:04:49.212867 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 19:04:49.212884 kernel: Zone ranges:
Feb 13 19:04:49.212900 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 19:04:49.212921 kernel: DMA32 empty
Feb 13 19:04:49.212937 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 19:04:49.212954 kernel: Movable zone start for each node
Feb 13 19:04:49.215064 kernel: Early memory node ranges
Feb 13 19:04:49.215086 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 19:04:49.215103 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 19:04:49.215120 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 19:04:49.215136 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 19:04:49.215152 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 19:04:49.215168 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 19:04:49.215184 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 19:04:49.215200 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 19:04:49.215225 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:04:49.215243 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 19:04:49.215266 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:04:49.215284 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 19:04:49.215301 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:04:49.215322 kernel: psci: Trusted OS migration not required
Feb 13 19:04:49.215339 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:04:49.215356 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:04:49.215373 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:04:49.215391 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 19:04:49.215408 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:04:49.215425 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:04:49.215442 kernel: CPU features: detected: Spectre-v2
Feb 13 19:04:49.215459 kernel: CPU features: detected: Spectre-v3a
Feb 13 19:04:49.215476 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:04:49.215492 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 19:04:49.215509 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 19:04:49.215530 kernel: alternatives: applying boot alternatives
Feb 13 19:04:49.215550 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:04:49.215569 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:04:49.215586 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:04:49.215603 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:04:49.215620 kernel: Fallback order for Node 0: 0
Feb 13 19:04:49.215637 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 19:04:49.215654 kernel: Policy zone: Normal
Feb 13 19:04:49.215671 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:04:49.215689 kernel: software IO TLB: area num 2.
Feb 13 19:04:49.215710 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 19:04:49.215728 kernel: Memory: 3819960K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 210504K reserved, 0K cma-reserved)
Feb 13 19:04:49.215745 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:04:49.215762 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:04:49.215781 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:04:49.215799 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:04:49.215816 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:04:49.215833 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:04:49.215850 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:04:49.215867 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:04:49.215885 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:04:49.215907 kernel: GICv3: 96 SPIs implemented
Feb 13 19:04:49.215924 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:04:49.215942 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:04:49.217042 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 19:04:49.217097 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 19:04:49.217115 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 19:04:49.217134 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:04:49.217152 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:04:49.217170 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 19:04:49.217188 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 19:04:49.217207 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 19:04:49.217224 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:04:49.217255 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 19:04:49.217273 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 19:04:49.217291 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 19:04:49.217309 kernel: Console: colour dummy device 80x25
Feb 13 19:04:49.217327 kernel: printk: console [tty1] enabled
Feb 13 19:04:49.217345 kernel: ACPI: Core revision 20230628
Feb 13 19:04:49.217364 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 19:04:49.217382 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:04:49.217400 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:04:49.217418 kernel: landlock: Up and running.
Feb 13 19:04:49.217442 kernel: SELinux: Initializing.
Feb 13 19:04:49.217461 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:04:49.217479 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:04:49.217498 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:04:49.217515 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:04:49.217533 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:04:49.217554 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:04:49.217572 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 19:04:49.217597 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 19:04:49.217616 kernel: Remapping and enabling EFI services.
Feb 13 19:04:49.217635 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:04:49.217653 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:04:49.217671 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 19:04:49.217689 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 19:04:49.217707 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 19:04:49.217725 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:04:49.217742 kernel: SMP: Total of 2 processors activated.
Feb 13 19:04:49.217759 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:04:49.217784 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 19:04:49.217802 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:04:49.217832 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:04:49.217856 kernel: alternatives: applying system-wide alternatives
Feb 13 19:04:49.217874 kernel: devtmpfs: initialized
Feb 13 19:04:49.217893 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:04:49.217912 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:04:49.217932 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:04:49.219136 kernel: SMBIOS 3.0.0 present.
Feb 13 19:04:49.219175 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 19:04:49.219194 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:04:49.219213 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:04:49.219232 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:04:49.219250 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:04:49.219269 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:04:49.219287 kernel: audit: type=2000 audit(0.221:1): state=initialized audit_enabled=0 res=1
Feb 13 19:04:49.219310 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:04:49.219328 kernel: cpuidle: using governor menu
Feb 13 19:04:49.219347 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:04:49.219365 kernel: ASID allocator initialised with 65536 entries
Feb 13 19:04:49.219384 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:04:49.219402 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:04:49.219420 kernel: Modules: 17440 pages in range for non-PLT usage
Feb 13 19:04:49.219439 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 19:04:49.219457 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:04:49.219480 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:04:49.219499 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:04:49.219517 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:04:49.219535 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:04:49.219554 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:04:49.219572 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:04:49.219590 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:04:49.219608 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:04:49.219626 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:04:49.219648 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:04:49.219667 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:04:49.219685 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:04:49.219703 kernel: ACPI: Interpreter enabled
Feb 13 19:04:49.219722 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:04:49.219740 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:04:49.219758 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 19:04:49.222208 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:04:49.222466 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:04:49.222670 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:04:49.222880 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 19:04:49.223152 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 19:04:49.223186 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 19:04:49.223206 kernel: acpiphp: Slot [1] registered
Feb 13 19:04:49.223226 kernel: acpiphp: Slot [2] registered
Feb 13 19:04:49.223245 kernel: acpiphp: Slot [3] registered
Feb 13 19:04:49.223278 kernel: acpiphp: Slot [4] registered
Feb 13 19:04:49.223298 kernel: acpiphp: Slot [5] registered
Feb 13 19:04:49.223318 kernel: acpiphp: Slot [6] registered
Feb 13 19:04:49.223336 kernel: acpiphp: Slot [7] registered
Feb 13 19:04:49.223355 kernel: acpiphp: Slot [8] registered
Feb 13 19:04:49.223374 kernel: acpiphp: Slot [9] registered
Feb 13 19:04:49.223394 kernel: acpiphp: Slot [10] registered
Feb 13 19:04:49.223413 kernel: acpiphp: Slot [11] registered
Feb 13 19:04:49.223431 kernel: acpiphp: Slot [12] registered
Feb 13 19:04:49.223450 kernel: acpiphp: Slot [13] registered
Feb 13 19:04:49.223475 kernel: acpiphp: Slot [14] registered
Feb 13 19:04:49.223495 kernel: acpiphp: Slot [15] registered
Feb 13 19:04:49.223514 kernel: acpiphp: Slot [16] registered
Feb 13 19:04:49.223533 kernel: acpiphp: Slot [17] registered
Feb 13 19:04:49.223553 kernel: acpiphp: Slot [18] registered
Feb 13 19:04:49.223574 kernel: acpiphp: Slot [19] registered
Feb 13 19:04:49.223594 kernel: acpiphp: Slot [20] registered
Feb 13 19:04:49.223613 kernel: acpiphp: Slot [21] registered
Feb 13 19:04:49.223632 kernel: acpiphp: Slot [22] registered
Feb 13 19:04:49.223657 kernel: acpiphp: Slot [23] registered
Feb 13 19:04:49.223678 kernel: acpiphp: Slot [24] registered
Feb 13 19:04:49.223697 kernel: acpiphp: Slot [25] registered
Feb 13 19:04:49.223717 kernel: acpiphp: Slot [26] registered
Feb 13 19:04:49.223736 kernel: acpiphp: Slot [27] registered
Feb 13 19:04:49.223756 kernel: acpiphp: Slot [28] registered
Feb 13 19:04:49.223775 kernel: acpiphp: Slot [29] registered
Feb 13 19:04:49.223794 kernel: acpiphp: Slot [30] registered
Feb 13 19:04:49.223815 kernel: acpiphp: Slot [31] registered
Feb 13 19:04:49.223833 kernel: PCI host bridge to bus 0000:00
Feb 13 19:04:49.226229 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 19:04:49.226448 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:04:49.226637 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:04:49.226824 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 19:04:49.227128 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 19:04:49.227377 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 19:04:49.227615 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 19:04:49.227868 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:04:49.228840 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 19:04:49.230222 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:04:49.230469 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:04:49.230676 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 19:04:49.230878 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 19:04:49.233212 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 19:04:49.233446 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:04:49.233665 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 19:04:49.233909 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 19:04:49.234223 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 19:04:49.234450 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 19:04:49.234669 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 19:04:49.234879 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 19:04:49.235136 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:04:49.235327 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:04:49.235353 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:04:49.235373 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:04:49.235392 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:04:49.235411 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:04:49.235429 kernel: iommu: Default domain type: Translated
Feb 13 19:04:49.235458 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:04:49.235478 kernel: efivars: Registered efivars operations
Feb 13 19:04:49.235496 kernel: vgaarb: loaded
Feb 13 19:04:49.235515 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:04:49.235533 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:04:49.235551 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:04:49.235570 kernel: pnp: PnP ACPI init
Feb 13 19:04:49.235799 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 19:04:49.235837 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:04:49.235858 kernel: NET: Registered PF_INET protocol family
Feb 13 19:04:49.235878 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:04:49.235897 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:04:49.235917 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:04:49.235936 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:04:49.235955 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:04:49.238045 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:04:49.238066 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:04:49.238096 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:04:49.238115 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:04:49.238134 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:04:49.238152 kernel: kvm [1]: HYP mode not available
Feb 13 19:04:49.238170 kernel: Initialise system trusted keyrings
Feb 13 19:04:49.238189 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:04:49.238207 kernel: Key type asymmetric registered
Feb 13 19:04:49.238225 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:04:49.238244 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:04:49.238267 kernel: io scheduler mq-deadline registered
Feb 13 19:04:49.238286 kernel: io scheduler kyber registered
Feb 13 19:04:49.238304 kernel: io scheduler bfq registered
Feb 13 19:04:49.238591 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 19:04:49.238621 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:04:49.238640 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:04:49.238659 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 19:04:49.238677 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 19:04:49.238703 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:04:49.238723 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 19:04:49.238941 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 19:04:49.238996 kernel: printk: console [ttyS0] disabled
Feb 13 19:04:49.239018 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 19:04:49.239037 kernel: printk: console [ttyS0] enabled
Feb 13 19:04:49.239056 kernel: printk: bootconsole [uart0] disabled
Feb 13 19:04:49.239074 kernel: thunder_xcv, ver 1.0
Feb 13 19:04:49.239092 kernel: thunder_bgx, ver 1.0
Feb 13 19:04:49.239110 kernel: nicpf, ver 1.0
Feb 13 19:04:49.239137 kernel: nicvf, ver 1.0
Feb 13 19:04:49.239396 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:04:49.239597 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:04:48 UTC (1739473488)
Feb 13 19:04:49.239623 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:04:49.239642 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 19:04:49.239661 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:04:49.239679 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:04:49.239707 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:04:49.239725 kernel: Segment Routing with IPv6
Feb 13 19:04:49.239744 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:04:49.239762 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:04:49.239780 kernel: Key type dns_resolver registered
Feb 13 19:04:49.239800 kernel: registered taskstats version 1
Feb 13 19:04:49.239818 kernel: Loading compiled-in X.509 certificates
Feb 13 19:04:49.239837 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 916055ad16f0ba578cce640a9ac58627fd43c936'
Feb 13 19:04:49.239855 kernel: Key type .fscrypt registered
Feb 13 19:04:49.239873 kernel: Key type fscrypt-provisioning registered
Feb 13 19:04:49.239896 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:04:49.239914 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:04:49.239932 kernel: ima: No architecture policies found
Feb 13 19:04:49.239950 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:04:49.241824 kernel: clk: Disabling unused clocks
Feb 13 19:04:49.241854 kernel: Freeing unused kernel memory: 39680K
Feb 13 19:04:49.241872 kernel: Run /init as init process
Feb 13 19:04:49.241891 kernel: with arguments:
Feb 13 19:04:49.241909 kernel: /init
Feb 13 19:04:49.241957 kernel: with environment:
Feb 13 19:04:49.242039 kernel: HOME=/
Feb 13 19:04:49.242060 kernel: TERM=linux
Feb 13 19:04:49.242079 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:04:49.242104 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:04:49.242133 systemd[1]: Detected virtualization amazon.
Feb 13 19:04:49.242154 systemd[1]: Detected architecture arm64.
Feb 13 19:04:49.242181 systemd[1]: Running in initrd.
Feb 13 19:04:49.242202 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:04:49.242222 systemd[1]: Hostname set to .
Feb 13 19:04:49.242244 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:04:49.242265 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:04:49.242285 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:04:49.242305 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:04:49.242327 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:04:49.242352 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:04:49.242373 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:04:49.242393 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:04:49.242417 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:04:49.242438 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:04:49.242458 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:04:49.242478 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:04:49.242505 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:04:49.242526 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:04:49.242546 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:04:49.242566 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:04:49.242587 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:04:49.242607 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:04:49.242627 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:04:49.242647 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:04:49.242667 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:04:49.242693 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:04:49.242713 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:04:49.242734 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:04:49.242754 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:04:49.242774 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:04:49.242794 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:04:49.242814 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:04:49.242835 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:04:49.242859 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:04:49.242881 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:04:49.242901 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:04:49.242922 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:04:49.242942 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:04:49.243081 systemd-journald[252]: Collecting audit messages is disabled.
Feb 13 19:04:49.243138 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:04:49.243159 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:04:49.243179 systemd-journald[252]: Journal started
Feb 13 19:04:49.243223 systemd-journald[252]: Runtime Journal (/run/log/journal/ec21e0ffddd5207b621a02999735c0ea) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:04:49.201601 systemd-modules-load[253]: Inserted module 'overlay'
Feb 13 19:04:49.251659 kernel: Bridge firewalling registered
Feb 13 19:04:49.251731 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:04:49.251650 systemd-modules-load[253]: Inserted module 'br_netfilter'
Feb 13 19:04:49.254655 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:04:49.262079 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:04:49.281595 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:04:49.289261 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:04:49.306257 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:04:49.313063 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:04:49.320284 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:04:49.346671 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:04:49.362146 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:04:49.372364 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:04:49.376931 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:04:49.389261 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:04:49.403463 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:04:49.431626 dracut-cmdline[289]: dracut-dracut-053
Feb 13 19:04:49.439368 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:04:49.501534 systemd-resolved[292]: Positive Trust Anchors:
Feb 13 19:04:49.501594 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:04:49.501657 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:04:49.594004 kernel: SCSI subsystem initialized
Feb 13 19:04:49.601999 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:04:49.615005 kernel: iscsi: registered transport (tcp)
Feb 13 19:04:49.637417 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:04:49.637491 kernel: QLogic iSCSI HBA Driver
Feb 13 19:04:49.718013 kernel: random: crng init done
Feb 13 19:04:49.718339 systemd-resolved[292]: Defaulting to hostname 'linux'.
Feb 13 19:04:49.721957 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:04:49.738262 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:04:49.752680 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:04:49.763268 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:04:49.802265 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:04:49.802394 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:04:49.804006 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:04:49.869029 kernel: raid6: neonx8 gen() 6650 MB/s
Feb 13 19:04:49.886011 kernel: raid6: neonx4 gen() 6484 MB/s
Feb 13 19:04:49.903000 kernel: raid6: neonx2 gen() 5405 MB/s
Feb 13 19:04:49.919995 kernel: raid6: neonx1 gen() 3927 MB/s
Feb 13 19:04:49.936993 kernel: raid6: int64x8 gen() 3780 MB/s
Feb 13 19:04:49.953995 kernel: raid6: int64x4 gen() 3704 MB/s
Feb 13 19:04:49.970993 kernel: raid6: int64x2 gen() 3575 MB/s
Feb 13 19:04:49.988732 kernel: raid6: int64x1 gen() 2768 MB/s
Feb 13 19:04:49.988764 kernel: raid6: using algorithm neonx8 gen() 6650 MB/s
Feb 13 19:04:50.006757 kernel: raid6: .... xor() 4902 MB/s, rmw enabled
Feb 13 19:04:50.006834 kernel: raid6: using neon recovery algorithm
Feb 13 19:04:50.015197 kernel: xor: measuring software checksum speed
Feb 13 19:04:50.015259 kernel: 8regs : 10657 MB/sec
Feb 13 19:04:50.016274 kernel: 32regs : 11942 MB/sec
Feb 13 19:04:50.017441 kernel: arm64_neon : 9585 MB/sec
Feb 13 19:04:50.017483 kernel: xor: using function: 32regs (11942 MB/sec)
Feb 13 19:04:50.102004 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:04:50.122050 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:04:50.130244 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:04:50.173395 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Feb 13 19:04:50.182980 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:04:50.195284 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:04:50.229701 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation Feb 13 19:04:50.286268 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:04:50.295263 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:04:50.419043 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:04:50.431296 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:04:50.476763 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:04:50.484555 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:04:50.489385 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:04:50.491694 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:04:50.503270 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:04:50.550540 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:04:50.610100 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 19:04:50.610169 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Feb 13 19:04:50.628887 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 13 19:04:50.634123 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 13 19:04:50.634467 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:50:70:21:38:03 Feb 13 19:04:50.628200 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:04:50.628452 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 19:04:50.631493 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:04:50.634848 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:04:50.635188 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:04:50.639844 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:04:50.649782 (udev-worker)[521]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:04:50.664661 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 19:04:50.664728 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 13 19:04:50.667456 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:04:50.682997 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 13 19:04:50.690298 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:04:50.690361 kernel: GPT:9289727 != 16777215 Feb 13 19:04:50.690386 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:04:50.690410 kernel: GPT:9289727 != 16777215 Feb 13 19:04:50.692894 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:04:50.692928 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:04:50.702638 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:04:50.715820 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:04:50.755662 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 19:04:50.779056 kernel: BTRFS: device fsid 44fbcf53-fa5f-4fd4-b434-f067731b9a44 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (533) Feb 13 19:04:50.814792 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (544) Feb 13 19:04:50.851517 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Feb 13 19:04:50.909092 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Feb 13 19:04:50.925013 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Feb 13 19:04:50.927406 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Feb 13 19:04:50.943370 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 19:04:50.956288 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:04:50.970470 disk-uuid[664]: Primary Header is updated. Feb 13 19:04:50.970470 disk-uuid[664]: Secondary Entries is updated. Feb 13 19:04:50.970470 disk-uuid[664]: Secondary Header is updated. Feb 13 19:04:50.981005 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:04:51.999085 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:04:52.000947 disk-uuid[665]: The operation has completed successfully. Feb 13 19:04:52.183814 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:04:52.184027 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:04:52.230349 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:04:52.242185 sh[926]: Success Feb 13 19:04:52.260118 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 19:04:52.379376 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Feb 13 19:04:52.397172 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:04:52.400709 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:04:52.439735 kernel: BTRFS info (device dm-0): first mount of filesystem 44fbcf53-fa5f-4fd4-b434-f067731b9a44 Feb 13 19:04:52.439798 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:04:52.439824 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:04:52.442671 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:04:52.442705 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:04:52.458016 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 19:04:52.469164 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:04:52.473115 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:04:52.485196 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:04:52.493420 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:04:52.524656 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 19:04:52.524724 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:04:52.524761 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:04:52.533730 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:04:52.548628 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:04:52.551470 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 19:04:52.562024 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Feb 13 19:04:52.574045 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:04:52.684423 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:04:52.695319 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:04:52.752314 systemd-networkd[1119]: lo: Link UP Feb 13 19:04:52.752337 systemd-networkd[1119]: lo: Gained carrier Feb 13 19:04:52.754808 systemd-networkd[1119]: Enumeration completed Feb 13 19:04:52.755632 systemd-networkd[1119]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:04:52.755639 systemd-networkd[1119]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:04:52.757187 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:04:52.759744 systemd-networkd[1119]: eth0: Link UP Feb 13 19:04:52.759752 systemd-networkd[1119]: eth0: Gained carrier Feb 13 19:04:52.759768 systemd-networkd[1119]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:04:52.762394 systemd[1]: Reached target network.target - Network. Feb 13 19:04:52.793080 systemd-networkd[1119]: eth0: DHCPv4 address 172.31.27.136/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 19:04:52.858101 ignition[1028]: Ignition 2.20.0 Feb 13 19:04:52.858592 ignition[1028]: Stage: fetch-offline Feb 13 19:04:52.859074 ignition[1028]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:04:52.859098 ignition[1028]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:04:52.859628 ignition[1028]: Ignition finished successfully Feb 13 19:04:52.870034 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:04:52.879278 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Feb 13 19:04:52.908772 ignition[1129]: Ignition 2.20.0 Feb 13 19:04:52.908804 ignition[1129]: Stage: fetch Feb 13 19:04:52.910420 ignition[1129]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:04:52.910447 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:04:52.911060 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:04:52.932171 ignition[1129]: PUT result: OK Feb 13 19:04:52.935361 ignition[1129]: parsed url from cmdline: "" Feb 13 19:04:52.935505 ignition[1129]: no config URL provided Feb 13 19:04:52.936803 ignition[1129]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:04:52.936830 ignition[1129]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:04:52.936865 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:04:52.942882 ignition[1129]: PUT result: OK Feb 13 19:04:52.943147 ignition[1129]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 13 19:04:52.947015 ignition[1129]: GET result: OK Feb 13 19:04:52.947112 ignition[1129]: parsing config with SHA512: ae57def759dc290786a33c29fae633f8e95e85484fdb566e3903754ba26db2d076653e57b12e544ae77832f2a3f8f1ac246fdd53af8a5bf274ba521560e1cd07 Feb 13 19:04:52.953280 unknown[1129]: fetched base config from "system" Feb 13 19:04:52.953764 ignition[1129]: fetch: fetch complete Feb 13 19:04:52.953296 unknown[1129]: fetched base config from "system" Feb 13 19:04:52.953776 ignition[1129]: fetch: fetch passed Feb 13 19:04:52.953310 unknown[1129]: fetched user config from "aws" Feb 13 19:04:52.953850 ignition[1129]: Ignition finished successfully Feb 13 19:04:52.969645 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 19:04:52.988360 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Feb 13 19:04:53.012481 ignition[1135]: Ignition 2.20.0 Feb 13 19:04:53.012501 ignition[1135]: Stage: kargs Feb 13 19:04:53.013148 ignition[1135]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:04:53.013173 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:04:53.013319 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:04:53.018234 ignition[1135]: PUT result: OK Feb 13 19:04:53.025749 ignition[1135]: kargs: kargs passed Feb 13 19:04:53.025899 ignition[1135]: Ignition finished successfully Feb 13 19:04:53.029602 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:04:53.042243 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:04:53.073804 ignition[1141]: Ignition 2.20.0 Feb 13 19:04:53.073824 ignition[1141]: Stage: disks Feb 13 19:04:53.074453 ignition[1141]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:04:53.074478 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:04:53.074630 ignition[1141]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:04:53.078028 ignition[1141]: PUT result: OK Feb 13 19:04:53.087205 ignition[1141]: disks: disks passed Feb 13 19:04:53.087300 ignition[1141]: Ignition finished successfully Feb 13 19:04:53.091941 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:04:53.095342 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:04:53.097678 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:04:53.099936 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:04:53.101816 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:04:53.103714 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:04:53.122277 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Feb 13 19:04:53.175742 systemd-fsck[1149]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:04:53.184831 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:04:53.194613 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:04:53.290004 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e24df12d-6575-4a90-bef9-33573b9d63e7 r/w with ordered data mode. Quota mode: none. Feb 13 19:04:53.290702 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:04:53.293772 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:04:53.307626 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:04:53.313182 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:04:53.317298 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:04:53.320613 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:04:53.320668 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:04:53.343019 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1168) Feb 13 19:04:53.347427 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:04:53.354457 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 19:04:53.354494 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:04:53.354520 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:04:53.361356 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:04:53.369455 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:04:53.373172 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:04:53.537685 initrd-setup-root[1193]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:04:53.546994 initrd-setup-root[1200]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:04:53.555777 initrd-setup-root[1207]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:04:53.563653 initrd-setup-root[1214]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:04:53.711785 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:04:53.722215 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:04:53.727976 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:04:53.747195 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:04:53.751198 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 19:04:53.795172 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:04:53.799137 ignition[1281]: INFO : Ignition 2.20.0 Feb 13 19:04:53.800953 ignition[1281]: INFO : Stage: mount Feb 13 19:04:53.802723 ignition[1281]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:04:53.805063 ignition[1281]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:04:53.805063 ignition[1281]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:04:53.810347 ignition[1281]: INFO : PUT result: OK Feb 13 19:04:53.814289 ignition[1281]: INFO : mount: mount passed Feb 13 19:04:53.816513 ignition[1281]: INFO : Ignition finished successfully Feb 13 19:04:53.819550 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:04:53.828204 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:04:53.841135 systemd-networkd[1119]: eth0: Gained IPv6LL Feb 13 19:04:53.858304 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 19:04:53.892004 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1293) Feb 13 19:04:53.895944 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 19:04:53.896002 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:04:53.896029 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:04:53.902996 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:04:53.906487 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:04:53.947246 ignition[1310]: INFO : Ignition 2.20.0 Feb 13 19:04:53.947246 ignition[1310]: INFO : Stage: files Feb 13 19:04:53.950443 ignition[1310]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:04:53.950443 ignition[1310]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:04:53.950443 ignition[1310]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:04:53.957453 ignition[1310]: INFO : PUT result: OK Feb 13 19:04:53.961734 ignition[1310]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:04:53.963928 ignition[1310]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:04:53.963928 ignition[1310]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:04:53.976312 ignition[1310]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:04:53.979065 ignition[1310]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:04:53.981926 unknown[1310]: wrote ssh authorized keys file for user: core Feb 13 19:04:53.985924 ignition[1310]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:04:53.988828 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 19:04:53.992187 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 19:04:53.995340 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:04:53.998751 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:04:54.002297 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:04:54.005778 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:04:54.010507 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:04:54.010507 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:04:54.010507 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:04:54.010507 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Feb 13 19:04:54.495188 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 13 19:04:54.865994 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:04:54.865994 ignition[1310]: INFO : files: op(8): [started] processing unit "containerd.service"
Feb 13 19:04:54.872248 ignition[1310]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 19:04:54.872248 ignition[1310]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 19:04:54.872248 ignition[1310]: INFO : files: op(8): [finished] processing unit "containerd.service" Feb 13 19:04:54.872248 ignition[1310]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:04:54.872248 ignition[1310]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:04:54.872248 ignition[1310]: INFO : files: files passed Feb 13 19:04:54.890408 ignition[1310]: INFO : Ignition finished successfully Feb 13 19:04:54.894440 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:04:54.909223 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:04:54.914252 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:04:54.923949 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:04:54.927138 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:04:54.953656 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:04:54.953656 initrd-setup-root-after-ignition[1339]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:04:54.961570 initrd-setup-root-after-ignition[1343]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:04:54.968062 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:04:54.974891 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:04:54.984269 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:04:55.034798 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:04:55.035236 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:04:55.042106 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:04:55.044201 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:04:55.046217 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:04:55.064331 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:04:55.091266 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:04:55.108213 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:04:55.130440 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:04:55.134173 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:04:55.137054 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:04:55.138931 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:04:55.139179 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:04:55.141940 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:04:55.144138 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:04:55.146081 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:04:55.148315 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:04:55.150668 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Feb 13 19:04:55.152938 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:04:55.155071 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:04:55.157560 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:04:55.159675 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:04:55.161720 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:04:55.163401 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:04:55.163621 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:04:55.166131 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:04:55.168437 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:04:55.170838 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:04:55.173113 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:04:55.175498 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:04:55.175712 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:04:55.178098 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:04:55.178314 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:04:55.180876 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:04:55.181105 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:04:55.197489 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:04:55.251454 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:04:55.255483 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:04:55.256244 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Feb 13 19:04:55.266247 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:04:55.266486 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:04:55.294835 ignition[1363]: INFO : Ignition 2.20.0 Feb 13 19:04:55.294835 ignition[1363]: INFO : Stage: umount Feb 13 19:04:55.294609 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:04:55.306113 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:04:55.306113 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:04:55.306113 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:04:55.296253 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:04:55.315543 ignition[1363]: INFO : PUT result: OK Feb 13 19:04:55.319759 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:04:55.321275 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:04:55.322230 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:04:55.330152 ignition[1363]: INFO : umount: umount passed Feb 13 19:04:55.332093 ignition[1363]: INFO : Ignition finished successfully Feb 13 19:04:55.335314 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:04:55.335901 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:04:55.341808 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:04:55.341925 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:04:55.347114 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:04:55.347202 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:04:55.349134 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 19:04:55.349211 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Feb 13 19:04:55.351159 systemd[1]: Stopped target network.target - Network. Feb 13 19:04:55.352806 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:04:55.352887 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:04:55.355119 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:04:55.356755 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:04:55.372877 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:04:55.375288 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:04:55.377081 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:04:55.378910 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:04:55.379006 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:04:55.380895 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:04:55.380977 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:04:55.382920 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:04:55.383027 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:04:55.384912 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:04:55.385009 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:04:55.387013 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:04:55.387091 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:04:55.389354 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:04:55.391519 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:04:55.425029 systemd-networkd[1119]: eth0: DHCPv6 lease lost Feb 13 19:04:55.427498 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Feb 13 19:04:55.427713 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:04:55.435711 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:04:55.436809 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:04:55.443224 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:04:55.443343 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:04:55.460133 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:04:55.461993 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:04:55.462100 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:04:55.464556 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:04:55.464657 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:04:55.467312 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:04:55.467412 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:04:55.470677 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:04:55.470764 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:04:55.473249 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:04:55.504751 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:04:55.505044 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:04:55.517770 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:04:55.520030 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:04:55.523957 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Feb 13 19:04:55.524396 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:04:55.528733 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:04:55.528806 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:04:55.531218 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:04:55.531312 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:04:55.534929 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:04:55.535033 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:04:55.537105 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:04:55.537181 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:04:55.566377 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:04:55.568597 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:04:55.568708 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:04:55.577388 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:04:55.577485 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:04:55.579759 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:04:55.579833 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:04:55.582397 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:04:55.582484 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:04:55.609316 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:04:55.609699 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Feb 13 19:04:55.616332 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:04:55.634344 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:04:55.650624 systemd[1]: Switching root. Feb 13 19:04:55.687043 systemd-journald[252]: Journal stopped Feb 13 19:04:57.638618 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Feb 13 19:04:57.638746 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:04:57.638788 kernel: SELinux: policy capability open_perms=1 Feb 13 19:04:57.638822 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:04:57.638852 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:04:57.638881 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:04:57.638911 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:04:57.638940 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:04:57.639192 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:04:57.642998 kernel: audit: type=1403 audit(1739473496.112:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:04:57.643037 systemd[1]: Successfully loaded SELinux policy in 49.931ms. Feb 13 19:04:57.643082 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.727ms. Feb 13 19:04:57.643122 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:04:57.643155 systemd[1]: Detected virtualization amazon. Feb 13 19:04:57.643185 systemd[1]: Detected architecture arm64. Feb 13 19:04:57.643215 systemd[1]: Detected first boot. Feb 13 19:04:57.643246 systemd[1]: Initializing machine ID from VM UUID. 
Feb 13 19:04:57.643276 zram_generator::config[1425]: No configuration found. Feb 13 19:04:57.643309 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:04:57.643340 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:04:57.643375 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 19:04:57.643409 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:04:57.643439 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:04:57.643470 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:04:57.643502 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:04:57.643531 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:04:57.643563 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:04:57.643594 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:04:57.643625 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:04:57.643671 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:04:57.643701 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:04:57.643730 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:04:57.643758 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:04:57.643790 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:04:57.643819 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:04:57.643850 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Feb 13 19:04:57.643881 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:04:57.643909 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:04:57.643943 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:04:57.644001 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:04:57.644035 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:04:57.644068 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:04:57.644099 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:04:57.644130 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:04:57.644160 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:04:57.644190 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:04:57.644226 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:04:57.644254 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:04:57.644285 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:04:57.644313 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:04:57.644342 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:04:57.644370 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:04:57.644400 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:04:57.644430 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:04:57.644460 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:04:57.644493 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Feb 13 19:04:57.644524 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:04:57.644555 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:04:57.644598 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:04:57.644627 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:04:57.644659 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:04:57.644690 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:04:57.644719 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:04:57.644752 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:04:57.644783 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:04:57.644812 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:04:57.644841 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 13 19:04:57.644872 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Feb 13 19:04:57.644900 kernel: fuse: init (API version 7.39) Feb 13 19:04:57.644927 kernel: ACPI: bus type drm_connector registered Feb 13 19:04:57.644954 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:04:57.646884 kernel: loop: module loaded Feb 13 19:04:57.646929 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:04:57.649393 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Feb 13 19:04:57.649466 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:04:57.653460 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:04:57.653545 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:04:57.653577 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:04:57.653607 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:04:57.653636 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:04:57.653665 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:04:57.653703 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:04:57.653736 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:04:57.653766 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:04:57.653797 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:04:57.653826 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:04:57.653856 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:04:57.653906 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:04:57.653939 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:04:57.654046 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:04:57.654089 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:04:57.654173 systemd-journald[1530]: Collecting audit messages is disabled. Feb 13 19:04:57.654234 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:04:57.654270 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:04:57.654302 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Feb 13 19:04:57.654332 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:04:57.654363 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:04:57.654395 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:04:57.654423 systemd-journald[1530]: Journal started Feb 13 19:04:57.654470 systemd-journald[1530]: Runtime Journal (/run/log/journal/ec21e0ffddd5207b621a02999735c0ea) is 8.0M, max 75.3M, 67.3M free. Feb 13 19:04:57.660927 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:04:57.667503 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:04:57.670585 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:04:57.694191 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:04:57.704250 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:04:57.715167 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:04:57.718984 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:04:57.733445 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:04:57.742250 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:04:57.744569 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:04:57.753702 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:04:57.756702 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Feb 13 19:04:57.768491 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:04:57.791318 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:04:57.803020 systemd-journald[1530]: Time spent on flushing to /var/log/journal/ec21e0ffddd5207b621a02999735c0ea is 105.289ms for 877 entries. Feb 13 19:04:57.803020 systemd-journald[1530]: System Journal (/var/log/journal/ec21e0ffddd5207b621a02999735c0ea) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:04:57.927749 systemd-journald[1530]: Received client request to flush runtime journal. Feb 13 19:04:57.806594 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:04:57.810403 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:04:57.844839 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:04:57.862494 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:04:57.865696 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:04:57.871362 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:04:57.909596 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:04:57.924981 udevadm[1584]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:04:57.935330 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:04:57.938561 systemd-tmpfiles[1576]: ACLs are not supported, ignoring. Feb 13 19:04:57.938585 systemd-tmpfiles[1576]: ACLs are not supported, ignoring. Feb 13 19:04:57.949803 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Feb 13 19:04:57.960417 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:04:58.022706 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:04:58.037304 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:04:58.074050 systemd-tmpfiles[1598]: ACLs are not supported, ignoring. Feb 13 19:04:58.074640 systemd-tmpfiles[1598]: ACLs are not supported, ignoring. Feb 13 19:04:58.086798 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:04:58.776708 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:04:58.792272 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:04:58.840737 systemd-udevd[1604]: Using default interface naming scheme 'v255'. Feb 13 19:04:58.884029 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:04:58.896369 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:04:58.929360 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:04:59.028269 (udev-worker)[1619]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:04:59.039954 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Feb 13 19:04:59.115429 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:04:59.261097 systemd-networkd[1608]: lo: Link UP Feb 13 19:04:59.261600 systemd-networkd[1608]: lo: Gained carrier Feb 13 19:04:59.264417 systemd-networkd[1608]: Enumeration completed Feb 13 19:04:59.264794 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:04:59.269744 systemd-networkd[1608]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 19:04:59.269753 systemd-networkd[1608]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:04:59.273729 systemd-networkd[1608]: eth0: Link UP Feb 13 19:04:59.274161 systemd-networkd[1608]: eth0: Gained carrier Feb 13 19:04:59.274193 systemd-networkd[1608]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:04:59.277036 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:04:59.284080 systemd-networkd[1608]: eth0: DHCPv4 address 172.31.27.136/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 19:04:59.319033 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1623) Feb 13 19:04:59.352298 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:04:59.527121 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:04:59.544670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:04:59.560855 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 19:04:59.571300 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:04:59.600870 lvm[1733]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:04:59.643530 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:04:59.646182 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:04:59.658256 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:04:59.668933 lvm[1736]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:04:59.709617 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Feb 13 19:04:59.713169 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:04:59.715579 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:04:59.715622 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:04:59.717919 systemd[1]: Reached target machines.target - Containers. Feb 13 19:04:59.722203 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:04:59.730326 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:04:59.748252 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:04:59.753306 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:04:59.757291 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:04:59.766043 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:04:59.776188 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:04:59.783625 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:04:59.816356 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:04:59.836614 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:04:59.840063 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Feb 13 19:04:59.842830 kernel: loop0: detected capacity change from 0 to 53784 Feb 13 19:04:59.924197 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:04:59.969016 kernel: loop1: detected capacity change from 0 to 116808 Feb 13 19:05:00.013093 kernel: loop2: detected capacity change from 0 to 194096 Feb 13 19:05:00.068014 kernel: loop3: detected capacity change from 0 to 113536 Feb 13 19:05:00.117339 kernel: loop4: detected capacity change from 0 to 53784 Feb 13 19:05:00.135009 kernel: loop5: detected capacity change from 0 to 116808 Feb 13 19:05:00.167003 kernel: loop6: detected capacity change from 0 to 194096 Feb 13 19:05:00.201011 kernel: loop7: detected capacity change from 0 to 113536 Feb 13 19:05:00.221345 (sd-merge)[1757]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 19:05:00.222454 (sd-merge)[1757]: Merged extensions into '/usr'. Feb 13 19:05:00.229359 systemd[1]: Reloading requested from client PID 1744 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:05:00.229392 systemd[1]: Reloading... Feb 13 19:05:00.377021 zram_generator::config[1788]: No configuration found. Feb 13 19:05:00.489831 ldconfig[1740]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:05:00.625153 systemd-networkd[1608]: eth0: Gained IPv6LL Feb 13 19:05:00.638762 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:05:00.774809 systemd[1]: Reloading finished in 544 ms. Feb 13 19:05:00.805570 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:05:00.809436 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Feb 13 19:05:00.812736 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:05:00.829293 systemd[1]: Starting ensure-sysext.service... Feb 13 19:05:00.841388 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:05:00.856784 systemd[1]: Reloading requested from client PID 1846 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:05:00.856816 systemd[1]: Reloading... Feb 13 19:05:00.877932 systemd-tmpfiles[1847]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:05:00.878648 systemd-tmpfiles[1847]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:05:00.881105 systemd-tmpfiles[1847]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:05:00.881779 systemd-tmpfiles[1847]: ACLs are not supported, ignoring. Feb 13 19:05:00.882079 systemd-tmpfiles[1847]: ACLs are not supported, ignoring. Feb 13 19:05:00.889234 systemd-tmpfiles[1847]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:05:00.890148 systemd-tmpfiles[1847]: Skipping /boot Feb 13 19:05:00.911913 systemd-tmpfiles[1847]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:05:00.912142 systemd-tmpfiles[1847]: Skipping /boot Feb 13 19:05:01.007011 zram_generator::config[1879]: No configuration found. Feb 13 19:05:01.234138 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:05:01.370804 systemd[1]: Reloading finished in 513 ms. Feb 13 19:05:01.393668 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:05:01.415272 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Feb 13 19:05:01.429127 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:05:01.436255 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:05:01.451231 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:05:01.459239 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:05:01.493956 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:05:01.502460 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:05:01.519478 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:05:01.538097 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:05:01.546562 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:05:01.575618 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:05:01.576062 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:05:01.583873 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:05:01.584591 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:05:01.593521 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:05:01.616314 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:05:01.628629 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:05:01.637313 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Feb 13 19:05:01.642145 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:05:01.642676 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:05:01.651640 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:05:01.652129 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:05:01.657888 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:05:01.659435 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:05:01.667868 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:05:01.678884 systemd[1]: Finished ensure-sysext.service. Feb 13 19:05:01.685028 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:05:01.685417 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:05:01.698533 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:05:01.698657 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:05:01.703506 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:05:01.718374 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:05:01.727331 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:05:01.748746 augenrules[1983]: No rules Feb 13 19:05:01.758633 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:05:01.759213 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:05:01.777125 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:05:01.783404 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Feb 13 19:05:01.787602 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:05:01.817480 systemd-resolved[1938]: Positive Trust Anchors: Feb 13 19:05:01.817547 systemd-resolved[1938]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:05:01.817611 systemd-resolved[1938]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:05:01.826852 systemd-resolved[1938]: Defaulting to hostname 'linux'. Feb 13 19:05:01.830219 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:05:01.832812 systemd[1]: Reached target network.target - Network. Feb 13 19:05:01.834775 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:05:01.836866 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:05:01.839163 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:05:01.841394 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:05:01.843797 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:05:01.846663 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Feb 13 19:05:01.849152 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:05:01.851836 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:05:01.854482 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:05:01.854541 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:05:01.856311 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:05:01.859707 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:05:01.865685 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:05:01.871747 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:05:01.878030 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:05:01.880462 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:05:01.882484 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:05:01.884718 systemd[1]: System is tainted: cgroupsv1
Feb 13 19:05:01.884805 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:05:01.884851 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:05:01.888162 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:05:01.906388 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 19:05:01.916892 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:05:01.937300 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:05:01.946604 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:05:01.957450 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:05:01.968299 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:05:01.968918 jq[2000]: false
Feb 13 19:05:01.990376 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:05:02.004468 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 19:05:02.042342 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 19:05:02.057601 dbus-daemon[1999]: [system] SELinux support is enabled
Feb 13 19:05:02.061414 extend-filesystems[2001]: Found loop4
Feb 13 19:05:02.061414 extend-filesystems[2001]: Found loop5
Feb 13 19:05:02.061414 extend-filesystems[2001]: Found loop6
Feb 13 19:05:02.061414 extend-filesystems[2001]: Found loop7
Feb 13 19:05:02.061414 extend-filesystems[2001]: Found nvme0n1
Feb 13 19:05:02.061414 extend-filesystems[2001]: Found nvme0n1p1
Feb 13 19:05:02.061414 extend-filesystems[2001]: Found nvme0n1p2
Feb 13 19:05:02.061414 extend-filesystems[2001]: Found nvme0n1p3
Feb 13 19:05:02.061414 extend-filesystems[2001]: Found usr
Feb 13 19:05:02.061414 extend-filesystems[2001]: Found nvme0n1p4
Feb 13 19:05:02.061414 extend-filesystems[2001]: Found nvme0n1p6
Feb 13 19:05:02.061414 extend-filesystems[2001]: Found nvme0n1p7
Feb 13 19:05:02.061414 extend-filesystems[2001]: Found nvme0n1p9
Feb 13 19:05:02.061414 extend-filesystems[2001]: Checking size of /dev/nvme0n1p9
Feb 13 19:05:02.148984 extend-filesystems[2001]: Resized partition /dev/nvme0n1p9
Feb 13 19:05:02.083680 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 19:05:02.065241 dbus-daemon[1999]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1608 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 19:05:02.155245 extend-filesystems[2027]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:05:02.091627 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:05:02.109614 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:05:02.157226 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:05:02.158649 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:05:02.175097 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 13 19:05:02.202874 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:05:02.219477 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:05:02.225369 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:05:02.245398 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:05:02.250743 ntpd[2008]: 13 Feb 19:05:02 ntpd[2008]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:08:36 UTC 2025 (1): Starting
Feb 13 19:05:02.250743 ntpd[2008]: 13 Feb 19:05:02 ntpd[2008]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:05:02.250743 ntpd[2008]: 13 Feb 19:05:02 ntpd[2008]: ----------------------------------------------------
Feb 13 19:05:02.250743 ntpd[2008]: 13 Feb 19:05:02 ntpd[2008]: ntp-4 is maintained by Network Time Foundation,
Feb 13 19:05:02.250743 ntpd[2008]: 13 Feb 19:05:02 ntpd[2008]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 19:05:02.250743 ntpd[2008]: 13 Feb 19:05:02 ntpd[2008]: corporation. Support and training for ntp-4 are
Feb 13 19:05:02.250743 ntpd[2008]: 13 Feb 19:05:02 ntpd[2008]: available at https://www.nwtime.org/support
Feb 13 19:05:02.250743 ntpd[2008]: 13 Feb 19:05:02 ntpd[2008]: ----------------------------------------------------
Feb 13 19:05:02.249476 ntpd[2008]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:08:36 UTC 2025 (1): Starting
Feb 13 19:05:02.247114 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:05:02.249529 ntpd[2008]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:05:02.258014 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:05:02.249550 ntpd[2008]: ----------------------------------------------------
Feb 13 19:05:02.258610 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:05:02.249571 ntpd[2008]: ntp-4 is maintained by Network Time Foundation,
Feb 13 19:05:02.249589 ntpd[2008]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 19:05:02.249609 ntpd[2008]: corporation. Support and training for ntp-4 are
Feb 13 19:05:02.249628 ntpd[2008]: available at https://www.nwtime.org/support
Feb 13 19:05:02.249647 ntpd[2008]: ----------------------------------------------------
Feb 13 19:05:02.285859 ntpd[2008]: proto: precision = 0.108 usec (-23)
Feb 13 19:05:02.289726 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 19:05:02.290227 coreos-metadata[1997]: Feb 13 19:05:02.287 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 19:05:02.294835 ntpd[2008]: 13 Feb 19:05:02 ntpd[2008]: proto: precision = 0.108 usec (-23)
Feb 13 19:05:02.289789 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:05:02.296079 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 19:05:02.296135 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 19:05:02.303155 ntpd[2008]: basedate set to 2025-02-01
Feb 13 19:05:02.304217 ntpd[2008]: 13 Feb 19:05:02 ntpd[2008]: basedate set to 2025-02-01
Feb 13 19:05:02.304217 ntpd[2008]: 13 Feb 19:05:02 ntpd[2008]: gps base set to 2025-02-02 (week 2352)
Feb 13 19:05:02.303205 ntpd[2008]: gps base set to 2025-02-02 (week 2352)
Feb 13 19:05:02.320568 coreos-metadata[1997]: Feb 13 19:05:02.320 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 13 19:05:02.326484 dbus-daemon[1999]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 19:05:02.330710 ntpd[2008]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 19:05:02.335228 coreos-metadata[1997]: Feb 13 19:05:02.334 INFO Fetch successful
Feb 13 19:05:02.335228 coreos-metadata[1997]: Feb 13 19:05:02.334 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 13 19:05:02.335341 ntpd[2008]: 13 Feb 19:05:02 ntpd[2008]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 19:05:02.335341 ntpd[2008]: 13 Feb 19:05:02 ntpd[2008]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 19:05:02.335341 ntpd[2008]: 13 Feb 19:05:02 ntpd[2008]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 19:05:02.335341 ntpd[2008]: 13 Feb 19:05:02 ntpd[2008]: Listen normally on 3 eth0 172.31.27.136:123
Feb 13 19:05:02.335341 ntpd[2008]: 13 Feb 19:05:02 ntpd[2008]: Listen normally on 4 lo [::1]:123
Feb 13 19:05:02.335341 ntpd[2008]: 13 Feb 19:05:02 ntpd[2008]: Listen normally on 5 eth0 [fe80::450:70ff:fe21:3803%2]:123
Feb 13 19:05:02.335341 ntpd[2008]: 13 Feb 19:05:02 ntpd[2008]: Listening on routing socket on fd #22 for interface updates
Feb 13 19:05:02.335677 jq[2034]: true
Feb 13 19:05:02.330825 ntpd[2008]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 19:05:02.334129 ntpd[2008]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 19:05:02.334306 ntpd[2008]: Listen normally on 3 eth0 172.31.27.136:123
Feb 13 19:05:02.334387 ntpd[2008]: Listen normally on 4 lo [::1]:123
Feb 13 19:05:02.334464 ntpd[2008]: Listen normally on 5 eth0 [fe80::450:70ff:fe21:3803%2]:123
Feb 13 19:05:02.334538 ntpd[2008]: Listening on routing socket on fd #22 for interface updates
Feb 13 19:05:02.337869 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:05:02.349264 coreos-metadata[1997]: Feb 13 19:05:02.348 INFO Fetch successful
Feb 13 19:05:02.349264 coreos-metadata[1997]: Feb 13 19:05:02.348 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 13 19:05:02.352831 coreos-metadata[1997]: Feb 13 19:05:02.352 INFO Fetch successful
Feb 13 19:05:02.352831 coreos-metadata[1997]: Feb 13 19:05:02.352 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 13 19:05:02.354144 coreos-metadata[1997]: Feb 13 19:05:02.354 INFO Fetch successful
Feb 13 19:05:02.354144 coreos-metadata[1997]: Feb 13 19:05:02.354 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Feb 13 19:05:02.358120 coreos-metadata[1997]: Feb 13 19:05:02.357 INFO Fetch failed with 404: resource not found
Feb 13 19:05:02.358120 coreos-metadata[1997]: Feb 13 19:05:02.358 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Feb 13 19:05:02.358723 ntpd[2008]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:05:02.362206 ntpd[2008]: 13 Feb 19:05:02 ntpd[2008]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:05:02.362206 ntpd[2008]: 13 Feb 19:05:02 ntpd[2008]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:05:02.358789 ntpd[2008]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:05:02.363342 coreos-metadata[1997]: Feb 13 19:05:02.363 INFO Fetch successful
Feb 13 19:05:02.363342 coreos-metadata[1997]: Feb 13 19:05:02.363 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Feb 13 19:05:02.364543 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:05:02.365355 (ntainerd)[2050]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:05:02.381601 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 19:05:02.399096 coreos-metadata[1997]: Feb 13 19:05:02.398 INFO Fetch successful
Feb 13 19:05:02.399096 coreos-metadata[1997]: Feb 13 19:05:02.399 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Feb 13 19:05:02.402386 coreos-metadata[1997]: Feb 13 19:05:02.402 INFO Fetch successful
Feb 13 19:05:02.402386 coreos-metadata[1997]: Feb 13 19:05:02.402 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 13 19:05:02.403071 jq[2056]: true
Feb 13 19:05:02.414012 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 13 19:05:02.412312 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 19:05:02.460997 update_engine[2030]: I20250213 19:05:02.443439 2030 main.cc:92] Flatcar Update Engine starting
Feb 13 19:05:02.461491 coreos-metadata[1997]: Feb 13 19:05:02.419 INFO Fetch successful
Feb 13 19:05:02.461491 coreos-metadata[1997]: Feb 13 19:05:02.419 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Feb 13 19:05:02.461491 coreos-metadata[1997]: Feb 13 19:05:02.429 INFO Fetch successful
Feb 13 19:05:02.473162 extend-filesystems[2027]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 13 19:05:02.473162 extend-filesystems[2027]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 19:05:02.473162 extend-filesystems[2027]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 13 19:05:02.467092 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 19:05:02.496059 update_engine[2030]: I20250213 19:05:02.491858 2030 update_check_scheduler.cc:74] Next update check in 5m26s
Feb 13 19:05:02.496158 extend-filesystems[2001]: Resized filesystem in /dev/nvme0n1p9
Feb 13 19:05:02.467687 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 19:05:02.522454 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:05:02.528898 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:05:02.533076 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:05:02.629211 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 19:05:02.635096 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 19:05:02.661273 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Feb 13 19:05:02.666440 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 19:05:02.754507 bash[2109]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:05:02.759749 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 19:05:02.780310 systemd[1]: Starting sshkeys.service...
Feb 13 19:05:02.836261 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 19:05:02.843028 amazon-ssm-agent[2103]: Initializing new seelog logger
Feb 13 19:05:02.843028 amazon-ssm-agent[2103]: New Seelog Logger Creation Complete
Feb 13 19:05:02.843028 amazon-ssm-agent[2103]: 2025/02/13 19:05:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:05:02.843028 amazon-ssm-agent[2103]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:05:02.843028 amazon-ssm-agent[2103]: 2025/02/13 19:05:02 processing appconfig overrides
Feb 13 19:05:02.843690 amazon-ssm-agent[2103]: 2025/02/13 19:05:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:05:02.843690 amazon-ssm-agent[2103]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:05:02.843690 amazon-ssm-agent[2103]: 2025/02/13 19:05:02 processing appconfig overrides
Feb 13 19:05:02.843690 amazon-ssm-agent[2103]: 2025/02/13 19:05:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:05:02.843690 amazon-ssm-agent[2103]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:05:02.843690 amazon-ssm-agent[2103]: 2025/02/13 19:05:02 processing appconfig overrides
Feb 13 19:05:02.845026 amazon-ssm-agent[2103]: 2025-02-13 19:05:02 INFO Proxy environment variables:
Feb 13 19:05:02.862284 amazon-ssm-agent[2103]: 2025/02/13 19:05:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:05:02.862284 amazon-ssm-agent[2103]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:05:02.862284 amazon-ssm-agent[2103]: 2025/02/13 19:05:02 processing appconfig overrides
Feb 13 19:05:02.892061 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (2114)
Feb 13 19:05:02.906010 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 19:05:02.918026 systemd-logind[2028]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 13 19:05:02.918079 systemd-logind[2028]: Watching system buttons on /dev/input/event1 (Sleep Button)
Feb 13 19:05:02.920747 systemd-logind[2028]: New seat seat0.
Feb 13 19:05:02.924204 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 19:05:02.942030 locksmithd[2079]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 19:05:02.951426 amazon-ssm-agent[2103]: 2025-02-13 19:05:02 INFO https_proxy:
Feb 13 19:05:03.053145 amazon-ssm-agent[2103]: 2025-02-13 19:05:02 INFO http_proxy:
Feb 13 19:05:03.142678 dbus-daemon[1999]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 19:05:03.142942 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 19:05:03.152322 dbus-daemon[1999]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2062 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 19:05:03.164001 amazon-ssm-agent[2103]: 2025-02-13 19:05:02 INFO no_proxy:
Feb 13 19:05:03.194409 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 19:05:03.241313 polkitd[2161]: Started polkitd version 121
Feb 13 19:05:03.259832 amazon-ssm-agent[2103]: 2025-02-13 19:05:02 INFO Checking if agent identity type OnPrem can be assumed
Feb 13 19:05:03.277096 polkitd[2161]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 19:05:03.277211 polkitd[2161]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 19:05:03.292788 polkitd[2161]: Finished loading, compiling and executing 2 rules
Feb 13 19:05:03.293943 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 19:05:03.293640 dbus-daemon[1999]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 19:05:03.296977 polkitd[2161]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 19:05:03.350036 coreos-metadata[2121]: Feb 13 19:05:03.348 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 19:05:03.351689 coreos-metadata[2121]: Feb 13 19:05:03.350 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Feb 13 19:05:03.353323 coreos-metadata[2121]: Feb 13 19:05:03.353 INFO Fetch successful
Feb 13 19:05:03.356706 coreos-metadata[2121]: Feb 13 19:05:03.356 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 13 19:05:03.358679 coreos-metadata[2121]: Feb 13 19:05:03.358 INFO Fetch successful
Feb 13 19:05:03.359023 amazon-ssm-agent[2103]: 2025-02-13 19:05:02 INFO Checking if agent identity type EC2 can be assumed
Feb 13 19:05:03.365118 unknown[2121]: wrote ssh authorized keys file for user: core
Feb 13 19:05:03.401671 systemd-resolved[1938]: System hostname changed to 'ip-172-31-27-136'.
Feb 13 19:05:03.402094 systemd-hostnamed[2062]: Hostname set to (transient)
Feb 13 19:05:03.418707 containerd[2050]: time="2025-02-13T19:05:03.418486954Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 19:05:03.423056 update-ssh-keys[2200]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:05:03.427340 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 19:05:03.439802 systemd[1]: Finished sshkeys.service.
Feb 13 19:05:03.460747 amazon-ssm-agent[2103]: 2025-02-13 19:05:03 INFO Agent will take identity from EC2
Feb 13 19:05:03.560100 amazon-ssm-agent[2103]: 2025-02-13 19:05:03 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 19:05:03.640872 containerd[2050]: time="2025-02-13T19:05:03.640659407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:05:03.651232 containerd[2050]: time="2025-02-13T19:05:03.651145091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:05:03.651232 containerd[2050]: time="2025-02-13T19:05:03.651219671Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:05:03.651391 containerd[2050]: time="2025-02-13T19:05:03.651257447Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:05:03.651617 containerd[2050]: time="2025-02-13T19:05:03.651573179Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:05:03.651678 containerd[2050]: time="2025-02-13T19:05:03.651619643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:05:03.651791 containerd[2050]: time="2025-02-13T19:05:03.651748499Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:05:03.651846 containerd[2050]: time="2025-02-13T19:05:03.651786599Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:05:03.652249 containerd[2050]: time="2025-02-13T19:05:03.652182887Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:05:03.652249 containerd[2050]: time="2025-02-13T19:05:03.652226147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:05:03.652376 containerd[2050]: time="2025-02-13T19:05:03.652258115Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:05:03.652376 containerd[2050]: time="2025-02-13T19:05:03.652282379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:05:03.652486 containerd[2050]: time="2025-02-13T19:05:03.652446587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:05:03.659392 amazon-ssm-agent[2103]: 2025-02-13 19:05:03 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 19:05:03.659704 containerd[2050]: time="2025-02-13T19:05:03.659639351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:05:03.660302 containerd[2050]: time="2025-02-13T19:05:03.659990087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:05:03.660302 containerd[2050]: time="2025-02-13T19:05:03.660024323Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:05:03.660302 containerd[2050]: time="2025-02-13T19:05:03.660238427Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:05:03.660434 containerd[2050]: time="2025-02-13T19:05:03.660339239Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:05:03.678193 containerd[2050]: time="2025-02-13T19:05:03.677947319Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:05:03.678193 containerd[2050]: time="2025-02-13T19:05:03.678175331Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:05:03.678378 containerd[2050]: time="2025-02-13T19:05:03.678227231Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:05:03.678378 containerd[2050]: time="2025-02-13T19:05:03.678265547Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:05:03.678378 containerd[2050]: time="2025-02-13T19:05:03.678298151Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:05:03.679419 containerd[2050]: time="2025-02-13T19:05:03.678558251Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:05:03.679419 containerd[2050]: time="2025-02-13T19:05:03.679168415Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:05:03.679419 containerd[2050]: time="2025-02-13T19:05:03.679371059Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:05:03.679419 containerd[2050]: time="2025-02-13T19:05:03.679403471Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:05:03.679643 containerd[2050]: time="2025-02-13T19:05:03.679452959Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:05:03.679643 containerd[2050]: time="2025-02-13T19:05:03.679494695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:05:03.679643 containerd[2050]: time="2025-02-13T19:05:03.679526459Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:05:03.679643 containerd[2050]: time="2025-02-13T19:05:03.679557875Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:05:03.679643 containerd[2050]: time="2025-02-13T19:05:03.679602959Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:05:03.679643 containerd[2050]: time="2025-02-13T19:05:03.679638047Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:05:03.679878 containerd[2050]: time="2025-02-13T19:05:03.679667435Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:05:03.679878 containerd[2050]: time="2025-02-13T19:05:03.679696187Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:05:03.679878 containerd[2050]: time="2025-02-13T19:05:03.679723187Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:05:03.679878 containerd[2050]: time="2025-02-13T19:05:03.679764599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:05:03.679878 containerd[2050]: time="2025-02-13T19:05:03.679795487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:05:03.679878 containerd[2050]: time="2025-02-13T19:05:03.679824923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 19:05:03.679878 containerd[2050]: time="2025-02-13T19:05:03.679857563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 19:05:03.680267 containerd[2050]: time="2025-02-13T19:05:03.679886363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 19:05:03.680267 containerd[2050]: time="2025-02-13T19:05:03.679915847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 19:05:03.680267 containerd[2050]: time="2025-02-13T19:05:03.679957139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 19:05:03.680267 containerd[2050]: time="2025-02-13T19:05:03.680124203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 19:05:03.680267 containerd[2050]: time="2025-02-13T19:05:03.680190791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 19:05:03.680267 containerd[2050]: time="2025-02-13T19:05:03.680229335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 19:05:03.680505 containerd[2050]: time="2025-02-13T19:05:03.680286371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:05:03.680505 containerd[2050]: time="2025-02-13T19:05:03.680319983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 19:05:03.680505 containerd[2050]: time="2025-02-13T19:05:03.680375819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 19:05:03.680505 containerd[2050]: time="2025-02-13T19:05:03.680409539Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 19:05:03.680505 containerd[2050]: time="2025-02-13T19:05:03.680483339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 19:05:03.685023 containerd[2050]: time="2025-02-13T19:05:03.683045675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 19:05:03.685023 containerd[2050]: time="2025-02-13T19:05:03.683139815Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 19:05:03.685216 containerd[2050]: time="2025-02-13T19:05:03.685073819Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 19:05:03.686004 containerd[2050]: time="2025-02-13T19:05:03.685243427Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 19:05:03.690999 containerd[2050]: time="2025-02-13T19:05:03.685312619Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 19:05:03.690999 containerd[2050]: time="2025-02-13T19:05:03.690141059Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 19:05:03.690999 containerd[2050]: time="2025-02-13T19:05:03.690313331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 19:05:03.690999 containerd[2050]: time="2025-02-13T19:05:03.690381347Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 19:05:03.690999 containerd[2050]: time="2025-02-13T19:05:03.690407819Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 19:05:03.690999 containerd[2050]: time="2025-02-13T19:05:03.690461291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 19:05:03.693356 containerd[2050]: time="2025-02-13T19:05:03.693197039Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 19:05:03.693851 containerd[2050]: time="2025-02-13T19:05:03.693342263Z" level=info msg="Connect containerd service"
Feb 13 19:05:03.697363 containerd[2050]: time="2025-02-13T19:05:03.693928355Z" level=info msg="using legacy CRI server"
Feb 13 19:05:03.697363 containerd[2050]: time="2025-02-13T19:05:03.696070067Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 19:05:03.697363 containerd[2050]: time="2025-02-13T19:05:03.696480923Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 19:05:03.703678 containerd[2050]: time="2025-02-13T19:05:03.703586135Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:05:03.707051 containerd[2050]: time="2025-02-13T19:05:03.705047423Z" level=info msg="Start subscribing containerd event"
Feb 13 19:05:03.707142 containerd[2050]: time="2025-02-13T19:05:03.707080031Z" level=info msg="Start recovering state"
Feb 13 19:05:03.707264 containerd[2050]: time="2025-02-13T19:05:03.707221235Z" level=info msg="Start event monitor"
Feb 13 19:05:03.707322 containerd[2050]: time="2025-02-13T19:05:03.707261279Z"
level=info msg="Start snapshots syncer" Feb 13 19:05:03.707322 containerd[2050]: time="2025-02-13T19:05:03.707286707Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:05:03.707322 containerd[2050]: time="2025-02-13T19:05:03.707305763Z" level=info msg="Start streaming server" Feb 13 19:05:03.717246 containerd[2050]: time="2025-02-13T19:05:03.712276151Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:05:03.717246 containerd[2050]: time="2025-02-13T19:05:03.712604435Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:05:03.722312 containerd[2050]: time="2025-02-13T19:05:03.720297335Z" level=info msg="containerd successfully booted in 0.312190s" Feb 13 19:05:03.722172 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:05:03.759045 amazon-ssm-agent[2103]: 2025-02-13 19:05:03 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:05:03.858397 amazon-ssm-agent[2103]: 2025-02-13 19:05:03 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:05:03.961944 amazon-ssm-agent[2103]: 2025-02-13 19:05:03 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 19:05:04.058819 amazon-ssm-agent[2103]: 2025-02-13 19:05:03 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 19:05:04.159043 amazon-ssm-agent[2103]: 2025-02-13 19:05:03 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 19:05:04.259633 amazon-ssm-agent[2103]: 2025-02-13 19:05:03 INFO [Registrar] Starting registrar module Feb 13 19:05:04.360155 amazon-ssm-agent[2103]: 2025-02-13 19:05:03 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 19:05:04.828340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:05:04.851095 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:05:05.634499 amazon-ssm-agent[2103]: 2025-02-13 19:05:05 INFO [EC2Identity] EC2 registration was successful. Feb 13 19:05:05.664763 amazon-ssm-agent[2103]: 2025-02-13 19:05:05 INFO [CredentialRefresher] credentialRefresher has started Feb 13 19:05:05.664763 amazon-ssm-agent[2103]: 2025-02-13 19:05:05 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 19:05:05.665305 amazon-ssm-agent[2103]: 2025-02-13 19:05:05 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 19:05:05.735847 amazon-ssm-agent[2103]: 2025-02-13 19:05:05 INFO [CredentialRefresher] Next credential rotation will be in 31.516659553166665 minutes Feb 13 19:05:05.832876 sshd_keygen[2057]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:05:05.874537 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:05:05.891660 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:05:05.908562 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:05:05.909121 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:05:05.920735 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:05:05.957251 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:05:05.972561 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:05:05.983471 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:05:05.986738 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:05:05.989486 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:05:05.992253 systemd[1]: Startup finished in 8.399s (kernel) + 9.927s (userspace) = 18.326s. 
Feb 13 19:05:06.055856 kubelet[2261]: E0213 19:05:06.055769 2261 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:05:06.061165 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:05:06.061555 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:05:06.691789 amazon-ssm-agent[2103]: 2025-02-13 19:05:06 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 19:05:06.792265 amazon-ssm-agent[2103]: 2025-02-13 19:05:06 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2295) started Feb 13 19:05:06.893200 amazon-ssm-agent[2103]: 2025-02-13 19:05:06 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 19:05:11.359513 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:05:11.366463 systemd[1]: Started sshd@0-172.31.27.136:22-147.75.109.163:36768.service - OpenSSH per-connection server daemon (147.75.109.163:36768). Feb 13 19:05:11.573006 sshd[2305]: Accepted publickey for core from 147.75.109.163 port 36768 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:05:11.576881 sshd-session[2305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:05:11.593801 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:05:11.600472 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:05:11.604544 systemd-logind[2028]: New session 1 of user core. 
Feb 13 19:05:11.635269 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:05:11.648481 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:05:11.671361 (systemd)[2311]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:05:11.881099 systemd[2311]: Queued start job for default target default.target. Feb 13 19:05:11.882544 systemd[2311]: Created slice app.slice - User Application Slice. Feb 13 19:05:11.882603 systemd[2311]: Reached target paths.target - Paths. Feb 13 19:05:11.882636 systemd[2311]: Reached target timers.target - Timers. Feb 13 19:05:11.889153 systemd[2311]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:05:11.905666 systemd[2311]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:05:11.906247 systemd[2311]: Reached target sockets.target - Sockets. Feb 13 19:05:11.906401 systemd[2311]: Reached target basic.target - Basic System. Feb 13 19:05:11.906694 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:05:11.909618 systemd[2311]: Reached target default.target - Main User Target. Feb 13 19:05:11.909717 systemd[2311]: Startup finished in 227ms. Feb 13 19:05:11.912821 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:05:12.063556 systemd[1]: Started sshd@1-172.31.27.136:22-147.75.109.163:36770.service - OpenSSH per-connection server daemon (147.75.109.163:36770). Feb 13 19:05:12.245646 sshd[2323]: Accepted publickey for core from 147.75.109.163 port 36770 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:05:12.248140 sshd-session[2323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:05:12.255515 systemd-logind[2028]: New session 2 of user core. Feb 13 19:05:12.268864 systemd[1]: Started session-2.scope - Session 2 of User core. 
Feb 13 19:05:12.393406 sshd[2326]: Connection closed by 147.75.109.163 port 36770 Feb 13 19:05:12.394266 sshd-session[2323]: pam_unix(sshd:session): session closed for user core Feb 13 19:05:12.399779 systemd-logind[2028]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:05:12.402685 systemd[1]: sshd@1-172.31.27.136:22-147.75.109.163:36770.service: Deactivated successfully. Feb 13 19:05:12.405560 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:05:12.410647 systemd-logind[2028]: Removed session 2. Feb 13 19:05:12.423461 systemd[1]: Started sshd@2-172.31.27.136:22-147.75.109.163:36778.service - OpenSSH per-connection server daemon (147.75.109.163:36778). Feb 13 19:05:12.612816 sshd[2331]: Accepted publickey for core from 147.75.109.163 port 36778 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:05:12.615503 sshd-session[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:05:12.622755 systemd-logind[2028]: New session 3 of user core. Feb 13 19:05:12.631453 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:05:12.754999 sshd[2334]: Connection closed by 147.75.109.163 port 36778 Feb 13 19:05:12.755689 sshd-session[2331]: pam_unix(sshd:session): session closed for user core Feb 13 19:05:12.762298 systemd-logind[2028]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:05:12.763436 systemd[1]: sshd@2-172.31.27.136:22-147.75.109.163:36778.service: Deactivated successfully. Feb 13 19:05:12.768875 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:05:12.770654 systemd-logind[2028]: Removed session 3. Feb 13 19:05:12.789412 systemd[1]: Started sshd@3-172.31.27.136:22-147.75.109.163:36792.service - OpenSSH per-connection server daemon (147.75.109.163:36792). 
Feb 13 19:05:12.971528 sshd[2339]: Accepted publickey for core from 147.75.109.163 port 36792 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:05:12.973906 sshd-session[2339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:05:12.982408 systemd-logind[2028]: New session 4 of user core. Feb 13 19:05:12.989565 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:05:13.117072 sshd[2342]: Connection closed by 147.75.109.163 port 36792 Feb 13 19:05:13.117954 sshd-session[2339]: pam_unix(sshd:session): session closed for user core Feb 13 19:05:13.122651 systemd-logind[2028]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:05:13.126669 systemd[1]: sshd@3-172.31.27.136:22-147.75.109.163:36792.service: Deactivated successfully. Feb 13 19:05:13.131399 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:05:13.132865 systemd-logind[2028]: Removed session 4. Feb 13 19:05:13.150582 systemd[1]: Started sshd@4-172.31.27.136:22-147.75.109.163:36794.service - OpenSSH per-connection server daemon (147.75.109.163:36794). Feb 13 19:05:13.335909 sshd[2347]: Accepted publickey for core from 147.75.109.163 port 36794 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:05:13.337714 sshd-session[2347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:05:13.345737 systemd-logind[2028]: New session 5 of user core. Feb 13 19:05:13.352588 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 19:05:13.476136 sudo[2351]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:05:13.476739 sudo[2351]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:05:13.496468 sudo[2351]: pam_unix(sudo:session): session closed for user root Feb 13 19:05:13.520022 sshd[2350]: Connection closed by 147.75.109.163 port 36794 Feb 13 19:05:13.521112 sshd-session[2347]: pam_unix(sshd:session): session closed for user core Feb 13 19:05:13.527366 systemd[1]: sshd@4-172.31.27.136:22-147.75.109.163:36794.service: Deactivated successfully. Feb 13 19:05:13.533324 systemd-logind[2028]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:05:13.534710 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:05:13.536769 systemd-logind[2028]: Removed session 5. Feb 13 19:05:13.554415 systemd[1]: Started sshd@5-172.31.27.136:22-147.75.109.163:36804.service - OpenSSH per-connection server daemon (147.75.109.163:36804). Feb 13 19:05:13.735117 sshd[2356]: Accepted publickey for core from 147.75.109.163 port 36804 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:05:13.737674 sshd-session[2356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:05:13.745016 systemd-logind[2028]: New session 6 of user core. Feb 13 19:05:13.754433 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 19:05:13.860129 sudo[2361]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:05:13.861267 sudo[2361]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:05:13.867608 sudo[2361]: pam_unix(sudo:session): session closed for user root Feb 13 19:05:13.877598 sudo[2360]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:05:13.878247 sudo[2360]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:05:13.907600 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:05:13.953216 augenrules[2383]: No rules Feb 13 19:05:13.956472 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:05:13.956997 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:05:13.959806 sudo[2360]: pam_unix(sudo:session): session closed for user root Feb 13 19:05:13.983565 sshd[2359]: Connection closed by 147.75.109.163 port 36804 Feb 13 19:05:13.984595 sshd-session[2356]: pam_unix(sshd:session): session closed for user core Feb 13 19:05:13.989039 systemd[1]: sshd@5-172.31.27.136:22-147.75.109.163:36804.service: Deactivated successfully. Feb 13 19:05:13.995719 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:05:13.996140 systemd-logind[2028]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:05:13.999072 systemd-logind[2028]: Removed session 6. Feb 13 19:05:14.013461 systemd[1]: Started sshd@6-172.31.27.136:22-147.75.109.163:36808.service - OpenSSH per-connection server daemon (147.75.109.163:36808). 
Feb 13 19:05:14.196650 sshd[2392]: Accepted publickey for core from 147.75.109.163 port 36808 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:05:14.198984 sshd-session[2392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:05:14.207048 systemd-logind[2028]: New session 7 of user core. Feb 13 19:05:14.213451 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:05:14.317402 sudo[2396]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:05:14.318083 sudo[2396]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:05:15.390550 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:05:15.401470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:05:15.449458 systemd[1]: Reloading requested from client PID 2433 ('systemctl') (unit session-7.scope)... Feb 13 19:05:15.449500 systemd[1]: Reloading... Feb 13 19:05:15.673029 zram_generator::config[2473]: No configuration found. Feb 13 19:05:15.936049 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:05:16.088990 systemd[1]: Reloading finished in 638 ms. Feb 13 19:05:16.154715 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:05:16.154927 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:05:16.155860 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:05:16.174681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:05:16.472292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:05:16.491575 (kubelet)[2545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:05:16.566035 kubelet[2545]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:05:16.566035 kubelet[2545]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:05:16.566035 kubelet[2545]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:05:16.569027 kubelet[2545]: I0213 19:05:16.567808 2545 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:05:17.904794 kubelet[2545]: I0213 19:05:17.904732 2545 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:05:17.904794 kubelet[2545]: I0213 19:05:17.904778 2545 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:05:17.905463 kubelet[2545]: I0213 19:05:17.905134 2545 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:05:17.939423 kubelet[2545]: I0213 19:05:17.939205 2545 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:05:17.952667 kubelet[2545]: I0213 19:05:17.952614 2545 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:05:17.954482 kubelet[2545]: I0213 19:05:17.953854 2545 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:05:17.954482 kubelet[2545]: I0213 19:05:17.953921 2545 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.27.136","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:05:17.956910 kubelet[2545]: I0213 19:05:17.956856 2545 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 
19:05:17.957119 kubelet[2545]: I0213 19:05:17.957101 2545 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:05:17.958122 kubelet[2545]: I0213 19:05:17.958084 2545 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:05:17.961034 kubelet[2545]: I0213 19:05:17.960277 2545 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:05:17.961034 kubelet[2545]: I0213 19:05:17.960365 2545 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:05:17.961034 kubelet[2545]: I0213 19:05:17.960463 2545 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:05:17.961034 kubelet[2545]: I0213 19:05:17.960515 2545 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:05:17.961034 kubelet[2545]: E0213 19:05:17.960922 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:17.961835 kubelet[2545]: E0213 19:05:17.961786 2545 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:17.963472 kubelet[2545]: I0213 19:05:17.963427 2545 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:05:17.963848 kubelet[2545]: I0213 19:05:17.963805 2545 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:05:17.963947 kubelet[2545]: W0213 19:05:17.963910 2545 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 19:05:17.967015 kubelet[2545]: I0213 19:05:17.966071 2545 server.go:1264] "Started kubelet" Feb 13 19:05:17.968371 kubelet[2545]: I0213 19:05:17.968290 2545 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:05:17.971001 kubelet[2545]: I0213 19:05:17.970063 2545 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:05:17.971001 kubelet[2545]: I0213 19:05:17.970277 2545 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:05:17.971001 kubelet[2545]: I0213 19:05:17.970781 2545 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:05:17.976206 kubelet[2545]: I0213 19:05:17.976149 2545 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:05:17.983092 kubelet[2545]: E0213 19:05:17.983051 2545 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.27.136\" not found" Feb 13 19:05:17.983306 kubelet[2545]: I0213 19:05:17.983287 2545 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:05:17.983550 kubelet[2545]: I0213 19:05:17.983527 2545 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:05:17.985420 kubelet[2545]: I0213 19:05:17.985387 2545 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:05:17.986101 kubelet[2545]: E0213 19:05:17.986067 2545 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:05:17.990264 kubelet[2545]: I0213 19:05:17.990227 2545 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:05:17.990593 kubelet[2545]: I0213 19:05:17.990562 2545 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:05:17.994232 kubelet[2545]: I0213 19:05:17.994192 2545 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:05:18.008859 kubelet[2545]: W0213 19:05:18.008811 2545 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.27.136" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:05:18.009016 kubelet[2545]: E0213 19:05:18.008867 2545 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.27.136" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:05:18.011198 kubelet[2545]: E0213 19:05:18.008939 2545 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.27.136.1823d9f2ce9085cc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.27.136,UID:172.31.27.136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.27.136,},FirstTimestamp:2025-02-13 19:05:17.96603438 +0000 UTC m=+1.468641140,LastTimestamp:2025-02-13 19:05:17.96603438 +0000 UTC m=+1.468641140,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.27.136,}" Feb 13 19:05:18.031327 kubelet[2545]: W0213 19:05:18.031266 2545 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:05:18.031327 kubelet[2545]: E0213 19:05:18.031329 2545 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:05:18.032017 kubelet[2545]: E0213 19:05:18.031682 2545 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.27.136\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 19:05:18.032017 kubelet[2545]: E0213 19:05:18.031789 2545 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.27.136.1823d9f2cfc1efaf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.27.136,UID:172.31.27.136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.27.136,},FirstTimestamp:2025-02-13 19:05:17.986049967 +0000 UTC m=+1.488656667,LastTimestamp:2025-02-13 19:05:17.986049967 +0000 UTC m=+1.488656667,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.27.136,}" Feb 13 19:05:18.032240 kubelet[2545]: W0213 19:05:18.032101 2545 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:05:18.032240 kubelet[2545]: E0213 19:05:18.032136 2545 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:05:18.045894 kubelet[2545]: E0213 19:05:18.045761 2545 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.27.136.1823d9f2d3300cd7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.27.136,UID:172.31.27.136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.27.136 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.27.136,},FirstTimestamp:2025-02-13 19:05:18.043598039 +0000 UTC m=+1.546204703,LastTimestamp:2025-02-13 19:05:18.043598039 +0000 UTC m=+1.546204703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.27.136,}" Feb 13 19:05:18.046383 kubelet[2545]: I0213 19:05:18.046301 2545 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:05:18.046383 kubelet[2545]: I0213 19:05:18.046328 2545 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:05:18.046383 kubelet[2545]: I0213 19:05:18.046358 2545 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
19:05:18.050254 kubelet[2545]: I0213 19:05:18.050196 2545 policy_none.go:49] "None policy: Start" Feb 13 19:05:18.053232 kubelet[2545]: I0213 19:05:18.053179 2545 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:05:18.053232 kubelet[2545]: I0213 19:05:18.053232 2545 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:05:18.074015 kubelet[2545]: I0213 19:05:18.073382 2545 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:05:18.074015 kubelet[2545]: I0213 19:05:18.073939 2545 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:05:18.074927 kubelet[2545]: I0213 19:05:18.074884 2545 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:05:18.086158 kubelet[2545]: E0213 19:05:18.086103 2545 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.27.136\" not found" Feb 13 19:05:18.089542 kubelet[2545]: I0213 19:05:18.089482 2545 kubelet_node_status.go:73] "Attempting to register node" node="172.31.27.136" Feb 13 19:05:18.103340 kubelet[2545]: I0213 19:05:18.103301 2545 kubelet_node_status.go:76] "Successfully registered node" node="172.31.27.136" Feb 13 19:05:18.104892 kubelet[2545]: I0213 19:05:18.104841 2545 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:05:18.107732 kubelet[2545]: I0213 19:05:18.107337 2545 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:05:18.107732 kubelet[2545]: I0213 19:05:18.107415 2545 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:05:18.107732 kubelet[2545]: I0213 19:05:18.107447 2545 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:05:18.107732 kubelet[2545]: E0213 19:05:18.107516 2545 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 19:05:18.132796 kubelet[2545]: E0213 19:05:18.132753 2545 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.27.136\" not found" Feb 13 19:05:18.232898 kubelet[2545]: E0213 19:05:18.232842 2545 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.27.136\" not found" Feb 13 19:05:18.334017 kubelet[2545]: E0213 19:05:18.333939 2545 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.27.136\" not found" Feb 13 19:05:18.434868 kubelet[2545]: E0213 19:05:18.434825 2545 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.27.136\" not found" Feb 13 19:05:18.536016 kubelet[2545]: E0213 19:05:18.535857 2545 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.27.136\" not found" Feb 13 19:05:18.636581 kubelet[2545]: E0213 19:05:18.636525 2545 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.27.136\" not found" Feb 13 19:05:18.737173 kubelet[2545]: E0213 19:05:18.737121 2545 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.27.136\" not found" Feb 13 19:05:18.837869 kubelet[2545]: E0213 19:05:18.837743 2545 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.27.136\" not found" Feb 13 19:05:18.908850 kubelet[2545]: I0213 19:05:18.908454 2545 transport.go:147] "Certificate rotation detected, shutting down client connections 
to start using new credentials" Feb 13 19:05:18.908850 kubelet[2545]: W0213 19:05:18.908642 2545 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:05:18.908850 kubelet[2545]: W0213 19:05:18.908699 2545 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:05:18.938649 kubelet[2545]: E0213 19:05:18.938599 2545 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.27.136\" not found" Feb 13 19:05:18.961859 kubelet[2545]: E0213 19:05:18.961813 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:19.025611 sudo[2396]: pam_unix(sudo:session): session closed for user root Feb 13 19:05:19.039753 kubelet[2545]: E0213 19:05:19.039705 2545 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.27.136\" not found" Feb 13 19:05:19.049032 sshd[2395]: Connection closed by 147.75.109.163 port 36808 Feb 13 19:05:19.049848 sshd-session[2392]: pam_unix(sshd:session): session closed for user core Feb 13 19:05:19.054922 systemd[1]: sshd@6-172.31.27.136:22-147.75.109.163:36808.service: Deactivated successfully. Feb 13 19:05:19.064083 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:05:19.065923 systemd-logind[2028]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:05:19.067857 systemd-logind[2028]: Removed session 7. 
Feb 13 19:05:19.140276 kubelet[2545]: E0213 19:05:19.140105 2545 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.27.136\" not found" Feb 13 19:05:19.241915 kubelet[2545]: I0213 19:05:19.241767 2545 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 19:05:19.242562 containerd[2050]: time="2025-02-13T19:05:19.242320357Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:05:19.243150 kubelet[2545]: I0213 19:05:19.242729 2545 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 19:05:19.962238 kubelet[2545]: E0213 19:05:19.962169 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:19.963517 kubelet[2545]: I0213 19:05:19.963216 2545 apiserver.go:52] "Watching apiserver" Feb 13 19:05:19.970066 kubelet[2545]: I0213 19:05:19.970019 2545 topology_manager.go:215] "Topology Admit Handler" podUID="9c52a028-7f84-4037-80f8-38a5af30c11f" podNamespace="calico-system" podName="calico-node-dd4qj" Feb 13 19:05:19.972013 kubelet[2545]: I0213 19:05:19.970326 2545 topology_manager.go:215] "Topology Admit Handler" podUID="860b9edd-e240-4254-867a-e12f2e2a94c5" podNamespace="calico-system" podName="csi-node-driver-z7mmf" Feb 13 19:05:19.972013 kubelet[2545]: I0213 19:05:19.970447 2545 topology_manager.go:215] "Topology Admit Handler" podUID="15082f0a-bcfa-4ca5-a01d-6d22f7a8044a" podNamespace="kube-system" podName="kube-proxy-stwzg" Feb 13 19:05:19.972013 kubelet[2545]: E0213 19:05:19.971933 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7mmf" 
podUID="860b9edd-e240-4254-867a-e12f2e2a94c5" Feb 13 19:05:19.988671 kubelet[2545]: I0213 19:05:19.988631 2545 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:05:19.996953 kubelet[2545]: I0213 19:05:19.996884 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9c52a028-7f84-4037-80f8-38a5af30c11f-var-lib-calico\") pod \"calico-node-dd4qj\" (UID: \"9c52a028-7f84-4037-80f8-38a5af30c11f\") " pod="calico-system/calico-node-dd4qj" Feb 13 19:05:19.997401 kubelet[2545]: I0213 19:05:19.997171 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15082f0a-bcfa-4ca5-a01d-6d22f7a8044a-xtables-lock\") pod \"kube-proxy-stwzg\" (UID: \"15082f0a-bcfa-4ca5-a01d-6d22f7a8044a\") " pod="kube-system/kube-proxy-stwzg" Feb 13 19:05:19.997401 kubelet[2545]: I0213 19:05:19.997246 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk6tn\" (UniqueName: \"kubernetes.io/projected/15082f0a-bcfa-4ca5-a01d-6d22f7a8044a-kube-api-access-qk6tn\") pod \"kube-proxy-stwzg\" (UID: \"15082f0a-bcfa-4ca5-a01d-6d22f7a8044a\") " pod="kube-system/kube-proxy-stwzg" Feb 13 19:05:19.997401 kubelet[2545]: I0213 19:05:19.997289 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c52a028-7f84-4037-80f8-38a5af30c11f-xtables-lock\") pod \"calico-node-dd4qj\" (UID: \"9c52a028-7f84-4037-80f8-38a5af30c11f\") " pod="calico-system/calico-node-dd4qj" Feb 13 19:05:19.997401 kubelet[2545]: I0213 19:05:19.997355 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/9c52a028-7f84-4037-80f8-38a5af30c11f-policysync\") pod \"calico-node-dd4qj\" (UID: \"9c52a028-7f84-4037-80f8-38a5af30c11f\") " pod="calico-system/calico-node-dd4qj" Feb 13 19:05:19.997883 kubelet[2545]: I0213 19:05:19.997635 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c52a028-7f84-4037-80f8-38a5af30c11f-tigera-ca-bundle\") pod \"calico-node-dd4qj\" (UID: \"9c52a028-7f84-4037-80f8-38a5af30c11f\") " pod="calico-system/calico-node-dd4qj" Feb 13 19:05:19.997883 kubelet[2545]: I0213 19:05:19.997725 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9c52a028-7f84-4037-80f8-38a5af30c11f-node-certs\") pod \"calico-node-dd4qj\" (UID: \"9c52a028-7f84-4037-80f8-38a5af30c11f\") " pod="calico-system/calico-node-dd4qj" Feb 13 19:05:19.997883 kubelet[2545]: I0213 19:05:19.997766 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9c52a028-7f84-4037-80f8-38a5af30c11f-flexvol-driver-host\") pod \"calico-node-dd4qj\" (UID: \"9c52a028-7f84-4037-80f8-38a5af30c11f\") " pod="calico-system/calico-node-dd4qj" Feb 13 19:05:19.997883 kubelet[2545]: I0213 19:05:19.997831 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/860b9edd-e240-4254-867a-e12f2e2a94c5-kubelet-dir\") pod \"csi-node-driver-z7mmf\" (UID: \"860b9edd-e240-4254-867a-e12f2e2a94c5\") " pod="calico-system/csi-node-driver-z7mmf" Feb 13 19:05:19.998399 kubelet[2545]: I0213 19:05:19.998153 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/860b9edd-e240-4254-867a-e12f2e2a94c5-registration-dir\") pod \"csi-node-driver-z7mmf\" (UID: \"860b9edd-e240-4254-867a-e12f2e2a94c5\") " pod="calico-system/csi-node-driver-z7mmf" Feb 13 19:05:19.998399 kubelet[2545]: I0213 19:05:19.998203 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwnzm\" (UniqueName: \"kubernetes.io/projected/860b9edd-e240-4254-867a-e12f2e2a94c5-kube-api-access-pwnzm\") pod \"csi-node-driver-z7mmf\" (UID: \"860b9edd-e240-4254-867a-e12f2e2a94c5\") " pod="calico-system/csi-node-driver-z7mmf" Feb 13 19:05:19.998399 kubelet[2545]: I0213 19:05:19.998264 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9c52a028-7f84-4037-80f8-38a5af30c11f-cni-log-dir\") pod \"calico-node-dd4qj\" (UID: \"9c52a028-7f84-4037-80f8-38a5af30c11f\") " pod="calico-system/calico-node-dd4qj" Feb 13 19:05:19.998399 kubelet[2545]: I0213 19:05:19.998326 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfhtz\" (UniqueName: \"kubernetes.io/projected/9c52a028-7f84-4037-80f8-38a5af30c11f-kube-api-access-lfhtz\") pod \"calico-node-dd4qj\" (UID: \"9c52a028-7f84-4037-80f8-38a5af30c11f\") " pod="calico-system/calico-node-dd4qj" Feb 13 19:05:19.998765 kubelet[2545]: I0213 19:05:19.998376 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/860b9edd-e240-4254-867a-e12f2e2a94c5-varrun\") pod \"csi-node-driver-z7mmf\" (UID: \"860b9edd-e240-4254-867a-e12f2e2a94c5\") " pod="calico-system/csi-node-driver-z7mmf" Feb 13 19:05:19.998765 kubelet[2545]: I0213 19:05:19.998642 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/860b9edd-e240-4254-867a-e12f2e2a94c5-socket-dir\") pod \"csi-node-driver-z7mmf\" (UID: \"860b9edd-e240-4254-867a-e12f2e2a94c5\") " pod="calico-system/csi-node-driver-z7mmf" Feb 13 19:05:19.998765 kubelet[2545]: I0213 19:05:19.998717 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/15082f0a-bcfa-4ca5-a01d-6d22f7a8044a-kube-proxy\") pod \"kube-proxy-stwzg\" (UID: \"15082f0a-bcfa-4ca5-a01d-6d22f7a8044a\") " pod="kube-system/kube-proxy-stwzg" Feb 13 19:05:19.999242 kubelet[2545]: I0213 19:05:19.998997 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15082f0a-bcfa-4ca5-a01d-6d22f7a8044a-lib-modules\") pod \"kube-proxy-stwzg\" (UID: \"15082f0a-bcfa-4ca5-a01d-6d22f7a8044a\") " pod="kube-system/kube-proxy-stwzg" Feb 13 19:05:19.999242 kubelet[2545]: I0213 19:05:19.999101 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c52a028-7f84-4037-80f8-38a5af30c11f-lib-modules\") pod \"calico-node-dd4qj\" (UID: \"9c52a028-7f84-4037-80f8-38a5af30c11f\") " pod="calico-system/calico-node-dd4qj" Feb 13 19:05:19.999242 kubelet[2545]: I0213 19:05:19.999164 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9c52a028-7f84-4037-80f8-38a5af30c11f-var-run-calico\") pod \"calico-node-dd4qj\" (UID: \"9c52a028-7f84-4037-80f8-38a5af30c11f\") " pod="calico-system/calico-node-dd4qj" Feb 13 19:05:19.999242 kubelet[2545]: I0213 19:05:19.999206 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9c52a028-7f84-4037-80f8-38a5af30c11f-cni-bin-dir\") pod 
\"calico-node-dd4qj\" (UID: \"9c52a028-7f84-4037-80f8-38a5af30c11f\") " pod="calico-system/calico-node-dd4qj" Feb 13 19:05:19.999721 kubelet[2545]: I0213 19:05:19.999486 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9c52a028-7f84-4037-80f8-38a5af30c11f-cni-net-dir\") pod \"calico-node-dd4qj\" (UID: \"9c52a028-7f84-4037-80f8-38a5af30c11f\") " pod="calico-system/calico-node-dd4qj" Feb 13 19:05:20.103575 kubelet[2545]: E0213 19:05:20.103521 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:20.103575 kubelet[2545]: W0213 19:05:20.103557 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:20.103775 kubelet[2545]: E0213 19:05:20.103624 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:20.104187 kubelet[2545]: E0213 19:05:20.104139 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:20.104256 kubelet[2545]: W0213 19:05:20.104186 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:20.105001 kubelet[2545]: E0213 19:05:20.104652 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:20.105001 kubelet[2545]: W0213 19:05:20.104719 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:20.105184 kubelet[2545]: E0213 19:05:20.105110 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:20.105184 kubelet[2545]: W0213 19:05:20.105127 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:20.105184 kubelet[2545]: E0213 19:05:20.105175 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:20.105540 kubelet[2545]: E0213 19:05:20.105462 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:20.105540 kubelet[2545]: E0213 19:05:20.105502 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:20.105540 kubelet[2545]: E0213 19:05:20.105510 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:20.105830 kubelet[2545]: W0213 19:05:20.105526 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:20.105830 kubelet[2545]: E0213 19:05:20.105587 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:20.106109 kubelet[2545]: E0213 19:05:20.105932 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:20.106109 kubelet[2545]: W0213 19:05:20.105996 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:20.106109 kubelet[2545]: E0213 19:05:20.106031 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:20.107253 kubelet[2545]: E0213 19:05:20.106571 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:20.107253 kubelet[2545]: W0213 19:05:20.106590 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:20.107253 kubelet[2545]: E0213 19:05:20.106630 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:20.107253 kubelet[2545]: E0213 19:05:20.107023 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:20.107253 kubelet[2545]: W0213 19:05:20.107039 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:20.107253 kubelet[2545]: E0213 19:05:20.107164 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:20.108213 kubelet[2545]: E0213 19:05:20.107488 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:20.108213 kubelet[2545]: W0213 19:05:20.107526 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:20.108213 kubelet[2545]: E0213 19:05:20.107553 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:20.108635 kubelet[2545]: E0213 19:05:20.108300 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:20.108635 kubelet[2545]: W0213 19:05:20.108445 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:20.108754 kubelet[2545]: E0213 19:05:20.108475 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:20.110010 kubelet[2545]: E0213 19:05:20.109344 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:20.110010 kubelet[2545]: W0213 19:05:20.109404 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:20.110010 kubelet[2545]: E0213 19:05:20.109433 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:20.110507 kubelet[2545]: E0213 19:05:20.110439 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:20.110507 kubelet[2545]: W0213 19:05:20.110499 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:20.110615 kubelet[2545]: E0213 19:05:20.110528 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:20.116428 kubelet[2545]: E0213 19:05:20.116369 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:20.116544 kubelet[2545]: W0213 19:05:20.116406 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:20.116544 kubelet[2545]: E0213 19:05:20.116470 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:20.137978 kubelet[2545]: E0213 19:05:20.136915 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:20.138470 kubelet[2545]: W0213 19:05:20.138337 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:20.138470 kubelet[2545]: E0213 19:05:20.138394 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:20.148119 kubelet[2545]: E0213 19:05:20.148022 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:20.148119 kubelet[2545]: W0213 19:05:20.148058 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:20.148119 kubelet[2545]: E0213 19:05:20.148091 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:20.156693 kubelet[2545]: E0213 19:05:20.156641 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:20.156693 kubelet[2545]: W0213 19:05:20.156679 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:20.156848 kubelet[2545]: E0213 19:05:20.156732 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:20.280716 containerd[2050]: time="2025-02-13T19:05:20.280104970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-stwzg,Uid:15082f0a-bcfa-4ca5-a01d-6d22f7a8044a,Namespace:kube-system,Attempt:0,}" Feb 13 19:05:20.282882 containerd[2050]: time="2025-02-13T19:05:20.282761537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dd4qj,Uid:9c52a028-7f84-4037-80f8-38a5af30c11f,Namespace:calico-system,Attempt:0,}" Feb 13 19:05:20.835015 containerd[2050]: time="2025-02-13T19:05:20.834925729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:05:20.837514 containerd[2050]: time="2025-02-13T19:05:20.837459595Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:05:20.839037 containerd[2050]: time="2025-02-13T19:05:20.838989892Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 19:05:20.840909 containerd[2050]: time="2025-02-13T19:05:20.840516335Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:05:20.841493 containerd[2050]: time="2025-02-13T19:05:20.840859503Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:05:20.849791 containerd[2050]: time="2025-02-13T19:05:20.849717974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:05:20.854007 containerd[2050]: time="2025-02-13T19:05:20.852955464Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 572.732415ms" Feb 13 19:05:20.857921 containerd[2050]: time="2025-02-13T19:05:20.857852255Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 574.95152ms" Feb 13 19:05:20.962391 kubelet[2545]: E0213 19:05:20.962323 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:21.054645 containerd[2050]: time="2025-02-13T19:05:21.052328374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:05:21.054645 containerd[2050]: time="2025-02-13T19:05:21.052443499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:05:21.054645 containerd[2050]: time="2025-02-13T19:05:21.052468676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:21.054645 containerd[2050]: time="2025-02-13T19:05:21.052649594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:21.057154 containerd[2050]: time="2025-02-13T19:05:21.056920837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:05:21.057154 containerd[2050]: time="2025-02-13T19:05:21.057084491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:05:21.057455 containerd[2050]: time="2025-02-13T19:05:21.057131338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:21.057880 containerd[2050]: time="2025-02-13T19:05:21.057717039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:21.129581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4112771219.mount: Deactivated successfully. Feb 13 19:05:21.174867 systemd[1]: run-containerd-runc-k8s.io-9d7cffb861147616f11c4142368b5a9fc36b52f762a776bca76f942649562b31-runc.nOujee.mount: Deactivated successfully. 
Feb 13 19:05:21.231355 containerd[2050]: time="2025-02-13T19:05:21.231284408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-stwzg,Uid:15082f0a-bcfa-4ca5-a01d-6d22f7a8044a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d7cffb861147616f11c4142368b5a9fc36b52f762a776bca76f942649562b31\"" Feb 13 19:05:21.234213 containerd[2050]: time="2025-02-13T19:05:21.234064770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dd4qj,Uid:9c52a028-7f84-4037-80f8-38a5af30c11f,Namespace:calico-system,Attempt:0,} returns sandbox id \"09d526b9a8aa6a6d29aa0303c63078f098166f791ae5ca5a141db7f6105dd4ec\"" Feb 13 19:05:21.238028 containerd[2050]: time="2025-02-13T19:05:21.237939118Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:05:21.962683 kubelet[2545]: E0213 19:05:21.962629 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:22.108994 kubelet[2545]: E0213 19:05:22.108202 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7mmf" podUID="860b9edd-e240-4254-867a-e12f2e2a94c5" Feb 13 19:05:22.592719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount87311868.mount: Deactivated successfully. 
Feb 13 19:05:22.963794 kubelet[2545]: E0213 19:05:22.963749 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:23.082835 containerd[2050]: time="2025-02-13T19:05:23.082601793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:23.084109 containerd[2050]: time="2025-02-13T19:05:23.084043546Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663370" Feb 13 19:05:23.086016 containerd[2050]: time="2025-02-13T19:05:23.085147762Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:23.088760 containerd[2050]: time="2025-02-13T19:05:23.088708597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:23.090504 containerd[2050]: time="2025-02-13T19:05:23.090237045Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.852117297s" Feb 13 19:05:23.090504 containerd[2050]: time="2025-02-13T19:05:23.090293917Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:05:23.092928 containerd[2050]: time="2025-02-13T19:05:23.092551454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:05:23.095711 
containerd[2050]: time="2025-02-13T19:05:23.095093328Z" level=info msg="CreateContainer within sandbox \"9d7cffb861147616f11c4142368b5a9fc36b52f762a776bca76f942649562b31\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:05:23.120165 containerd[2050]: time="2025-02-13T19:05:23.120093362Z" level=info msg="CreateContainer within sandbox \"9d7cffb861147616f11c4142368b5a9fc36b52f762a776bca76f942649562b31\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5b900f9d9f6bb1e7a37158db592798fdfa45719cb8ca684691d8d0dbadf0b4aa\"" Feb 13 19:05:23.121527 containerd[2050]: time="2025-02-13T19:05:23.121463487Z" level=info msg="StartContainer for \"5b900f9d9f6bb1e7a37158db592798fdfa45719cb8ca684691d8d0dbadf0b4aa\"" Feb 13 19:05:23.237144 containerd[2050]: time="2025-02-13T19:05:23.236981966Z" level=info msg="StartContainer for \"5b900f9d9f6bb1e7a37158db592798fdfa45719cb8ca684691d8d0dbadf0b4aa\" returns successfully" Feb 13 19:05:23.964197 kubelet[2545]: E0213 19:05:23.964146 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:24.108683 kubelet[2545]: E0213 19:05:24.108185 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7mmf" podUID="860b9edd-e240-4254-867a-e12f2e2a94c5" Feb 13 19:05:24.215065 kubelet[2545]: E0213 19:05:24.214618 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.215065 kubelet[2545]: W0213 19:05:24.214652 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.215065 
kubelet[2545]: E0213 19:05:24.214681 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:24.215296 kubelet[2545]: E0213 19:05:24.215087 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.215296 kubelet[2545]: W0213 19:05:24.215108 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.215296 kubelet[2545]: E0213 19:05:24.215135 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:24.215743 kubelet[2545]: E0213 19:05:24.215477 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.215743 kubelet[2545]: W0213 19:05:24.215541 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.215743 kubelet[2545]: E0213 19:05:24.215563 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:24.216132 kubelet[2545]: E0213 19:05:24.216091 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.216194 kubelet[2545]: W0213 19:05:24.216132 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.216194 kubelet[2545]: E0213 19:05:24.216157 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:24.216558 kubelet[2545]: E0213 19:05:24.216511 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.216621 kubelet[2545]: W0213 19:05:24.216558 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.216621 kubelet[2545]: E0213 19:05:24.216581 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:24.216950 kubelet[2545]: E0213 19:05:24.216923 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.217043 kubelet[2545]: W0213 19:05:24.217005 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.217095 kubelet[2545]: E0213 19:05:24.217050 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:24.217403 kubelet[2545]: E0213 19:05:24.217375 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.217465 kubelet[2545]: W0213 19:05:24.217407 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.217465 kubelet[2545]: E0213 19:05:24.217428 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:24.217743 kubelet[2545]: E0213 19:05:24.217718 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.217802 kubelet[2545]: W0213 19:05:24.217762 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.217802 kubelet[2545]: E0213 19:05:24.217784 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:24.218197 kubelet[2545]: E0213 19:05:24.218170 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.218261 kubelet[2545]: W0213 19:05:24.218210 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.218261 kubelet[2545]: E0213 19:05:24.218235 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:24.218598 kubelet[2545]: E0213 19:05:24.218572 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.218658 kubelet[2545]: W0213 19:05:24.218596 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.218658 kubelet[2545]: E0213 19:05:24.218631 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:24.218929 kubelet[2545]: E0213 19:05:24.218905 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.219015 kubelet[2545]: W0213 19:05:24.218928 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.219015 kubelet[2545]: E0213 19:05:24.218948 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:24.219317 kubelet[2545]: E0213 19:05:24.219291 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.219375 kubelet[2545]: W0213 19:05:24.219316 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.219375 kubelet[2545]: E0213 19:05:24.219338 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:24.219652 kubelet[2545]: E0213 19:05:24.219628 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.219710 kubelet[2545]: W0213 19:05:24.219651 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.219710 kubelet[2545]: E0213 19:05:24.219671 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:24.219951 kubelet[2545]: E0213 19:05:24.219926 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.220030 kubelet[2545]: W0213 19:05:24.219950 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.220081 kubelet[2545]: E0213 19:05:24.220044 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:24.220361 kubelet[2545]: E0213 19:05:24.220336 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.220437 kubelet[2545]: W0213 19:05:24.220360 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.220437 kubelet[2545]: E0213 19:05:24.220381 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:24.220667 kubelet[2545]: E0213 19:05:24.220643 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.220725 kubelet[2545]: W0213 19:05:24.220666 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.220725 kubelet[2545]: E0213 19:05:24.220686 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:24.221039 kubelet[2545]: E0213 19:05:24.221013 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.221098 kubelet[2545]: W0213 19:05:24.221038 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.221098 kubelet[2545]: E0213 19:05:24.221059 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:24.221342 kubelet[2545]: E0213 19:05:24.221318 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.221400 kubelet[2545]: W0213 19:05:24.221341 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.221400 kubelet[2545]: E0213 19:05:24.221362 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:24.221645 kubelet[2545]: E0213 19:05:24.221621 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.221702 kubelet[2545]: W0213 19:05:24.221644 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.221702 kubelet[2545]: E0213 19:05:24.221664 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:24.221942 kubelet[2545]: E0213 19:05:24.221918 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.222035 kubelet[2545]: W0213 19:05:24.221943 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.222087 kubelet[2545]: E0213 19:05:24.222029 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:24.227633 kubelet[2545]: E0213 19:05:24.227433 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.227633 kubelet[2545]: W0213 19:05:24.227460 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.227633 kubelet[2545]: E0213 19:05:24.227483 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:24.227922 kubelet[2545]: E0213 19:05:24.227893 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.228039 kubelet[2545]: W0213 19:05:24.227921 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.228039 kubelet[2545]: E0213 19:05:24.227957 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:24.228347 kubelet[2545]: E0213 19:05:24.228320 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.228422 kubelet[2545]: W0213 19:05:24.228346 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.228422 kubelet[2545]: E0213 19:05:24.228380 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:24.228666 kubelet[2545]: E0213 19:05:24.228641 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.228726 kubelet[2545]: W0213 19:05:24.228665 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.228726 kubelet[2545]: E0213 19:05:24.228703 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:24.229058 kubelet[2545]: E0213 19:05:24.229034 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.229118 kubelet[2545]: W0213 19:05:24.229058 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.229210 kubelet[2545]: E0213 19:05:24.229180 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:24.229522 kubelet[2545]: E0213 19:05:24.229495 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.229586 kubelet[2545]: W0213 19:05:24.229521 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.229586 kubelet[2545]: E0213 19:05:24.229550 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:24.229924 kubelet[2545]: E0213 19:05:24.229899 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.230024 kubelet[2545]: W0213 19:05:24.229923 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.230024 kubelet[2545]: E0213 19:05:24.229982 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:24.230393 kubelet[2545]: E0213 19:05:24.230368 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.230455 kubelet[2545]: W0213 19:05:24.230395 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.230455 kubelet[2545]: E0213 19:05:24.230433 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:24.230821 kubelet[2545]: E0213 19:05:24.230794 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.230885 kubelet[2545]: W0213 19:05:24.230819 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.231043 kubelet[2545]: E0213 19:05:24.231012 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:24.231578 kubelet[2545]: E0213 19:05:24.231550 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.231655 kubelet[2545]: W0213 19:05:24.231577 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.231655 kubelet[2545]: E0213 19:05:24.231613 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:24.232289 kubelet[2545]: E0213 19:05:24.232083 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.232289 kubelet[2545]: W0213 19:05:24.232106 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.232289 kubelet[2545]: E0213 19:05:24.232139 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:05:24.232655 kubelet[2545]: E0213 19:05:24.232573 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:05:24.232655 kubelet[2545]: W0213 19:05:24.232595 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:05:24.232655 kubelet[2545]: E0213 19:05:24.232616 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:05:24.473887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3602098600.mount: Deactivated successfully. Feb 13 19:05:24.601329 containerd[2050]: time="2025-02-13T19:05:24.601040652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:24.602469 containerd[2050]: time="2025-02-13T19:05:24.602385396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Feb 13 19:05:24.603600 containerd[2050]: time="2025-02-13T19:05:24.603515545Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:24.607183 containerd[2050]: time="2025-02-13T19:05:24.607104486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:24.608783 containerd[2050]: time="2025-02-13T19:05:24.608598633Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with 
image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.515983187s" Feb 13 19:05:24.608783 containerd[2050]: time="2025-02-13T19:05:24.608650583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 19:05:24.612627 containerd[2050]: time="2025-02-13T19:05:24.612563362Z" level=info msg="CreateContainer within sandbox \"09d526b9a8aa6a6d29aa0303c63078f098166f791ae5ca5a141db7f6105dd4ec\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:05:24.634382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3192816008.mount: Deactivated successfully. Feb 13 19:05:24.636242 containerd[2050]: time="2025-02-13T19:05:24.636169536Z" level=info msg="CreateContainer within sandbox \"09d526b9a8aa6a6d29aa0303c63078f098166f791ae5ca5a141db7f6105dd4ec\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ee6f898abbf367b1a2cf4594668b85c0726994bcf80b87ebfc0553f37b2a50d0\"" Feb 13 19:05:24.637131 containerd[2050]: time="2025-02-13T19:05:24.636875753Z" level=info msg="StartContainer for \"ee6f898abbf367b1a2cf4594668b85c0726994bcf80b87ebfc0553f37b2a50d0\"" Feb 13 19:05:24.737619 containerd[2050]: time="2025-02-13T19:05:24.737382578Z" level=info msg="StartContainer for \"ee6f898abbf367b1a2cf4594668b85c0726994bcf80b87ebfc0553f37b2a50d0\" returns successfully" Feb 13 19:05:24.965045 kubelet[2545]: E0213 19:05:24.964936 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:25.030662 containerd[2050]: time="2025-02-13T19:05:25.030370232Z" level=info msg="shim disconnected" 
id=ee6f898abbf367b1a2cf4594668b85c0726994bcf80b87ebfc0553f37b2a50d0 namespace=k8s.io Feb 13 19:05:25.030662 containerd[2050]: time="2025-02-13T19:05:25.030442821Z" level=warning msg="cleaning up after shim disconnected" id=ee6f898abbf367b1a2cf4594668b85c0726994bcf80b87ebfc0553f37b2a50d0 namespace=k8s.io Feb 13 19:05:25.030662 containerd[2050]: time="2025-02-13T19:05:25.030464011Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:05:25.168766 containerd[2050]: time="2025-02-13T19:05:25.168425366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:05:25.209679 kubelet[2545]: I0213 19:05:25.209531 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-stwzg" podStartSLOduration=5.3528506 podStartE2EDuration="7.209510559s" podCreationTimestamp="2025-02-13 19:05:18 +0000 UTC" firstStartedPulling="2025-02-13 19:05:21.235591297 +0000 UTC m=+4.738197961" lastFinishedPulling="2025-02-13 19:05:23.092251196 +0000 UTC m=+6.594857920" observedRunningTime="2025-02-13 19:05:24.210016743 +0000 UTC m=+7.712623431" watchObservedRunningTime="2025-02-13 19:05:25.209510559 +0000 UTC m=+8.712117235" Feb 13 19:05:25.428163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee6f898abbf367b1a2cf4594668b85c0726994bcf80b87ebfc0553f37b2a50d0-rootfs.mount: Deactivated successfully. 
Feb 13 19:05:25.965900 kubelet[2545]: E0213 19:05:25.965831 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:26.109356 kubelet[2545]: E0213 19:05:26.108829 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7mmf" podUID="860b9edd-e240-4254-867a-e12f2e2a94c5"
Feb 13 19:05:26.966444 kubelet[2545]: E0213 19:05:26.966378 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:27.967063 kubelet[2545]: E0213 19:05:27.966836 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:28.111705 kubelet[2545]: E0213 19:05:28.111084 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7mmf" podUID="860b9edd-e240-4254-867a-e12f2e2a94c5"
Feb 13 19:05:28.753914 containerd[2050]: time="2025-02-13T19:05:28.753860109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:05:28.755947 containerd[2050]: time="2025-02-13T19:05:28.755871414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Feb 13 19:05:28.755947 containerd[2050]: time="2025-02-13T19:05:28.756069633Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:05:28.761040 containerd[2050]: time="2025-02-13T19:05:28.760932579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:05:28.762614 containerd[2050]: time="2025-02-13T19:05:28.762312441Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.593825808s"
Feb 13 19:05:28.762614 containerd[2050]: time="2025-02-13T19:05:28.762360909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Feb 13 19:05:28.767096 containerd[2050]: time="2025-02-13T19:05:28.767046743Z" level=info msg="CreateContainer within sandbox \"09d526b9a8aa6a6d29aa0303c63078f098166f791ae5ca5a141db7f6105dd4ec\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 19:05:28.789337 containerd[2050]: time="2025-02-13T19:05:28.789160198Z" level=info msg="CreateContainer within sandbox \"09d526b9a8aa6a6d29aa0303c63078f098166f791ae5ca5a141db7f6105dd4ec\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6a9873c6ccfbd2b6fc5ae2d26c96df4935b155212ed76e12bcc638b06c34a222\""
Feb 13 19:05:28.791151 containerd[2050]: time="2025-02-13T19:05:28.790122479Z" level=info msg="StartContainer for \"6a9873c6ccfbd2b6fc5ae2d26c96df4935b155212ed76e12bcc638b06c34a222\""
Feb 13 19:05:28.897337 containerd[2050]: time="2025-02-13T19:05:28.897118415Z" level=info msg="StartContainer for \"6a9873c6ccfbd2b6fc5ae2d26c96df4935b155212ed76e12bcc638b06c34a222\" returns successfully"
Feb 13 19:05:28.967679 kubelet[2545]: E0213 19:05:28.967631 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:29.818518 containerd[2050]: time="2025-02-13T19:05:29.818415172Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:05:29.861287 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a9873c6ccfbd2b6fc5ae2d26c96df4935b155212ed76e12bcc638b06c34a222-rootfs.mount: Deactivated successfully.
Feb 13 19:05:29.874045 kubelet[2545]: I0213 19:05:29.874012 2545 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 19:05:29.969168 kubelet[2545]: E0213 19:05:29.969106 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:30.116735 containerd[2050]: time="2025-02-13T19:05:30.116488580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7mmf,Uid:860b9edd-e240-4254-867a-e12f2e2a94c5,Namespace:calico-system,Attempt:0,}"
Feb 13 19:05:30.575372 containerd[2050]: time="2025-02-13T19:05:30.574423788Z" level=error msg="Failed to destroy network for sandbox \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:30.577531 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca-shm.mount: Deactivated successfully.
Feb 13 19:05:30.578472 containerd[2050]: time="2025-02-13T19:05:30.578398123Z" level=error msg="encountered an error cleaning up failed sandbox \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:30.579662 containerd[2050]: time="2025-02-13T19:05:30.579299521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7mmf,Uid:860b9edd-e240-4254-867a-e12f2e2a94c5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:30.580285 kubelet[2545]: E0213 19:05:30.580209 2545 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:30.580408 kubelet[2545]: E0213 19:05:30.580313 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7mmf"
Feb 13 19:05:30.580408 kubelet[2545]: E0213 19:05:30.580351 2545 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7mmf"
Feb 13 19:05:30.580522 kubelet[2545]: E0213 19:05:30.580417 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z7mmf_calico-system(860b9edd-e240-4254-867a-e12f2e2a94c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z7mmf_calico-system(860b9edd-e240-4254-867a-e12f2e2a94c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z7mmf" podUID="860b9edd-e240-4254-867a-e12f2e2a94c5"
Feb 13 19:05:30.720159 containerd[2050]: time="2025-02-13T19:05:30.720031801Z" level=info msg="shim disconnected" id=6a9873c6ccfbd2b6fc5ae2d26c96df4935b155212ed76e12bcc638b06c34a222 namespace=k8s.io
Feb 13 19:05:30.720159 containerd[2050]: time="2025-02-13T19:05:30.720107055Z" level=warning msg="cleaning up after shim disconnected" id=6a9873c6ccfbd2b6fc5ae2d26c96df4935b155212ed76e12bcc638b06c34a222 namespace=k8s.io
Feb 13 19:05:30.720159 containerd[2050]: time="2025-02-13T19:05:30.720127321Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:05:30.969914 kubelet[2545]: E0213 19:05:30.969832 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:31.198216 containerd[2050]: time="2025-02-13T19:05:31.197781532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Feb 13 19:05:31.201193 kubelet[2545]: I0213 19:05:31.201149 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca"
Feb 13 19:05:31.204988 containerd[2050]: time="2025-02-13T19:05:31.202238123Z" level=info msg="StopPodSandbox for \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\""
Feb 13 19:05:31.204988 containerd[2050]: time="2025-02-13T19:05:31.202500718Z" level=info msg="Ensure that sandbox a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca in task-service has been cleanup successfully"
Feb 13 19:05:31.205502 systemd[1]: run-netns-cni\x2d484458c9\x2da0e3\x2d6dd9\x2d6102\x2da94df570e3c6.mount: Deactivated successfully.
Feb 13 19:05:31.209255 containerd[2050]: time="2025-02-13T19:05:31.206718654Z" level=info msg="TearDown network for sandbox \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\" successfully"
Feb 13 19:05:31.209255 containerd[2050]: time="2025-02-13T19:05:31.208348120Z" level=info msg="StopPodSandbox for \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\" returns successfully"
Feb 13 19:05:31.209494 containerd[2050]: time="2025-02-13T19:05:31.209429741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7mmf,Uid:860b9edd-e240-4254-867a-e12f2e2a94c5,Namespace:calico-system,Attempt:1,}"
Feb 13 19:05:31.319120 containerd[2050]: time="2025-02-13T19:05:31.318931725Z" level=error msg="Failed to destroy network for sandbox \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:31.319824 containerd[2050]: time="2025-02-13T19:05:31.319545808Z" level=error msg="encountered an error cleaning up failed sandbox \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:31.319824 containerd[2050]: time="2025-02-13T19:05:31.319635589Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7mmf,Uid:860b9edd-e240-4254-867a-e12f2e2a94c5,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:31.322222 kubelet[2545]: E0213 19:05:31.319997 2545 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:31.322222 kubelet[2545]: E0213 19:05:31.320070 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7mmf"
Feb 13 19:05:31.322222 kubelet[2545]: E0213 19:05:31.320107 2545 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7mmf"
Feb 13 19:05:31.322422 kubelet[2545]: E0213 19:05:31.320166 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z7mmf_calico-system(860b9edd-e240-4254-867a-e12f2e2a94c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z7mmf_calico-system(860b9edd-e240-4254-867a-e12f2e2a94c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z7mmf" podUID="860b9edd-e240-4254-867a-e12f2e2a94c5"
Feb 13 19:05:31.323401 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49-shm.mount: Deactivated successfully.
Feb 13 19:05:31.971023 kubelet[2545]: E0213 19:05:31.970933 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:32.003394 kubelet[2545]: I0213 19:05:32.003294 2545 topology_manager.go:215] "Topology Admit Handler" podUID="5f014e98-1b00-4e1a-9e73-473fa7bd370d" podNamespace="default" podName="nginx-deployment-85f456d6dd-5kj5h"
Feb 13 19:05:32.178525 kubelet[2545]: I0213 19:05:32.178405 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs98s\" (UniqueName: \"kubernetes.io/projected/5f014e98-1b00-4e1a-9e73-473fa7bd370d-kube-api-access-qs98s\") pod \"nginx-deployment-85f456d6dd-5kj5h\" (UID: \"5f014e98-1b00-4e1a-9e73-473fa7bd370d\") " pod="default/nginx-deployment-85f456d6dd-5kj5h"
Feb 13 19:05:32.205605 kubelet[2545]: I0213 19:05:32.205563 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49"
Feb 13 19:05:32.207204 containerd[2050]: time="2025-02-13T19:05:32.207115910Z" level=info msg="StopPodSandbox for \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\""
Feb 13 19:05:32.211000 containerd[2050]: time="2025-02-13T19:05:32.208221026Z" level=info msg="Ensure that sandbox 536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49 in task-service has been cleanup successfully"
Feb 13 19:05:32.211298 systemd[1]: run-netns-cni\x2d6276f8a7\x2d0482\x2d30fa\x2d07d0\x2d7c3dccc8a2b2.mount: Deactivated successfully.
Feb 13 19:05:32.212115 containerd[2050]: time="2025-02-13T19:05:32.211317181Z" level=info msg="TearDown network for sandbox \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\" successfully"
Feb 13 19:05:32.212115 containerd[2050]: time="2025-02-13T19:05:32.211358650Z" level=info msg="StopPodSandbox for \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\" returns successfully"
Feb 13 19:05:32.212777 containerd[2050]: time="2025-02-13T19:05:32.212449755Z" level=info msg="StopPodSandbox for \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\""
Feb 13 19:05:32.212777 containerd[2050]: time="2025-02-13T19:05:32.212600839Z" level=info msg="TearDown network for sandbox \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\" successfully"
Feb 13 19:05:32.212777 containerd[2050]: time="2025-02-13T19:05:32.212622474Z" level=info msg="StopPodSandbox for \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\" returns successfully"
Feb 13 19:05:32.214614 containerd[2050]: time="2025-02-13T19:05:32.214200578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7mmf,Uid:860b9edd-e240-4254-867a-e12f2e2a94c5,Namespace:calico-system,Attempt:2,}"
Feb 13 19:05:32.312165 containerd[2050]: time="2025-02-13T19:05:32.312006726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5kj5h,Uid:5f014e98-1b00-4e1a-9e73-473fa7bd370d,Namespace:default,Attempt:0,}"
Feb 13 19:05:32.340619 containerd[2050]: time="2025-02-13T19:05:32.340244574Z" level=error msg="Failed to destroy network for sandbox \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:32.341528 containerd[2050]: time="2025-02-13T19:05:32.341364037Z" level=error msg="encountered an error cleaning up failed sandbox \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:32.341528 containerd[2050]: time="2025-02-13T19:05:32.341459041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7mmf,Uid:860b9edd-e240-4254-867a-e12f2e2a94c5,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:32.342753 kubelet[2545]: E0213 19:05:32.342090 2545 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:32.342753 kubelet[2545]: E0213 19:05:32.342627 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7mmf"
Feb 13 19:05:32.342753 kubelet[2545]: E0213 19:05:32.342686 2545 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7mmf"
Feb 13 19:05:32.344205 kubelet[2545]: E0213 19:05:32.344109 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z7mmf_calico-system(860b9edd-e240-4254-867a-e12f2e2a94c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z7mmf_calico-system(860b9edd-e240-4254-867a-e12f2e2a94c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z7mmf" podUID="860b9edd-e240-4254-867a-e12f2e2a94c5"
Feb 13 19:05:32.463390 containerd[2050]: time="2025-02-13T19:05:32.463175665Z" level=error msg="Failed to destroy network for sandbox \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:32.465033 containerd[2050]: time="2025-02-13T19:05:32.464350873Z" level=error msg="encountered an error cleaning up failed sandbox \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:32.465033 containerd[2050]: time="2025-02-13T19:05:32.464451147Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5kj5h,Uid:5f014e98-1b00-4e1a-9e73-473fa7bd370d,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:32.465300 kubelet[2545]: E0213 19:05:32.464726 2545 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:32.465300 kubelet[2545]: E0213 19:05:32.464803 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5kj5h"
Feb 13 19:05:32.465300 kubelet[2545]: E0213 19:05:32.464856 2545 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5kj5h"
Feb 13 19:05:32.465480 kubelet[2545]: E0213 19:05:32.464926 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-5kj5h_default(5f014e98-1b00-4e1a-9e73-473fa7bd370d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-5kj5h_default(5f014e98-1b00-4e1a-9e73-473fa7bd370d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-5kj5h" podUID="5f014e98-1b00-4e1a-9e73-473fa7bd370d"
Feb 13 19:05:32.971330 kubelet[2545]: E0213 19:05:32.971246 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:33.216897 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a-shm.mount: Deactivated successfully.
Feb 13 19:05:33.221592 kubelet[2545]: I0213 19:05:33.217171 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f"
Feb 13 19:05:33.219368 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f-shm.mount: Deactivated successfully.
Feb 13 19:05:33.226002 containerd[2050]: time="2025-02-13T19:05:33.223243655Z" level=info msg="StopPodSandbox for \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\""
Feb 13 19:05:33.226002 containerd[2050]: time="2025-02-13T19:05:33.223641810Z" level=info msg="Ensure that sandbox 46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f in task-service has been cleanup successfully"
Feb 13 19:05:33.227744 kubelet[2545]: I0213 19:05:33.227692 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a"
Feb 13 19:05:33.228417 systemd[1]: run-netns-cni\x2df233cedc\x2d4ea2\x2d1dfa\x2db43a\x2dc851ddd89204.mount: Deactivated successfully.
Feb 13 19:05:33.229704 containerd[2050]: time="2025-02-13T19:05:33.229122153Z" level=info msg="TearDown network for sandbox \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\" successfully"
Feb 13 19:05:33.229704 containerd[2050]: time="2025-02-13T19:05:33.229162553Z" level=info msg="StopPodSandbox for \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\" returns successfully"
Feb 13 19:05:33.229860 containerd[2050]: time="2025-02-13T19:05:33.229803793Z" level=info msg="StopPodSandbox for \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\""
Feb 13 19:05:33.230452 containerd[2050]: time="2025-02-13T19:05:33.229949078Z" level=info msg="TearDown network for sandbox \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\" successfully"
Feb 13 19:05:33.230452 containerd[2050]: time="2025-02-13T19:05:33.230024716Z" level=info msg="StopPodSandbox for \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\" returns successfully"
Feb 13 19:05:33.232144 containerd[2050]: time="2025-02-13T19:05:33.232077514Z" level=info msg="StopPodSandbox for \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\""
Feb 13 19:05:33.232382 containerd[2050]: time="2025-02-13T19:05:33.232339761Z" level=info msg="Ensure that sandbox f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a in task-service has been cleanup successfully"
Feb 13 19:05:33.233331 containerd[2050]: time="2025-02-13T19:05:33.232345283Z" level=info msg="StopPodSandbox for \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\""
Feb 13 19:05:33.233432 containerd[2050]: time="2025-02-13T19:05:33.233403984Z" level=info msg="TearDown network for sandbox \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\" successfully"
Feb 13 19:05:33.233507 containerd[2050]: time="2025-02-13T19:05:33.233426124Z" level=info msg="StopPodSandbox for \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\" returns successfully"
Feb 13 19:05:33.236101 containerd[2050]: time="2025-02-13T19:05:33.233934902Z" level=info msg="TearDown network for sandbox \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\" successfully"
Feb 13 19:05:33.236101 containerd[2050]: time="2025-02-13T19:05:33.234318457Z" level=info msg="StopPodSandbox for \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\" returns successfully"
Feb 13 19:05:33.239324 containerd[2050]: time="2025-02-13T19:05:33.238846004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7mmf,Uid:860b9edd-e240-4254-867a-e12f2e2a94c5,Namespace:calico-system,Attempt:3,}"
Feb 13 19:05:33.239324 containerd[2050]: time="2025-02-13T19:05:33.239231361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5kj5h,Uid:5f014e98-1b00-4e1a-9e73-473fa7bd370d,Namespace:default,Attempt:1,}"
Feb 13 19:05:33.240548 systemd[1]: run-netns-cni\x2defa5cf85\x2deb5a\x2d1018\x2d0a53\x2d3097d68f7adc.mount: Deactivated successfully.
Feb 13 19:05:33.442598 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 19:05:33.506218 containerd[2050]: time="2025-02-13T19:05:33.505905513Z" level=error msg="Failed to destroy network for sandbox \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:33.507374 containerd[2050]: time="2025-02-13T19:05:33.507116930Z" level=error msg="encountered an error cleaning up failed sandbox \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:33.507374 containerd[2050]: time="2025-02-13T19:05:33.507218561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7mmf,Uid:860b9edd-e240-4254-867a-e12f2e2a94c5,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:33.508547 kubelet[2545]: E0213 19:05:33.508397 2545 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:33.508547 kubelet[2545]: E0213 19:05:33.508482 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7mmf"
Feb 13 19:05:33.508547 kubelet[2545]: E0213 19:05:33.508524 2545 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7mmf"
Feb 13 19:05:33.508840 kubelet[2545]: E0213 19:05:33.508594 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z7mmf_calico-system(860b9edd-e240-4254-867a-e12f2e2a94c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z7mmf_calico-system(860b9edd-e240-4254-867a-e12f2e2a94c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z7mmf" podUID="860b9edd-e240-4254-867a-e12f2e2a94c5"
Feb 13 19:05:33.523856 containerd[2050]: time="2025-02-13T19:05:33.523692246Z" level=error msg="Failed to destroy network for sandbox \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:33.524642 containerd[2050]: time="2025-02-13T19:05:33.524502927Z" level=error msg="encountered an error cleaning up failed sandbox \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:33.524805 containerd[2050]: time="2025-02-13T19:05:33.524595818Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5kj5h,Uid:5f014e98-1b00-4e1a-9e73-473fa7bd370d,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:33.525695 kubelet[2545]: E0213 19:05:33.525187 2545 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:05:33.525695 kubelet[2545]: E0213 19:05:33.525268 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5kj5h"
Feb 13 19:05:33.525695 kubelet[2545]: E0213 19:05:33.525306 2545 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5kj5h"
Feb 13 19:05:33.525942 kubelet[2545]: E0213 19:05:33.525393 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-5kj5h_default(5f014e98-1b00-4e1a-9e73-473fa7bd370d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-5kj5h_default(5f014e98-1b00-4e1a-9e73-473fa7bd370d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-5kj5h" podUID="5f014e98-1b00-4e1a-9e73-473fa7bd370d"
Feb 13 19:05:33.972221 kubelet[2545]: E0213 19:05:33.972167 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:05:34.215283 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6-shm.mount: Deactivated successfully.
Feb 13 19:05:34.240552 kubelet[2545]: I0213 19:05:34.240430 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a" Feb 13 19:05:34.244460 containerd[2050]: time="2025-02-13T19:05:34.243944449Z" level=info msg="StopPodSandbox for \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\"" Feb 13 19:05:34.245057 containerd[2050]: time="2025-02-13T19:05:34.244758300Z" level=info msg="Ensure that sandbox f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a in task-service has been cleanup successfully" Feb 13 19:05:34.245142 containerd[2050]: time="2025-02-13T19:05:34.245080769Z" level=info msg="TearDown network for sandbox \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\" successfully" Feb 13 19:05:34.245142 containerd[2050]: time="2025-02-13T19:05:34.245108443Z" level=info msg="StopPodSandbox for \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\" returns successfully" Feb 13 19:05:34.248875 systemd[1]: run-netns-cni\x2db4dce725\x2d9685\x2d69f7\x2dc4ad\x2d8c98f921b62a.mount: Deactivated successfully. 
Feb 13 19:05:34.249607 containerd[2050]: time="2025-02-13T19:05:34.249357619Z" level=info msg="StopPodSandbox for \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\"" Feb 13 19:05:34.249607 containerd[2050]: time="2025-02-13T19:05:34.249512580Z" level=info msg="TearDown network for sandbox \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\" successfully" Feb 13 19:05:34.249607 containerd[2050]: time="2025-02-13T19:05:34.249534719Z" level=info msg="StopPodSandbox for \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\" returns successfully" Feb 13 19:05:34.251918 kubelet[2545]: I0213 19:05:34.251039 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6" Feb 13 19:05:34.253709 containerd[2050]: time="2025-02-13T19:05:34.253458316Z" level=info msg="StopPodSandbox for \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\"" Feb 13 19:05:34.254823 containerd[2050]: time="2025-02-13T19:05:34.254764329Z" level=info msg="Ensure that sandbox 68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6 in task-service has been cleanup successfully" Feb 13 19:05:34.255935 containerd[2050]: time="2025-02-13T19:05:34.255878017Z" level=info msg="StopPodSandbox for \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\"" Feb 13 19:05:34.259498 containerd[2050]: time="2025-02-13T19:05:34.257206181Z" level=info msg="TearDown network for sandbox \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\" successfully" Feb 13 19:05:34.259498 containerd[2050]: time="2025-02-13T19:05:34.257266511Z" level=info msg="StopPodSandbox for \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\" returns successfully" Feb 13 19:05:34.259498 containerd[2050]: time="2025-02-13T19:05:34.257537066Z" level=info msg="TearDown network for sandbox 
\"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\" successfully" Feb 13 19:05:34.261711 systemd[1]: run-netns-cni\x2dd34b7d95\x2d7f40\x2d26f2\x2d1714\x2d905af7b2f382.mount: Deactivated successfully. Feb 13 19:05:34.262100 containerd[2050]: time="2025-02-13T19:05:34.257559962Z" level=info msg="StopPodSandbox for \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\" returns successfully" Feb 13 19:05:34.262298 containerd[2050]: time="2025-02-13T19:05:34.261252047Z" level=info msg="StopPodSandbox for \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\"" Feb 13 19:05:34.262355 containerd[2050]: time="2025-02-13T19:05:34.262317663Z" level=info msg="TearDown network for sandbox \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\" successfully" Feb 13 19:05:34.262355 containerd[2050]: time="2025-02-13T19:05:34.262338722Z" level=info msg="StopPodSandbox for \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\" returns successfully" Feb 13 19:05:34.264591 containerd[2050]: time="2025-02-13T19:05:34.264119092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5kj5h,Uid:5f014e98-1b00-4e1a-9e73-473fa7bd370d,Namespace:default,Attempt:2,}" Feb 13 19:05:34.265633 containerd[2050]: time="2025-02-13T19:05:34.264799652Z" level=info msg="StopPodSandbox for \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\"" Feb 13 19:05:34.266545 containerd[2050]: time="2025-02-13T19:05:34.265777072Z" level=info msg="TearDown network for sandbox \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\" successfully" Feb 13 19:05:34.266545 containerd[2050]: time="2025-02-13T19:05:34.265814639Z" level=info msg="StopPodSandbox for \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\" returns successfully" Feb 13 19:05:34.266877 containerd[2050]: time="2025-02-13T19:05:34.266824067Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-z7mmf,Uid:860b9edd-e240-4254-867a-e12f2e2a94c5,Namespace:calico-system,Attempt:4,}" Feb 13 19:05:34.496465 containerd[2050]: time="2025-02-13T19:05:34.496307149Z" level=error msg="Failed to destroy network for sandbox \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:34.499084 containerd[2050]: time="2025-02-13T19:05:34.498925850Z" level=error msg="encountered an error cleaning up failed sandbox \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:34.499084 containerd[2050]: time="2025-02-13T19:05:34.499072587Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7mmf,Uid:860b9edd-e240-4254-867a-e12f2e2a94c5,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:34.499516 kubelet[2545]: E0213 19:05:34.499426 2545 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:34.499665 kubelet[2545]: E0213 
19:05:34.499508 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7mmf" Feb 13 19:05:34.499665 kubelet[2545]: E0213 19:05:34.499544 2545 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7mmf" Feb 13 19:05:34.499665 kubelet[2545]: E0213 19:05:34.499604 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z7mmf_calico-system(860b9edd-e240-4254-867a-e12f2e2a94c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z7mmf_calico-system(860b9edd-e240-4254-867a-e12f2e2a94c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z7mmf" podUID="860b9edd-e240-4254-867a-e12f2e2a94c5" Feb 13 19:05:34.514565 containerd[2050]: time="2025-02-13T19:05:34.514453210Z" level=error msg="Failed to destroy network for sandbox \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:34.515748 containerd[2050]: time="2025-02-13T19:05:34.515691077Z" level=error msg="encountered an error cleaning up failed sandbox \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:34.515901 containerd[2050]: time="2025-02-13T19:05:34.515791243Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5kj5h,Uid:5f014e98-1b00-4e1a-9e73-473fa7bd370d,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:34.516210 kubelet[2545]: E0213 19:05:34.516154 2545 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:34.516532 kubelet[2545]: E0213 19:05:34.516228 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5kj5h" Feb 13 19:05:34.516532 kubelet[2545]: E0213 19:05:34.516262 2545 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5kj5h" Feb 13 19:05:34.516532 kubelet[2545]: E0213 19:05:34.516335 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-5kj5h_default(5f014e98-1b00-4e1a-9e73-473fa7bd370d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-5kj5h_default(5f014e98-1b00-4e1a-9e73-473fa7bd370d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-5kj5h" podUID="5f014e98-1b00-4e1a-9e73-473fa7bd370d" Feb 13 19:05:34.973188 kubelet[2545]: E0213 19:05:34.973122 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:35.214915 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563-shm.mount: Deactivated successfully. 
Feb 13 19:05:35.277080 kubelet[2545]: I0213 19:05:35.276643 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9" Feb 13 19:05:35.278863 containerd[2050]: time="2025-02-13T19:05:35.278781624Z" level=info msg="StopPodSandbox for \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\"" Feb 13 19:05:35.285993 kubelet[2545]: I0213 19:05:35.284701 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563" Feb 13 19:05:35.287227 containerd[2050]: time="2025-02-13T19:05:35.287182151Z" level=info msg="Ensure that sandbox 97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9 in task-service has been cleanup successfully" Feb 13 19:05:35.287682 containerd[2050]: time="2025-02-13T19:05:35.287647467Z" level=info msg="TearDown network for sandbox \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\" successfully" Feb 13 19:05:35.287795 containerd[2050]: time="2025-02-13T19:05:35.287769244Z" level=info msg="StopPodSandbox for \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\" returns successfully" Feb 13 19:05:35.290663 containerd[2050]: time="2025-02-13T19:05:35.290616467Z" level=info msg="StopPodSandbox for \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\"" Feb 13 19:05:35.291798 systemd[1]: run-netns-cni\x2d0b72d362\x2d2003\x2d1863\x2d2a06\x2d83aec0a51383.mount: Deactivated successfully. 
Feb 13 19:05:35.293021 containerd[2050]: time="2025-02-13T19:05:35.290999038Z" level=info msg="StopPodSandbox for \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\"" Feb 13 19:05:35.293338 containerd[2050]: time="2025-02-13T19:05:35.293308404Z" level=info msg="TearDown network for sandbox \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\" successfully" Feb 13 19:05:35.293478 containerd[2050]: time="2025-02-13T19:05:35.293441503Z" level=info msg="StopPodSandbox for \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\" returns successfully" Feb 13 19:05:35.293937 containerd[2050]: time="2025-02-13T19:05:35.293901525Z" level=info msg="Ensure that sandbox 68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563 in task-service has been cleanup successfully" Feb 13 19:05:35.296990 containerd[2050]: time="2025-02-13T19:05:35.294686849Z" level=info msg="StopPodSandbox for \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\"" Feb 13 19:05:35.298739 containerd[2050]: time="2025-02-13T19:05:35.297372627Z" level=info msg="TearDown network for sandbox \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\" successfully" Feb 13 19:05:35.298739 containerd[2050]: time="2025-02-13T19:05:35.297757528Z" level=info msg="StopPodSandbox for \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\" returns successfully" Feb 13 19:05:35.299880 containerd[2050]: time="2025-02-13T19:05:35.299830244Z" level=info msg="StopPodSandbox for \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\"" Feb 13 19:05:35.300653 containerd[2050]: time="2025-02-13T19:05:35.300607248Z" level=info msg="TearDown network for sandbox \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\" successfully" Feb 13 19:05:35.300795 containerd[2050]: time="2025-02-13T19:05:35.300767816Z" level=info msg="StopPodSandbox for \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\" 
returns successfully" Feb 13 19:05:35.301047 containerd[2050]: time="2025-02-13T19:05:35.300184397Z" level=info msg="TearDown network for sandbox \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\" successfully" Feb 13 19:05:35.301155 containerd[2050]: time="2025-02-13T19:05:35.301129137Z" level=info msg="StopPodSandbox for \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\" returns successfully" Feb 13 19:05:35.301491 systemd[1]: run-netns-cni\x2d70829a88\x2d7a53\x2d7e1a\x2d3f71\x2d6adfb6856032.mount: Deactivated successfully. Feb 13 19:05:35.302656 containerd[2050]: time="2025-02-13T19:05:35.302617797Z" level=info msg="StopPodSandbox for \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\"" Feb 13 19:05:35.303428 containerd[2050]: time="2025-02-13T19:05:35.303084818Z" level=info msg="TearDown network for sandbox \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\" successfully" Feb 13 19:05:35.303428 containerd[2050]: time="2025-02-13T19:05:35.303117366Z" level=info msg="StopPodSandbox for \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\" returns successfully" Feb 13 19:05:35.303428 containerd[2050]: time="2025-02-13T19:05:35.302898893Z" level=info msg="StopPodSandbox for \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\"" Feb 13 19:05:35.303428 containerd[2050]: time="2025-02-13T19:05:35.303326259Z" level=info msg="TearDown network for sandbox \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\" successfully" Feb 13 19:05:35.303428 containerd[2050]: time="2025-02-13T19:05:35.303349226Z" level=info msg="StopPodSandbox for \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\" returns successfully" Feb 13 19:05:35.306313 containerd[2050]: time="2025-02-13T19:05:35.305528952Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5kj5h,Uid:5f014e98-1b00-4e1a-9e73-473fa7bd370d,Namespace:default,Attempt:3,}" Feb 13 19:05:35.307654 containerd[2050]: time="2025-02-13T19:05:35.307595953Z" level=info msg="StopPodSandbox for \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\"" Feb 13 19:05:35.307958 containerd[2050]: time="2025-02-13T19:05:35.307927618Z" level=info msg="TearDown network for sandbox \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\" successfully" Feb 13 19:05:35.308111 containerd[2050]: time="2025-02-13T19:05:35.308083924Z" level=info msg="StopPodSandbox for \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\" returns successfully" Feb 13 19:05:35.311205 containerd[2050]: time="2025-02-13T19:05:35.311098379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7mmf,Uid:860b9edd-e240-4254-867a-e12f2e2a94c5,Namespace:calico-system,Attempt:5,}" Feb 13 19:05:35.505893 containerd[2050]: time="2025-02-13T19:05:35.505820141Z" level=error msg="Failed to destroy network for sandbox \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:35.509062 containerd[2050]: time="2025-02-13T19:05:35.508200078Z" level=error msg="encountered an error cleaning up failed sandbox \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:35.509062 containerd[2050]: time="2025-02-13T19:05:35.508885224Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5kj5h,Uid:5f014e98-1b00-4e1a-9e73-473fa7bd370d,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:35.509675 kubelet[2545]: E0213 19:05:35.509597 2545 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:35.509770 kubelet[2545]: E0213 19:05:35.509699 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5kj5h" Feb 13 19:05:35.509770 kubelet[2545]: E0213 19:05:35.509735 2545 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5kj5h" Feb 13 19:05:35.509992 kubelet[2545]: E0213 19:05:35.509796 2545 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-5kj5h_default(5f014e98-1b00-4e1a-9e73-473fa7bd370d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-5kj5h_default(5f014e98-1b00-4e1a-9e73-473fa7bd370d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-5kj5h" podUID="5f014e98-1b00-4e1a-9e73-473fa7bd370d" Feb 13 19:05:35.548845 containerd[2050]: time="2025-02-13T19:05:35.548687900Z" level=error msg="Failed to destroy network for sandbox \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:35.551830 containerd[2050]: time="2025-02-13T19:05:35.551769048Z" level=error msg="encountered an error cleaning up failed sandbox \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:35.552077 containerd[2050]: time="2025-02-13T19:05:35.552038318Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7mmf,Uid:860b9edd-e240-4254-867a-e12f2e2a94c5,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:35.552504 kubelet[2545]: E0213 19:05:35.552435 2545 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:35.552658 kubelet[2545]: E0213 19:05:35.552529 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7mmf" Feb 13 19:05:35.552658 kubelet[2545]: E0213 19:05:35.552573 2545 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7mmf" Feb 13 19:05:35.552813 kubelet[2545]: E0213 19:05:35.552650 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z7mmf_calico-system(860b9edd-e240-4254-867a-e12f2e2a94c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z7mmf_calico-system(860b9edd-e240-4254-867a-e12f2e2a94c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z7mmf" podUID="860b9edd-e240-4254-867a-e12f2e2a94c5" Feb 13 19:05:35.974108 kubelet[2545]: E0213 19:05:35.974032 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:36.215396 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea-shm.mount: Deactivated successfully. Feb 13 19:05:36.293733 kubelet[2545]: I0213 19:05:36.292664 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea" Feb 13 19:05:36.293849 containerd[2050]: time="2025-02-13T19:05:36.293460077Z" level=info msg="StopPodSandbox for \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\"" Feb 13 19:05:36.295083 containerd[2050]: time="2025-02-13T19:05:36.293835817Z" level=info msg="Ensure that sandbox c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea in task-service has been cleanup successfully" Feb 13 19:05:36.295781 containerd[2050]: time="2025-02-13T19:05:36.295178076Z" level=info msg="TearDown network for sandbox \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\" successfully" Feb 13 19:05:36.295861 containerd[2050]: time="2025-02-13T19:05:36.295774534Z" level=info msg="StopPodSandbox for \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\" returns successfully" Feb 13 19:05:36.300406 containerd[2050]: time="2025-02-13T19:05:36.300344954Z" level=info msg="StopPodSandbox for \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\"" Feb 13 19:05:36.300553 containerd[2050]: 
time="2025-02-13T19:05:36.300519749Z" level=info msg="TearDown network for sandbox \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\" successfully" Feb 13 19:05:36.300553 containerd[2050]: time="2025-02-13T19:05:36.300543353Z" level=info msg="StopPodSandbox for \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\" returns successfully" Feb 13 19:05:36.301217 systemd[1]: run-netns-cni\x2d5f2c3dad\x2d82ea\x2d7a41\x2dcebe\x2d3b3d7638e415.mount: Deactivated successfully. Feb 13 19:05:36.306324 containerd[2050]: time="2025-02-13T19:05:36.305528436Z" level=info msg="StopPodSandbox for \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\"" Feb 13 19:05:36.306464 containerd[2050]: time="2025-02-13T19:05:36.306373755Z" level=info msg="TearDown network for sandbox \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\" successfully" Feb 13 19:05:36.306464 containerd[2050]: time="2025-02-13T19:05:36.306401741Z" level=info msg="StopPodSandbox for \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\" returns successfully" Feb 13 19:05:36.308535 containerd[2050]: time="2025-02-13T19:05:36.308334911Z" level=info msg="StopPodSandbox for \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\"" Feb 13 19:05:36.308535 containerd[2050]: time="2025-02-13T19:05:36.308507065Z" level=info msg="TearDown network for sandbox \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\" successfully" Feb 13 19:05:36.308535 containerd[2050]: time="2025-02-13T19:05:36.308531941Z" level=info msg="StopPodSandbox for \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\" returns successfully" Feb 13 19:05:36.311852 containerd[2050]: time="2025-02-13T19:05:36.311573878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5kj5h,Uid:5f014e98-1b00-4e1a-9e73-473fa7bd370d,Namespace:default,Attempt:4,}" Feb 13 19:05:36.319612 kubelet[2545]: I0213 
19:05:36.319561 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd" Feb 13 19:05:36.321652 containerd[2050]: time="2025-02-13T19:05:36.321260126Z" level=info msg="StopPodSandbox for \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\"" Feb 13 19:05:36.321652 containerd[2050]: time="2025-02-13T19:05:36.321552689Z" level=info msg="Ensure that sandbox d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd in task-service has been cleanup successfully" Feb 13 19:05:36.326652 containerd[2050]: time="2025-02-13T19:05:36.324240232Z" level=info msg="TearDown network for sandbox \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\" successfully" Feb 13 19:05:36.326652 containerd[2050]: time="2025-02-13T19:05:36.324331165Z" level=info msg="StopPodSandbox for \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\" returns successfully" Feb 13 19:05:36.329459 containerd[2050]: time="2025-02-13T19:05:36.327798906Z" level=info msg="StopPodSandbox for \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\"" Feb 13 19:05:36.328456 systemd[1]: run-netns-cni\x2d70b2c3a7\x2d63f8\x2d0df1\x2db092\x2d2ea9c4702d83.mount: Deactivated successfully. 
Feb 13 19:05:36.332468 containerd[2050]: time="2025-02-13T19:05:36.332403868Z" level=info msg="TearDown network for sandbox \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\" successfully" Feb 13 19:05:36.332468 containerd[2050]: time="2025-02-13T19:05:36.332460416Z" level=info msg="StopPodSandbox for \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\" returns successfully" Feb 13 19:05:36.336899 containerd[2050]: time="2025-02-13T19:05:36.336654664Z" level=info msg="StopPodSandbox for \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\"" Feb 13 19:05:36.337570 containerd[2050]: time="2025-02-13T19:05:36.337072101Z" level=info msg="TearDown network for sandbox \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\" successfully" Feb 13 19:05:36.337930 containerd[2050]: time="2025-02-13T19:05:36.337686628Z" level=info msg="StopPodSandbox for \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\" returns successfully" Feb 13 19:05:36.340573 containerd[2050]: time="2025-02-13T19:05:36.339832940Z" level=info msg="StopPodSandbox for \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\"" Feb 13 19:05:36.341404 containerd[2050]: time="2025-02-13T19:05:36.341236550Z" level=info msg="TearDown network for sandbox \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\" successfully" Feb 13 19:05:36.341996 containerd[2050]: time="2025-02-13T19:05:36.341838339Z" level=info msg="StopPodSandbox for \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\" returns successfully" Feb 13 19:05:36.343813 containerd[2050]: time="2025-02-13T19:05:36.343708585Z" level=info msg="StopPodSandbox for \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\"" Feb 13 19:05:36.348399 containerd[2050]: time="2025-02-13T19:05:36.348344558Z" level=info msg="TearDown network for sandbox \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\" successfully" Feb 
13 19:05:36.348913 containerd[2050]: time="2025-02-13T19:05:36.348684568Z" level=info msg="StopPodSandbox for \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\" returns successfully" Feb 13 19:05:36.351117 containerd[2050]: time="2025-02-13T19:05:36.349752970Z" level=info msg="StopPodSandbox for \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\"" Feb 13 19:05:36.351117 containerd[2050]: time="2025-02-13T19:05:36.349931487Z" level=info msg="TearDown network for sandbox \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\" successfully" Feb 13 19:05:36.353203 containerd[2050]: time="2025-02-13T19:05:36.353128121Z" level=info msg="StopPodSandbox for \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\" returns successfully" Feb 13 19:05:36.359351 containerd[2050]: time="2025-02-13T19:05:36.357557266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7mmf,Uid:860b9edd-e240-4254-867a-e12f2e2a94c5,Namespace:calico-system,Attempt:6,}" Feb 13 19:05:36.567863 containerd[2050]: time="2025-02-13T19:05:36.567710604Z" level=error msg="Failed to destroy network for sandbox \"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:36.569555 containerd[2050]: time="2025-02-13T19:05:36.569085027Z" level=error msg="Failed to destroy network for sandbox \"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:36.571191 containerd[2050]: time="2025-02-13T19:05:36.570820662Z" level=error msg="encountered an error cleaning up failed sandbox 
\"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:36.571191 containerd[2050]: time="2025-02-13T19:05:36.570934599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7mmf,Uid:860b9edd-e240-4254-867a-e12f2e2a94c5,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:36.571803 kubelet[2545]: E0213 19:05:36.571637 2545 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:36.571803 kubelet[2545]: E0213 19:05:36.571714 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7mmf" Feb 13 19:05:36.571803 kubelet[2545]: E0213 19:05:36.571748 2545 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7mmf" Feb 13 19:05:36.572526 containerd[2050]: time="2025-02-13T19:05:36.571541766Z" level=error msg="encountered an error cleaning up failed sandbox \"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:36.572526 containerd[2050]: time="2025-02-13T19:05:36.572457872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5kj5h,Uid:5f014e98-1b00-4e1a-9e73-473fa7bd370d,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:36.572873 kubelet[2545]: E0213 19:05:36.572093 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z7mmf_calico-system(860b9edd-e240-4254-867a-e12f2e2a94c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z7mmf_calico-system(860b9edd-e240-4254-867a-e12f2e2a94c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-z7mmf" podUID="860b9edd-e240-4254-867a-e12f2e2a94c5" Feb 13 19:05:36.572873 kubelet[2545]: E0213 19:05:36.572784 2545 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:36.573160 kubelet[2545]: E0213 19:05:36.572843 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5kj5h" Feb 13 19:05:36.573241 kubelet[2545]: E0213 19:05:36.573165 2545 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5kj5h" Feb 13 19:05:36.573548 kubelet[2545]: E0213 19:05:36.573380 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-5kj5h_default(5f014e98-1b00-4e1a-9e73-473fa7bd370d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-5kj5h_default(5f014e98-1b00-4e1a-9e73-473fa7bd370d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-5kj5h" podUID="5f014e98-1b00-4e1a-9e73-473fa7bd370d" Feb 13 19:05:36.975234 kubelet[2545]: E0213 19:05:36.975084 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:37.214946 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480-shm.mount: Deactivated successfully. Feb 13 19:05:37.332554 kubelet[2545]: I0213 19:05:37.331918 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32" Feb 13 19:05:37.336033 containerd[2050]: time="2025-02-13T19:05:37.335939636Z" level=info msg="StopPodSandbox for \"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\"" Feb 13 19:05:37.337940 containerd[2050]: time="2025-02-13T19:05:37.337113006Z" level=info msg="Ensure that sandbox 21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32 in task-service has been cleanup successfully" Feb 13 19:05:37.342879 containerd[2050]: time="2025-02-13T19:05:37.340637680Z" level=info msg="TearDown network for sandbox \"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\" successfully" Feb 13 19:05:37.342879 containerd[2050]: time="2025-02-13T19:05:37.340688549Z" level=info msg="StopPodSandbox for \"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\" returns successfully" Feb 13 19:05:37.343041 systemd[1]: run-netns-cni\x2df934d21d\x2de002\x2dc19b\x2d8182\x2da5ca71ce155c.mount: Deactivated successfully. 
Feb 13 19:05:37.346275 containerd[2050]: time="2025-02-13T19:05:37.345890101Z" level=info msg="StopPodSandbox for \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\"" Feb 13 19:05:37.347099 containerd[2050]: time="2025-02-13T19:05:37.346453698Z" level=info msg="TearDown network for sandbox \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\" successfully" Feb 13 19:05:37.347099 containerd[2050]: time="2025-02-13T19:05:37.346490353Z" level=info msg="StopPodSandbox for \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\" returns successfully" Feb 13 19:05:37.349034 containerd[2050]: time="2025-02-13T19:05:37.348956589Z" level=info msg="StopPodSandbox for \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\"" Feb 13 19:05:37.349315 containerd[2050]: time="2025-02-13T19:05:37.349285445Z" level=info msg="TearDown network for sandbox \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\" successfully" Feb 13 19:05:37.349417 containerd[2050]: time="2025-02-13T19:05:37.349391302Z" level=info msg="StopPodSandbox for \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\" returns successfully" Feb 13 19:05:37.350672 containerd[2050]: time="2025-02-13T19:05:37.350628773Z" level=info msg="StopPodSandbox for \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\"" Feb 13 19:05:37.351057 containerd[2050]: time="2025-02-13T19:05:37.350935094Z" level=info msg="TearDown network for sandbox \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\" successfully" Feb 13 19:05:37.351198 containerd[2050]: time="2025-02-13T19:05:37.351167350Z" level=info msg="StopPodSandbox for \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\" returns successfully" Feb 13 19:05:37.352021 containerd[2050]: time="2025-02-13T19:05:37.351809599Z" level=info msg="StopPodSandbox for \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\"" Feb 13 19:05:37.352194 
containerd[2050]: time="2025-02-13T19:05:37.351952975Z" level=info msg="TearDown network for sandbox \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\" successfully" Feb 13 19:05:37.352537 containerd[2050]: time="2025-02-13T19:05:37.352269993Z" level=info msg="StopPodSandbox for \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\" returns successfully" Feb 13 19:05:37.353762 kubelet[2545]: I0213 19:05:37.353715 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480" Feb 13 19:05:37.355376 containerd[2050]: time="2025-02-13T19:05:37.354668312Z" level=info msg="StopPodSandbox for \"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\"" Feb 13 19:05:37.355528 containerd[2050]: time="2025-02-13T19:05:37.355371755Z" level=info msg="StopPodSandbox for \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\"" Feb 13 19:05:37.355582 containerd[2050]: time="2025-02-13T19:05:37.355521350Z" level=info msg="TearDown network for sandbox \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\" successfully" Feb 13 19:05:37.355582 containerd[2050]: time="2025-02-13T19:05:37.355543921Z" level=info msg="StopPodSandbox for \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\" returns successfully" Feb 13 19:05:37.356305 containerd[2050]: time="2025-02-13T19:05:37.356041354Z" level=info msg="Ensure that sandbox f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480 in task-service has been cleanup successfully" Feb 13 19:05:37.356853 containerd[2050]: time="2025-02-13T19:05:37.356491074Z" level=info msg="TearDown network for sandbox \"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\" successfully" Feb 13 19:05:37.356853 containerd[2050]: time="2025-02-13T19:05:37.356523635Z" level=info msg="StopPodSandbox for 
\"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\" returns successfully" Feb 13 19:05:37.358411 containerd[2050]: time="2025-02-13T19:05:37.358356591Z" level=info msg="StopPodSandbox for \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\"" Feb 13 19:05:37.358555 containerd[2050]: time="2025-02-13T19:05:37.358517039Z" level=info msg="TearDown network for sandbox \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\" successfully" Feb 13 19:05:37.358622 containerd[2050]: time="2025-02-13T19:05:37.358551172Z" level=info msg="StopPodSandbox for \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\" returns successfully" Feb 13 19:05:37.358717 containerd[2050]: time="2025-02-13T19:05:37.358679720Z" level=info msg="StopPodSandbox for \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\"" Feb 13 19:05:37.358847 containerd[2050]: time="2025-02-13T19:05:37.358809241Z" level=info msg="TearDown network for sandbox \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\" successfully" Feb 13 19:05:37.358942 containerd[2050]: time="2025-02-13T19:05:37.358840072Z" level=info msg="StopPodSandbox for \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\" returns successfully" Feb 13 19:05:37.360825 systemd[1]: run-netns-cni\x2d9e30f49c\x2d9bc4\x2d66c2\x2d456b\x2dfd3819a5496f.mount: Deactivated successfully. 
Feb 13 19:05:37.363708 containerd[2050]: time="2025-02-13T19:05:37.362729380Z" level=info msg="StopPodSandbox for \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\"" Feb 13 19:05:37.363708 containerd[2050]: time="2025-02-13T19:05:37.362908462Z" level=info msg="TearDown network for sandbox \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\" successfully" Feb 13 19:05:37.363708 containerd[2050]: time="2025-02-13T19:05:37.362932137Z" level=info msg="StopPodSandbox for \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\" returns successfully" Feb 13 19:05:37.363708 containerd[2050]: time="2025-02-13T19:05:37.363157970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7mmf,Uid:860b9edd-e240-4254-867a-e12f2e2a94c5,Namespace:calico-system,Attempt:7,}" Feb 13 19:05:37.365354 containerd[2050]: time="2025-02-13T19:05:37.365310478Z" level=info msg="StopPodSandbox for \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\"" Feb 13 19:05:37.365639 containerd[2050]: time="2025-02-13T19:05:37.365611697Z" level=info msg="TearDown network for sandbox \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\" successfully" Feb 13 19:05:37.366321 containerd[2050]: time="2025-02-13T19:05:37.365720003Z" level=info msg="StopPodSandbox for \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\" returns successfully" Feb 13 19:05:37.367734 containerd[2050]: time="2025-02-13T19:05:37.367689647Z" level=info msg="StopPodSandbox for \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\"" Feb 13 19:05:37.368113 containerd[2050]: time="2025-02-13T19:05:37.368082808Z" level=info msg="TearDown network for sandbox \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\" successfully" Feb 13 19:05:37.368234 containerd[2050]: time="2025-02-13T19:05:37.368208282Z" level=info msg="StopPodSandbox for 
\"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\" returns successfully" Feb 13 19:05:37.370158 containerd[2050]: time="2025-02-13T19:05:37.370111749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5kj5h,Uid:5f014e98-1b00-4e1a-9e73-473fa7bd370d,Namespace:default,Attempt:5,}" Feb 13 19:05:37.602130 containerd[2050]: time="2025-02-13T19:05:37.601228704Z" level=error msg="Failed to destroy network for sandbox \"46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:37.602130 containerd[2050]: time="2025-02-13T19:05:37.601790537Z" level=error msg="encountered an error cleaning up failed sandbox \"46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:37.602130 containerd[2050]: time="2025-02-13T19:05:37.601870256Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5kj5h,Uid:5f014e98-1b00-4e1a-9e73-473fa7bd370d,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:37.603737 kubelet[2545]: E0213 19:05:37.602877 2545 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:37.603737 kubelet[2545]: E0213 19:05:37.603238 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5kj5h" Feb 13 19:05:37.603737 kubelet[2545]: E0213 19:05:37.603338 2545 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-5kj5h" Feb 13 19:05:37.604026 kubelet[2545]: E0213 19:05:37.603452 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-5kj5h_default(5f014e98-1b00-4e1a-9e73-473fa7bd370d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-5kj5h_default(5f014e98-1b00-4e1a-9e73-473fa7bd370d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-5kj5h" podUID="5f014e98-1b00-4e1a-9e73-473fa7bd370d" Feb 13 
19:05:37.609998 containerd[2050]: time="2025-02-13T19:05:37.609813366Z" level=error msg="Failed to destroy network for sandbox \"e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:37.611028 containerd[2050]: time="2025-02-13T19:05:37.610350671Z" level=error msg="encountered an error cleaning up failed sandbox \"e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:37.611028 containerd[2050]: time="2025-02-13T19:05:37.610441388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7mmf,Uid:860b9edd-e240-4254-867a-e12f2e2a94c5,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:37.611221 kubelet[2545]: E0213 19:05:37.610731 2545 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:05:37.611221 kubelet[2545]: E0213 19:05:37.610805 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7mmf" Feb 13 19:05:37.611221 kubelet[2545]: E0213 19:05:37.610846 2545 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7mmf" Feb 13 19:05:37.611436 kubelet[2545]: E0213 19:05:37.610909 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z7mmf_calico-system(860b9edd-e240-4254-867a-e12f2e2a94c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z7mmf_calico-system(860b9edd-e240-4254-867a-e12f2e2a94c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z7mmf" podUID="860b9edd-e240-4254-867a-e12f2e2a94c5" Feb 13 19:05:37.961574 kubelet[2545]: E0213 19:05:37.961426 2545 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:37.975881 kubelet[2545]: E0213 19:05:37.975835 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:38.136993 containerd[2050]: 
time="2025-02-13T19:05:38.135611878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:38.137783 containerd[2050]: time="2025-02-13T19:05:38.137695243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 19:05:38.139812 containerd[2050]: time="2025-02-13T19:05:38.139743911Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:38.144150 containerd[2050]: time="2025-02-13T19:05:38.144075555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:38.146513 containerd[2050]: time="2025-02-13T19:05:38.145561790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.947722846s" Feb 13 19:05:38.146513 containerd[2050]: time="2025-02-13T19:05:38.145614641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 19:05:38.158518 containerd[2050]: time="2025-02-13T19:05:38.158270838Z" level=info msg="CreateContainer within sandbox \"09d526b9a8aa6a6d29aa0303c63078f098166f791ae5ca5a141db7f6105dd4ec\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:05:38.177404 containerd[2050]: time="2025-02-13T19:05:38.177280466Z" level=info msg="CreateContainer within sandbox 
\"09d526b9a8aa6a6d29aa0303c63078f098166f791ae5ca5a141db7f6105dd4ec\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"14dd8f37c4d5c0d800a016f2b8dfa2f464a584f42c32890f6d76da62ba48c199\"" Feb 13 19:05:38.178159 containerd[2050]: time="2025-02-13T19:05:38.178077832Z" level=info msg="StartContainer for \"14dd8f37c4d5c0d800a016f2b8dfa2f464a584f42c32890f6d76da62ba48c199\"" Feb 13 19:05:38.218368 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931-shm.mount: Deactivated successfully. Feb 13 19:05:38.218655 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4-shm.mount: Deactivated successfully. Feb 13 19:05:38.218893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount60960142.mount: Deactivated successfully. Feb 13 19:05:38.296200 containerd[2050]: time="2025-02-13T19:05:38.296067903Z" level=info msg="StartContainer for \"14dd8f37c4d5c0d800a016f2b8dfa2f464a584f42c32890f6d76da62ba48c199\" returns successfully" Feb 13 19:05:38.377208 kubelet[2545]: I0213 19:05:38.376228 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931" Feb 13 19:05:38.377693 containerd[2050]: time="2025-02-13T19:05:38.377652035Z" level=info msg="StopPodSandbox for \"e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931\"" Feb 13 19:05:38.381002 containerd[2050]: time="2025-02-13T19:05:38.378595154Z" level=info msg="Ensure that sandbox e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931 in task-service has been cleanup successfully" Feb 13 19:05:38.382488 systemd[1]: run-netns-cni\x2d316706ab\x2d7d78\x2d3c99\x2d164d\x2d3e33d5a6eb5f.mount: Deactivated successfully. 
Feb 13 19:05:38.383289 containerd[2050]: time="2025-02-13T19:05:38.383089048Z" level=info msg="TearDown network for sandbox \"e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931\" successfully" Feb 13 19:05:38.383289 containerd[2050]: time="2025-02-13T19:05:38.383137492Z" level=info msg="StopPodSandbox for \"e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931\" returns successfully" Feb 13 19:05:38.385931 containerd[2050]: time="2025-02-13T19:05:38.385641320Z" level=info msg="StopPodSandbox for \"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\"" Feb 13 19:05:38.385931 containerd[2050]: time="2025-02-13T19:05:38.385809596Z" level=info msg="TearDown network for sandbox \"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\" successfully" Feb 13 19:05:38.385931 containerd[2050]: time="2025-02-13T19:05:38.385832251Z" level=info msg="StopPodSandbox for \"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\" returns successfully" Feb 13 19:05:38.387305 containerd[2050]: time="2025-02-13T19:05:38.386720647Z" level=info msg="StopPodSandbox for \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\"" Feb 13 19:05:38.387305 containerd[2050]: time="2025-02-13T19:05:38.386882080Z" level=info msg="TearDown network for sandbox \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\" successfully" Feb 13 19:05:38.387305 containerd[2050]: time="2025-02-13T19:05:38.386904903Z" level=info msg="StopPodSandbox for \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\" returns successfully" Feb 13 19:05:38.389006 containerd[2050]: time="2025-02-13T19:05:38.388921971Z" level=info msg="StopPodSandbox for \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\"" Feb 13 19:05:38.389153 containerd[2050]: time="2025-02-13T19:05:38.389098099Z" level=info msg="TearDown network for sandbox \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\" successfully" Feb 
13 19:05:38.389153 containerd[2050]: time="2025-02-13T19:05:38.389121547Z" level=info msg="StopPodSandbox for \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\" returns successfully" Feb 13 19:05:38.390382 containerd[2050]: time="2025-02-13T19:05:38.390337286Z" level=info msg="StopPodSandbox for \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\"" Feb 13 19:05:38.392832 containerd[2050]: time="2025-02-13T19:05:38.391617594Z" level=info msg="TearDown network for sandbox \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\" successfully" Feb 13 19:05:38.392832 containerd[2050]: time="2025-02-13T19:05:38.391658535Z" level=info msg="StopPodSandbox for \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\" returns successfully" Feb 13 19:05:38.401388 containerd[2050]: time="2025-02-13T19:05:38.401324962Z" level=info msg="StopPodSandbox for \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\"" Feb 13 19:05:38.403271 containerd[2050]: time="2025-02-13T19:05:38.403121972Z" level=info msg="TearDown network for sandbox \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\" successfully" Feb 13 19:05:38.403271 containerd[2050]: time="2025-02-13T19:05:38.403162252Z" level=info msg="StopPodSandbox for \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\" returns successfully" Feb 13 19:05:38.409334 containerd[2050]: time="2025-02-13T19:05:38.409271757Z" level=info msg="StopPodSandbox for \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\"" Feb 13 19:05:38.409498 containerd[2050]: time="2025-02-13T19:05:38.409446024Z" level=info msg="TearDown network for sandbox \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\" successfully" Feb 13 19:05:38.409498 containerd[2050]: time="2025-02-13T19:05:38.409468884Z" level=info msg="StopPodSandbox for \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\" returns successfully" Feb 13 
19:05:38.413344 kubelet[2545]: I0213 19:05:38.411887 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4" Feb 13 19:05:38.419264 containerd[2050]: time="2025-02-13T19:05:38.418398922Z" level=info msg="StopPodSandbox for \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\"" Feb 13 19:05:38.419264 containerd[2050]: time="2025-02-13T19:05:38.418706828Z" level=info msg="TearDown network for sandbox \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\" successfully" Feb 13 19:05:38.423612 containerd[2050]: time="2025-02-13T19:05:38.418817560Z" level=info msg="StopPodSandbox for \"46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4\"" Feb 13 19:05:38.423612 containerd[2050]: time="2025-02-13T19:05:38.422354827Z" level=info msg="Ensure that sandbox 46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4 in task-service has been cleanup successfully" Feb 13 19:05:38.425268 containerd[2050]: time="2025-02-13T19:05:38.418731224Z" level=info msg="StopPodSandbox for \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\" returns successfully" Feb 13 19:05:38.426997 containerd[2050]: time="2025-02-13T19:05:38.426552809Z" level=info msg="TearDown network for sandbox \"46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4\" successfully" Feb 13 19:05:38.426997 containerd[2050]: time="2025-02-13T19:05:38.426624701Z" level=info msg="StopPodSandbox for \"46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4\" returns successfully" Feb 13 19:05:38.433283 systemd[1]: run-netns-cni\x2d5ff005df\x2d5c03\x2da0e5\x2d7725\x2d539ed5336c22.mount: Deactivated successfully. 
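Annotation: the long run of `StopPodSandbox` / `TearDown network ... successfully` / `returns successfully` triples above shows containerd cleaning up stale sandboxes idempotently: stopping a sandbox whose network is already torn down still returns success, so the kubelet can safely retry. A minimal sketch of that contract (all names hypothetical, not containerd's actual API):

```python
def stop_sandbox(attached_networks, sandbox_id):
    """Idempotent StopPodSandbox sketch: tear down the network if it is
    still attached, and report success either way, as the log above shows."""
    if sandbox_id in attached_networks:
        attached_networks.remove(sandbox_id)  # actual teardown work
    return "returns successfully"             # safe to call repeatedly

# Calling twice on the same sandbox succeeds both times:
nets = {"e4c01b5f78db7842"}
stop_sandbox(nets, "e4c01b5f78db7842")  # tears down
stop_sandbox(nets, "e4c01b5f78db7842")  # already gone, still succeeds
```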
Feb 13 19:05:38.437244 containerd[2050]: time="2025-02-13T19:05:38.437191386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7mmf,Uid:860b9edd-e240-4254-867a-e12f2e2a94c5,Namespace:calico-system,Attempt:8,}" Feb 13 19:05:38.439459 containerd[2050]: time="2025-02-13T19:05:38.439201010Z" level=info msg="StopPodSandbox for \"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\"" Feb 13 19:05:38.439459 containerd[2050]: time="2025-02-13T19:05:38.439367774Z" level=info msg="TearDown network for sandbox \"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\" successfully" Feb 13 19:05:38.439459 containerd[2050]: time="2025-02-13T19:05:38.439391041Z" level=info msg="StopPodSandbox for \"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\" returns successfully" Feb 13 19:05:38.445109 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:05:38.445246 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 13 19:05:38.445290 containerd[2050]: time="2025-02-13T19:05:38.445147930Z" level=info msg="StopPodSandbox for \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\"" Feb 13 19:05:38.445425 containerd[2050]: time="2025-02-13T19:05:38.445400525Z" level=info msg="TearDown network for sandbox \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\" successfully" Feb 13 19:05:38.445490 containerd[2050]: time="2025-02-13T19:05:38.445425101Z" level=info msg="StopPodSandbox for \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\" returns successfully" Feb 13 19:05:38.450865 containerd[2050]: time="2025-02-13T19:05:38.450419825Z" level=info msg="StopPodSandbox for \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\"" Feb 13 19:05:38.452309 containerd[2050]: time="2025-02-13T19:05:38.452247691Z" level=info msg="TearDown network for sandbox \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\" successfully" Feb 13 19:05:38.452923 containerd[2050]: time="2025-02-13T19:05:38.452869301Z" level=info msg="StopPodSandbox for \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\" returns successfully" Feb 13 19:05:38.454840 containerd[2050]: time="2025-02-13T19:05:38.454775614Z" level=info msg="StopPodSandbox for \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\"" Feb 13 19:05:38.455367 containerd[2050]: time="2025-02-13T19:05:38.454950265Z" level=info msg="TearDown network for sandbox \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\" successfully" Feb 13 19:05:38.455367 containerd[2050]: time="2025-02-13T19:05:38.455016442Z" level=info msg="StopPodSandbox for \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\" returns successfully" Feb 13 19:05:38.456774 containerd[2050]: time="2025-02-13T19:05:38.456687101Z" level=info msg="StopPodSandbox for \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\"" Feb 13 19:05:38.457713 
containerd[2050]: time="2025-02-13T19:05:38.457534617Z" level=info msg="TearDown network for sandbox \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\" successfully" Feb 13 19:05:38.457713 containerd[2050]: time="2025-02-13T19:05:38.457699447Z" level=info msg="StopPodSandbox for \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\" returns successfully" Feb 13 19:05:38.470080 containerd[2050]: time="2025-02-13T19:05:38.462130826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5kj5h,Uid:5f014e98-1b00-4e1a-9e73-473fa7bd370d,Namespace:default,Attempt:6,}" Feb 13 19:05:38.898925 (udev-worker)[3521]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:05:38.914266 systemd-networkd[1608]: calie4bb4edb379: Link UP Feb 13 19:05:38.914925 systemd-networkd[1608]: calie4bb4edb379: Gained carrier Feb 13 19:05:38.930758 kubelet[2545]: I0213 19:05:38.929784 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dd4qj" podStartSLOduration=4.021695775 podStartE2EDuration="20.929763605s" podCreationTimestamp="2025-02-13 19:05:18 +0000 UTC" firstStartedPulling="2025-02-13 19:05:21.238650714 +0000 UTC m=+4.741257378" lastFinishedPulling="2025-02-13 19:05:38.146718556 +0000 UTC m=+21.649325208" observedRunningTime="2025-02-13 19:05:38.401026468 +0000 UTC m=+21.903633144" watchObservedRunningTime="2025-02-13 19:05:38.929763605 +0000 UTC m=+22.432370281" Feb 13 19:05:38.932623 containerd[2050]: 2025-02-13 19:05:38.589 [INFO][3537] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:05:38.932623 containerd[2050]: 2025-02-13 19:05:38.647 [INFO][3537] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.27.136-k8s-csi--node--driver--z7mmf-eth0 csi-node-driver- calico-system 860b9edd-e240-4254-867a-e12f2e2a94c5 895 0 2025-02-13 19:05:18 +0000 UTC 
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.27.136 csi-node-driver-z7mmf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie4bb4edb379 [] []}} ContainerID="1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" Namespace="calico-system" Pod="csi-node-driver-z7mmf" WorkloadEndpoint="172.31.27.136-k8s-csi--node--driver--z7mmf-" Feb 13 19:05:38.932623 containerd[2050]: 2025-02-13 19:05:38.650 [INFO][3537] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" Namespace="calico-system" Pod="csi-node-driver-z7mmf" WorkloadEndpoint="172.31.27.136-k8s-csi--node--driver--z7mmf-eth0" Feb 13 19:05:38.932623 containerd[2050]: 2025-02-13 19:05:38.750 [INFO][3580] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" HandleID="k8s-pod-network.1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" Workload="172.31.27.136-k8s-csi--node--driver--z7mmf-eth0" Feb 13 19:05:38.932623 containerd[2050]: 2025-02-13 19:05:38.768 [INFO][3580] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" HandleID="k8s-pod-network.1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" Workload="172.31.27.136-k8s-csi--node--driver--z7mmf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ff9e0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.27.136", "pod":"csi-node-driver-z7mmf", "timestamp":"2025-02-13 19:05:38.750600863 +0000 UTC"}, Hostname:"172.31.27.136", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:05:38.932623 containerd[2050]: 2025-02-13 19:05:38.768 [INFO][3580] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:05:38.932623 containerd[2050]: 2025-02-13 19:05:38.768 [INFO][3580] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:05:38.932623 containerd[2050]: 2025-02-13 19:05:38.768 [INFO][3580] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.27.136' Feb 13 19:05:38.932623 containerd[2050]: 2025-02-13 19:05:38.775 [INFO][3580] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" host="172.31.27.136" Feb 13 19:05:38.932623 containerd[2050]: 2025-02-13 19:05:38.785 [INFO][3580] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.27.136" Feb 13 19:05:38.932623 containerd[2050]: 2025-02-13 19:05:38.795 [INFO][3580] ipam/ipam.go 489: Trying affinity for 192.168.4.64/26 host="172.31.27.136" Feb 13 19:05:38.932623 containerd[2050]: 2025-02-13 19:05:38.798 [INFO][3580] ipam/ipam.go 155: Attempting to load block cidr=192.168.4.64/26 host="172.31.27.136" Feb 13 19:05:38.932623 containerd[2050]: 2025-02-13 19:05:38.807 [INFO][3580] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.4.64/26 host="172.31.27.136" Feb 13 19:05:38.932623 containerd[2050]: 2025-02-13 19:05:38.807 [INFO][3580] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.4.64/26 handle="k8s-pod-network.1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" host="172.31.27.136" Feb 13 19:05:38.932623 containerd[2050]: 2025-02-13 19:05:38.811 [INFO][3580] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86 Feb 13 
19:05:38.932623 containerd[2050]: 2025-02-13 19:05:38.820 [INFO][3580] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.4.64/26 handle="k8s-pod-network.1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" host="172.31.27.136" Feb 13 19:05:38.933609 containerd[2050]: 2025-02-13 19:05:38.829 [ERROR][3580] ipam/customresource.go 184: Error updating resource Key=IPAMBlock(192-168-4-64-26) Name="192-168-4-64-26" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"192-168-4-64-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.IPAMBlockSpec{CIDR:"192.168.4.64/26", Affinity:(*string)(0x40002f8a60), Allocations:[]*int{(*int)(0x4000602f50), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{1, 2, 3, 4, 5, 6, 7, 8, 
9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0x40002ff9e0), AttrSecondary:map[string]string{"namespace":"calico-system", "node":"172.31.27.136", "pod":"csi-node-driver-z7mmf", "timestamp":"2025-02-13 19:05:38.750600863 +0000 UTC"}}}, SequenceNumber:0x1823d9f7a6d998e7, SequenceNumberForAllocation:map[string]uint64{"0":0x1823d9f7a6d998e6}, Deleted:false, DeprecatedStrictAffinity:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "192-168-4-64-26": the object has been modified; please apply your changes to the latest version and try again Feb 13 19:05:38.933609 containerd[2050]: 2025-02-13 19:05:38.829 [INFO][3580] ipam/ipam.go 1207: Failed to update block block=192.168.4.64/26 error=update conflict: IPAMBlock(192-168-4-64-26) handle="k8s-pod-network.1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" host="172.31.27.136" Feb 13 19:05:38.933609 containerd[2050]: 2025-02-13 19:05:38.860 [INFO][3580] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.4.64/26 handle="k8s-pod-network.1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" host="172.31.27.136" Feb 13 19:05:38.933609 containerd[2050]: 2025-02-13 19:05:38.863 [INFO][3580] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86 Feb 13 19:05:38.933609 containerd[2050]: 2025-02-13 19:05:38.869 [INFO][3580] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.4.64/26 handle="k8s-pod-network.1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" host="172.31.27.136" Feb 13 19:05:38.933609 containerd[2050]: 2025-02-13 19:05:38.880 [INFO][3580] ipam/ipam.go 1216: Successfully claimed IPs: 
[192.168.4.65/26] block=192.168.4.64/26 handle="k8s-pod-network.1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" host="172.31.27.136" Feb 13 19:05:38.933609 containerd[2050]: 2025-02-13 19:05:38.880 [INFO][3580] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.4.65/26] handle="k8s-pod-network.1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" host="172.31.27.136" Feb 13 19:05:38.933609 containerd[2050]: 2025-02-13 19:05:38.880 [INFO][3580] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:05:38.933609 containerd[2050]: 2025-02-13 19:05:38.880 [INFO][3580] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.4.65/26] IPv6=[] ContainerID="1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" HandleID="k8s-pod-network.1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" Workload="172.31.27.136-k8s-csi--node--driver--z7mmf-eth0" Feb 13 19:05:38.933609 containerd[2050]: 2025-02-13 19:05:38.887 [INFO][3537] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" Namespace="calico-system" Pod="csi-node-driver-z7mmf" WorkloadEndpoint="172.31.27.136-k8s-csi--node--driver--z7mmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.136-k8s-csi--node--driver--z7mmf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"860b9edd-e240-4254-867a-e12f2e2a94c5", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 5, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.27.136", ContainerID:"", Pod:"csi-node-driver-z7mmf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.4.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4bb4edb379", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:05:38.935176 containerd[2050]: 2025-02-13 19:05:38.887 [INFO][3537] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.4.65/32] ContainerID="1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" Namespace="calico-system" Pod="csi-node-driver-z7mmf" WorkloadEndpoint="172.31.27.136-k8s-csi--node--driver--z7mmf-eth0" Feb 13 19:05:38.935176 containerd[2050]: 2025-02-13 19:05:38.887 [INFO][3537] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4bb4edb379 ContainerID="1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" Namespace="calico-system" Pod="csi-node-driver-z7mmf" WorkloadEndpoint="172.31.27.136-k8s-csi--node--driver--z7mmf-eth0" Feb 13 19:05:38.935176 containerd[2050]: 2025-02-13 19:05:38.912 [INFO][3537] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" Namespace="calico-system" Pod="csi-node-driver-z7mmf" WorkloadEndpoint="172.31.27.136-k8s-csi--node--driver--z7mmf-eth0" Feb 13 19:05:38.935176 containerd[2050]: 2025-02-13 19:05:38.913 [INFO][3537] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" Namespace="calico-system" Pod="csi-node-driver-z7mmf" WorkloadEndpoint="172.31.27.136-k8s-csi--node--driver--z7mmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.136-k8s-csi--node--driver--z7mmf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"860b9edd-e240-4254-867a-e12f2e2a94c5", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 5, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.27.136", ContainerID:"1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86", Pod:"csi-node-driver-z7mmf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.4.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4bb4edb379", MAC:"36:4c:4a:90:35:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:05:38.935176 containerd[2050]: 2025-02-13 19:05:38.928 [INFO][3537] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86" Namespace="calico-system" 
Pod="csi-node-driver-z7mmf" WorkloadEndpoint="172.31.27.136-k8s-csi--node--driver--z7mmf-eth0" Feb 13 19:05:38.965480 (udev-worker)[3519]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:05:38.966351 systemd-networkd[1608]: cali93dbe48ad87: Link UP Feb 13 19:05:38.966780 systemd-networkd[1608]: cali93dbe48ad87: Gained carrier Feb 13 19:05:38.977667 kubelet[2545]: E0213 19:05:38.977549 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:38.992420 containerd[2050]: 2025-02-13 19:05:38.648 [INFO][3553] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:05:38.992420 containerd[2050]: 2025-02-13 19:05:38.674 [INFO][3553] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.27.136-k8s-nginx--deployment--85f456d6dd--5kj5h-eth0 nginx-deployment-85f456d6dd- default 5f014e98-1b00-4e1a-9e73-473fa7bd370d 1044 0 2025-02-13 19:05:32 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.27.136 nginx-deployment-85f456d6dd-5kj5h eth0 default [] [] [kns.default ksa.default.default] cali93dbe48ad87 [] []}} ContainerID="dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162" Namespace="default" Pod="nginx-deployment-85f456d6dd-5kj5h" WorkloadEndpoint="172.31.27.136-k8s-nginx--deployment--85f456d6dd--5kj5h-" Feb 13 19:05:38.992420 containerd[2050]: 2025-02-13 19:05:38.674 [INFO][3553] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162" Namespace="default" Pod="nginx-deployment-85f456d6dd-5kj5h" WorkloadEndpoint="172.31.27.136-k8s-nginx--deployment--85f456d6dd--5kj5h-eth0" Feb 13 19:05:38.992420 containerd[2050]: 2025-02-13 19:05:38.756 [INFO][3584] ipam/ipam_plugin.go 
225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162" HandleID="k8s-pod-network.dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162" Workload="172.31.27.136-k8s-nginx--deployment--85f456d6dd--5kj5h-eth0" Feb 13 19:05:38.992420 containerd[2050]: 2025-02-13 19:05:38.776 [INFO][3584] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162" HandleID="k8s-pod-network.dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162" Workload="172.31.27.136-k8s-nginx--deployment--85f456d6dd--5kj5h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e00a0), Attrs:map[string]string{"namespace":"default", "node":"172.31.27.136", "pod":"nginx-deployment-85f456d6dd-5kj5h", "timestamp":"2025-02-13 19:05:38.756503817 +0000 UTC"}, Hostname:"172.31.27.136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:05:38.992420 containerd[2050]: 2025-02-13 19:05:38.776 [INFO][3584] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:05:38.992420 containerd[2050]: 2025-02-13 19:05:38.881 [INFO][3584] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
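Annotation: the IPAM entries above allocate pod IPs from the affine block `192.168.4.64/26` by picking the lowest free ordinal under a host-wide lock (ordinal 0, `.64`, is already taken, so the csi-node-driver pod gets `.65` and the nginx pod gets `.66`). A simplified sketch of that ordinal-based allocation, using only the stdlib `ipaddress` module (this is an illustration, not Calico's actual allocator):

```python
import ipaddress

def next_free_ip(block_cidr, allocated_ordinals):
    """Return the IP at the lowest unallocated ordinal of an IPAM block."""
    block = ipaddress.ip_network(block_cidr)
    for ordinal in range(block.num_addresses):
        if ordinal not in allocated_ordinals:
            return block.network_address + ordinal
    raise RuntimeError("block exhausted")

# Ordinal 0 (.64) is already allocated, as in the log above:
ip = next_free_ip("192.168.4.64/26", {0})
print(ip)  # 192.168.4.65
```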
Feb 13 19:05:38.992420 containerd[2050]: 2025-02-13 19:05:38.881 [INFO][3584] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.27.136' Feb 13 19:05:38.992420 containerd[2050]: 2025-02-13 19:05:38.886 [INFO][3584] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162" host="172.31.27.136" Feb 13 19:05:38.992420 containerd[2050]: 2025-02-13 19:05:38.903 [INFO][3584] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.27.136" Feb 13 19:05:38.992420 containerd[2050]: 2025-02-13 19:05:38.916 [INFO][3584] ipam/ipam.go 489: Trying affinity for 192.168.4.64/26 host="172.31.27.136" Feb 13 19:05:38.992420 containerd[2050]: 2025-02-13 19:05:38.919 [INFO][3584] ipam/ipam.go 155: Attempting to load block cidr=192.168.4.64/26 host="172.31.27.136" Feb 13 19:05:38.992420 containerd[2050]: 2025-02-13 19:05:38.923 [INFO][3584] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.4.64/26 host="172.31.27.136" Feb 13 19:05:38.992420 containerd[2050]: 2025-02-13 19:05:38.923 [INFO][3584] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.4.64/26 handle="k8s-pod-network.dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162" host="172.31.27.136" Feb 13 19:05:38.992420 containerd[2050]: 2025-02-13 19:05:38.928 [INFO][3584] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162 Feb 13 19:05:38.992420 containerd[2050]: 2025-02-13 19:05:38.940 [INFO][3584] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.4.64/26 handle="k8s-pod-network.dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162" host="172.31.27.136" Feb 13 19:05:38.992420 containerd[2050]: 2025-02-13 19:05:38.953 [INFO][3584] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.4.66/26] block=192.168.4.64/26 
handle="k8s-pod-network.dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162" host="172.31.27.136" Feb 13 19:05:38.992420 containerd[2050]: 2025-02-13 19:05:38.954 [INFO][3584] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.4.66/26] handle="k8s-pod-network.dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162" host="172.31.27.136" Feb 13 19:05:38.992420 containerd[2050]: 2025-02-13 19:05:38.954 [INFO][3584] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:05:38.992420 containerd[2050]: 2025-02-13 19:05:38.954 [INFO][3584] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.4.66/26] IPv6=[] ContainerID="dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162" HandleID="k8s-pod-network.dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162" Workload="172.31.27.136-k8s-nginx--deployment--85f456d6dd--5kj5h-eth0" Feb 13 19:05:38.994076 containerd[2050]: 2025-02-13 19:05:38.960 [INFO][3553] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162" Namespace="default" Pod="nginx-deployment-85f456d6dd-5kj5h" WorkloadEndpoint="172.31.27.136-k8s-nginx--deployment--85f456d6dd--5kj5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.136-k8s-nginx--deployment--85f456d6dd--5kj5h-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"5f014e98-1b00-4e1a-9e73-473fa7bd370d", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 5, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.27.136", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-5kj5h", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.4.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali93dbe48ad87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:05:38.994076 containerd[2050]: 2025-02-13 19:05:38.960 [INFO][3553] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.4.66/32] ContainerID="dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162" Namespace="default" Pod="nginx-deployment-85f456d6dd-5kj5h" WorkloadEndpoint="172.31.27.136-k8s-nginx--deployment--85f456d6dd--5kj5h-eth0" Feb 13 19:05:38.994076 containerd[2050]: 2025-02-13 19:05:38.960 [INFO][3553] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93dbe48ad87 ContainerID="dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162" Namespace="default" Pod="nginx-deployment-85f456d6dd-5kj5h" WorkloadEndpoint="172.31.27.136-k8s-nginx--deployment--85f456d6dd--5kj5h-eth0" Feb 13 19:05:38.994076 containerd[2050]: 2025-02-13 19:05:38.967 [INFO][3553] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162" Namespace="default" Pod="nginx-deployment-85f456d6dd-5kj5h" WorkloadEndpoint="172.31.27.136-k8s-nginx--deployment--85f456d6dd--5kj5h-eth0" Feb 13 19:05:38.994076 containerd[2050]: 2025-02-13 19:05:38.967 [INFO][3553] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162" Namespace="default" Pod="nginx-deployment-85f456d6dd-5kj5h" 
WorkloadEndpoint="172.31.27.136-k8s-nginx--deployment--85f456d6dd--5kj5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.136-k8s-nginx--deployment--85f456d6dd--5kj5h-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"5f014e98-1b00-4e1a-9e73-473fa7bd370d", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 5, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.27.136", ContainerID:"dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162", Pod:"nginx-deployment-85f456d6dd-5kj5h", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.4.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali93dbe48ad87", MAC:"de:42:63:62:a1:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:05:38.994076 containerd[2050]: 2025-02-13 19:05:38.988 [INFO][3553] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162" Namespace="default" Pod="nginx-deployment-85f456d6dd-5kj5h" WorkloadEndpoint="172.31.27.136-k8s-nginx--deployment--85f456d6dd--5kj5h-eth0" Feb 13 19:05:38.994076 containerd[2050]: time="2025-02-13T19:05:38.993039871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:05:38.994076 containerd[2050]: time="2025-02-13T19:05:38.993223947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:05:38.994076 containerd[2050]: time="2025-02-13T19:05:38.993251405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:38.995657 containerd[2050]: time="2025-02-13T19:05:38.994667381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:39.038811 containerd[2050]: time="2025-02-13T19:05:39.036771583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:05:39.038811 containerd[2050]: time="2025-02-13T19:05:39.036875302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:05:39.038811 containerd[2050]: time="2025-02-13T19:05:39.036912989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:39.038811 containerd[2050]: time="2025-02-13T19:05:39.037091651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:39.080646 containerd[2050]: time="2025-02-13T19:05:39.080596760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7mmf,Uid:860b9edd-e240-4254-867a-e12f2e2a94c5,Namespace:calico-system,Attempt:8,} returns sandbox id \"1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86\"" Feb 13 19:05:39.086383 containerd[2050]: time="2025-02-13T19:05:39.086338618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:05:39.137084 containerd[2050]: time="2025-02-13T19:05:39.136940090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-5kj5h,Uid:5f014e98-1b00-4e1a-9e73-473fa7bd370d,Namespace:default,Attempt:6,} returns sandbox id \"dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162\"" Feb 13 19:05:39.221916 systemd[1]: run-containerd-runc-k8s.io-14dd8f37c4d5c0d800a016f2b8dfa2f464a584f42c32890f6d76da62ba48c199-runc.cPRckE.mount: Deactivated successfully. 
Feb 13 19:05:39.978119 kubelet[2545]: E0213 19:05:39.978061 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:40.412019 kernel: bpftool[3850]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:05:40.563122 systemd-networkd[1608]: calie4bb4edb379: Gained IPv6LL Feb 13 19:05:40.665913 containerd[2050]: time="2025-02-13T19:05:40.665808261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:40.670475 containerd[2050]: time="2025-02-13T19:05:40.667504097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 19:05:40.670475 containerd[2050]: time="2025-02-13T19:05:40.668866466Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:40.679992 containerd[2050]: time="2025-02-13T19:05:40.677550237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:40.681502 containerd[2050]: time="2025-02-13T19:05:40.680448438Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.593874778s" Feb 13 19:05:40.681502 containerd[2050]: time="2025-02-13T19:05:40.680516272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 
19:05:40.687226 containerd[2050]: time="2025-02-13T19:05:40.686330946Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:05:40.695521 containerd[2050]: time="2025-02-13T19:05:40.695462217Z" level=info msg="CreateContainer within sandbox \"1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:05:40.737868 containerd[2050]: time="2025-02-13T19:05:40.737786333Z" level=info msg="CreateContainer within sandbox \"1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c3c4a7bde7ee880b1cfafcc99e77e0ba91bee8e1581ebd80d6a08d18c58066aa\"" Feb 13 19:05:40.743187 systemd-networkd[1608]: vxlan.calico: Link UP Feb 13 19:05:40.743208 systemd-networkd[1608]: vxlan.calico: Gained carrier Feb 13 19:05:40.754309 containerd[2050]: time="2025-02-13T19:05:40.754256968Z" level=info msg="StartContainer for \"c3c4a7bde7ee880b1cfafcc99e77e0ba91bee8e1581ebd80d6a08d18c58066aa\"" Feb 13 19:05:40.936870 containerd[2050]: time="2025-02-13T19:05:40.933666025Z" level=info msg="StartContainer for \"c3c4a7bde7ee880b1cfafcc99e77e0ba91bee8e1581ebd80d6a08d18c58066aa\" returns successfully" Feb 13 19:05:40.978783 kubelet[2545]: E0213 19:05:40.978737 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:41.009316 systemd-networkd[1608]: cali93dbe48ad87: Gained IPv6LL Feb 13 19:05:41.905543 systemd-networkd[1608]: vxlan.calico: Gained IPv6LL Feb 13 19:05:41.979788 kubelet[2545]: E0213 19:05:41.979724 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:42.981092 kubelet[2545]: E0213 19:05:42.980986 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:43.981668 kubelet[2545]: E0213 
19:05:43.981580 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:44.194325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2694276105.mount: Deactivated successfully. Feb 13 19:05:44.250314 ntpd[2008]: Listen normally on 6 vxlan.calico 192.168.4.64:123 Feb 13 19:05:44.251889 ntpd[2008]: 13 Feb 19:05:44 ntpd[2008]: Listen normally on 6 vxlan.calico 192.168.4.64:123 Feb 13 19:05:44.251889 ntpd[2008]: 13 Feb 19:05:44 ntpd[2008]: Listen normally on 7 calie4bb4edb379 [fe80::ecee:eeff:feee:eeee%3]:123 Feb 13 19:05:44.251889 ntpd[2008]: 13 Feb 19:05:44 ntpd[2008]: Listen normally on 8 cali93dbe48ad87 [fe80::ecee:eeff:feee:eeee%4]:123 Feb 13 19:05:44.251889 ntpd[2008]: 13 Feb 19:05:44 ntpd[2008]: Listen normally on 9 vxlan.calico [fe80::647e:5bff:fef0:e327%5]:123 Feb 13 19:05:44.251258 ntpd[2008]: Listen normally on 7 calie4bb4edb379 [fe80::ecee:eeff:feee:eeee%3]:123 Feb 13 19:05:44.251357 ntpd[2008]: Listen normally on 8 cali93dbe48ad87 [fe80::ecee:eeff:feee:eeee%4]:123 Feb 13 19:05:44.251426 ntpd[2008]: Listen normally on 9 vxlan.calico [fe80::647e:5bff:fef0:e327%5]:123 Feb 13 19:05:44.982013 kubelet[2545]: E0213 19:05:44.981887 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:45.569332 containerd[2050]: time="2025-02-13T19:05:45.569252388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:45.571248 containerd[2050]: time="2025-02-13T19:05:45.571179831Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086" Feb 13 19:05:45.571997 containerd[2050]: time="2025-02-13T19:05:45.571775785Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Feb 13 19:05:45.577521 containerd[2050]: time="2025-02-13T19:05:45.576921041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:45.579591 containerd[2050]: time="2025-02-13T19:05:45.578923942Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 4.887699134s" Feb 13 19:05:45.579591 containerd[2050]: time="2025-02-13T19:05:45.579009628Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 19:05:45.581645 containerd[2050]: time="2025-02-13T19:05:45.581195249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:05:45.585856 containerd[2050]: time="2025-02-13T19:05:45.585809178Z" level=info msg="CreateContainer within sandbox \"dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:05:45.612056 containerd[2050]: time="2025-02-13T19:05:45.611956206Z" level=info msg="CreateContainer within sandbox \"dcf120237dc868ce974eaefe9f65b14ecf942fadfeb8078ae89d9abdddc42162\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"89df4b3d97b9e6ce8b03c4490e832797078165f531b8399d81ce4f4c9a93f578\"" Feb 13 19:05:45.613389 containerd[2050]: time="2025-02-13T19:05:45.613350607Z" level=info msg="StartContainer for \"89df4b3d97b9e6ce8b03c4490e832797078165f531b8399d81ce4f4c9a93f578\"" Feb 13 19:05:45.722033 containerd[2050]: time="2025-02-13T19:05:45.721934996Z" level=info msg="StartContainer 
for \"89df4b3d97b9e6ce8b03c4490e832797078165f531b8399d81ce4f4c9a93f578\" returns successfully" Feb 13 19:05:45.982514 kubelet[2545]: E0213 19:05:45.982438 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:46.983190 kubelet[2545]: E0213 19:05:46.983129 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:47.065465 containerd[2050]: time="2025-02-13T19:05:47.065384783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:47.067439 containerd[2050]: time="2025-02-13T19:05:47.067354283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 19:05:47.068274 containerd[2050]: time="2025-02-13T19:05:47.068197152Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:47.072006 containerd[2050]: time="2025-02-13T19:05:47.071927741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:05:47.074010 containerd[2050]: time="2025-02-13T19:05:47.073555754Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.492305626s" Feb 13 19:05:47.074010 containerd[2050]: 
time="2025-02-13T19:05:47.073617477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 19:05:47.077609 containerd[2050]: time="2025-02-13T19:05:47.077560260Z" level=info msg="CreateContainer within sandbox \"1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:05:47.101225 containerd[2050]: time="2025-02-13T19:05:47.101040622Z" level=info msg="CreateContainer within sandbox \"1bbf86e5d4cdd691a24271f5f5376c015c94fb17acf569c9ff23b205b8315a86\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0ba19501fa9ff0beeae38566e32070168fb112194e91214f88258aec55a15cee\"" Feb 13 19:05:47.102853 containerd[2050]: time="2025-02-13T19:05:47.101913999Z" level=info msg="StartContainer for \"0ba19501fa9ff0beeae38566e32070168fb112194e91214f88258aec55a15cee\"" Feb 13 19:05:47.105527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount404377165.mount: Deactivated successfully. 
Feb 13 19:05:47.216354 containerd[2050]: time="2025-02-13T19:05:47.216265266Z" level=info msg="StartContainer for \"0ba19501fa9ff0beeae38566e32070168fb112194e91214f88258aec55a15cee\" returns successfully" Feb 13 19:05:47.501248 kubelet[2545]: I0213 19:05:47.501147 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-5kj5h" podStartSLOduration=9.059968702 podStartE2EDuration="15.501126756s" podCreationTimestamp="2025-02-13 19:05:32 +0000 UTC" firstStartedPulling="2025-02-13 19:05:39.139720296 +0000 UTC m=+22.642326960" lastFinishedPulling="2025-02-13 19:05:45.58087835 +0000 UTC m=+29.083485014" observedRunningTime="2025-02-13 19:05:46.47273164 +0000 UTC m=+29.975338316" watchObservedRunningTime="2025-02-13 19:05:47.501126756 +0000 UTC m=+31.003733420" Feb 13 19:05:47.673036 update_engine[2030]: I20250213 19:05:47.672072 2030 update_attempter.cc:509] Updating boot flags... Feb 13 19:05:47.747023 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (4104) Feb 13 19:05:47.983684 kubelet[2545]: E0213 19:05:47.983357 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:47.993181 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (4106) Feb 13 19:05:48.109173 kubelet[2545]: I0213 19:05:48.108499 2545 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:05:48.109173 kubelet[2545]: I0213 19:05:48.108559 2545 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:05:48.303016 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (4106) Feb 13 19:05:48.984129 
kubelet[2545]: E0213 19:05:48.984082 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:49.985131 kubelet[2545]: E0213 19:05:49.985067 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:50.985770 kubelet[2545]: E0213 19:05:50.985688 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:51.985923 kubelet[2545]: E0213 19:05:51.985852 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:52.987150 kubelet[2545]: E0213 19:05:52.987072 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:53.958429 kubelet[2545]: I0213 19:05:53.958334 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-z7mmf" podStartSLOduration=27.968108939 podStartE2EDuration="35.958312148s" podCreationTimestamp="2025-02-13 19:05:18 +0000 UTC" firstStartedPulling="2025-02-13 19:05:39.085206176 +0000 UTC m=+22.587812840" lastFinishedPulling="2025-02-13 19:05:47.075409385 +0000 UTC m=+30.578016049" observedRunningTime="2025-02-13 19:05:47.501468267 +0000 UTC m=+31.004074943" watchObservedRunningTime="2025-02-13 19:05:53.958312148 +0000 UTC m=+37.460918812" Feb 13 19:05:53.958739 kubelet[2545]: I0213 19:05:53.958689 2545 topology_manager.go:215] "Topology Admit Handler" podUID="d24d540f-8391-4caa-844f-7468a7f7c20a" podNamespace="default" podName="nfs-server-provisioner-0" Feb 13 19:05:53.987310 kubelet[2545]: E0213 19:05:53.987257 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:54.043589 kubelet[2545]: I0213 19:05:54.043488 2545 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d24d540f-8391-4caa-844f-7468a7f7c20a-data\") pod \"nfs-server-provisioner-0\" (UID: \"d24d540f-8391-4caa-844f-7468a7f7c20a\") " pod="default/nfs-server-provisioner-0" Feb 13 19:05:54.043589 kubelet[2545]: I0213 19:05:54.043566 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsl6x\" (UniqueName: \"kubernetes.io/projected/d24d540f-8391-4caa-844f-7468a7f7c20a-kube-api-access-vsl6x\") pod \"nfs-server-provisioner-0\" (UID: \"d24d540f-8391-4caa-844f-7468a7f7c20a\") " pod="default/nfs-server-provisioner-0" Feb 13 19:05:54.265875 containerd[2050]: time="2025-02-13T19:05:54.265720522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d24d540f-8391-4caa-844f-7468a7f7c20a,Namespace:default,Attempt:0,}" Feb 13 19:05:54.500390 systemd-networkd[1608]: cali60e51b789ff: Link UP Feb 13 19:05:54.503294 systemd-networkd[1608]: cali60e51b789ff: Gained carrier Feb 13 19:05:54.507314 (udev-worker)[4390]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:05:54.530012 containerd[2050]: 2025-02-13 19:05:54.363 [INFO][4373] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.27.136-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default d24d540f-8391-4caa-844f-7468a7f7c20a 1199 0 2025-02-13 19:05:53 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.27.136 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.27.136-k8s-nfs--server--provisioner--0-" Feb 13 19:05:54.530012 containerd[2050]: 2025-02-13 19:05:54.363 [INFO][4373] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.27.136-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:05:54.530012 containerd[2050]: 2025-02-13 19:05:54.416 [INFO][4384] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82" 
HandleID="k8s-pod-network.335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82" Workload="172.31.27.136-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:05:54.530012 containerd[2050]: 2025-02-13 19:05:54.437 [INFO][4384] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82" HandleID="k8s-pod-network.335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82" Workload="172.31.27.136-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316960), Attrs:map[string]string{"namespace":"default", "node":"172.31.27.136", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 19:05:54.416135365 +0000 UTC"}, Hostname:"172.31.27.136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:05:54.530012 containerd[2050]: 2025-02-13 19:05:54.438 [INFO][4384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:05:54.530012 containerd[2050]: 2025-02-13 19:05:54.438 [INFO][4384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:05:54.530012 containerd[2050]: 2025-02-13 19:05:54.438 [INFO][4384] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.27.136' Feb 13 19:05:54.530012 containerd[2050]: 2025-02-13 19:05:54.443 [INFO][4384] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82" host="172.31.27.136" Feb 13 19:05:54.530012 containerd[2050]: 2025-02-13 19:05:54.451 [INFO][4384] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.27.136" Feb 13 19:05:54.530012 containerd[2050]: 2025-02-13 19:05:54.459 [INFO][4384] ipam/ipam.go 489: Trying affinity for 192.168.4.64/26 host="172.31.27.136" Feb 13 19:05:54.530012 containerd[2050]: 2025-02-13 19:05:54.462 [INFO][4384] ipam/ipam.go 155: Attempting to load block cidr=192.168.4.64/26 host="172.31.27.136" Feb 13 19:05:54.530012 containerd[2050]: 2025-02-13 19:05:54.467 [INFO][4384] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.4.64/26 host="172.31.27.136" Feb 13 19:05:54.530012 containerd[2050]: 2025-02-13 19:05:54.468 [INFO][4384] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.4.64/26 handle="k8s-pod-network.335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82" host="172.31.27.136" Feb 13 19:05:54.530012 containerd[2050]: 2025-02-13 19:05:54.471 [INFO][4384] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82 Feb 13 19:05:54.530012 containerd[2050]: 2025-02-13 19:05:54.479 [INFO][4384] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.4.64/26 handle="k8s-pod-network.335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82" host="172.31.27.136" Feb 13 19:05:54.530012 containerd[2050]: 2025-02-13 19:05:54.491 [INFO][4384] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.4.67/26] block=192.168.4.64/26 
handle="k8s-pod-network.335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82" host="172.31.27.136" Feb 13 19:05:54.530012 containerd[2050]: 2025-02-13 19:05:54.491 [INFO][4384] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.4.67/26] handle="k8s-pod-network.335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82" host="172.31.27.136" Feb 13 19:05:54.530012 containerd[2050]: 2025-02-13 19:05:54.491 [INFO][4384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:05:54.530012 containerd[2050]: 2025-02-13 19:05:54.491 [INFO][4384] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.4.67/26] IPv6=[] ContainerID="335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82" HandleID="k8s-pod-network.335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82" Workload="172.31.27.136-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:05:54.531154 containerd[2050]: 2025-02-13 19:05:54.495 [INFO][4373] cni-plugin/k8s.go 386: Populated endpoint ContainerID="335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.27.136-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.136-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d24d540f-8391-4caa-844f-7468a7f7c20a", ResourceVersion:"1199", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 5, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.27.136", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.4.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:05:54.531154 containerd[2050]: 2025-02-13 19:05:54.495 [INFO][4373] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.4.67/32] ContainerID="335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.27.136-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:05:54.531154 containerd[2050]: 2025-02-13 19:05:54.495 [INFO][4373] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.27.136-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:05:54.531154 containerd[2050]: 2025-02-13 19:05:54.501 [INFO][4373] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.27.136-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:05:54.531531 containerd[2050]: 2025-02-13 19:05:54.502 [INFO][4373] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.27.136-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.136-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d24d540f-8391-4caa-844f-7468a7f7c20a", ResourceVersion:"1199", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 5, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.27.136", ContainerID:"335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.4.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"1e:ec:d5:79:f6:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:05:54.531531 containerd[2050]: 2025-02-13 19:05:54.518 [INFO][4373] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.27.136-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:05:54.574097 containerd[2050]: time="2025-02-13T19:05:54.573811905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:05:54.574097 containerd[2050]: time="2025-02-13T19:05:54.574031831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:05:54.574097 containerd[2050]: time="2025-02-13T19:05:54.574083961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:54.574580 containerd[2050]: time="2025-02-13T19:05:54.574305111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:05:54.676301 containerd[2050]: time="2025-02-13T19:05:54.676217298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d24d540f-8391-4caa-844f-7468a7f7c20a,Namespace:default,Attempt:0,} returns sandbox id \"335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82\"" Feb 13 19:05:54.680432 containerd[2050]: time="2025-02-13T19:05:54.680270524Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 19:05:54.987496 kubelet[2545]: E0213 19:05:54.987429 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:55.162789 systemd[1]: run-containerd-runc-k8s.io-335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82-runc.eH8tCM.mount: Deactivated successfully. 
Feb 13 19:05:55.988433 kubelet[2545]: E0213 19:05:55.988336 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:56.177448 systemd-networkd[1608]: cali60e51b789ff: Gained IPv6LL Feb 13 19:05:56.989326 kubelet[2545]: E0213 19:05:56.989162 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:57.329464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2316606188.mount: Deactivated successfully. Feb 13 19:05:57.961231 kubelet[2545]: E0213 19:05:57.961162 2545 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:57.991204 kubelet[2545]: E0213 19:05:57.990982 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:58.250544 ntpd[2008]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 19:05:58.251343 ntpd[2008]: 13 Feb 19:05:58 ntpd[2008]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 19:05:58.992029 kubelet[2545]: E0213 19:05:58.991898 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:05:59.992778 kubelet[2545]: E0213 19:05:59.992704 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:06:00.247069 containerd[2050]: time="2025-02-13T19:06:00.246015597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:06:00.248616 containerd[2050]: time="2025-02-13T19:06:00.248528933Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Feb 13 19:06:00.251296 
containerd[2050]: time="2025-02-13T19:06:00.251216692Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:06:00.258129 containerd[2050]: time="2025-02-13T19:06:00.258061481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:06:00.260475 containerd[2050]: time="2025-02-13T19:06:00.260266743Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.57994236s" Feb 13 19:06:00.260475 containerd[2050]: time="2025-02-13T19:06:00.260327301Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 13 19:06:00.265249 containerd[2050]: time="2025-02-13T19:06:00.265180006Z" level=info msg="CreateContainer within sandbox \"335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 19:06:00.292780 containerd[2050]: time="2025-02-13T19:06:00.292705214Z" level=info msg="CreateContainer within sandbox \"335c4e42ffc836646b8a7e19d03724c77094772383cfa196da1db48ff00b5f82\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"9039dde94fb80e208e168ba726ec80ff5f636824e63be3d6475ada2490674d22\"" Feb 13 19:06:00.293804 containerd[2050]: time="2025-02-13T19:06:00.293738799Z" level=info 
msg="StartContainer for \"9039dde94fb80e208e168ba726ec80ff5f636824e63be3d6475ada2490674d22\"" Feb 13 19:06:00.392464 containerd[2050]: time="2025-02-13T19:06:00.392388680Z" level=info msg="StartContainer for \"9039dde94fb80e208e168ba726ec80ff5f636824e63be3d6475ada2490674d22\" returns successfully" Feb 13 19:06:00.543238 kubelet[2545]: I0213 19:06:00.543054 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.959767685 podStartE2EDuration="7.543032885s" podCreationTimestamp="2025-02-13 19:05:53 +0000 UTC" firstStartedPulling="2025-02-13 19:05:54.679275407 +0000 UTC m=+38.181882071" lastFinishedPulling="2025-02-13 19:06:00.262540607 +0000 UTC m=+43.765147271" observedRunningTime="2025-02-13 19:06:00.542645499 +0000 UTC m=+44.045252188" watchObservedRunningTime="2025-02-13 19:06:00.543032885 +0000 UTC m=+44.045639573" Feb 13 19:06:00.993730 kubelet[2545]: E0213 19:06:00.993665 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:06:01.994721 kubelet[2545]: E0213 19:06:01.994664 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:06:02.995810 kubelet[2545]: E0213 19:06:02.995744 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:06:03.996895 kubelet[2545]: E0213 19:06:03.996817 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:06:04.998083 kubelet[2545]: E0213 19:06:04.998008 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:06:05.998848 kubelet[2545]: E0213 19:06:05.998771 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 
13 19:06:06.999640 kubelet[2545]: E0213 19:06:06.999574 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:06:08.000474 kubelet[2545]: E0213 19:06:08.000408 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:06:09.001097 kubelet[2545]: E0213 19:06:09.001023 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:06:10.001663 kubelet[2545]: E0213 19:06:10.001588 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:06:11.002422 kubelet[2545]: E0213 19:06:11.002358 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:06:12.003494 kubelet[2545]: E0213 19:06:12.003402 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:06:13.004424 kubelet[2545]: E0213 19:06:13.004368 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:06:14.005471 kubelet[2545]: E0213 19:06:14.005395 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:06:15.006554 kubelet[2545]: E0213 19:06:15.006493 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:06:16.007405 kubelet[2545]: E0213 19:06:16.007342 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:06:17.007667 kubelet[2545]: E0213 19:06:17.007601 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 
19:06:17.961126 kubelet[2545]: E0213 19:06:17.961069 2545 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:06:17.984614 containerd[2050]: time="2025-02-13T19:06:17.984434169Z" level=info msg="StopPodSandbox for \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\"" Feb 13 19:06:17.984614 containerd[2050]: time="2025-02-13T19:06:17.984606131Z" level=info msg="TearDown network for sandbox \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\" successfully" Feb 13 19:06:17.985743 containerd[2050]: time="2025-02-13T19:06:17.984628522Z" level=info msg="StopPodSandbox for \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\" returns successfully" Feb 13 19:06:17.986437 containerd[2050]: time="2025-02-13T19:06:17.986347950Z" level=info msg="RemovePodSandbox for \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\"" Feb 13 19:06:17.986437 containerd[2050]: time="2025-02-13T19:06:17.986398831Z" level=info msg="Forcibly stopping sandbox \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\"" Feb 13 19:06:17.986652 containerd[2050]: time="2025-02-13T19:06:17.986525566Z" level=info msg="TearDown network for sandbox \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\" successfully" Feb 13 19:06:17.994883 containerd[2050]: time="2025-02-13T19:06:17.994802059Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:06:17.995062 containerd[2050]: time="2025-02-13T19:06:17.994895153Z" level=info msg="RemovePodSandbox \"a535399bd58a0d19d2bda9d8c844eaac59c78d915ec71ad3baf8adab1f0457ca\" returns successfully" Feb 13 19:06:17.996054 containerd[2050]: time="2025-02-13T19:06:17.995566901Z" level=info msg="StopPodSandbox for \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\"" Feb 13 19:06:17.996054 containerd[2050]: time="2025-02-13T19:06:17.995720758Z" level=info msg="TearDown network for sandbox \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\" successfully" Feb 13 19:06:17.996054 containerd[2050]: time="2025-02-13T19:06:17.995745226Z" level=info msg="StopPodSandbox for \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\" returns successfully" Feb 13 19:06:17.996291 containerd[2050]: time="2025-02-13T19:06:17.996220231Z" level=info msg="RemovePodSandbox for \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\"" Feb 13 19:06:17.996291 containerd[2050]: time="2025-02-13T19:06:17.996261856Z" level=info msg="Forcibly stopping sandbox \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\"" Feb 13 19:06:17.996403 containerd[2050]: time="2025-02-13T19:06:17.996374232Z" level=info msg="TearDown network for sandbox \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\" successfully" Feb 13 19:06:18.002282 containerd[2050]: time="2025-02-13T19:06:18.002214299Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:06:18.002509 containerd[2050]: time="2025-02-13T19:06:18.002297200Z" level=info msg="RemovePodSandbox \"536ca3ff62fbc9849358dee76c81ae28c32aab39033df06e81cb4a543952ec49\" returns successfully" Feb 13 19:06:18.003241 containerd[2050]: time="2025-02-13T19:06:18.002884870Z" level=info msg="StopPodSandbox for \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\"" Feb 13 19:06:18.003241 containerd[2050]: time="2025-02-13T19:06:18.003067529Z" level=info msg="TearDown network for sandbox \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\" successfully" Feb 13 19:06:18.003241 containerd[2050]: time="2025-02-13T19:06:18.003089512Z" level=info msg="StopPodSandbox for \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\" returns successfully" Feb 13 19:06:18.003813 containerd[2050]: time="2025-02-13T19:06:18.003779317Z" level=info msg="RemovePodSandbox for \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\"" Feb 13 19:06:18.004732 containerd[2050]: time="2025-02-13T19:06:18.003911107Z" level=info msg="Forcibly stopping sandbox \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\"" Feb 13 19:06:18.004732 containerd[2050]: time="2025-02-13T19:06:18.004070246Z" level=info msg="TearDown network for sandbox \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\" successfully" Feb 13 19:06:18.008097 kubelet[2545]: E0213 19:06:18.008038 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:06:18.009382 containerd[2050]: time="2025-02-13T19:06:18.009323292Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:06:18.009477 containerd[2050]: time="2025-02-13T19:06:18.009406901Z" level=info msg="RemovePodSandbox \"46ece6a00819a6ca97cb31c2fccf97ee2ef445ad1be3d98331a35701d469e20f\" returns successfully" Feb 13 19:06:18.010256 containerd[2050]: time="2025-02-13T19:06:18.009978639Z" level=info msg="StopPodSandbox for \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\"" Feb 13 19:06:18.010256 containerd[2050]: time="2025-02-13T19:06:18.010134021Z" level=info msg="TearDown network for sandbox \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\" successfully" Feb 13 19:06:18.010256 containerd[2050]: time="2025-02-13T19:06:18.010155728Z" level=info msg="StopPodSandbox for \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\" returns successfully" Feb 13 19:06:18.011565 containerd[2050]: time="2025-02-13T19:06:18.010698663Z" level=info msg="RemovePodSandbox for \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\"" Feb 13 19:06:18.011565 containerd[2050]: time="2025-02-13T19:06:18.010744634Z" level=info msg="Forcibly stopping sandbox \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\"" Feb 13 19:06:18.011565 containerd[2050]: time="2025-02-13T19:06:18.010863806Z" level=info msg="TearDown network for sandbox \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\" successfully" Feb 13 19:06:18.016348 containerd[2050]: time="2025-02-13T19:06:18.016280156Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:06:18.016456 containerd[2050]: time="2025-02-13T19:06:18.016357751Z" level=info msg="RemovePodSandbox \"f59dca89eea72064a743de522a6e2113aa8776ebbf1ed59957bc9342126f230a\" returns successfully" Feb 13 19:06:18.017606 containerd[2050]: time="2025-02-13T19:06:18.017348774Z" level=info msg="StopPodSandbox for \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\"" Feb 13 19:06:18.017606 containerd[2050]: time="2025-02-13T19:06:18.017506881Z" level=info msg="TearDown network for sandbox \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\" successfully" Feb 13 19:06:18.017606 containerd[2050]: time="2025-02-13T19:06:18.017527712Z" level=info msg="StopPodSandbox for \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\" returns successfully" Feb 13 19:06:18.018191 containerd[2050]: time="2025-02-13T19:06:18.018150115Z" level=info msg="RemovePodSandbox for \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\"" Feb 13 19:06:18.018262 containerd[2050]: time="2025-02-13T19:06:18.018223099Z" level=info msg="Forcibly stopping sandbox \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\"" Feb 13 19:06:18.018375 containerd[2050]: time="2025-02-13T19:06:18.018340854Z" level=info msg="TearDown network for sandbox \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\" successfully" Feb 13 19:06:18.023696 containerd[2050]: time="2025-02-13T19:06:18.023628753Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:06:18.023817 containerd[2050]: time="2025-02-13T19:06:18.023714908Z" level=info msg="RemovePodSandbox \"97c70055d6fc80c56fb75af71054ee53e9f061de5dad050eb6e094d52ce416a9\" returns successfully" Feb 13 19:06:18.024403 containerd[2050]: time="2025-02-13T19:06:18.024364685Z" level=info msg="StopPodSandbox for \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\"" Feb 13 19:06:18.024806 containerd[2050]: time="2025-02-13T19:06:18.024670357Z" level=info msg="TearDown network for sandbox \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\" successfully" Feb 13 19:06:18.024806 containerd[2050]: time="2025-02-13T19:06:18.024697467Z" level=info msg="StopPodSandbox for \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\" returns successfully" Feb 13 19:06:18.025524 containerd[2050]: time="2025-02-13T19:06:18.025484508Z" level=info msg="RemovePodSandbox for \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\"" Feb 13 19:06:18.025655 containerd[2050]: time="2025-02-13T19:06:18.025531560Z" level=info msg="Forcibly stopping sandbox \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\"" Feb 13 19:06:18.025711 containerd[2050]: time="2025-02-13T19:06:18.025649231Z" level=info msg="TearDown network for sandbox \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\" successfully" Feb 13 19:06:18.031112 containerd[2050]: time="2025-02-13T19:06:18.031042458Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:06:18.031430 containerd[2050]: time="2025-02-13T19:06:18.031119657Z" level=info msg="RemovePodSandbox \"d216f17cbe00871d7679d01d47e29749ba87766a09dc8bc2e29d21b6f98570bd\" returns successfully" Feb 13 19:06:18.031792 containerd[2050]: time="2025-02-13T19:06:18.031723762Z" level=info msg="StopPodSandbox for \"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\"" Feb 13 19:06:18.031907 containerd[2050]: time="2025-02-13T19:06:18.031887008Z" level=info msg="TearDown network for sandbox \"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\" successfully" Feb 13 19:06:18.032010 containerd[2050]: time="2025-02-13T19:06:18.031910648Z" level=info msg="StopPodSandbox for \"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\" returns successfully" Feb 13 19:06:18.032657 containerd[2050]: time="2025-02-13T19:06:18.032421983Z" level=info msg="RemovePodSandbox for \"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\"" Feb 13 19:06:18.032657 containerd[2050]: time="2025-02-13T19:06:18.032463392Z" level=info msg="Forcibly stopping sandbox \"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\"" Feb 13 19:06:18.032657 containerd[2050]: time="2025-02-13T19:06:18.032605603Z" level=info msg="TearDown network for sandbox \"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\" successfully" Feb 13 19:06:18.038213 containerd[2050]: time="2025-02-13T19:06:18.038130908Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:06:18.038213 containerd[2050]: time="2025-02-13T19:06:18.038215275Z" level=info msg="RemovePodSandbox \"21fe237dd58846bf69a2901bf10d6778689ef98b82cf93fc6c6cffaaef303a32\" returns successfully" Feb 13 19:06:18.039530 containerd[2050]: time="2025-02-13T19:06:18.039013698Z" level=info msg="StopPodSandbox for \"e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931\"" Feb 13 19:06:18.039530 containerd[2050]: time="2025-02-13T19:06:18.039173377Z" level=info msg="TearDown network for sandbox \"e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931\" successfully" Feb 13 19:06:18.039530 containerd[2050]: time="2025-02-13T19:06:18.039194664Z" level=info msg="StopPodSandbox for \"e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931\" returns successfully" Feb 13 19:06:18.040993 containerd[2050]: time="2025-02-13T19:06:18.040098344Z" level=info msg="RemovePodSandbox for \"e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931\"" Feb 13 19:06:18.040993 containerd[2050]: time="2025-02-13T19:06:18.040170740Z" level=info msg="Forcibly stopping sandbox \"e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931\"" Feb 13 19:06:18.040993 containerd[2050]: time="2025-02-13T19:06:18.040312591Z" level=info msg="TearDown network for sandbox \"e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931\" successfully" Feb 13 19:06:18.046268 containerd[2050]: time="2025-02-13T19:06:18.046215293Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:06:18.046586 containerd[2050]: time="2025-02-13T19:06:18.046531999Z" level=info msg="RemovePodSandbox \"e4c01b5f78db784277347ffa5707c3966e2a1437a64e012d4988c57067221931\" returns successfully" Feb 13 19:06:18.047329 containerd[2050]: time="2025-02-13T19:06:18.047251867Z" level=info msg="StopPodSandbox for \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\"" Feb 13 19:06:18.047467 containerd[2050]: time="2025-02-13T19:06:18.047432293Z" level=info msg="TearDown network for sandbox \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\" successfully" Feb 13 19:06:18.047554 containerd[2050]: time="2025-02-13T19:06:18.047464553Z" level=info msg="StopPodSandbox for \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\" returns successfully" Feb 13 19:06:18.049025 containerd[2050]: time="2025-02-13T19:06:18.048046868Z" level=info msg="RemovePodSandbox for \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\"" Feb 13 19:06:18.049025 containerd[2050]: time="2025-02-13T19:06:18.048096525Z" level=info msg="Forcibly stopping sandbox \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\"" Feb 13 19:06:18.049025 containerd[2050]: time="2025-02-13T19:06:18.048253707Z" level=info msg="TearDown network for sandbox \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\" successfully" Feb 13 19:06:18.056354 containerd[2050]: time="2025-02-13T19:06:18.056136235Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:06:18.056354 containerd[2050]: time="2025-02-13T19:06:18.056223026Z" level=info msg="RemovePodSandbox \"f0f59db1009a5fab1b605bb4c11fe76b3773a2d4ca2cd9b6faa5a0f40f29ff2a\" returns successfully" Feb 13 19:06:18.059300 containerd[2050]: time="2025-02-13T19:06:18.059223229Z" level=info msg="StopPodSandbox for \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\"" Feb 13 19:06:18.059463 containerd[2050]: time="2025-02-13T19:06:18.059418507Z" level=info msg="TearDown network for sandbox \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\" successfully" Feb 13 19:06:18.059463 containerd[2050]: time="2025-02-13T19:06:18.059444608Z" level=info msg="StopPodSandbox for \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\" returns successfully" Feb 13 19:06:18.060131 containerd[2050]: time="2025-02-13T19:06:18.060088154Z" level=info msg="RemovePodSandbox for \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\"" Feb 13 19:06:18.060235 containerd[2050]: time="2025-02-13T19:06:18.060135878Z" level=info msg="Forcibly stopping sandbox \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\"" Feb 13 19:06:18.060290 containerd[2050]: time="2025-02-13T19:06:18.060263117Z" level=info msg="TearDown network for sandbox \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\" successfully" Feb 13 19:06:18.065911 containerd[2050]: time="2025-02-13T19:06:18.065839028Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:06:18.066096 containerd[2050]: time="2025-02-13T19:06:18.065919108Z" level=info msg="RemovePodSandbox \"68e18aa1bcad6bf57b5d7144cbfef5ffe1da1b0c31153e7c53fa504873860ee6\" returns successfully" Feb 13 19:06:18.066864 containerd[2050]: time="2025-02-13T19:06:18.066607160Z" level=info msg="StopPodSandbox for \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\"" Feb 13 19:06:18.066864 containerd[2050]: time="2025-02-13T19:06:18.066762649Z" level=info msg="TearDown network for sandbox \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\" successfully" Feb 13 19:06:18.066864 containerd[2050]: time="2025-02-13T19:06:18.066783324Z" level=info msg="StopPodSandbox for \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\" returns successfully" Feb 13 19:06:18.067655 containerd[2050]: time="2025-02-13T19:06:18.067598363Z" level=info msg="RemovePodSandbox for \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\"" Feb 13 19:06:18.067655 containerd[2050]: time="2025-02-13T19:06:18.067650781Z" level=info msg="Forcibly stopping sandbox \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\"" Feb 13 19:06:18.067796 containerd[2050]: time="2025-02-13T19:06:18.067774215Z" level=info msg="TearDown network for sandbox \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\" successfully" Feb 13 19:06:18.073524 containerd[2050]: time="2025-02-13T19:06:18.073235756Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:06:18.073524 containerd[2050]: time="2025-02-13T19:06:18.073318742Z" level=info msg="RemovePodSandbox \"68f894a568465ff40e960ddd6b881bccb1201ffc0f7afd0917d83114a14c8563\" returns successfully" Feb 13 19:06:18.075714 containerd[2050]: time="2025-02-13T19:06:18.075373641Z" level=info msg="StopPodSandbox for \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\"" Feb 13 19:06:18.075714 containerd[2050]: time="2025-02-13T19:06:18.075539708Z" level=info msg="TearDown network for sandbox \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\" successfully" Feb 13 19:06:18.075714 containerd[2050]: time="2025-02-13T19:06:18.075560070Z" level=info msg="StopPodSandbox for \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\" returns successfully" Feb 13 19:06:18.076695 containerd[2050]: time="2025-02-13T19:06:18.076195247Z" level=info msg="RemovePodSandbox for \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\"" Feb 13 19:06:18.076695 containerd[2050]: time="2025-02-13T19:06:18.076241362Z" level=info msg="Forcibly stopping sandbox \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\"" Feb 13 19:06:18.076695 containerd[2050]: time="2025-02-13T19:06:18.076362203Z" level=info msg="TearDown network for sandbox \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\" successfully" Feb 13 19:06:18.109233 containerd[2050]: time="2025-02-13T19:06:18.109177771Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:06:18.110433 containerd[2050]: time="2025-02-13T19:06:18.109513062Z" level=info msg="RemovePodSandbox \"c4f4a965f45eab535bcadfba1ec6fc51db0c0b32bf00cafde8a3bf5c9858b1ea\" returns successfully"
Feb 13 19:06:18.110433 containerd[2050]: time="2025-02-13T19:06:18.110164652Z" level=info msg="StopPodSandbox for \"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\""
Feb 13 19:06:18.110433 containerd[2050]: time="2025-02-13T19:06:18.110319193Z" level=info msg="TearDown network for sandbox \"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\" successfully"
Feb 13 19:06:18.110433 containerd[2050]: time="2025-02-13T19:06:18.110341524Z" level=info msg="StopPodSandbox for \"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\" returns successfully"
Feb 13 19:06:18.111424 containerd[2050]: time="2025-02-13T19:06:18.111383549Z" level=info msg="RemovePodSandbox for \"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\""
Feb 13 19:06:18.112096 containerd[2050]: time="2025-02-13T19:06:18.111594795Z" level=info msg="Forcibly stopping sandbox \"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\""
Feb 13 19:06:18.112096 containerd[2050]: time="2025-02-13T19:06:18.111721014Z" level=info msg="TearDown network for sandbox \"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\" successfully"
Feb 13 19:06:18.119900 containerd[2050]: time="2025-02-13T19:06:18.119797654Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:06:18.120268 containerd[2050]: time="2025-02-13T19:06:18.120139537Z" level=info msg="RemovePodSandbox \"f38a71ca5bf46f08fac76941a8eb6255f6fb40fffd3dc269a116ac4cf0c3a480\" returns successfully"
Feb 13 19:06:18.121530 containerd[2050]: time="2025-02-13T19:06:18.120893009Z" level=info msg="StopPodSandbox for \"46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4\""
Feb 13 19:06:18.121530 containerd[2050]: time="2025-02-13T19:06:18.121122312Z" level=info msg="TearDown network for sandbox \"46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4\" successfully"
Feb 13 19:06:18.121530 containerd[2050]: time="2025-02-13T19:06:18.121145628Z" level=info msg="StopPodSandbox for \"46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4\" returns successfully"
Feb 13 19:06:18.121863 containerd[2050]: time="2025-02-13T19:06:18.121810280Z" level=info msg="RemovePodSandbox for \"46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4\""
Feb 13 19:06:18.121863 containerd[2050]: time="2025-02-13T19:06:18.121852421Z" level=info msg="Forcibly stopping sandbox \"46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4\""
Feb 13 19:06:18.122054 containerd[2050]: time="2025-02-13T19:06:18.122002916Z" level=info msg="TearDown network for sandbox \"46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4\" successfully"
Feb 13 19:06:18.127676 containerd[2050]: time="2025-02-13T19:06:18.127579943Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:06:18.127790 containerd[2050]: time="2025-02-13T19:06:18.127729274Z" level=info msg="RemovePodSandbox \"46223c5acf75a4475f3580a8ba82da23a088cda7216ef07722cec08796ca85e4\" returns successfully"
Feb 13 19:06:19.009085 kubelet[2545]: E0213 19:06:19.009021 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:20.009423 kubelet[2545]: E0213 19:06:20.009369 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:21.010370 kubelet[2545]: E0213 19:06:21.010292 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:22.011376 kubelet[2545]: E0213 19:06:22.011313 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:23.011740 kubelet[2545]: E0213 19:06:23.011665 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:24.012101 kubelet[2545]: E0213 19:06:24.012035 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:25.012935 kubelet[2545]: E0213 19:06:25.012873 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:25.763849 kubelet[2545]: I0213 19:06:25.763712 2545 topology_manager.go:215] "Topology Admit Handler" podUID="7e2b0b76-9e00-40dc-8fdc-bc1eba6341dd" podNamespace="default" podName="test-pod-1"
Feb 13 19:06:25.946892 kubelet[2545]: I0213 19:06:25.946839 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhc4v\" (UniqueName: \"kubernetes.io/projected/7e2b0b76-9e00-40dc-8fdc-bc1eba6341dd-kube-api-access-jhc4v\") pod \"test-pod-1\" (UID: \"7e2b0b76-9e00-40dc-8fdc-bc1eba6341dd\") " pod="default/test-pod-1"
Feb 13 19:06:25.947065 kubelet[2545]: I0213 19:06:25.946907 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f0e4a782-29b2-41e6-a311-16a64515cb1f\" (UniqueName: \"kubernetes.io/nfs/7e2b0b76-9e00-40dc-8fdc-bc1eba6341dd-pvc-f0e4a782-29b2-41e6-a311-16a64515cb1f\") pod \"test-pod-1\" (UID: \"7e2b0b76-9e00-40dc-8fdc-bc1eba6341dd\") " pod="default/test-pod-1"
Feb 13 19:06:26.013350 kubelet[2545]: E0213 19:06:26.013251 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:26.085008 kernel: FS-Cache: Loaded
Feb 13 19:06:26.131726 kernel: RPC: Registered named UNIX socket transport module.
Feb 13 19:06:26.131849 kernel: RPC: Registered udp transport module.
Feb 13 19:06:26.131882 kernel: RPC: Registered tcp transport module.
Feb 13 19:06:26.132618 kernel: RPC: Registered tcp-with-tls transport module.
Feb 13 19:06:26.134853 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 13 19:06:26.478690 kernel: NFS: Registering the id_resolver key type
Feb 13 19:06:26.478789 kernel: Key type id_resolver registered
Feb 13 19:06:26.479633 kernel: Key type id_legacy registered
Feb 13 19:06:26.520849 nfsidmap[4605]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 13 19:06:26.527252 nfsidmap[4606]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 13 19:06:26.671126 containerd[2050]: time="2025-02-13T19:06:26.671042375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7e2b0b76-9e00-40dc-8fdc-bc1eba6341dd,Namespace:default,Attempt:0,}"
Feb 13 19:06:26.858358 (udev-worker)[4594]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:06:26.860816 systemd-networkd[1608]: cali5ec59c6bf6e: Link UP
Feb 13 19:06:26.862620 systemd-networkd[1608]: cali5ec59c6bf6e: Gained carrier
Feb 13 19:06:26.884259 containerd[2050]: 2025-02-13 19:06:26.747 [INFO][4607] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.27.136-k8s-test--pod--1-eth0 default 7e2b0b76-9e00-40dc-8fdc-bc1eba6341dd 1307 0 2025-02-13 19:05:54 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.27.136 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.27.136-k8s-test--pod--1-"
Feb 13 19:06:26.884259 containerd[2050]: 2025-02-13 19:06:26.747 [INFO][4607] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.27.136-k8s-test--pod--1-eth0"
Feb 13 19:06:26.884259 containerd[2050]: 2025-02-13 19:06:26.792 [INFO][4618] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b" HandleID="k8s-pod-network.40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b" Workload="172.31.27.136-k8s-test--pod--1-eth0"
Feb 13 19:06:26.884259 containerd[2050]: 2025-02-13 19:06:26.809 [INFO][4618] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b" HandleID="k8s-pod-network.40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b" Workload="172.31.27.136-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000290b70), Attrs:map[string]string{"namespace":"default", "node":"172.31.27.136", "pod":"test-pod-1", "timestamp":"2025-02-13 19:06:26.792799184 +0000 UTC"}, Hostname:"172.31.27.136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 19:06:26.884259 containerd[2050]: 2025-02-13 19:06:26.809 [INFO][4618] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:06:26.884259 containerd[2050]: 2025-02-13 19:06:26.809 [INFO][4618] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:06:26.884259 containerd[2050]: 2025-02-13 19:06:26.809 [INFO][4618] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.27.136'
Feb 13 19:06:26.884259 containerd[2050]: 2025-02-13 19:06:26.812 [INFO][4618] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b" host="172.31.27.136"
Feb 13 19:06:26.884259 containerd[2050]: 2025-02-13 19:06:26.820 [INFO][4618] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.27.136"
Feb 13 19:06:26.884259 containerd[2050]: 2025-02-13 19:06:26.827 [INFO][4618] ipam/ipam.go 489: Trying affinity for 192.168.4.64/26 host="172.31.27.136"
Feb 13 19:06:26.884259 containerd[2050]: 2025-02-13 19:06:26.829 [INFO][4618] ipam/ipam.go 155: Attempting to load block cidr=192.168.4.64/26 host="172.31.27.136"
Feb 13 19:06:26.884259 containerd[2050]: 2025-02-13 19:06:26.833 [INFO][4618] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.4.64/26 host="172.31.27.136"
Feb 13 19:06:26.884259 containerd[2050]: 2025-02-13 19:06:26.833 [INFO][4618] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.4.64/26 handle="k8s-pod-network.40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b" host="172.31.27.136"
Feb 13 19:06:26.884259 containerd[2050]: 2025-02-13 19:06:26.836 [INFO][4618] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b
Feb 13 19:06:26.884259 containerd[2050]: 2025-02-13 19:06:26.841 [INFO][4618] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.4.64/26 handle="k8s-pod-network.40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b" host="172.31.27.136"
Feb 13 19:06:26.884259 containerd[2050]: 2025-02-13 19:06:26.851 [INFO][4618] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.4.68/26] block=192.168.4.64/26 handle="k8s-pod-network.40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b" host="172.31.27.136"
Feb 13 19:06:26.884259 containerd[2050]: 2025-02-13 19:06:26.851 [INFO][4618] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.4.68/26] handle="k8s-pod-network.40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b" host="172.31.27.136"
Feb 13 19:06:26.884259 containerd[2050]: 2025-02-13 19:06:26.851 [INFO][4618] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:06:26.884259 containerd[2050]: 2025-02-13 19:06:26.851 [INFO][4618] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.4.68/26] IPv6=[] ContainerID="40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b" HandleID="k8s-pod-network.40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b" Workload="172.31.27.136-k8s-test--pod--1-eth0"
Feb 13 19:06:26.884259 containerd[2050]: 2025-02-13 19:06:26.854 [INFO][4607] cni-plugin/k8s.go 386: Populated endpoint ContainerID="40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.27.136-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.136-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"7e2b0b76-9e00-40dc-8fdc-bc1eba6341dd", ResourceVersion:"1307", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 5, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.27.136", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.4.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:06:26.886232 containerd[2050]: 2025-02-13 19:06:26.855 [INFO][4607] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.4.68/32] ContainerID="40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.27.136-k8s-test--pod--1-eth0"
Feb 13 19:06:26.886232 containerd[2050]: 2025-02-13 19:06:26.855 [INFO][4607] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.27.136-k8s-test--pod--1-eth0"
Feb 13 19:06:26.886232 containerd[2050]: 2025-02-13 19:06:26.863 [INFO][4607] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.27.136-k8s-test--pod--1-eth0"
Feb 13 19:06:26.886232 containerd[2050]: 2025-02-13 19:06:26.863 [INFO][4607] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.27.136-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.136-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"7e2b0b76-9e00-40dc-8fdc-bc1eba6341dd", ResourceVersion:"1307", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 5, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.27.136", ContainerID:"40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.4.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"d6:45:c3:88:e6:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:06:26.886232 containerd[2050]: 2025-02-13 19:06:26.879 [INFO][4607] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.27.136-k8s-test--pod--1-eth0"
Feb 13 19:06:26.924421 containerd[2050]: time="2025-02-13T19:06:26.924292174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:06:26.924639 containerd[2050]: time="2025-02-13T19:06:26.924597787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:06:26.924777 containerd[2050]: time="2025-02-13T19:06:26.924737621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:06:26.925093 containerd[2050]: time="2025-02-13T19:06:26.925048420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:06:27.006212 containerd[2050]: time="2025-02-13T19:06:27.006134868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7e2b0b76-9e00-40dc-8fdc-bc1eba6341dd,Namespace:default,Attempt:0,} returns sandbox id \"40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b\""
Feb 13 19:06:27.009572 containerd[2050]: time="2025-02-13T19:06:27.009528244Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 19:06:27.013460 kubelet[2545]: E0213 19:06:27.013352 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:27.378355 containerd[2050]: time="2025-02-13T19:06:27.378270878Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:06:27.380436 containerd[2050]: time="2025-02-13T19:06:27.380351661Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 19:06:27.386254 containerd[2050]: time="2025-02-13T19:06:27.386191440Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 376.370226ms"
Feb 13 19:06:27.386254 containerd[2050]: time="2025-02-13T19:06:27.386248588Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\""
Feb 13 19:06:27.390566 containerd[2050]: time="2025-02-13T19:06:27.390360259Z" level=info msg="CreateContainer within sandbox \"40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 19:06:27.419053 containerd[2050]: time="2025-02-13T19:06:27.418942920Z" level=info msg="CreateContainer within sandbox \"40fe1628a8a05a78de72f9a124d19b94efb60684a228530797556eaeb22ee50b\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"3115b47638974531317a9d7a0d1e04f5b19c15ddf045097510dade422852a929\""
Feb 13 19:06:27.420433 containerd[2050]: time="2025-02-13T19:06:27.420295155Z" level=info msg="StartContainer for \"3115b47638974531317a9d7a0d1e04f5b19c15ddf045097510dade422852a929\""
Feb 13 19:06:27.526447 containerd[2050]: time="2025-02-13T19:06:27.526269249Z" level=info msg="StartContainer for \"3115b47638974531317a9d7a0d1e04f5b19c15ddf045097510dade422852a929\" returns successfully"
Feb 13 19:06:28.013734 kubelet[2545]: E0213 19:06:28.013679 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:28.066713 systemd[1]: run-containerd-runc-k8s.io-3115b47638974531317a9d7a0d1e04f5b19c15ddf045097510dade422852a929-runc.QbrXBo.mount: Deactivated successfully.
Feb 13 19:06:28.881670 systemd-networkd[1608]: cali5ec59c6bf6e: Gained IPv6LL
Feb 13 19:06:29.013860 kubelet[2545]: E0213 19:06:29.013797 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:30.014780 kubelet[2545]: E0213 19:06:30.014718 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:31.015711 kubelet[2545]: E0213 19:06:31.015647 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:31.250377 ntpd[2008]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Feb 13 19:06:31.250920 ntpd[2008]: 13 Feb 19:06:31 ntpd[2008]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Feb 13 19:06:32.016088 kubelet[2545]: E0213 19:06:32.016012 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:32.388878 systemd[1]: run-containerd-runc-k8s.io-14dd8f37c4d5c0d800a016f2b8dfa2f464a584f42c32890f6d76da62ba48c199-runc.DTGxjk.mount: Deactivated successfully.
Feb 13 19:06:33.016690 kubelet[2545]: E0213 19:06:33.016612 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:34.016991 kubelet[2545]: E0213 19:06:34.016899 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:35.017794 kubelet[2545]: E0213 19:06:35.017721 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:36.018492 kubelet[2545]: E0213 19:06:36.018417 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:37.019044 kubelet[2545]: E0213 19:06:37.018883 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:37.960846 kubelet[2545]: E0213 19:06:37.960786 2545 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:38.019353 kubelet[2545]: E0213 19:06:38.019300 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:39.020022 kubelet[2545]: E0213 19:06:39.019936 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:40.020817 kubelet[2545]: E0213 19:06:40.020745 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:41.021668 kubelet[2545]: E0213 19:06:41.021602 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:42.022632 kubelet[2545]: E0213 19:06:42.022565 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:43.023036 kubelet[2545]: E0213 19:06:43.022948 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:44.024146 kubelet[2545]: E0213 19:06:44.024079 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:45.024329 kubelet[2545]: E0213 19:06:45.024244 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:46.025484 kubelet[2545]: E0213 19:06:46.025423 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:47.026113 kubelet[2545]: E0213 19:06:47.026050 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:48.027094 kubelet[2545]: E0213 19:06:48.027022 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:49.027759 kubelet[2545]: E0213 19:06:49.027686 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:50.028534 kubelet[2545]: E0213 19:06:50.028479 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:50.531535 kubelet[2545]: E0213 19:06:50.531322 2545 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.136?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:06:51.029300 kubelet[2545]: E0213 19:06:51.029228 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:52.029766 kubelet[2545]: E0213 19:06:52.029697 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:53.030247 kubelet[2545]: E0213 19:06:53.030191 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:54.030852 kubelet[2545]: E0213 19:06:54.030796 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:55.031977 kubelet[2545]: E0213 19:06:55.031892 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:56.032797 kubelet[2545]: E0213 19:06:56.032730 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:57.033605 kubelet[2545]: E0213 19:06:57.033534 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:57.961226 kubelet[2545]: E0213 19:06:57.961174 2545 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:58.034675 kubelet[2545]: E0213 19:06:58.034612 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:06:59.035365 kubelet[2545]: E0213 19:06:59.035301 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:00.036201 kubelet[2545]: E0213 19:07:00.036135 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:00.532256 kubelet[2545]: E0213 19:07:00.532086 2545 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.136?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:07:01.036846 kubelet[2545]: E0213 19:07:01.036781 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:02.037627 kubelet[2545]: E0213 19:07:02.037544 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:03.038100 kubelet[2545]: E0213 19:07:03.038026 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:03.266864 kubelet[2545]: E0213 19:07:03.266070 2545 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.136?timeout=10s\": unexpected EOF"
Feb 13 19:07:03.266864 kubelet[2545]: E0213 19:07:03.266894 2545 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.136?timeout=10s\": dial tcp 172.31.16.173:6443: connect: connection refused"
Feb 13 19:07:03.271045 kubelet[2545]: E0213 19:07:03.268790 2545 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.136?timeout=10s\": dial tcp 172.31.16.173:6443: connect: connection refused"
Feb 13 19:07:03.271045 kubelet[2545]: I0213 19:07:03.268856 2545 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 13 19:07:03.271731 kubelet[2545]: E0213 19:07:03.271638 2545 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.136?timeout=10s\": dial tcp 172.31.16.173:6443: connect: connection refused" interval="200ms"
Feb 13 19:07:03.473591 kubelet[2545]: E0213 19:07:03.473526 2545 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.136?timeout=10s\": dial tcp 172.31.16.173:6443: connect: connection refused" interval="400ms"
Feb 13 19:07:03.874675 kubelet[2545]: E0213 19:07:03.874524 2545 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.136?timeout=10s\": dial tcp 172.31.16.173:6443: connect: connection refused" interval="800ms"
Feb 13 19:07:04.038389 kubelet[2545]: E0213 19:07:04.038329 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:04.675746 kubelet[2545]: E0213 19:07:04.675680 2545 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.136?timeout=10s\": dial tcp 172.31.16.173:6443: connect: connection refused" interval="1.6s"
Feb 13 19:07:05.038811 kubelet[2545]: E0213 19:07:05.038750 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:06.039769 kubelet[2545]: E0213 19:07:06.039694 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:06.276544 kubelet[2545]: E0213 19:07:06.276457 2545 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.136?timeout=10s\": dial tcp 172.31.16.173:6443: connect: connection refused" interval="3.2s"
Feb 13 19:07:07.040410 kubelet[2545]: E0213 19:07:07.040349 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:08.041344 kubelet[2545]: E0213 19:07:08.041280 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:09.043162 kubelet[2545]: E0213 19:07:09.043054 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:10.043282 kubelet[2545]: E0213 19:07:10.043168 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:11.043687 kubelet[2545]: E0213 19:07:11.043624 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:12.043797 kubelet[2545]: E0213 19:07:12.043735 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:13.044146 kubelet[2545]: E0213 19:07:13.044085 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:14.044328 kubelet[2545]: E0213 19:07:14.044244 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:15.044608 kubelet[2545]: E0213 19:07:15.044546 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:16.045240 kubelet[2545]: E0213 19:07:16.045177 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:17.046115 kubelet[2545]: E0213 19:07:17.046042 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:17.961484 kubelet[2545]: E0213 19:07:17.961410 2545 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:18.046289 kubelet[2545]: E0213 19:07:18.046197 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:19.046433 kubelet[2545]: E0213 19:07:19.046368 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:19.477722 kubelet[2545]: E0213 19:07:19.477649 2545 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.136?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Feb 13 19:07:20.047064 kubelet[2545]: E0213 19:07:20.047000 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:20.310471 kubelet[2545]: E0213 19:07:20.310306 2545 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.27.136\": Get \"https://172.31.16.173:6443/api/v1/nodes/172.31.27.136?resourceVersion=0&timeout=10s\": dial tcp 172.31.16.173:6443: i/o timeout"
Feb 13 19:07:21.047514 kubelet[2545]: E0213 19:07:21.047438 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:22.048306 kubelet[2545]: E0213 19:07:22.048237 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:23.048473 kubelet[2545]: E0213 19:07:23.048388 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:07:24.049165 kubelet[2545]: E0213 19:07:24.049080 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13
19:07:25.050075 kubelet[2545]: E0213 19:07:25.050011 2545 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"