Feb 13 19:50:00.219495 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 19:50:00.219544 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 19:50:00.219571 kernel: KASLR disabled due to lack of seed
Feb 13 19:50:00.219588 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:50:00.219604 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Feb 13 19:50:00.219620 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:50:00.219638 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 19:50:00.219654 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 19:50:00.219671 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:50:00.219687 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 19:50:00.219708 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:50:00.219725 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 19:50:00.219780 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 19:50:00.219807 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 19:50:00.219827 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:50:00.219852 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 19:50:00.219871 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 19:50:00.219888 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 19:50:00.219904 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 19:50:00.219921 kernel: printk: bootconsole [uart0] enabled
Feb 13 19:50:00.219937 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:50:00.219954 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:50:00.219971 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 19:50:00.219987 kernel: Zone ranges:
Feb 13 19:50:00.220004 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 19:50:00.220020 kernel: DMA32 empty
Feb 13 19:50:00.220041 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 19:50:00.220058 kernel: Movable zone start for each node
Feb 13 19:50:00.220075 kernel: Early memory node ranges
Feb 13 19:50:00.220092 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 19:50:00.220108 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 19:50:00.220124 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 19:50:00.220141 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 19:50:00.220157 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 19:50:00.220175 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 19:50:00.220192 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 19:50:00.220209 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 19:50:00.220226 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:50:00.220248 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 19:50:00.220265 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:50:00.220289 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 19:50:00.220306 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:50:00.220324 kernel: psci: Trusted OS migration not required
Feb 13 19:50:00.220345 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:50:00.220363 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:50:00.220380 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:50:00.220397 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 19:50:00.220415 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:50:00.220433 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:50:00.220450 kernel: CPU features: detected: Spectre-v2
Feb 13 19:50:00.220467 kernel: CPU features: detected: Spectre-v3a
Feb 13 19:50:00.220484 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:50:00.220502 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 19:50:00.220520 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 19:50:00.220542 kernel: alternatives: applying boot alternatives
Feb 13 19:50:00.220562 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:50:00.220581 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:50:00.220599 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:50:00.220617 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:50:00.220634 kernel: Fallback order for Node 0: 0
Feb 13 19:50:00.220652 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 19:50:00.220669 kernel: Policy zone: Normal
Feb 13 19:50:00.220687 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:50:00.220705 kernel: software IO TLB: area num 2.
Feb 13 19:50:00.220722 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 19:50:00.222807 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Feb 13 19:50:00.222850 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:50:00.222868 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:50:00.222888 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:50:00.222907 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:50:00.222926 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:50:00.222944 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:50:00.222962 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:50:00.222981 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:50:00.222999 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:50:00.223016 kernel: GICv3: 96 SPIs implemented
Feb 13 19:50:00.223047 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:50:00.223066 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:50:00.223085 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 19:50:00.223102 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 19:50:00.223120 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 19:50:00.223139 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:50:00.223157 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:50:00.223175 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 19:50:00.223193 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 19:50:00.223211 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 19:50:00.223230 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:50:00.223247 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 19:50:00.223272 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 19:50:00.223290 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 19:50:00.223308 kernel: Console: colour dummy device 80x25
Feb 13 19:50:00.223328 kernel: printk: console [tty1] enabled
Feb 13 19:50:00.223347 kernel: ACPI: Core revision 20230628
Feb 13 19:50:00.223368 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 19:50:00.223387 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:50:00.223405 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:50:00.223424 kernel: landlock: Up and running.
Feb 13 19:50:00.223449 kernel: SELinux: Initializing.
Feb 13 19:50:00.223468 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:50:00.223486 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:50:00.223505 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:50:00.223522 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:50:00.223541 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:50:00.223560 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:50:00.223578 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 19:50:00.223597 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 19:50:00.223619 kernel: Remapping and enabling EFI services.
Feb 13 19:50:00.223638 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:50:00.223656 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:50:00.223674 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 19:50:00.223692 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 19:50:00.223710 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 19:50:00.223728 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:50:00.223787 kernel: SMP: Total of 2 processors activated.
Feb 13 19:50:00.223814 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:50:00.223841 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 19:50:00.223859 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:50:00.223878 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:50:00.223911 kernel: alternatives: applying system-wide alternatives
Feb 13 19:50:00.223934 kernel: devtmpfs: initialized
Feb 13 19:50:00.223953 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:50:00.223972 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:50:00.223991 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:50:00.224009 kernel: SMBIOS 3.0.0 present.
Feb 13 19:50:00.224028 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 19:50:00.224051 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:50:00.224071 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:50:00.224090 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:50:00.224109 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:50:00.224128 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:50:00.224146 kernel: audit: type=2000 audit(0.288:1): state=initialized audit_enabled=0 res=1
Feb 13 19:50:00.224165 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:50:00.224188 kernel: cpuidle: using governor menu
Feb 13 19:50:00.224207 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:50:00.224226 kernel: ASID allocator initialised with 65536 entries
Feb 13 19:50:00.224244 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:50:00.224263 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:50:00.224281 kernel: Modules: 17520 pages in range for non-PLT usage
Feb 13 19:50:00.224301 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 19:50:00.224319 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:50:00.224338 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:50:00.224362 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:50:00.224381 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:50:00.224400 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:50:00.224418 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:50:00.224437 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:50:00.224456 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:50:00.224474 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:50:00.224493 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:50:00.224511 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:50:00.224534 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:50:00.224554 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:50:00.224572 kernel: ACPI: Interpreter enabled
Feb 13 19:50:00.224591 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:50:00.224609 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:50:00.224629 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 19:50:00.225096 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:50:00.225377 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:50:00.225619 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:50:00.225925 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 19:50:00.226150 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 19:50:00.226182 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 19:50:00.226204 kernel: acpiphp: Slot [1] registered
Feb 13 19:50:00.226224 kernel: acpiphp: Slot [2] registered
Feb 13 19:50:00.226243 kernel: acpiphp: Slot [3] registered
Feb 13 19:50:00.226262 kernel: acpiphp: Slot [4] registered
Feb 13 19:50:00.226293 kernel: acpiphp: Slot [5] registered
Feb 13 19:50:00.226313 kernel: acpiphp: Slot [6] registered
Feb 13 19:50:00.226331 kernel: acpiphp: Slot [7] registered
Feb 13 19:50:00.226349 kernel: acpiphp: Slot [8] registered
Feb 13 19:50:00.226368 kernel: acpiphp: Slot [9] registered
Feb 13 19:50:00.226387 kernel: acpiphp: Slot [10] registered
Feb 13 19:50:00.226406 kernel: acpiphp: Slot [11] registered
Feb 13 19:50:00.226426 kernel: acpiphp: Slot [12] registered
Feb 13 19:50:00.226445 kernel: acpiphp: Slot [13] registered
Feb 13 19:50:00.226463 kernel: acpiphp: Slot [14] registered
Feb 13 19:50:00.226487 kernel: acpiphp: Slot [15] registered
Feb 13 19:50:00.226507 kernel: acpiphp: Slot [16] registered
Feb 13 19:50:00.226526 kernel: acpiphp: Slot [17] registered
Feb 13 19:50:00.226544 kernel: acpiphp: Slot [18] registered
Feb 13 19:50:00.226563 kernel: acpiphp: Slot [19] registered
Feb 13 19:50:00.226582 kernel: acpiphp: Slot [20] registered
Feb 13 19:50:00.226601 kernel: acpiphp: Slot [21] registered
Feb 13 19:50:00.226620 kernel: acpiphp: Slot [22] registered
Feb 13 19:50:00.226639 kernel: acpiphp: Slot [23] registered
Feb 13 19:50:00.226663 kernel: acpiphp: Slot [24] registered
Feb 13 19:50:00.226682 kernel: acpiphp: Slot [25] registered
Feb 13 19:50:00.226701 kernel: acpiphp: Slot [26] registered
Feb 13 19:50:00.226720 kernel: acpiphp: Slot [27] registered
Feb 13 19:50:00.226738 kernel: acpiphp: Slot [28] registered
Feb 13 19:50:00.229236 kernel: acpiphp: Slot [29] registered
Feb 13 19:50:00.229259 kernel: acpiphp: Slot [30] registered
Feb 13 19:50:00.229279 kernel: acpiphp: Slot [31] registered
Feb 13 19:50:00.229298 kernel: PCI host bridge to bus 0000:00
Feb 13 19:50:00.229588 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 19:50:00.229837 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:50:00.230029 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:50:00.230220 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 19:50:00.230489 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 19:50:00.230734 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 19:50:00.231043 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 19:50:00.231293 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:50:00.231519 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 19:50:00.231739 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:50:00.232019 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:50:00.232248 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 19:50:00.232485 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 19:50:00.232708 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 19:50:00.235293 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:50:00.235561 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 19:50:00.235895 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 19:50:00.236128 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 19:50:00.236348 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 19:50:00.236581 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 19:50:00.236868 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 19:50:00.237067 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:50:00.237250 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:50:00.237277 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:50:00.237297 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:50:00.237316 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:50:00.237335 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:50:00.237355 kernel: iommu: Default domain type: Translated
Feb 13 19:50:00.237373 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:50:00.237403 kernel: efivars: Registered efivars operations
Feb 13 19:50:00.237423 kernel: vgaarb: loaded
Feb 13 19:50:00.237443 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:50:00.237461 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:50:00.237480 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:50:00.237498 kernel: pnp: PnP ACPI init
Feb 13 19:50:00.237809 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 19:50:00.237846 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:50:00.237875 kernel: NET: Registered PF_INET protocol family
Feb 13 19:50:00.237895 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:50:00.237914 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:50:00.237933 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:50:00.237951 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:50:00.237970 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:50:00.237989 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:50:00.238008 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:50:00.238026 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:50:00.238051 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:50:00.238070 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:50:00.238088 kernel: kvm [1]: HYP mode not available
Feb 13 19:50:00.238107 kernel: Initialise system trusted keyrings
Feb 13 19:50:00.238128 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:50:00.238147 kernel: Key type asymmetric registered
Feb 13 19:50:00.238165 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:50:00.238183 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:50:00.238202 kernel: io scheduler mq-deadline registered
Feb 13 19:50:00.238226 kernel: io scheduler kyber registered
Feb 13 19:50:00.238246 kernel: io scheduler bfq registered
Feb 13 19:50:00.238500 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 19:50:00.238532 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:50:00.238551 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:50:00.238570 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 19:50:00.238589 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 19:50:00.238607 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:50:00.238633 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 19:50:00.238999 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 19:50:00.239031 kernel: printk: console [ttyS0] disabled
Feb 13 19:50:00.239051 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 19:50:00.239070 kernel: printk: console [ttyS0] enabled
Feb 13 19:50:00.239089 kernel: printk: bootconsole [uart0] disabled
Feb 13 19:50:00.239107 kernel: thunder_xcv, ver 1.0
Feb 13 19:50:00.239126 kernel: thunder_bgx, ver 1.0
Feb 13 19:50:00.239144 kernel: nicpf, ver 1.0
Feb 13 19:50:00.239171 kernel: nicvf, ver 1.0
Feb 13 19:50:00.239389 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:50:00.239589 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:49:59 UTC (1739476199)
Feb 13 19:50:00.239619 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:50:00.239640 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 19:50:00.239659 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:50:00.239677 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:50:00.239696 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:50:00.239722 kernel: Segment Routing with IPv6
Feb 13 19:50:00.239762 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:50:00.239788 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:50:00.239809 kernel: Key type dns_resolver registered
Feb 13 19:50:00.239828 kernel: registered taskstats version 1
Feb 13 19:50:00.239847 kernel: Loading compiled-in X.509 certificates
Feb 13 19:50:00.239866 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 19:50:00.239885 kernel: Key type .fscrypt registered
Feb 13 19:50:00.239903 kernel: Key type fscrypt-provisioning registered
Feb 13 19:50:00.239929 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:50:00.239949 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:50:00.239968 kernel: ima: No architecture policies found
Feb 13 19:50:00.239986 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:50:00.240006 kernel: clk: Disabling unused clocks
Feb 13 19:50:00.240025 kernel: Freeing unused kernel memory: 39360K
Feb 13 19:50:00.240044 kernel: Run /init as init process
Feb 13 19:50:00.240063 kernel: with arguments:
Feb 13 19:50:00.240081 kernel: /init
Feb 13 19:50:00.240100 kernel: with environment:
Feb 13 19:50:00.240124 kernel: HOME=/
Feb 13 19:50:00.240143 kernel: TERM=linux
Feb 13 19:50:00.240162 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:50:00.240187 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:50:00.240212 systemd[1]: Detected virtualization amazon.
Feb 13 19:50:00.240234 systemd[1]: Detected architecture arm64.
Feb 13 19:50:00.240254 systemd[1]: Running in initrd.
Feb 13 19:50:00.240280 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:50:00.240300 systemd[1]: Hostname set to .
Feb 13 19:50:00.240321 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:50:00.240341 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:50:00.240362 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:50:00.240383 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:50:00.240405 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:50:00.240427 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:50:00.240452 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:50:00.240473 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:50:00.240496 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:50:00.240518 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:50:00.240538 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:50:00.240558 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:50:00.240578 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:50:00.240604 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:50:00.240624 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:50:00.240644 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:50:00.240664 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:50:00.240684 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:50:00.240704 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:50:00.240725 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:50:00.240768 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:50:00.240841 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:50:00.240873 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:50:00.240894 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:50:00.240915 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:50:00.240937 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:50:00.240958 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:50:00.240979 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:50:00.241001 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:50:00.241022 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:50:00.241049 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:50:00.241072 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:50:00.241093 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:50:00.241115 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:50:00.241188 systemd-journald[251]: Collecting audit messages is disabled.
Feb 13 19:50:00.241244 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:50:00.241267 systemd-journald[251]: Journal started
Feb 13 19:50:00.241311 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2b3e1eff2c3a0a9e479c2a96ead8c4) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:50:00.215859 systemd-modules-load[252]: Inserted module 'overlay'
Feb 13 19:50:00.257937 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:50:00.258017 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:50:00.261836 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:50:00.276891 systemd-modules-load[252]: Inserted module 'br_netfilter'
Feb 13 19:50:00.280951 kernel: Bridge firewalling registered
Feb 13 19:50:00.284127 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:50:00.297022 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:50:00.302513 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:50:00.303540 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:50:00.316046 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:50:00.317641 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:50:00.357251 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:50:00.380109 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:50:00.388439 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:50:00.398603 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:50:00.403588 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:50:00.420529 dracut-cmdline[282]: dracut-dracut-053
Feb 13 19:50:00.427903 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:50:00.429467 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:50:00.503588 systemd-resolved[295]: Positive Trust Anchors:
Feb 13 19:50:00.503632 systemd-resolved[295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:50:00.503697 systemd-resolved[295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:50:00.597777 kernel: SCSI subsystem initialized
Feb 13 19:50:00.602787 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:50:00.615783 kernel: iscsi: registered transport (tcp)
Feb 13 19:50:00.638787 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:50:00.638857 kernel: QLogic iSCSI HBA Driver
Feb 13 19:50:00.727796 kernel: random: crng init done
Feb 13 19:50:00.728028 systemd-resolved[295]: Defaulting to hostname 'linux'.
Feb 13 19:50:00.731456 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:50:00.740728 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:50:00.754837 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:50:00.768041 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:50:00.810814 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:50:00.810892 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:50:00.810921 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:50:00.879818 kernel: raid6: neonx8 gen() 6747 MB/s Feb 13 19:50:00.896781 kernel: raid6: neonx4 gen() 6546 MB/s Feb 13 19:50:00.913782 kernel: raid6: neonx2 gen() 5463 MB/s Feb 13 19:50:00.930797 kernel: raid6: neonx1 gen() 3958 MB/s Feb 13 19:50:00.947783 kernel: raid6: int64x8 gen() 3824 MB/s Feb 13 19:50:00.964780 kernel: raid6: int64x4 gen() 3726 MB/s Feb 13 19:50:00.981781 kernel: raid6: int64x2 gen() 3613 MB/s Feb 13 19:50:00.999519 kernel: raid6: int64x1 gen() 2761 MB/s Feb 13 19:50:00.999567 kernel: raid6: using algorithm neonx8 gen() 6747 MB/s Feb 13 19:50:01.017561 kernel: raid6: .... xor() 4811 MB/s, rmw enabled Feb 13 19:50:01.017643 kernel: raid6: using neon recovery algorithm Feb 13 19:50:01.025788 kernel: xor: measuring software checksum speed Feb 13 19:50:01.025867 kernel: 8regs : 10211 MB/sec Feb 13 19:50:01.027777 kernel: 32regs : 11152 MB/sec Feb 13 19:50:01.029779 kernel: arm64_neon : 8961 MB/sec Feb 13 19:50:01.029816 kernel: xor: using function: 32regs (11152 MB/sec) Feb 13 19:50:01.112805 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:50:01.133458 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Feb 13 19:50:01.143072 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:50:01.185258 systemd-udevd[469]: Using default interface naming scheme 'v255'. Feb 13 19:50:01.195134 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:50:01.216169 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:50:01.245149 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation Feb 13 19:50:01.301630 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:50:01.312080 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:50:01.431787 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:50:01.460870 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:50:01.502303 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:50:01.517422 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:50:01.532962 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:50:01.549538 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:50:01.560072 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:50:01.615900 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Feb 13 19:50:01.684581 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 19:50:01.684646 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Feb 13 19:50:01.719602 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 13 19:50:01.720513 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 13 19:50:01.721185 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:ba:1a:9c:98:73 Feb 13 19:50:01.698396 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:50:01.698662 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:50:01.701594 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:50:01.717021 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:50:01.717869 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:50:01.722134 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:50:01.735576 (udev-worker)[519]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:50:01.747235 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:50:01.767737 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 19:50:01.767832 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 13 19:50:01.780850 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 13 19:50:01.787793 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:50:01.787868 kernel: GPT:9289727 != 16777215 Feb 13 19:50:01.789612 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:50:01.789688 kernel: GPT:9289727 != 16777215 Feb 13 19:50:01.789716 kernel: GPT: Use GNU Parted to correct GPT errors. 
Feb 13 19:50:01.791452 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:50:01.796385 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:50:01.810258 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:50:01.848571 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:50:01.905805 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (513) Feb 13 19:50:01.940861 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (520) Feb 13 19:50:02.026456 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Feb 13 19:50:02.046683 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Feb 13 19:50:02.075086 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Feb 13 19:50:02.080102 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Feb 13 19:50:02.099558 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 19:50:02.109107 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:50:02.124953 disk-uuid[661]: Primary Header is updated. Feb 13 19:50:02.124953 disk-uuid[661]: Secondary Entries is updated. Feb 13 19:50:02.124953 disk-uuid[661]: Secondary Header is updated. Feb 13 19:50:02.136805 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:50:02.144828 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:50:02.152822 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:50:03.156230 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:50:03.156302 disk-uuid[662]: The operation has completed successfully. 
Feb 13 19:50:03.339380 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:50:03.339621 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:50:03.401116 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:50:03.418109 sh[1005]: Success Feb 13 19:50:03.445080 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 19:50:03.561296 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:50:03.568966 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:50:03.584480 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:50:03.607445 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 Feb 13 19:50:03.607529 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:50:03.607556 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:50:03.610399 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:50:03.610449 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:50:03.721787 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 19:50:03.749284 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:50:03.752737 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:50:03.769160 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:50:03.776057 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Feb 13 19:50:03.808006 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:50:03.808092 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:50:03.808125 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:50:03.816292 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:50:03.835611 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:50:03.839869 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:50:03.850530 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:50:03.861120 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:50:03.982916 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:50:03.995102 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:50:04.057877 systemd-networkd[1198]: lo: Link UP Feb 13 19:50:04.057903 systemd-networkd[1198]: lo: Gained carrier Feb 13 19:50:04.063231 systemd-networkd[1198]: Enumeration completed Feb 13 19:50:04.063406 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:50:04.065686 systemd[1]: Reached target network.target - Network. Feb 13 19:50:04.072934 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:50:04.072953 systemd-networkd[1198]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:50:04.081225 systemd-networkd[1198]: eth0: Link UP Feb 13 19:50:04.081240 systemd-networkd[1198]: eth0: Gained carrier Feb 13 19:50:04.081259 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 19:50:04.099910 systemd-networkd[1198]: eth0: DHCPv4 address 172.31.30.175/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 19:50:04.280406 ignition[1113]: Ignition 2.19.0 Feb 13 19:50:04.280436 ignition[1113]: Stage: fetch-offline Feb 13 19:50:04.282133 ignition[1113]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:04.282163 ignition[1113]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:04.287830 ignition[1113]: Ignition finished successfully Feb 13 19:50:04.291804 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:50:04.312292 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 19:50:04.335303 ignition[1208]: Ignition 2.19.0 Feb 13 19:50:04.336907 ignition[1208]: Stage: fetch Feb 13 19:50:04.339208 ignition[1208]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:04.339242 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:04.339422 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:04.357608 ignition[1208]: PUT result: OK Feb 13 19:50:04.374557 ignition[1208]: parsed url from cmdline: "" Feb 13 19:50:04.374581 ignition[1208]: no config URL provided Feb 13 19:50:04.374601 ignition[1208]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:50:04.374631 ignition[1208]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:50:04.374670 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:04.377609 ignition[1208]: PUT result: OK Feb 13 19:50:04.377703 ignition[1208]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 13 19:50:04.386447 ignition[1208]: GET result: OK Feb 13 19:50:04.386553 ignition[1208]: parsing config with SHA512: a0294e42386e1145eac4a11b53ab368e8c61b3825001ec01d5cebfcdd75afecbc13d86f4d75c66a662025a89bf4370c5c4bbbe70f19576a2756e553ba620f5ec Feb 13 19:50:04.394978 unknown[1208]: fetched base config from "system" Feb 13 
19:50:04.395300 unknown[1208]: fetched base config from "system" Feb 13 19:50:04.395831 ignition[1208]: fetch: fetch complete Feb 13 19:50:04.395316 unknown[1208]: fetched user config from "aws" Feb 13 19:50:04.395872 ignition[1208]: fetch: fetch passed Feb 13 19:50:04.404976 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 19:50:04.395992 ignition[1208]: Ignition finished successfully Feb 13 19:50:04.420071 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:50:04.458658 ignition[1215]: Ignition 2.19.0 Feb 13 19:50:04.458691 ignition[1215]: Stage: kargs Feb 13 19:50:04.460471 ignition[1215]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:04.460527 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:04.461817 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:04.466002 ignition[1215]: PUT result: OK Feb 13 19:50:04.472310 ignition[1215]: kargs: kargs passed Feb 13 19:50:04.472521 ignition[1215]: Ignition finished successfully Feb 13 19:50:04.476649 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:50:04.491047 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:50:04.515087 ignition[1221]: Ignition 2.19.0 Feb 13 19:50:04.515117 ignition[1221]: Stage: disks Feb 13 19:50:04.516323 ignition[1221]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:04.516353 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:04.516526 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:04.518857 ignition[1221]: PUT result: OK Feb 13 19:50:04.529000 ignition[1221]: disks: disks passed Feb 13 19:50:04.530067 ignition[1221]: Ignition finished successfully Feb 13 19:50:04.534275 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:50:04.539410 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Feb 13 19:50:04.544078 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:50:04.548443 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:50:04.554729 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:50:04.556895 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:50:04.565142 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:50:04.616213 systemd-fsck[1229]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:50:04.621570 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:50:04.633008 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:50:04.738795 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none. Feb 13 19:50:04.740217 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:50:04.744300 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:50:04.760972 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:50:04.767978 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:50:04.773580 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:50:04.780051 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:50:04.780564 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:50:04.804886 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1249) Feb 13 19:50:04.808577 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Feb 13 19:50:04.814314 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:50:04.814357 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:50:04.814384 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:50:04.823070 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:50:04.832822 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:50:04.836161 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:50:05.143981 systemd-networkd[1198]: eth0: Gained IPv6LL Feb 13 19:50:05.181189 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:50:05.203544 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:50:05.227507 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:50:05.237291 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:50:05.571696 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:50:05.591619 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:50:05.597151 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:50:05.617515 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:50:05.620954 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:50:05.661502 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 19:50:05.676525 ignition[1362]: INFO : Ignition 2.19.0 Feb 13 19:50:05.676525 ignition[1362]: INFO : Stage: mount Feb 13 19:50:05.679858 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:05.679858 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:05.684187 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:05.687213 ignition[1362]: INFO : PUT result: OK Feb 13 19:50:05.692209 ignition[1362]: INFO : mount: mount passed Feb 13 19:50:05.692209 ignition[1362]: INFO : Ignition finished successfully Feb 13 19:50:05.697481 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:50:05.717141 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:50:05.752108 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:50:05.777830 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1373) Feb 13 19:50:05.782249 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:50:05.782327 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:50:05.782355 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:50:05.788793 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:50:05.793206 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:50:05.836838 ignition[1390]: INFO : Ignition 2.19.0 Feb 13 19:50:05.836838 ignition[1390]: INFO : Stage: files Feb 13 19:50:05.840308 ignition[1390]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:05.840308 ignition[1390]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:05.840308 ignition[1390]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:05.847864 ignition[1390]: INFO : PUT result: OK Feb 13 19:50:05.851851 ignition[1390]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:50:05.855136 ignition[1390]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:50:05.855136 ignition[1390]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:50:05.883786 ignition[1390]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:50:05.886476 ignition[1390]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:50:05.889027 ignition[1390]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:50:05.887409 unknown[1390]: wrote ssh authorized keys file for user: core Feb 13 19:50:05.903464 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:50:05.907059 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:50:05.907059 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:50:05.907059 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:50:05.917449 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:50:05.917449 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:50:05.917449 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:50:05.917449 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Feb 13 19:50:06.423297 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 19:50:06.787481 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:50:06.791990 ignition[1390]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:50:06.791990 ignition[1390]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:50:06.791990 ignition[1390]: INFO : files: files passed Feb 13 19:50:06.791990 ignition[1390]: INFO : Ignition finished successfully Feb 13 19:50:06.804420 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:50:06.815113 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:50:06.832498 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:50:06.844038 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:50:06.846299 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Feb 13 19:50:06.862506 initrd-setup-root-after-ignition[1418]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:50:06.862506 initrd-setup-root-after-ignition[1418]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:50:06.871795 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:50:06.879185 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:50:06.884303 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:50:06.906225 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:50:06.963097 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:50:06.963571 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:50:06.970544 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:50:06.972951 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:50:06.975102 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:50:06.989122 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:50:07.029234 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:50:07.052292 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:50:07.079917 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:50:07.084214 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:50:07.087441 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:50:07.090135 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Feb 13 19:50:07.090437 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:50:07.101564 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:50:07.104729 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:50:07.109156 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:50:07.114950 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:50:07.117681 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:50:07.120542 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:50:07.128389 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:50:07.135464 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:50:07.137928 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:50:07.140231 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:50:07.143654 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:50:07.143992 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:50:07.153871 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:50:07.156713 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:50:07.160579 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:50:07.164865 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:50:07.168584 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:50:07.169083 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:50:07.178569 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Feb 13 19:50:07.179519 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:50:07.186572 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:50:07.186899 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:50:07.198202 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:50:07.212993 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:50:07.220359 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:50:07.220715 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:50:07.224003 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:50:07.224307 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:50:07.255725 ignition[1442]: INFO : Ignition 2.19.0 Feb 13 19:50:07.255725 ignition[1442]: INFO : Stage: umount Feb 13 19:50:07.262378 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:50:07.264603 ignition[1442]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:07.264603 ignition[1442]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:07.264603 ignition[1442]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:07.267308 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:50:07.279638 ignition[1442]: INFO : PUT result: OK Feb 13 19:50:07.283713 ignition[1442]: INFO : umount: umount passed Feb 13 19:50:07.285471 ignition[1442]: INFO : Ignition finished successfully Feb 13 19:50:07.289443 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:50:07.292398 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:50:07.293967 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:50:07.300416 systemd[1]: ignition-disks.service: Deactivated successfully. 
Feb 13 19:50:07.300639 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:50:07.301598 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:50:07.301717 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:50:07.302438 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 19:50:07.302537 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 19:50:07.305066 systemd[1]: Stopped target network.target - Network. Feb 13 19:50:07.315208 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:50:07.315342 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:50:07.317720 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:50:07.319978 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:50:07.321914 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:50:07.325071 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:50:07.340113 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:50:07.342036 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:50:07.342128 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:50:07.344058 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:50:07.344156 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:50:07.346178 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:50:07.346272 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:50:07.354654 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:50:07.354786 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:50:07.360127 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Feb 13 19:50:07.368668 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:50:07.375829 systemd-networkd[1198]: eth0: DHCPv6 lease lost Feb 13 19:50:07.383409 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:50:07.385624 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:50:07.392386 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:50:07.392536 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:50:07.413314 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:50:07.417497 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:50:07.419914 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:50:07.424941 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:50:07.430396 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:50:07.436016 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:50:07.453718 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:50:07.453983 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:50:07.460251 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:50:07.460436 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:50:07.465122 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:50:07.465242 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:50:07.469124 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:50:07.469508 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:50:07.473586 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Feb 13 19:50:07.473694 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:50:07.481834 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:50:07.482116 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:50:07.483225 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:50:07.483317 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:50:07.483952 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:50:07.484019 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:50:07.484568 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:50:07.484650 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:50:07.490846 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:50:07.490961 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:50:07.494862 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:50:07.494960 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:50:07.532268 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:50:07.535891 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:50:07.536024 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:50:07.539434 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:50:07.539549 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:50:07.545824 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:50:07.545927 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:50:07.549928 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:50:07.550022 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:50:07.553088 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:50:07.553412 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:50:07.590395 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:50:07.590846 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:50:07.598646 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:50:07.608038 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:50:07.638165 systemd[1]: Switching root.
Feb 13 19:50:07.684403 systemd-journald[251]: Journal stopped
Feb 13 19:50:10.242530 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:50:10.242687 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:50:10.242737 kernel: SELinux: policy capability open_perms=1
Feb 13 19:50:10.242848 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:50:10.242884 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:50:10.242919 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:50:10.242952 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:50:10.242983 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:50:10.243022 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:50:10.243054 kernel: audit: type=1403 audit(1739476208.175:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:50:10.243099 systemd[1]: Successfully loaded SELinux policy in 70.838ms.
Feb 13 19:50:10.243153 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.622ms.
Feb 13 19:50:10.243189 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:50:10.243223 systemd[1]: Detected virtualization amazon.
Feb 13 19:50:10.243256 systemd[1]: Detected architecture arm64.
Feb 13 19:50:10.243295 systemd[1]: Detected first boot.
Feb 13 19:50:10.243343 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:50:10.243380 zram_generator::config[1485]: No configuration found.
Feb 13 19:50:10.243417 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:50:10.243453 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:50:10.243489 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:50:10.243524 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:50:10.243557 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:50:10.243589 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:50:10.243623 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:50:10.243656 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:50:10.243687 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:50:10.243722 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:50:10.243814 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:50:10.243852 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:50:10.243891 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:50:10.243925 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:50:10.252885 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:50:10.252938 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:50:10.252973 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:50:10.253011 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:50:10.253045 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:50:10.253079 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:50:10.253114 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:50:10.253155 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:50:10.253190 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:50:10.253223 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:50:10.253258 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:50:10.253292 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:50:10.253326 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:50:10.253359 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:50:10.253393 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:50:10.253430 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:50:10.253464 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:50:10.253498 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:50:10.253531 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:50:10.253564 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:50:10.253596 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:50:10.253630 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:50:10.253664 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:50:10.253699 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:50:10.253736 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:50:10.253819 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:50:10.253859 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:50:10.253893 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:50:10.253926 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:50:10.253958 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:50:10.253989 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:50:10.254020 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:50:10.254061 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:50:10.254093 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:50:10.254125 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:50:10.254159 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:50:10.254191 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:50:10.254222 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:50:10.254254 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:50:10.254285 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:50:10.254315 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:50:10.254353 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:50:10.254395 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:50:10.254431 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:50:10.254462 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:50:10.254492 kernel: fuse: init (API version 7.39)
Feb 13 19:50:10.254524 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:50:10.254556 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:50:10.254592 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:50:10.254627 systemd[1]: Stopped verity-setup.service.
Feb 13 19:50:10.254665 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:50:10.254698 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:50:10.254735 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:50:10.254891 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:50:10.254929 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:50:10.254964 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:50:10.254995 kernel: loop: module loaded
Feb 13 19:50:10.255034 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:50:10.255067 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:50:10.255100 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:50:10.255130 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:50:10.255162 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:50:10.255193 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:50:10.255232 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:50:10.255267 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:50:10.255298 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:50:10.255330 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:50:10.255363 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:50:10.255405 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:50:10.255443 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:50:10.255475 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:50:10.255507 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:50:10.255539 kernel: ACPI: bus type drm_connector registered
Feb 13 19:50:10.255625 systemd-journald[1566]: Collecting audit messages is disabled.
Feb 13 19:50:10.255693 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:50:10.255728 systemd-journald[1566]: Journal started
Feb 13 19:50:10.255888 systemd-journald[1566]: Runtime Journal (/run/log/journal/ec2b3e1eff2c3a0a9e479c2a96ead8c4) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:50:10.258070 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:50:09.528410 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:50:09.598169 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 19:50:09.599160 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:50:10.266972 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:50:10.276320 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:50:10.293909 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:50:10.310783 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:50:10.322886 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:50:10.328872 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:50:10.342811 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:50:10.349427 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:50:10.362201 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:50:10.362298 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:50:10.377424 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:50:10.388787 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:50:10.401027 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:50:10.409284 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:50:10.413862 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:50:10.416957 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:50:10.419910 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:50:10.422613 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:50:10.425315 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:50:10.428291 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:50:10.452158 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:50:10.505625 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:50:10.525122 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:50:10.533819 kernel: loop0: detected capacity change from 0 to 189592
Feb 13 19:50:10.537240 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:50:10.542882 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:50:10.590097 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:50:10.598533 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:50:10.603251 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:50:10.614068 systemd-journald[1566]: Time spent on flushing to /var/log/journal/ec2b3e1eff2c3a0a9e479c2a96ead8c4 is 109.545ms for 900 entries.
Feb 13 19:50:10.614068 systemd-journald[1566]: System Journal (/var/log/journal/ec2b3e1eff2c3a0a9e479c2a96ead8c4) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:50:10.737018 systemd-journald[1566]: Received client request to flush runtime journal.
Feb 13 19:50:10.737107 kernel: loop1: detected capacity change from 0 to 114432
Feb 13 19:50:10.618120 systemd-tmpfiles[1597]: ACLs are not supported, ignoring.
Feb 13 19:50:10.618145 systemd-tmpfiles[1597]: ACLs are not supported, ignoring.
Feb 13 19:50:10.630152 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:50:10.640088 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:50:10.722416 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:50:10.743128 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:50:10.746664 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:50:10.749834 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:50:10.769096 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:50:10.806797 kernel: loop2: detected capacity change from 0 to 52536
Feb 13 19:50:10.822944 systemd-tmpfiles[1632]: ACLs are not supported, ignoring.
Feb 13 19:50:10.823589 systemd-tmpfiles[1632]: ACLs are not supported, ignoring.
Feb 13 19:50:10.826154 udevadm[1638]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 19:50:10.835208 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:50:10.933062 kernel: loop3: detected capacity change from 0 to 114328
Feb 13 19:50:11.045825 kernel: loop4: detected capacity change from 0 to 189592
Feb 13 19:50:11.089833 kernel: loop5: detected capacity change from 0 to 114432
Feb 13 19:50:11.105992 kernel: loop6: detected capacity change from 0 to 52536
Feb 13 19:50:11.120887 kernel: loop7: detected capacity change from 0 to 114328
Feb 13 19:50:11.135981 (sd-merge)[1643]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 19:50:11.138467 (sd-merge)[1643]: Merged extensions into '/usr'.
Feb 13 19:50:11.146981 systemd[1]: Reloading requested from client PID 1596 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:50:11.147023 systemd[1]: Reloading...
Feb 13 19:50:11.332836 zram_generator::config[1669]: No configuration found.
Feb 13 19:50:11.645666 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:50:11.772694 systemd[1]: Reloading finished in 624 ms.
Feb 13 19:50:11.817862 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:50:11.831227 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:50:11.843190 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:50:11.892461 systemd[1]: Reloading requested from client PID 1720 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:50:11.892502 systemd[1]: Reloading...
Feb 13 19:50:11.932921 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:50:11.933664 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:50:11.938799 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:50:11.939489 systemd-tmpfiles[1721]: ACLs are not supported, ignoring.
Feb 13 19:50:11.939644 systemd-tmpfiles[1721]: ACLs are not supported, ignoring.
Feb 13 19:50:11.955181 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:50:11.955430 systemd-tmpfiles[1721]: Skipping /boot
Feb 13 19:50:11.987241 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:50:11.989990 systemd-tmpfiles[1721]: Skipping /boot
Feb 13 19:50:12.094810 zram_generator::config[1752]: No configuration found.
Feb 13 19:50:12.345999 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:50:12.374370 ldconfig[1592]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:50:12.465167 systemd[1]: Reloading finished in 571 ms.
Feb 13 19:50:12.496722 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:50:12.499638 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:50:12.512689 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:50:12.541286 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 19:50:12.548315 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:50:12.557966 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:50:12.568993 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:50:12.577272 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:50:12.592696 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:50:12.603028 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:50:12.608284 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:50:12.619347 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:50:12.629296 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:50:12.632136 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:50:12.637861 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:50:12.638270 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:50:12.650552 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:50:12.657956 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:50:12.660199 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:50:12.660619 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:50:12.678858 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:50:12.721524 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:50:12.726353 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:50:12.726716 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:50:12.741913 systemd-udevd[1810]: Using default interface naming scheme 'v255'.
Feb 13 19:50:12.742910 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:50:12.755892 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:50:12.774234 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:50:12.774613 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:50:12.795369 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:50:12.805332 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:50:12.808133 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:50:12.808541 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:50:12.812225 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:50:12.827436 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:50:12.829916 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:50:12.851042 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:50:12.854032 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:50:12.871146 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:50:12.881834 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:50:12.892219 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:50:12.901714 augenrules[1844]: No rules
Feb 13 19:50:12.908419 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 19:50:12.943895 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:50:13.144223 (udev-worker)[1852]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:50:13.148159 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 19:50:13.189534 systemd-networkd[1842]: lo: Link UP
Feb 13 19:50:13.189552 systemd-networkd[1842]: lo: Gained carrier
Feb 13 19:50:13.196008 systemd-networkd[1842]: Enumeration completed
Feb 13 19:50:13.196358 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:50:13.202927 systemd-networkd[1842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:50:13.202954 systemd-networkd[1842]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:50:13.210215 systemd-networkd[1842]: eth0: Link UP
Feb 13 19:50:13.210629 systemd-networkd[1842]: eth0: Gained carrier
Feb 13 19:50:13.210682 systemd-networkd[1842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:50:13.222447 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:50:13.234856 systemd-networkd[1842]: eth0: DHCPv4 address 172.31.30.175/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:50:13.240730 systemd-resolved[1809]: Positive Trust Anchors:
Feb 13 19:50:13.243181 systemd-resolved[1809]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:50:13.243269 systemd-resolved[1809]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:50:13.264543 systemd-resolved[1809]: Defaulting to hostname 'linux'.
Feb 13 19:50:13.271474 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:50:13.273841 systemd[1]: Reached target network.target - Network.
Feb 13 19:50:13.275922 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:50:13.329486 systemd-networkd[1842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:50:13.382814 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1865)
Feb 13 19:50:13.569266 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:50:13.654657 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:50:13.668073 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:50:13.669810 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:50:13.679102 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:50:13.710210 lvm[1973]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:50:13.736524 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:50:13.748223 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:50:13.751271 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:50:13.759772 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:50:13.782048 lvm[1977]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:50:13.810888 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:50:13.814134 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:50:13.817095 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:50:13.819664 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:50:13.823156 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:50:13.825573 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:50:13.828454 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:50:13.831233 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:50:13.831498 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:50:13.833424 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:50:13.837007 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:50:13.842090 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:50:13.858289 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:50:13.863941 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:50:13.867136 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:50:13.870640 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:50:13.873257 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:50:13.875904 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:50:13.876189 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:50:13.883986 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:50:13.895149 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:50:13.902143 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:50:13.908423 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:50:13.919094 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:50:13.922002 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:50:13.928155 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:50:13.937125 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 19:50:13.950715 jq[1987]: false Feb 13 19:50:13.951406 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 19:50:13.957905 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:50:13.969646 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:50:13.980187 systemd[1]: Starting systemd-logind.service - User Login Management... 
Feb 13 19:50:13.983275 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:50:13.984223 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:50:13.987579 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:50:13.995087 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:50:14.002658 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:50:14.005885 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:50:14.047508 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:50:14.049909 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:50:14.110180 jq[1996]: true Feb 13 19:50:14.139308 dbus-daemon[1986]: [system] SELinux support is enabled Feb 13 19:50:14.140027 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:50:14.147208 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:50:14.147277 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:50:14.151003 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:50:14.151042 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Feb 13 19:50:14.163181 dbus-daemon[1986]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1842 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:50:14.174687 update_engine[1995]: I20250213 19:50:14.173576 1995 main.cc:92] Flatcar Update Engine starting Feb 13 19:50:14.186025 update_engine[1995]: I20250213 19:50:14.176580 1995 update_check_scheduler.cc:74] Next update check in 6m4s Feb 13 19:50:14.186117 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 19:50:14.189434 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:50:14.197761 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:50:14.212148 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:35:09 UTC 2025 (1): Starting Feb 13 19:50:14.212148 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:50:14.212148 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: ---------------------------------------------------- Feb 13 19:50:14.212148 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:50:14.212148 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:50:14.212148 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: corporation. 
Support and training for ntp-4 are Feb 13 19:50:14.212148 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: available at https://www.nwtime.org/support Feb 13 19:50:14.212148 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: ---------------------------------------------------- Feb 13 19:50:14.212976 extend-filesystems[1988]: Found loop4 Feb 13 19:50:14.212976 extend-filesystems[1988]: Found loop5 Feb 13 19:50:14.212976 extend-filesystems[1988]: Found loop6 Feb 13 19:50:14.212976 extend-filesystems[1988]: Found loop7 Feb 13 19:50:14.212976 extend-filesystems[1988]: Found nvme0n1 Feb 13 19:50:14.212976 extend-filesystems[1988]: Found nvme0n1p1 Feb 13 19:50:14.212976 extend-filesystems[1988]: Found nvme0n1p2 Feb 13 19:50:14.212976 extend-filesystems[1988]: Found nvme0n1p3 Feb 13 19:50:14.212976 extend-filesystems[1988]: Found usr Feb 13 19:50:14.212976 extend-filesystems[1988]: Found nvme0n1p4 Feb 13 19:50:14.212976 extend-filesystems[1988]: Found nvme0n1p6 Feb 13 19:50:14.212976 extend-filesystems[1988]: Found nvme0n1p7 Feb 13 19:50:14.212976 extend-filesystems[1988]: Found nvme0n1p9 Feb 13 19:50:14.212976 extend-filesystems[1988]: Checking size of /dev/nvme0n1p9 Feb 13 19:50:14.310544 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 19:50:14.210135 ntpd[1990]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:35:09 UTC 2025 (1): Starting Feb 13 19:50:14.208592 (ntainerd)[2011]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:50:14.349925 jq[2016]: true Feb 13 19:50:14.350699 extend-filesystems[1988]: Resized partition /dev/nvme0n1p9 Feb 13 19:50:14.353837 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: proto: precision = 0.096 usec (-23) Feb 13 19:50:14.353837 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: basedate set to 2025-02-01 Feb 13 19:50:14.353837 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: gps base set to 2025-02-02 (week 2352) Feb 13 19:50:14.353837 ntpd[1990]: 13 Feb 19:50:14 
ntpd[1990]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:50:14.353837 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:50:14.353837 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:50:14.353837 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: Listen normally on 3 eth0 172.31.30.175:123 Feb 13 19:50:14.353837 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: Listen normally on 4 lo [::1]:123 Feb 13 19:50:14.353837 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: bind(21) AF_INET6 fe80::4ba:1aff:fe9c:9873%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:50:14.353837 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: unable to create socket on eth0 (5) for fe80::4ba:1aff:fe9c:9873%2#123 Feb 13 19:50:14.353837 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: failed to init interface for address fe80::4ba:1aff:fe9c:9873%2 Feb 13 19:50:14.353837 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: Listening on routing socket on fd #21 for interface updates Feb 13 19:50:14.353837 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:50:14.353837 ntpd[1990]: 13 Feb 19:50:14 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:50:14.210197 ntpd[1990]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:50:14.220385 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:50:14.355601 extend-filesystems[2031]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:50:14.210219 ntpd[1990]: ---------------------------------------------------- Feb 13 19:50:14.222394 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:50:14.210240 ntpd[1990]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:50:14.267083 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:50:14.210259 ntpd[1990]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:50:14.316924 systemd[1]: Finished setup-oem.service - Setup OEM. 
Feb 13 19:50:14.210279 ntpd[1990]: corporation. Support and training for ntp-4 are Feb 13 19:50:14.210299 ntpd[1990]: available at https://www.nwtime.org/support Feb 13 19:50:14.210320 ntpd[1990]: ---------------------------------------------------- Feb 13 19:50:14.232865 ntpd[1990]: proto: precision = 0.096 usec (-23) Feb 13 19:50:14.259526 ntpd[1990]: basedate set to 2025-02-01 Feb 13 19:50:14.259568 ntpd[1990]: gps base set to 2025-02-02 (week 2352) Feb 13 19:50:14.295251 ntpd[1990]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:50:14.295355 ntpd[1990]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:50:14.295665 ntpd[1990]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:50:14.295740 ntpd[1990]: Listen normally on 3 eth0 172.31.30.175:123 Feb 13 19:50:14.307039 ntpd[1990]: Listen normally on 4 lo [::1]:123 Feb 13 19:50:14.307139 ntpd[1990]: bind(21) AF_INET6 fe80::4ba:1aff:fe9c:9873%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:50:14.307188 ntpd[1990]: unable to create socket on eth0 (5) for fe80::4ba:1aff:fe9c:9873%2#123 Feb 13 19:50:14.307217 ntpd[1990]: failed to init interface for address fe80::4ba:1aff:fe9c:9873%2 Feb 13 19:50:14.307288 ntpd[1990]: Listening on routing socket on fd #21 for interface updates Feb 13 19:50:14.332234 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:50:14.332294 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:50:14.398205 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 19:50:14.425187 extend-filesystems[2031]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 19:50:14.425187 extend-filesystems[2031]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:50:14.425187 extend-filesystems[2031]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Feb 13 19:50:14.432384 extend-filesystems[1988]: Resized filesystem in /dev/nvme0n1p9 Feb 13 19:50:14.448269 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1865) Feb 13 19:50:14.454501 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:50:14.454933 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:50:14.521852 coreos-metadata[1985]: Feb 13 19:50:14.520 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:50:14.524427 coreos-metadata[1985]: Feb 13 19:50:14.524 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 19:50:14.525302 coreos-metadata[1985]: Feb 13 19:50:14.525 INFO Fetch successful Feb 13 19:50:14.525302 coreos-metadata[1985]: Feb 13 19:50:14.525 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 19:50:14.529354 coreos-metadata[1985]: Feb 13 19:50:14.529 INFO Fetch successful Feb 13 19:50:14.529354 coreos-metadata[1985]: Feb 13 19:50:14.529 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 19:50:14.529997 coreos-metadata[1985]: Feb 13 19:50:14.529 INFO Fetch successful Feb 13 19:50:14.529997 coreos-metadata[1985]: Feb 13 19:50:14.529 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 19:50:14.531504 coreos-metadata[1985]: Feb 13 19:50:14.530 INFO Fetch successful Feb 13 19:50:14.531504 coreos-metadata[1985]: Feb 13 19:50:14.530 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 19:50:14.536800 coreos-metadata[1985]: Feb 13 19:50:14.531 INFO Fetch failed with 404: resource not found Feb 13 19:50:14.536800 coreos-metadata[1985]: Feb 13 19:50:14.531 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 19:50:14.542199 coreos-metadata[1985]: Feb 13 19:50:14.541 INFO Fetch successful Feb 13 
19:50:14.542199 coreos-metadata[1985]: Feb 13 19:50:14.542 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 19:50:14.542199 coreos-metadata[1985]: Feb 13 19:50:14.542 INFO Fetch successful Feb 13 19:50:14.542199 coreos-metadata[1985]: Feb 13 19:50:14.542 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 19:50:14.543640 coreos-metadata[1985]: Feb 13 19:50:14.542 INFO Fetch successful Feb 13 19:50:14.543640 coreos-metadata[1985]: Feb 13 19:50:14.542 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 19:50:14.544310 coreos-metadata[1985]: Feb 13 19:50:14.544 INFO Fetch successful Feb 13 19:50:14.544310 coreos-metadata[1985]: Feb 13 19:50:14.544 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 19:50:14.552882 coreos-metadata[1985]: Feb 13 19:50:14.552 INFO Fetch successful Feb 13 19:50:14.616681 systemd-logind[1994]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:50:14.619479 systemd-logind[1994]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 19:50:14.620027 systemd-logind[1994]: New seat seat0. Feb 13 19:50:14.633096 bash[2074]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:50:14.635994 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:50:14.703903 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:50:14.743344 dbus-daemon[1986]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:50:14.747177 dbus-daemon[1986]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2021 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:50:14.768489 systemd[1]: Starting sshkeys.service... 
Feb 13 19:50:14.774519 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 19:50:14.791265 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 19:50:14.851850 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:50:14.857586 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:50:14.859109 locksmithd[2022]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:50:14.874497 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:50:14.897504 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:50:14.947356 polkitd[2129]: Started polkitd version 121 Feb 13 19:50:15.006531 polkitd[2129]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:50:15.006683 polkitd[2129]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:50:15.022373 polkitd[2129]: Finished loading, compiling and executing 2 rules Feb 13 19:50:15.029567 dbus-daemon[1986]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:50:15.030458 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 19:50:15.034422 polkitd[2129]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:50:15.106267 containerd[2011]: time="2025-02-13T19:50:15.105582009Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 19:50:15.140615 systemd-hostnamed[2021]: Hostname set to (transient) Feb 13 19:50:15.141462 systemd-resolved[1809]: System hostname changed to 'ip-172-31-30-175'. 
Feb 13 19:50:15.192006 systemd-networkd[1842]: eth0: Gained IPv6LL Feb 13 19:50:15.194682 coreos-metadata[2142]: Feb 13 19:50:15.193 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:50:15.197034 coreos-metadata[2142]: Feb 13 19:50:15.195 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 19:50:15.202592 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:50:15.205483 coreos-metadata[2142]: Feb 13 19:50:15.200 INFO Fetch successful Feb 13 19:50:15.205483 coreos-metadata[2142]: Feb 13 19:50:15.200 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:50:15.207356 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:50:15.216816 coreos-metadata[2142]: Feb 13 19:50:15.209 INFO Fetch successful Feb 13 19:50:15.222117 containerd[2011]: time="2025-02-13T19:50:15.222031906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:50:15.224480 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 19:50:15.224508 unknown[2142]: wrote ssh authorized keys file for user: core Feb 13 19:50:15.240275 containerd[2011]: time="2025-02-13T19:50:15.240204910Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:50:15.245468 containerd[2011]: time="2025-02-13T19:50:15.240946090Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:50:15.245468 containerd[2011]: time="2025-02-13T19:50:15.243244534Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 13 19:50:15.245468 containerd[2011]: time="2025-02-13T19:50:15.243648562Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:50:15.245468 containerd[2011]: time="2025-02-13T19:50:15.243837982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:50:15.245468 containerd[2011]: time="2025-02-13T19:50:15.244039894Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:50:15.245468 containerd[2011]: time="2025-02-13T19:50:15.244082230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:50:15.245468 containerd[2011]: time="2025-02-13T19:50:15.244487482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:50:15.245468 containerd[2011]: time="2025-02-13T19:50:15.244535494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:50:15.245468 containerd[2011]: time="2025-02-13T19:50:15.244569466Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:50:15.245468 containerd[2011]: time="2025-02-13T19:50:15.244595518Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:50:15.241269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 19:50:15.248225 containerd[2011]: time="2025-02-13T19:50:15.246320206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:50:15.248225 containerd[2011]: time="2025-02-13T19:50:15.246914878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:50:15.250890 containerd[2011]: time="2025-02-13T19:50:15.249103750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:50:15.250890 containerd[2011]: time="2025-02-13T19:50:15.249172594Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:50:15.250890 containerd[2011]: time="2025-02-13T19:50:15.249439210Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:50:15.250890 containerd[2011]: time="2025-02-13T19:50:15.249579394Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:50:15.253804 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:50:15.274943 containerd[2011]: time="2025-02-13T19:50:15.269866774Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:50:15.274943 containerd[2011]: time="2025-02-13T19:50:15.269988502Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:50:15.274943 containerd[2011]: time="2025-02-13T19:50:15.270384154Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:50:15.274943 containerd[2011]: time="2025-02-13T19:50:15.270499198Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Feb 13 19:50:15.274943 containerd[2011]: time="2025-02-13T19:50:15.270573526Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:50:15.274943 containerd[2011]: time="2025-02-13T19:50:15.274270570Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:50:15.280633 containerd[2011]: time="2025-02-13T19:50:15.275557402Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:50:15.287731 containerd[2011]: time="2025-02-13T19:50:15.282028270Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:50:15.287731 containerd[2011]: time="2025-02-13T19:50:15.282079294Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:50:15.287731 containerd[2011]: time="2025-02-13T19:50:15.282111202Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:50:15.287731 containerd[2011]: time="2025-02-13T19:50:15.282160066Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:50:15.287731 containerd[2011]: time="2025-02-13T19:50:15.282197110Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:50:15.287731 containerd[2011]: time="2025-02-13T19:50:15.282233902Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:50:15.287731 containerd[2011]: time="2025-02-13T19:50:15.282270190Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 13 19:50:15.287731 containerd[2011]: time="2025-02-13T19:50:15.282305710Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:50:15.287731 containerd[2011]: time="2025-02-13T19:50:15.282430066Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:50:15.287731 containerd[2011]: time="2025-02-13T19:50:15.282473422Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:50:15.287731 containerd[2011]: time="2025-02-13T19:50:15.282502870Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:50:15.287731 containerd[2011]: time="2025-02-13T19:50:15.282550030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:50:15.287731 containerd[2011]: time="2025-02-13T19:50:15.282583978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:50:15.287731 containerd[2011]: time="2025-02-13T19:50:15.282615358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:50:15.288450 containerd[2011]: time="2025-02-13T19:50:15.282665362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:50:15.288450 containerd[2011]: time="2025-02-13T19:50:15.282698734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:50:15.288450 containerd[2011]: time="2025-02-13T19:50:15.282731086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:50:15.288450 containerd[2011]: time="2025-02-13T19:50:15.282792490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 13 19:50:15.288450 containerd[2011]: time="2025-02-13T19:50:15.282826018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:50:15.288450 containerd[2011]: time="2025-02-13T19:50:15.282880546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:50:15.288450 containerd[2011]: time="2025-02-13T19:50:15.282920182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:50:15.288450 containerd[2011]: time="2025-02-13T19:50:15.282955906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:50:15.288450 containerd[2011]: time="2025-02-13T19:50:15.282988174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:50:15.288450 containerd[2011]: time="2025-02-13T19:50:15.283019662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:50:15.288450 containerd[2011]: time="2025-02-13T19:50:15.283062874Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:50:15.288450 containerd[2011]: time="2025-02-13T19:50:15.283125442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:50:15.288450 containerd[2011]: time="2025-02-13T19:50:15.283157470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:50:15.288450 containerd[2011]: time="2025-02-13T19:50:15.283188202Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:50:15.289142 containerd[2011]: time="2025-02-13T19:50:15.283351426Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 13 19:50:15.289142 containerd[2011]: time="2025-02-13T19:50:15.283403002Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:50:15.289142 containerd[2011]: time="2025-02-13T19:50:15.283432498Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:50:15.289142 containerd[2011]: time="2025-02-13T19:50:15.283475470Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:50:15.289142 containerd[2011]: time="2025-02-13T19:50:15.283502302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:50:15.289142 containerd[2011]: time="2025-02-13T19:50:15.283551274Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:50:15.289142 containerd[2011]: time="2025-02-13T19:50:15.283589074Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:50:15.289142 containerd[2011]: time="2025-02-13T19:50:15.283632286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:50:15.299873 containerd[2011]: time="2025-02-13T19:50:15.296408086Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 19:50:15.299873 containerd[2011]: time="2025-02-13T19:50:15.296556106Z" level=info msg="Connect containerd service"
Feb 13 19:50:15.299873 containerd[2011]: time="2025-02-13T19:50:15.296631082Z" level=info msg="using legacy CRI server"
Feb 13 19:50:15.299873 containerd[2011]: time="2025-02-13T19:50:15.296651818Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 19:50:15.299873 containerd[2011]: time="2025-02-13T19:50:15.296904658Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 19:50:15.314931 containerd[2011]: time="2025-02-13T19:50:15.313407550Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:50:15.321953 containerd[2011]: time="2025-02-13T19:50:15.321881710Z" level=info msg="Start subscribing containerd event"
Feb 13 19:50:15.322655 containerd[2011]: time="2025-02-13T19:50:15.322571698Z" level=info msg="Start recovering state"
Feb 13 19:50:15.326821 containerd[2011]: time="2025-02-13T19:50:15.325256638Z" level=info msg="Start event monitor"
Feb 13 19:50:15.326821 containerd[2011]: time="2025-02-13T19:50:15.325314538Z" level=info msg="Start snapshots syncer"
Feb 13 19:50:15.326821 containerd[2011]: time="2025-02-13T19:50:15.325339102Z" level=info msg="Start cni network conf syncer for default"
Feb 13 19:50:15.326821 containerd[2011]: time="2025-02-13T19:50:15.325359610Z" level=info msg="Start streaming server"
Feb 13 19:50:15.326821 containerd[2011]: time="2025-02-13T19:50:15.324176542Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 19:50:15.326821 containerd[2011]: time="2025-02-13T19:50:15.325716310Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 19:50:15.335026 containerd[2011]: time="2025-02-13T19:50:15.328013758Z" level=info msg="containerd successfully booted in 0.233309s"
Feb 13 19:50:15.328180 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 19:50:15.351011 update-ssh-keys[2187]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:50:15.349922 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 19:50:15.362836 systemd[1]: Finished sshkeys.service.
Feb 13 19:50:15.403409 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 19:50:15.407614 amazon-ssm-agent[2182]: Initializing new seelog logger
Feb 13 19:50:15.411946 amazon-ssm-agent[2182]: New Seelog Logger Creation Complete
Feb 13 19:50:15.411946 amazon-ssm-agent[2182]: 2025/02/13 19:50:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:50:15.411946 amazon-ssm-agent[2182]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:50:15.411946 amazon-ssm-agent[2182]: 2025/02/13 19:50:15 processing appconfig overrides
Feb 13 19:50:15.411946 amazon-ssm-agent[2182]: 2025/02/13 19:50:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:50:15.411946 amazon-ssm-agent[2182]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:50:15.411946 amazon-ssm-agent[2182]: 2025/02/13 19:50:15 processing appconfig overrides Feb 13 19:50:15.411946 amazon-ssm-agent[2182]: 2025/02/13 19:50:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:50:15.411946 amazon-ssm-agent[2182]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:50:15.411946 amazon-ssm-agent[2182]: 2025/02/13 19:50:15 processing appconfig overrides Feb 13 19:50:15.411946 amazon-ssm-agent[2182]: 2025-02-13 19:50:15 INFO Proxy environment variables: Feb 13 19:50:15.415126 amazon-ssm-agent[2182]: 2025/02/13 19:50:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:50:15.415126 amazon-ssm-agent[2182]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:50:15.415322 amazon-ssm-agent[2182]: 2025/02/13 19:50:15 processing appconfig overrides Feb 13 19:50:15.511301 amazon-ssm-agent[2182]: 2025-02-13 19:50:15 INFO https_proxy: Feb 13 19:50:15.609590 amazon-ssm-agent[2182]: 2025-02-13 19:50:15 INFO http_proxy: Feb 13 19:50:15.709834 amazon-ssm-agent[2182]: 2025-02-13 19:50:15 INFO no_proxy: Feb 13 19:50:15.806116 amazon-ssm-agent[2182]: 2025-02-13 19:50:15 INFO Checking if agent identity type OnPrem can be assumed Feb 13 19:50:15.905705 amazon-ssm-agent[2182]: 2025-02-13 19:50:15 INFO Checking if agent identity type EC2 can be assumed Feb 13 19:50:16.005318 amazon-ssm-agent[2182]: 2025-02-13 19:50:15 INFO Agent will take identity from EC2 Feb 13 19:50:16.033443 sshd_keygen[2030]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:50:16.103921 amazon-ssm-agent[2182]: 2025-02-13 19:50:15 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:50:16.117911 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:50:16.137207 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Feb 13 19:50:16.151062 systemd[1]: Started sshd@0-172.31.30.175:22-139.178.89.65:53360.service - OpenSSH per-connection server daemon (139.178.89.65:53360). Feb 13 19:50:16.172701 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:50:16.173150 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:50:16.185921 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:50:16.202899 amazon-ssm-agent[2182]: 2025-02-13 19:50:15 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:50:16.223416 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:50:16.235447 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:50:16.248381 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:50:16.250893 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:50:16.302937 amazon-ssm-agent[2182]: 2025-02-13 19:50:15 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:50:16.402616 amazon-ssm-agent[2182]: 2025-02-13 19:50:15 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:50:16.449799 sshd[2215]: Accepted publickey for core from 139.178.89.65 port 53360 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:16.455547 sshd[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:16.483697 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:50:16.499367 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:50:16.506923 amazon-ssm-agent[2182]: 2025-02-13 19:50:15 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 19:50:16.521394 systemd-logind[1994]: New session 1 of user core. Feb 13 19:50:16.538134 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:50:16.560305 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Feb 13 19:50:16.585310 (systemd)[2226]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:50:16.604224 amazon-ssm-agent[2182]: 2025-02-13 19:50:15 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 19:50:16.706432 amazon-ssm-agent[2182]: 2025-02-13 19:50:15 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 19:50:16.807119 amazon-ssm-agent[2182]: 2025-02-13 19:50:15 INFO [Registrar] Starting registrar module Feb 13 19:50:16.882911 amazon-ssm-agent[2182]: 2025-02-13 19:50:15 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 19:50:16.883033 amazon-ssm-agent[2182]: 2025-02-13 19:50:16 INFO [EC2Identity] EC2 registration was successful. Feb 13 19:50:16.883108 amazon-ssm-agent[2182]: 2025-02-13 19:50:16 INFO [CredentialRefresher] credentialRefresher has started Feb 13 19:50:16.883166 amazon-ssm-agent[2182]: 2025-02-13 19:50:16 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 19:50:16.883166 amazon-ssm-agent[2182]: 2025-02-13 19:50:16 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 19:50:16.883391 systemd[2226]: Queued start job for default target default.target. Feb 13 19:50:16.897730 systemd[2226]: Created slice app.slice - User Application Slice. Feb 13 19:50:16.898058 systemd[2226]: Reached target paths.target - Paths. Feb 13 19:50:16.898094 systemd[2226]: Reached target timers.target - Timers. Feb 13 19:50:16.901070 systemd[2226]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:50:16.907342 amazon-ssm-agent[2182]: 2025-02-13 19:50:16 INFO [CredentialRefresher] Next credential rotation will be in 31.8749794041 minutes Feb 13 19:50:16.942059 systemd[2226]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:50:16.944611 systemd[2226]: Reached target sockets.target - Sockets. Feb 13 19:50:16.944677 systemd[2226]: Reached target basic.target - Basic System. 
Feb 13 19:50:16.944866 systemd[2226]: Reached target default.target - Main User Target. Feb 13 19:50:16.944941 systemd[2226]: Startup finished in 345ms. Feb 13 19:50:16.945379 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:50:16.954111 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:50:17.124377 systemd[1]: Started sshd@1-172.31.30.175:22-139.178.89.65:40214.service - OpenSSH per-connection server daemon (139.178.89.65:40214). Feb 13 19:50:17.176047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:50:17.179300 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:50:17.184696 systemd[1]: Startup finished in 1.166s (kernel) + 8.355s (initrd) + 9.077s (userspace) = 18.599s. Feb 13 19:50:17.189007 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:50:17.210943 ntpd[1990]: Listen normally on 6 eth0 [fe80::4ba:1aff:fe9c:9873%2]:123 Feb 13 19:50:17.211616 ntpd[1990]: 13 Feb 19:50:17 ntpd[1990]: Listen normally on 6 eth0 [fe80::4ba:1aff:fe9c:9873%2]:123 Feb 13 19:50:17.323443 sshd[2238]: Accepted publickey for core from 139.178.89.65 port 40214 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:17.326925 sshd[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:17.338255 systemd-logind[1994]: New session 2 of user core. Feb 13 19:50:17.346050 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:50:17.479223 sshd[2238]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:17.486847 systemd[1]: sshd@1-172.31.30.175:22-139.178.89.65:40214.service: Deactivated successfully. Feb 13 19:50:17.491155 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:50:17.496208 systemd-logind[1994]: Session 2 logged out. Waiting for processes to exit. 
Feb 13 19:50:17.498676 systemd-logind[1994]: Removed session 2. Feb 13 19:50:17.525998 systemd[1]: Started sshd@2-172.31.30.175:22-139.178.89.65:40228.service - OpenSSH per-connection server daemon (139.178.89.65:40228). Feb 13 19:50:17.701320 sshd[2259]: Accepted publickey for core from 139.178.89.65 port 40228 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:17.703667 sshd[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:17.714714 systemd-logind[1994]: New session 3 of user core. Feb 13 19:50:17.719483 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:50:17.842180 sshd[2259]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:17.849980 systemd[1]: sshd@2-172.31.30.175:22-139.178.89.65:40228.service: Deactivated successfully. Feb 13 19:50:17.854942 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:50:17.860069 systemd-logind[1994]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:50:17.883441 systemd[1]: Started sshd@3-172.31.30.175:22-139.178.89.65:40236.service - OpenSSH per-connection server daemon (139.178.89.65:40236). Feb 13 19:50:17.886703 systemd-logind[1994]: Removed session 3. Feb 13 19:50:17.918850 amazon-ssm-agent[2182]: 2025-02-13 19:50:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 19:50:18.020659 amazon-ssm-agent[2182]: 2025-02-13 19:50:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2268) started Feb 13 19:50:18.089174 sshd[2266]: Accepted publickey for core from 139.178.89.65 port 40236 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:18.093247 sshd[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:18.112434 systemd-logind[1994]: New session 4 of user core. 
Feb 13 19:50:18.118059 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:50:18.123551 amazon-ssm-agent[2182]: 2025-02-13 19:50:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 19:50:18.263449 sshd[2266]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:18.272979 systemd[1]: sshd@3-172.31.30.175:22-139.178.89.65:40236.service: Deactivated successfully. Feb 13 19:50:18.276487 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:50:18.282207 systemd-logind[1994]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:50:18.298885 kubelet[2245]: E0213 19:50:18.298816 2245 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:50:18.306324 systemd[1]: Started sshd@4-172.31.30.175:22-139.178.89.65:40238.service - OpenSSH per-connection server daemon (139.178.89.65:40238). Feb 13 19:50:18.307270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:50:18.307611 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:50:18.309281 systemd[1]: kubelet.service: Consumed 1.316s CPU time. Feb 13 19:50:18.313615 systemd-logind[1994]: Removed session 4. Feb 13 19:50:18.484415 sshd[2286]: Accepted publickey for core from 139.178.89.65 port 40238 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:18.487208 sshd[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:18.497160 systemd-logind[1994]: New session 5 of user core. Feb 13 19:50:18.505073 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 19:50:18.645285 sudo[2290]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:50:18.646075 sudo[2290]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:50:18.663090 sudo[2290]: pam_unix(sudo:session): session closed for user root Feb 13 19:50:18.687233 sshd[2286]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:18.693461 systemd-logind[1994]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:50:18.695307 systemd[1]: sshd@4-172.31.30.175:22-139.178.89.65:40238.service: Deactivated successfully. Feb 13 19:50:18.698465 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:50:18.703495 systemd-logind[1994]: Removed session 5. Feb 13 19:50:18.736338 systemd[1]: Started sshd@5-172.31.30.175:22-139.178.89.65:40254.service - OpenSSH per-connection server daemon (139.178.89.65:40254). Feb 13 19:50:18.908069 sshd[2295]: Accepted publickey for core from 139.178.89.65 port 40254 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:18.911085 sshd[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:18.921113 systemd-logind[1994]: New session 6 of user core. Feb 13 19:50:18.924048 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 19:50:19.031513 sudo[2299]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:50:19.032354 sudo[2299]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:50:19.039619 sudo[2299]: pam_unix(sudo:session): session closed for user root Feb 13 19:50:19.051015 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 19:50:19.051902 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:50:19.074350 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 19:50:19.089931 auditctl[2302]: No rules Feb 13 19:50:19.090817 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:50:19.091246 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 19:50:19.099419 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:50:19.159792 augenrules[2320]: No rules Feb 13 19:50:19.162907 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:50:19.165420 sudo[2298]: pam_unix(sudo:session): session closed for user root Feb 13 19:50:19.189145 sshd[2295]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:19.196426 systemd[1]: sshd@5-172.31.30.175:22-139.178.89.65:40254.service: Deactivated successfully. Feb 13 19:50:19.201073 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:50:19.202511 systemd-logind[1994]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:50:19.204988 systemd-logind[1994]: Removed session 6. Feb 13 19:50:19.234792 systemd[1]: Started sshd@6-172.31.30.175:22-139.178.89.65:40264.service - OpenSSH per-connection server daemon (139.178.89.65:40264). 
Feb 13 19:50:19.409617 sshd[2328]: Accepted publickey for core from 139.178.89.65 port 40264 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:19.412436 sshd[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:19.421796 systemd-logind[1994]: New session 7 of user core. Feb 13 19:50:19.432098 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:50:19.537262 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:50:19.538634 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:50:20.565916 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:50:20.566302 systemd[1]: kubelet.service: Consumed 1.316s CPU time. Feb 13 19:50:20.576181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:50:20.640461 systemd[1]: Reloading requested from client PID 2363 ('systemctl') (unit session-7.scope)... Feb 13 19:50:20.640696 systemd[1]: Reloading... Feb 13 19:50:20.882814 zram_generator::config[2406]: No configuration found. Feb 13 19:50:21.164955 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:50:21.601874 systemd-resolved[1809]: Clock change detected. Flushing caches. Feb 13 19:50:21.734663 systemd[1]: Reloading finished in 703 ms. Feb 13 19:50:21.831148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:50:21.838623 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:50:21.845969 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:50:21.846378 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:50:21.861933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:50:22.180749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:50:22.192664 (kubelet)[2468]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:50:22.266375 kubelet[2468]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:50:22.267673 kubelet[2468]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:50:22.268142 kubelet[2468]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:50:22.268537 kubelet[2468]: I0213 19:50:22.268461 2468 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 19:50:24.584186 kubelet[2468]: I0213 19:50:24.584135 2468 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 19:50:24.585127 kubelet[2468]: I0213 19:50:24.584774 2468 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 19:50:24.588678 kubelet[2468]: I0213 19:50:24.588621 2468 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 19:50:24.639684 kubelet[2468]: I0213 19:50:24.639621 2468 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 19:50:24.657613 kubelet[2468]: E0213 19:50:24.657367 2468 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 19:50:24.657613 kubelet[2468]: I0213 19:50:24.657452 2468 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 19:50:24.664686 kubelet[2468]: I0213 19:50:24.664640 2468 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 19:50:24.666294 kubelet[2468]: I0213 19:50:24.665042 2468 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 19:50:24.666294 kubelet[2468]: I0213 19:50:24.665427 2468 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 19:50:24.666294 kubelet[2468]: I0213 19:50:24.665477 2468 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.30.175","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 19:50:24.666294 kubelet[2468]: I0213 19:50:24.665827 2468 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 19:50:24.666717 kubelet[2468]: I0213 19:50:24.665846 2468 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 19:50:24.666717 kubelet[2468]: I0213 19:50:24.666043 2468 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:50:24.667766 kubelet[2468]: I0213 19:50:24.667734 2468 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 19:50:24.667920 kubelet[2468]: I0213 19:50:24.667898 2468 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 19:50:24.668060 kubelet[2468]: I0213 19:50:24.668042 2468 kubelet.go:314] "Adding apiserver pod source"
Feb 13 19:50:24.668159 kubelet[2468]: I0213 19:50:24.668140 2468 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 19:50:24.668830 kubelet[2468]: E0213 19:50:24.668775 2468 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:24.668933 kubelet[2468]: E0213 19:50:24.668853 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:24.675308 kubelet[2468]: I0213 19:50:24.675274 2468 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Feb 13 19:50:24.680132 kubelet[2468]: I0213 19:50:24.680076 2468 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 19:50:24.681276 kubelet[2468]: W0213 19:50:24.680886 2468 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.30.175" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 19:50:24.681405 kubelet[2468]: E0213 19:50:24.681330 2468 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.30.175\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 13 19:50:24.681405 kubelet[2468]: W0213 19:50:24.681055 2468 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 19:50:24.681501 kubelet[2468]: E0213 19:50:24.681412 2468 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 13 19:50:24.682432 kubelet[2468]: W0213 19:50:24.681573 2468 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 19:50:24.683038 kubelet[2468]: I0213 19:50:24.682999 2468 server.go:1269] "Started kubelet" Feb 13 19:50:24.685897 kubelet[2468]: I0213 19:50:24.685859 2468 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:50:24.697871 kubelet[2468]: I0213 19:50:24.697783 2468 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:50:24.700419 kubelet[2468]: I0213 19:50:24.699503 2468 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:50:24.703466 kubelet[2468]: I0213 19:50:24.701836 2468 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:50:24.703466 kubelet[2468]: E0213 19:50:24.702495 2468 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.175\" not found" Feb 13 19:50:24.703466 kubelet[2468]: I0213 19:50:24.703075 2468 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:50:24.703466 kubelet[2468]: I0213 19:50:24.703295 2468 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:50:24.704149 kubelet[2468]: I0213 19:50:24.704038 2468 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:50:24.704633 kubelet[2468]: I0213 19:50:24.704485 2468 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:50:24.705737 kubelet[2468]: I0213 19:50:24.705682 2468 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:50:24.708696 kubelet[2468]: I0213 19:50:24.708628 2468 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:50:24.715631 kubelet[2468]: I0213 19:50:24.715580 2468 factory.go:221] Registration of the containerd container factory successfully 
Feb 13 19:50:24.715631 kubelet[2468]: I0213 19:50:24.715619 2468 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:50:24.725468 kubelet[2468]: E0213 19:50:24.725155 2468 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:50:24.740047 kubelet[2468]: E0213 19:50:24.739967 2468 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.30.175\" not found" node="172.31.30.175" Feb 13 19:50:24.760042 kubelet[2468]: I0213 19:50:24.759902 2468 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:50:24.760042 kubelet[2468]: I0213 19:50:24.759944 2468 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:50:24.760042 kubelet[2468]: I0213 19:50:24.759984 2468 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:50:24.763302 kubelet[2468]: I0213 19:50:24.763249 2468 policy_none.go:49] "None policy: Start" Feb 13 19:50:24.764667 kubelet[2468]: I0213 19:50:24.764572 2468 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:50:24.764667 kubelet[2468]: I0213 19:50:24.764625 2468 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:50:24.774950 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:50:24.797131 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:50:24.802757 kubelet[2468]: E0213 19:50:24.802657 2468 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.175\" not found" Feb 13 19:50:24.803512 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:50:24.814484 kubelet[2468]: I0213 19:50:24.814371 2468 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 19:50:24.815211 kubelet[2468]: I0213 19:50:24.814735 2468 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 19:50:24.815211 kubelet[2468]: I0213 19:50:24.814771 2468 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 19:50:24.815899 kubelet[2468]: I0213 19:50:24.815838 2468 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 19:50:24.820738 kubelet[2468]: E0213 19:50:24.820671 2468 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.30.175\" not found"
Feb 13 19:50:24.885801 kubelet[2468]: I0213 19:50:24.885613 2468 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 19:50:24.892966 kubelet[2468]: I0213 19:50:24.892598 2468 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 19:50:24.892966 kubelet[2468]: I0213 19:50:24.892676 2468 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 19:50:24.892966 kubelet[2468]: I0213 19:50:24.892743 2468 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 19:50:24.892966 kubelet[2468]: E0213 19:50:24.892845 2468 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 13 19:50:24.917684 kubelet[2468]: I0213 19:50:24.916942 2468 kubelet_node_status.go:72] "Attempting to register node" node="172.31.30.175"
Feb 13 19:50:24.932372 kubelet[2468]: I0213 19:50:24.932298 2468 kubelet_node_status.go:75] "Successfully registered node" node="172.31.30.175"
Feb 13 19:50:24.932372 kubelet[2468]: E0213 19:50:24.932356 2468 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.30.175\": node \"172.31.30.175\" not found"
Feb 13 19:50:24.996482 kubelet[2468]: E0213 19:50:24.996434 2468 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.175\" not found"
Feb 13 19:50:25.096986 kubelet[2468]: E0213 19:50:25.096919 2468 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.175\" not found"
Feb 13 19:50:25.197974 kubelet[2468]: E0213 19:50:25.197563 2468 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.175\" not found"
Feb 13 19:50:25.265598 sudo[2331]: pam_unix(sudo:session): session closed for user root
Feb 13 19:50:25.288837 sshd[2328]: pam_unix(sshd:session): session closed for user core
Feb 13 19:50:25.297078 systemd-logind[1994]: Session 7 logged out. Waiting for processes to exit.
Feb 13 19:50:25.297709 systemd[1]: sshd@6-172.31.30.175:22-139.178.89.65:40264.service: Deactivated successfully.
Feb 13 19:50:25.298117 kubelet[2468]: E0213 19:50:25.297751 2468 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.175\" not found" Feb 13 19:50:25.302931 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:50:25.307002 systemd-logind[1994]: Removed session 7. Feb 13 19:50:25.398002 kubelet[2468]: E0213 19:50:25.397930 2468 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.175\" not found" Feb 13 19:50:25.498794 kubelet[2468]: E0213 19:50:25.498632 2468 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.175\" not found" Feb 13 19:50:25.593329 kubelet[2468]: I0213 19:50:25.593215 2468 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 19:50:25.594139 kubelet[2468]: W0213 19:50:25.593519 2468 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:50:25.594139 kubelet[2468]: W0213 19:50:25.593525 2468 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:50:25.599423 kubelet[2468]: E0213 19:50:25.599339 2468 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.175\" not found" Feb 13 19:50:25.669628 kubelet[2468]: E0213 19:50:25.669541 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:25.700454 kubelet[2468]: E0213 19:50:25.700364 2468 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.175\" not found" Feb 13 
19:50:25.801334 kubelet[2468]: E0213 19:50:25.801185 2468 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.175\" not found" Feb 13 19:50:25.901870 kubelet[2468]: E0213 19:50:25.901820 2468 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.175\" not found" Feb 13 19:50:26.002665 kubelet[2468]: E0213 19:50:26.002598 2468 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.175\" not found" Feb 13 19:50:26.103606 kubelet[2468]: E0213 19:50:26.103444 2468 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.175\" not found" Feb 13 19:50:26.203736 kubelet[2468]: E0213 19:50:26.203676 2468 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.175\" not found" Feb 13 19:50:26.305713 kubelet[2468]: I0213 19:50:26.305511 2468 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 19:50:26.306373 containerd[2011]: time="2025-02-13T19:50:26.306299215Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 19:50:26.307427 kubelet[2468]: I0213 19:50:26.307363 2468 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 19:50:26.670671 kubelet[2468]: E0213 19:50:26.670577 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:26.671340 kubelet[2468]: I0213 19:50:26.670698 2468 apiserver.go:52] "Watching apiserver" Feb 13 19:50:26.677038 kubelet[2468]: E0213 19:50:26.676327 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wdmb" podUID="7291b7b5-988a-4b1a-bff8-f08c641e7de1" Feb 13 19:50:26.690534 systemd[1]: Created slice kubepods-besteffort-pod573a3bf6_1ada_4555_95ca_10bdd7c33ecd.slice - libcontainer container kubepods-besteffort-pod573a3bf6_1ada_4555_95ca_10bdd7c33ecd.slice. Feb 13 19:50:26.703882 kubelet[2468]: I0213 19:50:26.703838 2468 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:50:26.707127 systemd[1]: Created slice kubepods-besteffort-pod6aab4654_3b26_4c92_869b_dcfe9750ca75.slice - libcontainer container kubepods-besteffort-pod6aab4654_3b26_4c92_869b_dcfe9750ca75.slice. 
Feb 13 19:50:26.716873 kubelet[2468]: I0213 19:50:26.715540 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/573a3bf6-1ada-4555-95ca-10bdd7c33ecd-policysync\") pod \"calico-node-dh7bc\" (UID: \"573a3bf6-1ada-4555-95ca-10bdd7c33ecd\") " pod="calico-system/calico-node-dh7bc" Feb 13 19:50:26.716873 kubelet[2468]: I0213 19:50:26.715634 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/573a3bf6-1ada-4555-95ca-10bdd7c33ecd-node-certs\") pod \"calico-node-dh7bc\" (UID: \"573a3bf6-1ada-4555-95ca-10bdd7c33ecd\") " pod="calico-system/calico-node-dh7bc" Feb 13 19:50:26.716873 kubelet[2468]: I0213 19:50:26.715717 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/573a3bf6-1ada-4555-95ca-10bdd7c33ecd-var-lib-calico\") pod \"calico-node-dh7bc\" (UID: \"573a3bf6-1ada-4555-95ca-10bdd7c33ecd\") " pod="calico-system/calico-node-dh7bc" Feb 13 19:50:26.716873 kubelet[2468]: I0213 19:50:26.715754 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/573a3bf6-1ada-4555-95ca-10bdd7c33ecd-cni-log-dir\") pod \"calico-node-dh7bc\" (UID: \"573a3bf6-1ada-4555-95ca-10bdd7c33ecd\") " pod="calico-system/calico-node-dh7bc" Feb 13 19:50:26.716873 kubelet[2468]: I0213 19:50:26.715816 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/573a3bf6-1ada-4555-95ca-10bdd7c33ecd-flexvol-driver-host\") pod \"calico-node-dh7bc\" (UID: \"573a3bf6-1ada-4555-95ca-10bdd7c33ecd\") " pod="calico-system/calico-node-dh7bc" Feb 13 19:50:26.717299 kubelet[2468]: I0213 19:50:26.715876 2468 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7291b7b5-988a-4b1a-bff8-f08c641e7de1-varrun\") pod \"csi-node-driver-7wdmb\" (UID: \"7291b7b5-988a-4b1a-bff8-f08c641e7de1\") " pod="calico-system/csi-node-driver-7wdmb" Feb 13 19:50:26.717299 kubelet[2468]: I0213 19:50:26.715915 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6aab4654-3b26-4c92-869b-dcfe9750ca75-kube-proxy\") pod \"kube-proxy-xnqbj\" (UID: \"6aab4654-3b26-4c92-869b-dcfe9750ca75\") " pod="kube-system/kube-proxy-xnqbj" Feb 13 19:50:26.717299 kubelet[2468]: I0213 19:50:26.715990 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/573a3bf6-1ada-4555-95ca-10bdd7c33ecd-lib-modules\") pod \"calico-node-dh7bc\" (UID: \"573a3bf6-1ada-4555-95ca-10bdd7c33ecd\") " pod="calico-system/calico-node-dh7bc" Feb 13 19:50:26.717299 kubelet[2468]: I0213 19:50:26.716052 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cjdf\" (UniqueName: \"kubernetes.io/projected/6aab4654-3b26-4c92-869b-dcfe9750ca75-kube-api-access-4cjdf\") pod \"kube-proxy-xnqbj\" (UID: \"6aab4654-3b26-4c92-869b-dcfe9750ca75\") " pod="kube-system/kube-proxy-xnqbj" Feb 13 19:50:26.717299 kubelet[2468]: I0213 19:50:26.716095 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/573a3bf6-1ada-4555-95ca-10bdd7c33ecd-cni-bin-dir\") pod \"calico-node-dh7bc\" (UID: \"573a3bf6-1ada-4555-95ca-10bdd7c33ecd\") " pod="calico-system/calico-node-dh7bc" Feb 13 19:50:26.717610 kubelet[2468]: I0213 19:50:26.716159 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/573a3bf6-1ada-4555-95ca-10bdd7c33ecd-cni-net-dir\") pod \"calico-node-dh7bc\" (UID: \"573a3bf6-1ada-4555-95ca-10bdd7c33ecd\") " pod="calico-system/calico-node-dh7bc" Feb 13 19:50:26.717610 kubelet[2468]: I0213 19:50:26.716198 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7291b7b5-988a-4b1a-bff8-f08c641e7de1-registration-dir\") pod \"csi-node-driver-7wdmb\" (UID: \"7291b7b5-988a-4b1a-bff8-f08c641e7de1\") " pod="calico-system/csi-node-driver-7wdmb" Feb 13 19:50:26.717610 kubelet[2468]: I0213 19:50:26.716257 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6aab4654-3b26-4c92-869b-dcfe9750ca75-xtables-lock\") pod \"kube-proxy-xnqbj\" (UID: \"6aab4654-3b26-4c92-869b-dcfe9750ca75\") " pod="kube-system/kube-proxy-xnqbj" Feb 13 19:50:26.717610 kubelet[2468]: I0213 19:50:26.716328 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/573a3bf6-1ada-4555-95ca-10bdd7c33ecd-tigera-ca-bundle\") pod \"calico-node-dh7bc\" (UID: \"573a3bf6-1ada-4555-95ca-10bdd7c33ecd\") " pod="calico-system/calico-node-dh7bc" Feb 13 19:50:26.717610 kubelet[2468]: I0213 19:50:26.716369 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjwt5\" (UniqueName: \"kubernetes.io/projected/573a3bf6-1ada-4555-95ca-10bdd7c33ecd-kube-api-access-wjwt5\") pod \"calico-node-dh7bc\" (UID: \"573a3bf6-1ada-4555-95ca-10bdd7c33ecd\") " pod="calico-system/calico-node-dh7bc" Feb 13 19:50:26.717870 kubelet[2468]: I0213 19:50:26.716470 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/7291b7b5-988a-4b1a-bff8-f08c641e7de1-socket-dir\") pod \"csi-node-driver-7wdmb\" (UID: \"7291b7b5-988a-4b1a-bff8-f08c641e7de1\") " pod="calico-system/csi-node-driver-7wdmb" Feb 13 19:50:26.717870 kubelet[2468]: I0213 19:50:26.716532 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/573a3bf6-1ada-4555-95ca-10bdd7c33ecd-xtables-lock\") pod \"calico-node-dh7bc\" (UID: \"573a3bf6-1ada-4555-95ca-10bdd7c33ecd\") " pod="calico-system/calico-node-dh7bc" Feb 13 19:50:26.717870 kubelet[2468]: I0213 19:50:26.716573 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7291b7b5-988a-4b1a-bff8-f08c641e7de1-kubelet-dir\") pod \"csi-node-driver-7wdmb\" (UID: \"7291b7b5-988a-4b1a-bff8-f08c641e7de1\") " pod="calico-system/csi-node-driver-7wdmb" Feb 13 19:50:26.717870 kubelet[2468]: I0213 19:50:26.716637 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvdk6\" (UniqueName: \"kubernetes.io/projected/7291b7b5-988a-4b1a-bff8-f08c641e7de1-kube-api-access-nvdk6\") pod \"csi-node-driver-7wdmb\" (UID: \"7291b7b5-988a-4b1a-bff8-f08c641e7de1\") " pod="calico-system/csi-node-driver-7wdmb" Feb 13 19:50:26.717870 kubelet[2468]: I0213 19:50:26.716678 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6aab4654-3b26-4c92-869b-dcfe9750ca75-lib-modules\") pod \"kube-proxy-xnqbj\" (UID: \"6aab4654-3b26-4c92-869b-dcfe9750ca75\") " pod="kube-system/kube-proxy-xnqbj" Feb 13 19:50:26.718119 kubelet[2468]: I0213 19:50:26.716748 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/573a3bf6-1ada-4555-95ca-10bdd7c33ecd-var-run-calico\") pod \"calico-node-dh7bc\" (UID: \"573a3bf6-1ada-4555-95ca-10bdd7c33ecd\") " pod="calico-system/calico-node-dh7bc" Feb 13 19:50:26.823762 kubelet[2468]: E0213 19:50:26.821749 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.823762 kubelet[2468]: W0213 19:50:26.821823 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.823762 kubelet[2468]: E0213 19:50:26.821893 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.824551 kubelet[2468]: E0213 19:50:26.824380 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.824551 kubelet[2468]: W0213 19:50:26.824543 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.824743 kubelet[2468]: E0213 19:50:26.824621 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:26.826922 kubelet[2468]: E0213 19:50:26.826822 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.826922 kubelet[2468]: W0213 19:50:26.826879 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.826922 kubelet[2468]: E0213 19:50:26.826917 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.833299 kubelet[2468]: E0213 19:50:26.833165 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.833299 kubelet[2468]: W0213 19:50:26.833229 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.833970 kubelet[2468]: E0213 19:50:26.833277 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:26.834076 kubelet[2468]: E0213 19:50:26.833954 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.834076 kubelet[2468]: W0213 19:50:26.834001 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.834584 kubelet[2468]: E0213 19:50:26.834050 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.834862 kubelet[2468]: E0213 19:50:26.834606 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.834862 kubelet[2468]: W0213 19:50:26.834629 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.834862 kubelet[2468]: E0213 19:50:26.834655 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:26.835245 kubelet[2468]: E0213 19:50:26.835156 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.835245 kubelet[2468]: W0213 19:50:26.835188 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.835245 kubelet[2468]: E0213 19:50:26.835241 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.835817 kubelet[2468]: E0213 19:50:26.835710 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.835817 kubelet[2468]: W0213 19:50:26.835741 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.835817 kubelet[2468]: E0213 19:50:26.835789 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:26.836487 kubelet[2468]: E0213 19:50:26.836329 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.836487 kubelet[2468]: W0213 19:50:26.836365 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.836487 kubelet[2468]: E0213 19:50:26.836453 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.852341 kubelet[2468]: E0213 19:50:26.850561 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.852341 kubelet[2468]: W0213 19:50:26.850598 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.852341 kubelet[2468]: E0213 19:50:26.850654 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:26.863704 kubelet[2468]: E0213 19:50:26.861827 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.863704 kubelet[2468]: W0213 19:50:26.861861 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.863704 kubelet[2468]: E0213 19:50:26.861891 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:26.864324 kubelet[2468]: E0213 19:50:26.864281 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:26.864429 kubelet[2468]: W0213 19:50:26.864317 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:26.864429 kubelet[2468]: E0213 19:50:26.864374 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:27.008973 containerd[2011]: time="2025-02-13T19:50:27.006762090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dh7bc,Uid:573a3bf6-1ada-4555-95ca-10bdd7c33ecd,Namespace:calico-system,Attempt:0,}" Feb 13 19:50:27.014822 containerd[2011]: time="2025-02-13T19:50:27.014341974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xnqbj,Uid:6aab4654-3b26-4c92-869b-dcfe9750ca75,Namespace:kube-system,Attempt:0,}" Feb 13 19:50:27.629847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4263991425.mount: Deactivated successfully. 
Feb 13 19:50:27.640201 containerd[2011]: time="2025-02-13T19:50:27.638339517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:50:27.640968 containerd[2011]: time="2025-02-13T19:50:27.640894581Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:50:27.643820 containerd[2011]: time="2025-02-13T19:50:27.643750966Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 19:50:27.645617 containerd[2011]: time="2025-02-13T19:50:27.645347134Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:50:27.647091 containerd[2011]: time="2025-02-13T19:50:27.646829830Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:50:27.656831 containerd[2011]: time="2025-02-13T19:50:27.656551642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:50:27.661736 containerd[2011]: time="2025-02-13T19:50:27.660902218Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 654.022972ms" Feb 13 19:50:27.666257 containerd[2011]: 
time="2025-02-13T19:50:27.666137338Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 651.628468ms" Feb 13 19:50:27.671212 kubelet[2468]: E0213 19:50:27.671139 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:27.941556 containerd[2011]: time="2025-02-13T19:50:27.940001555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:27.941834 containerd[2011]: time="2025-02-13T19:50:27.941524043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:27.941834 containerd[2011]: time="2025-02-13T19:50:27.941574611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:27.942968 containerd[2011]: time="2025-02-13T19:50:27.938187899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:27.943089 containerd[2011]: time="2025-02-13T19:50:27.942818591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:27.943089 containerd[2011]: time="2025-02-13T19:50:27.942888095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:27.946555 containerd[2011]: time="2025-02-13T19:50:27.946315895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:27.946907 containerd[2011]: time="2025-02-13T19:50:27.946777415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:28.147761 systemd[1]: Started cri-containerd-35dcdd950e80c17912d9a1e58ba31d367954c46db2a6fa1f040416a8915d700f.scope - libcontainer container 35dcdd950e80c17912d9a1e58ba31d367954c46db2a6fa1f040416a8915d700f. Feb 13 19:50:28.163760 systemd[1]: Started cri-containerd-27aed09550a2f8c014f3c0873fe07348e187816ca6b1846e5411bf478b99e6dc.scope - libcontainer container 27aed09550a2f8c014f3c0873fe07348e187816ca6b1846e5411bf478b99e6dc. Feb 13 19:50:28.227835 containerd[2011]: time="2025-02-13T19:50:28.227370896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xnqbj,Uid:6aab4654-3b26-4c92-869b-dcfe9750ca75,Namespace:kube-system,Attempt:0,} returns sandbox id \"35dcdd950e80c17912d9a1e58ba31d367954c46db2a6fa1f040416a8915d700f\"" Feb 13 19:50:28.234378 containerd[2011]: time="2025-02-13T19:50:28.234003632Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 19:50:28.243443 containerd[2011]: time="2025-02-13T19:50:28.243351500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dh7bc,Uid:573a3bf6-1ada-4555-95ca-10bdd7c33ecd,Namespace:calico-system,Attempt:0,} returns sandbox id \"27aed09550a2f8c014f3c0873fe07348e187816ca6b1846e5411bf478b99e6dc\"" Feb 13 19:50:28.672446 kubelet[2468]: E0213 19:50:28.672226 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:28.894523 kubelet[2468]: E0213 19:50:28.893789 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-7wdmb" podUID="7291b7b5-988a-4b1a-bff8-f08c641e7de1" Feb 13 19:50:29.673623 kubelet[2468]: E0213 19:50:29.673523 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:29.684197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2975924346.mount: Deactivated successfully. Feb 13 19:50:30.315472 containerd[2011]: time="2025-02-13T19:50:30.314968259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:30.316831 containerd[2011]: time="2025-02-13T19:50:30.316746323Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769256" Feb 13 19:50:30.318985 containerd[2011]: time="2025-02-13T19:50:30.318886499Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:30.324655 containerd[2011]: time="2025-02-13T19:50:30.324532751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:30.326981 containerd[2011]: time="2025-02-13T19:50:30.326805779Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 2.092710359s" Feb 13 19:50:30.326981 containerd[2011]: time="2025-02-13T19:50:30.326908895Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference 
\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 19:50:30.330445 containerd[2011]: time="2025-02-13T19:50:30.330334235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:50:30.333753 containerd[2011]: time="2025-02-13T19:50:30.333417575Z" level=info msg="CreateContainer within sandbox \"35dcdd950e80c17912d9a1e58ba31d367954c46db2a6fa1f040416a8915d700f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:50:30.366320 containerd[2011]: time="2025-02-13T19:50:30.366059459Z" level=info msg="CreateContainer within sandbox \"35dcdd950e80c17912d9a1e58ba31d367954c46db2a6fa1f040416a8915d700f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bbd824b1b2424ca339d5071310d4886dea96191f6719c621132a9fc6d948c241\"" Feb 13 19:50:30.369447 containerd[2011]: time="2025-02-13T19:50:30.368583803Z" level=info msg="StartContainer for \"bbd824b1b2424ca339d5071310d4886dea96191f6719c621132a9fc6d948c241\"" Feb 13 19:50:30.425760 systemd[1]: Started cri-containerd-bbd824b1b2424ca339d5071310d4886dea96191f6719c621132a9fc6d948c241.scope - libcontainer container bbd824b1b2424ca339d5071310d4886dea96191f6719c621132a9fc6d948c241. 
Feb 13 19:50:30.485888 containerd[2011]: time="2025-02-13T19:50:30.485820876Z" level=info msg="StartContainer for \"bbd824b1b2424ca339d5071310d4886dea96191f6719c621132a9fc6d948c241\" returns successfully"
Feb 13 19:50:30.675485 kubelet[2468]: E0213 19:50:30.674089 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:30.898174 kubelet[2468]: E0213 19:50:30.898102 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wdmb" podUID="7291b7b5-988a-4b1a-bff8-f08c641e7de1"
Feb 13 19:50:30.942284 kubelet[2468]: E0213 19:50:30.942046 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.942284 kubelet[2468]: W0213 19:50:30.942088 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.942284 kubelet[2468]: E0213 19:50:30.942124 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.943228 kubelet[2468]: E0213 19:50:30.942980 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.943228 kubelet[2468]: W0213 19:50:30.943027 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.943228 kubelet[2468]: E0213 19:50:30.943104 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.945507 kubelet[2468]: E0213 19:50:30.943881 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.945507 kubelet[2468]: W0213 19:50:30.943962 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.945507 kubelet[2468]: E0213 19:50:30.944038 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.945507 kubelet[2468]: E0213 19:50:30.944897 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.945507 kubelet[2468]: W0213 19:50:30.944931 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.945507 kubelet[2468]: E0213 19:50:30.945002 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.946184 kubelet[2468]: E0213 19:50:30.945975 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.946184 kubelet[2468]: W0213 19:50:30.946178 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.946477 kubelet[2468]: E0213 19:50:30.946262 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.947107 kubelet[2468]: E0213 19:50:30.947053 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.947236 kubelet[2468]: W0213 19:50:30.947120 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.947236 kubelet[2468]: E0213 19:50:30.947155 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.947838 kubelet[2468]: E0213 19:50:30.947785 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.947838 kubelet[2468]: W0213 19:50:30.947830 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.948079 kubelet[2468]: E0213 19:50:30.947866 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.948791 kubelet[2468]: E0213 19:50:30.948562 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.948791 kubelet[2468]: W0213 19:50:30.948602 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.948791 kubelet[2468]: E0213 19:50:30.948637 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.949318 kubelet[2468]: E0213 19:50:30.949102 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.949318 kubelet[2468]: W0213 19:50:30.949144 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.949318 kubelet[2468]: E0213 19:50:30.949180 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.951127 kubelet[2468]: E0213 19:50:30.949783 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.951127 kubelet[2468]: W0213 19:50:30.949823 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.951127 kubelet[2468]: E0213 19:50:30.949856 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.951127 kubelet[2468]: E0213 19:50:30.950299 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.951127 kubelet[2468]: W0213 19:50:30.950327 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.951127 kubelet[2468]: E0213 19:50:30.950356 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.951127 kubelet[2468]: E0213 19:50:30.950835 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.951127 kubelet[2468]: W0213 19:50:30.950864 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.951127 kubelet[2468]: E0213 19:50:30.950896 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.951862 kubelet[2468]: E0213 19:50:30.951455 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.951862 kubelet[2468]: W0213 19:50:30.951489 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.951862 kubelet[2468]: E0213 19:50:30.951520 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.952047 kubelet[2468]: E0213 19:50:30.951902 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.952047 kubelet[2468]: W0213 19:50:30.951926 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.952047 kubelet[2468]: E0213 19:50:30.951954 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.954816 kubelet[2468]: E0213 19:50:30.952494 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.954816 kubelet[2468]: W0213 19:50:30.952538 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.954816 kubelet[2468]: E0213 19:50:30.952573 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.954816 kubelet[2468]: E0213 19:50:30.952987 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.954816 kubelet[2468]: W0213 19:50:30.953014 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.954816 kubelet[2468]: E0213 19:50:30.953046 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.954816 kubelet[2468]: E0213 19:50:30.953480 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.954816 kubelet[2468]: W0213 19:50:30.953508 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.954816 kubelet[2468]: E0213 19:50:30.953538 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.954816 kubelet[2468]: E0213 19:50:30.954016 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.955623 kubelet[2468]: W0213 19:50:30.954044 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.955623 kubelet[2468]: E0213 19:50:30.954074 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.955623 kubelet[2468]: E0213 19:50:30.954535 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.955623 kubelet[2468]: W0213 19:50:30.954562 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.955623 kubelet[2468]: E0213 19:50:30.954620 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.956969 kubelet[2468]: E0213 19:50:30.956546 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.956969 kubelet[2468]: W0213 19:50:30.956582 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.956969 kubelet[2468]: E0213 19:50:30.956618 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.957505 kubelet[2468]: E0213 19:50:30.957465 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.957667 kubelet[2468]: W0213 19:50:30.957635 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.957966 kubelet[2468]: E0213 19:50:30.957827 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.958877 kubelet[2468]: E0213 19:50:30.958659 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.958877 kubelet[2468]: W0213 19:50:30.958698 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.958877 kubelet[2468]: E0213 19:50:30.958805 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.960142 kubelet[2468]: E0213 19:50:30.959806 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.960142 kubelet[2468]: W0213 19:50:30.959847 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.960142 kubelet[2468]: E0213 19:50:30.959909 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.960918 kubelet[2468]: E0213 19:50:30.960384 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.960918 kubelet[2468]: W0213 19:50:30.960457 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.960918 kubelet[2468]: E0213 19:50:30.960493 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.961291 kubelet[2468]: E0213 19:50:30.961259 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.961578 kubelet[2468]: W0213 19:50:30.961451 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.961578 kubelet[2468]: E0213 19:50:30.961520 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.962477 kubelet[2468]: E0213 19:50:30.962248 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.962477 kubelet[2468]: W0213 19:50:30.962284 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.962477 kubelet[2468]: E0213 19:50:30.962340 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.963464 kubelet[2468]: E0213 19:50:30.963173 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.963464 kubelet[2468]: W0213 19:50:30.963207 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.963464 kubelet[2468]: E0213 19:50:30.963263 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.964335 kubelet[2468]: E0213 19:50:30.964070 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.964335 kubelet[2468]: W0213 19:50:30.964102 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.964335 kubelet[2468]: E0213 19:50:30.964153 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.966117 kubelet[2468]: E0213 19:50:30.965555 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.966117 kubelet[2468]: W0213 19:50:30.965595 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.966117 kubelet[2468]: E0213 19:50:30.965725 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.967008 kubelet[2468]: E0213 19:50:30.966967 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.967238 kubelet[2468]: W0213 19:50:30.967200 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.968019 kubelet[2468]: E0213 19:50:30.967380 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.968365 kubelet[2468]: E0213 19:50:30.968329 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.968947 kubelet[2468]: W0213 19:50:30.968897 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.969229 kubelet[2468]: E0213 19:50:30.969176 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:30.969833 kubelet[2468]: E0213 19:50:30.969795 2468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:30.970086 kubelet[2468]: W0213 19:50:30.969978 2468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:30.970086 kubelet[2468]: E0213 19:50:30.970023 2468 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:31.628068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2765950984.mount: Deactivated successfully.
Feb 13 19:50:31.676996 kubelet[2468]: E0213 19:50:31.676939 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:31.780334 containerd[2011]: time="2025-02-13T19:50:31.780240794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:31.782437 containerd[2011]: time="2025-02-13T19:50:31.782307206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603"
Feb 13 19:50:31.784046 containerd[2011]: time="2025-02-13T19:50:31.783942674Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:31.788482 containerd[2011]: time="2025-02-13T19:50:31.788328026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:31.790568 containerd[2011]: time="2025-02-13T19:50:31.790275686Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.459818595s"
Feb 13 19:50:31.790568 containerd[2011]: time="2025-02-13T19:50:31.790349954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Feb 13 19:50:31.796176 containerd[2011]: time="2025-02-13T19:50:31.795960986Z" level=info msg="CreateContainer within sandbox \"27aed09550a2f8c014f3c0873fe07348e187816ca6b1846e5411bf478b99e6dc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 19:50:31.825006 containerd[2011]: time="2025-02-13T19:50:31.824817062Z" level=info msg="CreateContainer within sandbox \"27aed09550a2f8c014f3c0873fe07348e187816ca6b1846e5411bf478b99e6dc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"28ec348b020972e436746c854389737f665d168a5a19159b54f1754f23cfd93a\""
Feb 13 19:50:31.826182 containerd[2011]: time="2025-02-13T19:50:31.826120922Z" level=info msg="StartContainer for \"28ec348b020972e436746c854389737f665d168a5a19159b54f1754f23cfd93a\""
Feb 13 19:50:31.881983 systemd[1]: Started cri-containerd-28ec348b020972e436746c854389737f665d168a5a19159b54f1754f23cfd93a.scope - libcontainer container 28ec348b020972e436746c854389737f665d168a5a19159b54f1754f23cfd93a.
Feb 13 19:50:31.943446 containerd[2011]: time="2025-02-13T19:50:31.943025991Z" level=info msg="StartContainer for \"28ec348b020972e436746c854389737f665d168a5a19159b54f1754f23cfd93a\" returns successfully"
Feb 13 19:50:31.966085 systemd[1]: cri-containerd-28ec348b020972e436746c854389737f665d168a5a19159b54f1754f23cfd93a.scope: Deactivated successfully.
Feb 13 19:50:32.304989 containerd[2011]: time="2025-02-13T19:50:32.304457305Z" level=info msg="shim disconnected" id=28ec348b020972e436746c854389737f665d168a5a19159b54f1754f23cfd93a namespace=k8s.io
Feb 13 19:50:32.304989 containerd[2011]: time="2025-02-13T19:50:32.304560121Z" level=warning msg="cleaning up after shim disconnected" id=28ec348b020972e436746c854389737f665d168a5a19159b54f1754f23cfd93a namespace=k8s.io
Feb 13 19:50:32.304989 containerd[2011]: time="2025-02-13T19:50:32.304582909Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:50:32.575954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28ec348b020972e436746c854389737f665d168a5a19159b54f1754f23cfd93a-rootfs.mount: Deactivated successfully.
Feb 13 19:50:32.678274 kubelet[2468]: E0213 19:50:32.678185 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:32.894697 kubelet[2468]: E0213 19:50:32.893796 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wdmb" podUID="7291b7b5-988a-4b1a-bff8-f08c641e7de1"
Feb 13 19:50:32.951086 containerd[2011]: time="2025-02-13T19:50:32.951021280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 19:50:32.975863 kubelet[2468]: I0213 19:50:32.975742 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xnqbj" podStartSLOduration=5.879864949 podStartE2EDuration="7.975718288s" podCreationTimestamp="2025-02-13 19:50:25 +0000 UTC" firstStartedPulling="2025-02-13 19:50:28.23285312 +0000 UTC m=+6.030576271" lastFinishedPulling="2025-02-13 19:50:30.328706459 +0000 UTC m=+8.126429610" observedRunningTime="2025-02-13 19:50:30.956279546 +0000 UTC m=+8.754002733" watchObservedRunningTime="2025-02-13 19:50:32.975718288 +0000 UTC m=+10.773441451"
Feb 13 19:50:33.678450 kubelet[2468]: E0213 19:50:33.678357 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:34.679451 kubelet[2468]: E0213 19:50:34.679343 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:34.897444 kubelet[2468]: E0213 19:50:34.895942 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wdmb" podUID="7291b7b5-988a-4b1a-bff8-f08c641e7de1"
Feb 13 19:50:35.679973 kubelet[2468]: E0213 19:50:35.679846 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:36.675682 containerd[2011]: time="2025-02-13T19:50:36.675584346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:36.677435 containerd[2011]: time="2025-02-13T19:50:36.677323086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Feb 13 19:50:36.678751 containerd[2011]: time="2025-02-13T19:50:36.678674262Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:36.681035 kubelet[2468]: E0213 19:50:36.680925 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:36.683687 containerd[2011]: time="2025-02-13T19:50:36.683567526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:36.685776 containerd[2011]: time="2025-02-13T19:50:36.685537086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.734449446s"
Feb 13 19:50:36.685776 containerd[2011]: time="2025-02-13T19:50:36.685611318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Feb 13 19:50:36.690065 containerd[2011]: time="2025-02-13T19:50:36.689989254Z" level=info msg="CreateContainer within sandbox \"27aed09550a2f8c014f3c0873fe07348e187816ca6b1846e5411bf478b99e6dc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 19:50:36.709878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1040298791.mount: Deactivated successfully.
Feb 13 19:50:36.715239 containerd[2011]: time="2025-02-13T19:50:36.715044727Z" level=info msg="CreateContainer within sandbox \"27aed09550a2f8c014f3c0873fe07348e187816ca6b1846e5411bf478b99e6dc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f0a38994cee358a16c3924494833c778af46f0f9d14a9cc8f063a1148b133839\""
Feb 13 19:50:36.718476 containerd[2011]: time="2025-02-13T19:50:36.716277403Z" level=info msg="StartContainer for \"f0a38994cee358a16c3924494833c778af46f0f9d14a9cc8f063a1148b133839\""
Feb 13 19:50:36.776901 systemd[1]: Started cri-containerd-f0a38994cee358a16c3924494833c778af46f0f9d14a9cc8f063a1148b133839.scope - libcontainer container f0a38994cee358a16c3924494833c778af46f0f9d14a9cc8f063a1148b133839.
Feb 13 19:50:36.835372 containerd[2011]: time="2025-02-13T19:50:36.835312099Z" level=info msg="StartContainer for \"f0a38994cee358a16c3924494833c778af46f0f9d14a9cc8f063a1148b133839\" returns successfully"
Feb 13 19:50:36.893615 kubelet[2468]: E0213 19:50:36.893559 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wdmb" podUID="7291b7b5-988a-4b1a-bff8-f08c641e7de1"
Feb 13 19:50:37.682109 kubelet[2468]: E0213 19:50:37.682031 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:37.830953 containerd[2011]: time="2025-02-13T19:50:37.830873096Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:50:37.835255 systemd[1]: cri-containerd-f0a38994cee358a16c3924494833c778af46f0f9d14a9cc8f063a1148b133839.scope: Deactivated successfully.
Feb 13 19:50:37.876101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0a38994cee358a16c3924494833c778af46f0f9d14a9cc8f063a1148b133839-rootfs.mount: Deactivated successfully.
Feb 13 19:50:37.909527 kubelet[2468]: I0213 19:50:37.909156 2468 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Feb 13 19:50:38.675970 containerd[2011]: time="2025-02-13T19:50:38.675822344Z" level=info msg="shim disconnected" id=f0a38994cee358a16c3924494833c778af46f0f9d14a9cc8f063a1148b133839 namespace=k8s.io
Feb 13 19:50:38.675970 containerd[2011]: time="2025-02-13T19:50:38.675901352Z" level=warning msg="cleaning up after shim disconnected" id=f0a38994cee358a16c3924494833c778af46f0f9d14a9cc8f063a1148b133839 namespace=k8s.io
Feb 13 19:50:38.675970 containerd[2011]: time="2025-02-13T19:50:38.675922556Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:50:38.682828 kubelet[2468]: E0213 19:50:38.682775 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:38.906065 systemd[1]: Created slice kubepods-besteffort-pod7291b7b5_988a_4b1a_bff8_f08c641e7de1.slice - libcontainer container kubepods-besteffort-pod7291b7b5_988a_4b1a_bff8_f08c641e7de1.slice.
Feb 13 19:50:38.911003 containerd[2011]: time="2025-02-13T19:50:38.910908081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wdmb,Uid:7291b7b5-988a-4b1a-bff8-f08c641e7de1,Namespace:calico-system,Attempt:0,}"
Feb 13 19:50:38.979528 containerd[2011]: time="2025-02-13T19:50:38.979332718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Feb 13 19:50:39.047778 containerd[2011]: time="2025-02-13T19:50:39.047378166Z" level=error msg="Failed to destroy network for sandbox \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:39.050364 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4-shm.mount: Deactivated successfully.
Feb 13 19:50:39.051351 containerd[2011]: time="2025-02-13T19:50:39.051225306Z" level=error msg="encountered an error cleaning up failed sandbox \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:39.051628 containerd[2011]: time="2025-02-13T19:50:39.051468162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wdmb,Uid:7291b7b5-988a-4b1a-bff8-f08c641e7de1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:39.052005 kubelet[2468]: E0213 19:50:39.051928 2468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:39.052864 kubelet[2468]: E0213 19:50:39.052040 2468 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7wdmb"
Feb 13 19:50:39.052864 kubelet[2468]: E0213 19:50:39.052078 2468 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7wdmb"
Feb 13 19:50:39.052864 kubelet[2468]: E0213 19:50:39.052165 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7wdmb_calico-system(7291b7b5-988a-4b1a-bff8-f08c641e7de1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7wdmb_calico-system(7291b7b5-988a-4b1a-bff8-f08c641e7de1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7wdmb" podUID="7291b7b5-988a-4b1a-bff8-f08c641e7de1" Feb 13 19:50:39.684689 kubelet[2468]: E0213 19:50:39.684618 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:39.980251 kubelet[2468]: I0213 19:50:39.980113 2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" Feb 13 19:50:39.982132 containerd[2011]: time="2025-02-13T19:50:39.982033955Z" level=info msg="StopPodSandbox for \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\"" Feb 13 19:50:39.982875 containerd[2011]: time="2025-02-13T19:50:39.982311899Z" level=info msg="Ensure that sandbox 7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4 in task-service has been cleanup successfully" Feb 13 19:50:40.032595 containerd[2011]: time="2025-02-13T19:50:40.032483059Z" level=error msg="StopPodSandbox for \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\" failed" error="failed to destroy network for sandbox \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:40.033292 kubelet[2468]: E0213 19:50:40.032819 2468 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" Feb 13 
19:50:40.033292 kubelet[2468]: E0213 19:50:40.032903 2468 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4"} Feb 13 19:50:40.033292 kubelet[2468]: E0213 19:50:40.032995 2468 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7291b7b5-988a-4b1a-bff8-f08c641e7de1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:50:40.033292 kubelet[2468]: E0213 19:50:40.033038 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7291b7b5-988a-4b1a-bff8-f08c641e7de1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7wdmb" podUID="7291b7b5-988a-4b1a-bff8-f08c641e7de1" Feb 13 19:50:40.685316 kubelet[2468]: E0213 19:50:40.685166 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:41.686174 kubelet[2468]: E0213 19:50:41.686071 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:42.686633 kubelet[2468]: E0213 19:50:42.686435 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:43.547014 
systemd[1]: Created slice kubepods-besteffort-pod829d8796_b4ee_4162_b0e9_a58e535f19b4.slice - libcontainer container kubepods-besteffort-pod829d8796_b4ee_4162_b0e9_a58e535f19b4.slice. Feb 13 19:50:43.651204 kubelet[2468]: I0213 19:50:43.651020 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtm9p\" (UniqueName: \"kubernetes.io/projected/829d8796-b4ee-4162-b0e9-a58e535f19b4-kube-api-access-dtm9p\") pod \"nginx-deployment-8587fbcb89-9gxc6\" (UID: \"829d8796-b4ee-4162-b0e9-a58e535f19b4\") " pod="default/nginx-deployment-8587fbcb89-9gxc6" Feb 13 19:50:43.687734 kubelet[2468]: E0213 19:50:43.687672 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:43.859545 containerd[2011]: time="2025-02-13T19:50:43.859334162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9gxc6,Uid:829d8796-b4ee-4162-b0e9-a58e535f19b4,Namespace:default,Attempt:0,}" Feb 13 19:50:44.102050 containerd[2011]: time="2025-02-13T19:50:44.101269859Z" level=error msg="Failed to destroy network for sandbox \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:44.105675 containerd[2011]: time="2025-02-13T19:50:44.105502895Z" level=error msg="encountered an error cleaning up failed sandbox \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:44.105839 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00-shm.mount: Deactivated successfully. Feb 13 19:50:44.109120 containerd[2011]: time="2025-02-13T19:50:44.105633047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9gxc6,Uid:829d8796-b4ee-4162-b0e9-a58e535f19b4,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:44.109699 kubelet[2468]: E0213 19:50:44.109496 2468 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:44.109699 kubelet[2468]: E0213 19:50:44.109670 2468 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9gxc6" Feb 13 19:50:44.109933 kubelet[2468]: E0213 19:50:44.109729 2468 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9gxc6" Feb 13 19:50:44.110698 kubelet[2468]: E0213 19:50:44.110191 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-9gxc6_default(829d8796-b4ee-4162-b0e9-a58e535f19b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-9gxc6_default(829d8796-b4ee-4162-b0e9-a58e535f19b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-9gxc6" podUID="829d8796-b4ee-4162-b0e9-a58e535f19b4" Feb 13 19:50:44.669022 kubelet[2468]: E0213 19:50:44.668975 2468 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:44.687999 kubelet[2468]: E0213 19:50:44.687860 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:44.997776 kubelet[2468]: I0213 19:50:44.997631 2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" Feb 13 19:50:44.999453 containerd[2011]: time="2025-02-13T19:50:44.998899264Z" level=info msg="StopPodSandbox for \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\"" Feb 13 19:50:44.999453 containerd[2011]: time="2025-02-13T19:50:44.999196492Z" level=info msg="Ensure that sandbox 7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00 in task-service has been cleanup successfully" Feb 13 19:50:45.061177 containerd[2011]: 
time="2025-02-13T19:50:45.060985524Z" level=error msg="StopPodSandbox for \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\" failed" error="failed to destroy network for sandbox \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:45.061537 kubelet[2468]: E0213 19:50:45.061468 2468 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" Feb 13 19:50:45.061665 kubelet[2468]: E0213 19:50:45.061549 2468 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00"} Feb 13 19:50:45.061665 kubelet[2468]: E0213 19:50:45.061607 2468 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"829d8796-b4ee-4162-b0e9-a58e535f19b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:50:45.061858 kubelet[2468]: E0213 19:50:45.061653 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"829d8796-b4ee-4162-b0e9-a58e535f19b4\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-9gxc6" podUID="829d8796-b4ee-4162-b0e9-a58e535f19b4" Feb 13 19:50:45.415510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2701926033.mount: Deactivated successfully. Feb 13 19:50:45.489453 containerd[2011]: time="2025-02-13T19:50:45.489211178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:45.491307 containerd[2011]: time="2025-02-13T19:50:45.491221250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 19:50:45.493763 containerd[2011]: time="2025-02-13T19:50:45.493683050Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:45.498306 containerd[2011]: time="2025-02-13T19:50:45.498221438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:45.500295 containerd[2011]: time="2025-02-13T19:50:45.500000114Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.520573736s" Feb 13 19:50:45.500295 containerd[2011]: 
time="2025-02-13T19:50:45.500088554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 19:50:45.525453 containerd[2011]: time="2025-02-13T19:50:45.524283794Z" level=info msg="CreateContainer within sandbox \"27aed09550a2f8c014f3c0873fe07348e187816ca6b1846e5411bf478b99e6dc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:50:45.561564 containerd[2011]: time="2025-02-13T19:50:45.561304911Z" level=info msg="CreateContainer within sandbox \"27aed09550a2f8c014f3c0873fe07348e187816ca6b1846e5411bf478b99e6dc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a3d48d52052df0a696b248397de8b783cc1c70d7fee82993db548a9ddc3019c4\"" Feb 13 19:50:45.564785 containerd[2011]: time="2025-02-13T19:50:45.562703223Z" level=info msg="StartContainer for \"a3d48d52052df0a696b248397de8b783cc1c70d7fee82993db548a9ddc3019c4\"" Feb 13 19:50:45.565070 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 19:50:45.619717 systemd[1]: Started cri-containerd-a3d48d52052df0a696b248397de8b783cc1c70d7fee82993db548a9ddc3019c4.scope - libcontainer container a3d48d52052df0a696b248397de8b783cc1c70d7fee82993db548a9ddc3019c4. Feb 13 19:50:45.685009 containerd[2011]: time="2025-02-13T19:50:45.683526003Z" level=info msg="StartContainer for \"a3d48d52052df0a696b248397de8b783cc1c70d7fee82993db548a9ddc3019c4\" returns successfully" Feb 13 19:50:45.688347 kubelet[2468]: E0213 19:50:45.688280 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:45.795579 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:50:45.795954 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 13 19:50:46.688841 kubelet[2468]: E0213 19:50:46.688752 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:47.689932 kubelet[2468]: E0213 19:50:47.689862 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:47.714509 kernel: bpftool[3299]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:50:48.003847 systemd-networkd[1842]: vxlan.calico: Link UP Feb 13 19:50:48.003872 systemd-networkd[1842]: vxlan.calico: Gained carrier Feb 13 19:50:48.007319 (udev-worker)[3109]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:50:48.048577 (udev-worker)[3324]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:50:48.690325 kubelet[2468]: E0213 19:50:48.690266 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:49.566343 systemd-networkd[1842]: vxlan.calico: Gained IPv6LL Feb 13 19:50:49.691425 kubelet[2468]: E0213 19:50:49.691309 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:50.691742 kubelet[2468]: E0213 19:50:50.691667 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:51.600577 ntpd[1990]: Listen normally on 7 vxlan.calico 192.168.59.0:123 Feb 13 19:50:51.600713 ntpd[1990]: Listen normally on 8 vxlan.calico [fe80::6462:f8ff:fe30:7e9a%3]:123 Feb 13 19:50:51.601151 ntpd[1990]: 13 Feb 19:50:51 ntpd[1990]: Listen normally on 7 vxlan.calico 192.168.59.0:123 Feb 13 19:50:51.601151 ntpd[1990]: 13 Feb 19:50:51 ntpd[1990]: Listen normally on 8 vxlan.calico [fe80::6462:f8ff:fe30:7e9a%3]:123 Feb 13 19:50:51.691855 kubelet[2468]: E0213 19:50:51.691793 2468 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:52.692807 kubelet[2468]: E0213 19:50:52.692737 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:53.693283 kubelet[2468]: E0213 19:50:53.693196 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:54.693986 kubelet[2468]: E0213 19:50:54.693904 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:54.894820 containerd[2011]: time="2025-02-13T19:50:54.894751177Z" level=info msg="StopPodSandbox for \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\"" Feb 13 19:50:54.995443 kubelet[2468]: I0213 19:50:54.994435 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dh7bc" podStartSLOduration=13.738330344 podStartE2EDuration="30.994358401s" podCreationTimestamp="2025-02-13 19:50:24 +0000 UTC" firstStartedPulling="2025-02-13 19:50:28.246672645 +0000 UTC m=+6.044395808" lastFinishedPulling="2025-02-13 19:50:45.502700714 +0000 UTC m=+23.300423865" observedRunningTime="2025-02-13 19:50:46.053284537 +0000 UTC m=+23.851007700" watchObservedRunningTime="2025-02-13 19:50:54.994358401 +0000 UTC m=+32.792081576" Feb 13 19:50:55.067737 containerd[2011]: 2025-02-13 19:50:54.993 [INFO][3392] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" Feb 13 19:50:55.067737 containerd[2011]: 2025-02-13 19:50:54.994 [INFO][3392] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" iface="eth0" netns="/var/run/netns/cni-c30e5a38-ab19-4e1b-7e7c-3f80d82e7b17" Feb 13 19:50:55.067737 containerd[2011]: 2025-02-13 19:50:54.994 [INFO][3392] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" iface="eth0" netns="/var/run/netns/cni-c30e5a38-ab19-4e1b-7e7c-3f80d82e7b17" Feb 13 19:50:55.067737 containerd[2011]: 2025-02-13 19:50:54.995 [INFO][3392] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" iface="eth0" netns="/var/run/netns/cni-c30e5a38-ab19-4e1b-7e7c-3f80d82e7b17" Feb 13 19:50:55.067737 containerd[2011]: 2025-02-13 19:50:54.995 [INFO][3392] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" Feb 13 19:50:55.067737 containerd[2011]: 2025-02-13 19:50:54.995 [INFO][3392] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" Feb 13 19:50:55.067737 containerd[2011]: 2025-02-13 19:50:55.044 [INFO][3398] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" HandleID="k8s-pod-network.7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" Workload="172.31.30.175-k8s-csi--node--driver--7wdmb-eth0" Feb 13 19:50:55.067737 containerd[2011]: 2025-02-13 19:50:55.044 [INFO][3398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:55.067737 containerd[2011]: 2025-02-13 19:50:55.044 [INFO][3398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:55.067737 containerd[2011]: 2025-02-13 19:50:55.057 [WARNING][3398] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" HandleID="k8s-pod-network.7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" Workload="172.31.30.175-k8s-csi--node--driver--7wdmb-eth0" Feb 13 19:50:55.067737 containerd[2011]: 2025-02-13 19:50:55.057 [INFO][3398] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" HandleID="k8s-pod-network.7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" Workload="172.31.30.175-k8s-csi--node--driver--7wdmb-eth0" Feb 13 19:50:55.067737 containerd[2011]: 2025-02-13 19:50:55.059 [INFO][3398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:55.067737 containerd[2011]: 2025-02-13 19:50:55.064 [INFO][3392] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" Feb 13 19:50:55.069275 containerd[2011]: time="2025-02-13T19:50:55.068165806Z" level=info msg="TearDown network for sandbox \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\" successfully" Feb 13 19:50:55.069275 containerd[2011]: time="2025-02-13T19:50:55.068225638Z" level=info msg="StopPodSandbox for \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\" returns successfully" Feb 13 19:50:55.073043 containerd[2011]: time="2025-02-13T19:50:55.072335998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wdmb,Uid:7291b7b5-988a-4b1a-bff8-f08c641e7de1,Namespace:calico-system,Attempt:1,}" Feb 13 19:50:55.073828 systemd[1]: run-netns-cni\x2dc30e5a38\x2dab19\x2d4e1b\x2d7e7c\x2d3f80d82e7b17.mount: Deactivated successfully. Feb 13 19:50:55.296614 systemd-networkd[1842]: calidba7f56a17e: Link UP Feb 13 19:50:55.297053 systemd-networkd[1842]: calidba7f56a17e: Gained carrier Feb 13 19:50:55.299820 (udev-worker)[3424]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:50:55.325602 containerd[2011]: 2025-02-13 19:50:55.171 [INFO][3405] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.30.175-k8s-csi--node--driver--7wdmb-eth0 csi-node-driver- calico-system 7291b7b5-988a-4b1a-bff8-f08c641e7de1 1048 0 2025-02-13 19:50:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.30.175 csi-node-driver-7wdmb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidba7f56a17e [] []}} ContainerID="1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f" Namespace="calico-system" Pod="csi-node-driver-7wdmb" WorkloadEndpoint="172.31.30.175-k8s-csi--node--driver--7wdmb-" Feb 13 19:50:55.325602 containerd[2011]: 2025-02-13 19:50:55.171 [INFO][3405] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f" Namespace="calico-system" Pod="csi-node-driver-7wdmb" WorkloadEndpoint="172.31.30.175-k8s-csi--node--driver--7wdmb-eth0" Feb 13 19:50:55.325602 containerd[2011]: 2025-02-13 19:50:55.222 [INFO][3416] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f" HandleID="k8s-pod-network.1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f" Workload="172.31.30.175-k8s-csi--node--driver--7wdmb-eth0" Feb 13 19:50:55.325602 containerd[2011]: 2025-02-13 19:50:55.241 [INFO][3416] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f" HandleID="k8s-pod-network.1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f" 
Workload="172.31.30.175-k8s-csi--node--driver--7wdmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028c7c0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.30.175", "pod":"csi-node-driver-7wdmb", "timestamp":"2025-02-13 19:50:55.221974978 +0000 UTC"}, Hostname:"172.31.30.175", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:50:55.325602 containerd[2011]: 2025-02-13 19:50:55.241 [INFO][3416] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:55.325602 containerd[2011]: 2025-02-13 19:50:55.241 [INFO][3416] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:55.325602 containerd[2011]: 2025-02-13 19:50:55.242 [INFO][3416] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.30.175' Feb 13 19:50:55.325602 containerd[2011]: 2025-02-13 19:50:55.245 [INFO][3416] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f" host="172.31.30.175" Feb 13 19:50:55.325602 containerd[2011]: 2025-02-13 19:50:55.251 [INFO][3416] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.30.175" Feb 13 19:50:55.325602 containerd[2011]: 2025-02-13 19:50:55.258 [INFO][3416] ipam/ipam.go 489: Trying affinity for 192.168.59.0/26 host="172.31.30.175" Feb 13 19:50:55.325602 containerd[2011]: 2025-02-13 19:50:55.261 [INFO][3416] ipam/ipam.go 155: Attempting to load block cidr=192.168.59.0/26 host="172.31.30.175" Feb 13 19:50:55.325602 containerd[2011]: 2025-02-13 19:50:55.265 [INFO][3416] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="172.31.30.175" Feb 13 19:50:55.325602 containerd[2011]: 2025-02-13 19:50:55.265 [INFO][3416] ipam/ipam.go 1180: Attempting to assign 1 addresses from 
block block=192.168.59.0/26 handle="k8s-pod-network.1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f" host="172.31.30.175" Feb 13 19:50:55.325602 containerd[2011]: 2025-02-13 19:50:55.267 [INFO][3416] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f Feb 13 19:50:55.325602 containerd[2011]: 2025-02-13 19:50:55.275 [INFO][3416] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f" host="172.31.30.175" Feb 13 19:50:55.325602 containerd[2011]: 2025-02-13 19:50:55.286 [INFO][3416] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.59.1/26] block=192.168.59.0/26 handle="k8s-pod-network.1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f" host="172.31.30.175" Feb 13 19:50:55.325602 containerd[2011]: 2025-02-13 19:50:55.286 [INFO][3416] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.59.1/26] handle="k8s-pod-network.1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f" host="172.31.30.175" Feb 13 19:50:55.325602 containerd[2011]: 2025-02-13 19:50:55.286 [INFO][3416] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:50:55.325602 containerd[2011]: 2025-02-13 19:50:55.286 [INFO][3416] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.1/26] IPv6=[] ContainerID="1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f" HandleID="k8s-pod-network.1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f" Workload="172.31.30.175-k8s-csi--node--driver--7wdmb-eth0" Feb 13 19:50:55.330423 containerd[2011]: 2025-02-13 19:50:55.289 [INFO][3405] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f" Namespace="calico-system" Pod="csi-node-driver-7wdmb" WorkloadEndpoint="172.31.30.175-k8s-csi--node--driver--7wdmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.175-k8s-csi--node--driver--7wdmb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7291b7b5-988a-4b1a-bff8-f08c641e7de1", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.175", ContainerID:"", Pod:"csi-node-driver-7wdmb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidba7f56a17e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:55.330423 containerd[2011]: 2025-02-13 19:50:55.289 [INFO][3405] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.59.1/32] ContainerID="1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f" Namespace="calico-system" Pod="csi-node-driver-7wdmb" WorkloadEndpoint="172.31.30.175-k8s-csi--node--driver--7wdmb-eth0" Feb 13 19:50:55.330423 containerd[2011]: 2025-02-13 19:50:55.289 [INFO][3405] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidba7f56a17e ContainerID="1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f" Namespace="calico-system" Pod="csi-node-driver-7wdmb" WorkloadEndpoint="172.31.30.175-k8s-csi--node--driver--7wdmb-eth0" Feb 13 19:50:55.330423 containerd[2011]: 2025-02-13 19:50:55.295 [INFO][3405] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f" Namespace="calico-system" Pod="csi-node-driver-7wdmb" WorkloadEndpoint="172.31.30.175-k8s-csi--node--driver--7wdmb-eth0" Feb 13 19:50:55.330423 containerd[2011]: 2025-02-13 19:50:55.300 [INFO][3405] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f" Namespace="calico-system" Pod="csi-node-driver-7wdmb" WorkloadEndpoint="172.31.30.175-k8s-csi--node--driver--7wdmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.175-k8s-csi--node--driver--7wdmb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7291b7b5-988a-4b1a-bff8-f08c641e7de1", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, 
time.February, 13, 19, 50, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.175", ContainerID:"1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f", Pod:"csi-node-driver-7wdmb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidba7f56a17e", MAC:"a2:42:7e:dc:57:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:55.330423 containerd[2011]: 2025-02-13 19:50:55.318 [INFO][3405] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f" Namespace="calico-system" Pod="csi-node-driver-7wdmb" WorkloadEndpoint="172.31.30.175-k8s-csi--node--driver--7wdmb-eth0" Feb 13 19:50:55.371885 containerd[2011]: time="2025-02-13T19:50:55.371597219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:55.372767 containerd[2011]: time="2025-02-13T19:50:55.372656003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:55.372874 containerd[2011]: time="2025-02-13T19:50:55.372814199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:55.373188 containerd[2011]: time="2025-02-13T19:50:55.373090055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:55.421813 systemd[1]: Started cri-containerd-1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f.scope - libcontainer container 1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f. Feb 13 19:50:55.464724 containerd[2011]: time="2025-02-13T19:50:55.464669232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wdmb,Uid:7291b7b5-988a-4b1a-bff8-f08c641e7de1,Namespace:calico-system,Attempt:1,} returns sandbox id \"1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f\"" Feb 13 19:50:55.470759 containerd[2011]: time="2025-02-13T19:50:55.470064108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:50:55.694916 kubelet[2468]: E0213 19:50:55.694833 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:56.695820 kubelet[2468]: E0213 19:50:56.695739 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:56.890476 containerd[2011]: time="2025-02-13T19:50:56.889887495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:56.892280 containerd[2011]: time="2025-02-13T19:50:56.892205739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 19:50:56.895299 containerd[2011]: 
time="2025-02-13T19:50:56.895166091Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:56.900203 containerd[2011]: time="2025-02-13T19:50:56.899639667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:56.901361 containerd[2011]: time="2025-02-13T19:50:56.901150827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.430662447s" Feb 13 19:50:56.901361 containerd[2011]: time="2025-02-13T19:50:56.901214295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 19:50:56.905006 containerd[2011]: time="2025-02-13T19:50:56.904807971Z" level=info msg="CreateContainer within sandbox \"1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:50:56.932924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount705547554.mount: Deactivated successfully. 
Feb 13 19:50:56.939920 containerd[2011]: time="2025-02-13T19:50:56.939819291Z" level=info msg="CreateContainer within sandbox \"1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b20e495610f459e639361b514f571f8049362dcd4df79ebaff378d42e6a482fa\"" Feb 13 19:50:56.941287 containerd[2011]: time="2025-02-13T19:50:56.941190495Z" level=info msg="StartContainer for \"b20e495610f459e639361b514f571f8049362dcd4df79ebaff378d42e6a482fa\"" Feb 13 19:50:57.002750 systemd[1]: Started cri-containerd-b20e495610f459e639361b514f571f8049362dcd4df79ebaff378d42e6a482fa.scope - libcontainer container b20e495610f459e639361b514f571f8049362dcd4df79ebaff378d42e6a482fa. Feb 13 19:50:57.069237 containerd[2011]: time="2025-02-13T19:50:57.069124224Z" level=info msg="StartContainer for \"b20e495610f459e639361b514f571f8049362dcd4df79ebaff378d42e6a482fa\" returns successfully" Feb 13 19:50:57.072972 containerd[2011]: time="2025-02-13T19:50:57.072897708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:50:57.309742 systemd-networkd[1842]: calidba7f56a17e: Gained IPv6LL Feb 13 19:50:57.696029 kubelet[2468]: E0213 19:50:57.695938 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:57.895029 containerd[2011]: time="2025-02-13T19:50:57.894958348Z" level=info msg="StopPodSandbox for \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\"" Feb 13 19:50:58.050473 containerd[2011]: 2025-02-13 19:50:57.983 [INFO][3531] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" Feb 13 19:50:58.050473 containerd[2011]: 2025-02-13 19:50:57.983 [INFO][3531] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" iface="eth0" netns="/var/run/netns/cni-9418aaf6-1dc3-0abd-d6a6-1ff7d3c65dbb" Feb 13 19:50:58.050473 containerd[2011]: 2025-02-13 19:50:57.983 [INFO][3531] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" iface="eth0" netns="/var/run/netns/cni-9418aaf6-1dc3-0abd-d6a6-1ff7d3c65dbb" Feb 13 19:50:58.050473 containerd[2011]: 2025-02-13 19:50:57.984 [INFO][3531] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" iface="eth0" netns="/var/run/netns/cni-9418aaf6-1dc3-0abd-d6a6-1ff7d3c65dbb" Feb 13 19:50:58.050473 containerd[2011]: 2025-02-13 19:50:57.984 [INFO][3531] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" Feb 13 19:50:58.050473 containerd[2011]: 2025-02-13 19:50:57.984 [INFO][3531] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" Feb 13 19:50:58.050473 containerd[2011]: 2025-02-13 19:50:58.025 [INFO][3537] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" HandleID="k8s-pod-network.7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" Workload="172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0" Feb 13 19:50:58.050473 containerd[2011]: 2025-02-13 19:50:58.026 [INFO][3537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:58.050473 containerd[2011]: 2025-02-13 19:50:58.026 [INFO][3537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:58.050473 containerd[2011]: 2025-02-13 19:50:58.039 [WARNING][3537] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" HandleID="k8s-pod-network.7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" Workload="172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0" Feb 13 19:50:58.050473 containerd[2011]: 2025-02-13 19:50:58.039 [INFO][3537] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" HandleID="k8s-pod-network.7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" Workload="172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0" Feb 13 19:50:58.050473 containerd[2011]: 2025-02-13 19:50:58.041 [INFO][3537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:58.050473 containerd[2011]: 2025-02-13 19:50:58.044 [INFO][3531] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" Feb 13 19:50:58.051682 containerd[2011]: time="2025-02-13T19:50:58.051642337Z" level=info msg="TearDown network for sandbox \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\" successfully" Feb 13 19:50:58.051857 containerd[2011]: time="2025-02-13T19:50:58.051686029Z" level=info msg="StopPodSandbox for \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\" returns successfully" Feb 13 19:50:58.054112 systemd[1]: run-netns-cni\x2d9418aaf6\x2d1dc3\x2d0abd\x2dd6a6\x2d1ff7d3c65dbb.mount: Deactivated successfully. 
Feb 13 19:50:58.055340 containerd[2011]: time="2025-02-13T19:50:58.054187537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9gxc6,Uid:829d8796-b4ee-4162-b0e9-a58e535f19b4,Namespace:default,Attempt:1,}" Feb 13 19:50:58.346101 systemd-networkd[1842]: cali302b67e54db: Link UP Feb 13 19:50:58.348027 systemd-networkd[1842]: cali302b67e54db: Gained carrier Feb 13 19:50:58.351729 (udev-worker)[3561]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:50:58.381164 containerd[2011]: 2025-02-13 19:50:58.169 [INFO][3544] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0 nginx-deployment-8587fbcb89- default 829d8796-b4ee-4162-b0e9-a58e535f19b4 1065 0 2025-02-13 19:50:43 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.30.175 nginx-deployment-8587fbcb89-9gxc6 eth0 default [] [] [kns.default ksa.default.default] cali302b67e54db [] []}} ContainerID="7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4" Namespace="default" Pod="nginx-deployment-8587fbcb89-9gxc6" WorkloadEndpoint="172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-" Feb 13 19:50:58.381164 containerd[2011]: 2025-02-13 19:50:58.170 [INFO][3544] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4" Namespace="default" Pod="nginx-deployment-8587fbcb89-9gxc6" WorkloadEndpoint="172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0" Feb 13 19:50:58.381164 containerd[2011]: 2025-02-13 19:50:58.222 [INFO][3554] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4" 
HandleID="k8s-pod-network.7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4" Workload="172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0" Feb 13 19:50:58.381164 containerd[2011]: 2025-02-13 19:50:58.241 [INFO][3554] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4" HandleID="k8s-pod-network.7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4" Workload="172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001f8dc0), Attrs:map[string]string{"namespace":"default", "node":"172.31.30.175", "pod":"nginx-deployment-8587fbcb89-9gxc6", "timestamp":"2025-02-13 19:50:58.222239353 +0000 UTC"}, Hostname:"172.31.30.175", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:50:58.381164 containerd[2011]: 2025-02-13 19:50:58.241 [INFO][3554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:58.381164 containerd[2011]: 2025-02-13 19:50:58.241 [INFO][3554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:50:58.381164 containerd[2011]: 2025-02-13 19:50:58.241 [INFO][3554] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.30.175' Feb 13 19:50:58.381164 containerd[2011]: 2025-02-13 19:50:58.246 [INFO][3554] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4" host="172.31.30.175" Feb 13 19:50:58.381164 containerd[2011]: 2025-02-13 19:50:58.255 [INFO][3554] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.30.175" Feb 13 19:50:58.381164 containerd[2011]: 2025-02-13 19:50:58.271 [INFO][3554] ipam/ipam.go 489: Trying affinity for 192.168.59.0/26 host="172.31.30.175" Feb 13 19:50:58.381164 containerd[2011]: 2025-02-13 19:50:58.278 [INFO][3554] ipam/ipam.go 155: Attempting to load block cidr=192.168.59.0/26 host="172.31.30.175" Feb 13 19:50:58.381164 containerd[2011]: 2025-02-13 19:50:58.287 [INFO][3554] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="172.31.30.175" Feb 13 19:50:58.381164 containerd[2011]: 2025-02-13 19:50:58.287 [INFO][3554] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4" host="172.31.30.175" Feb 13 19:50:58.381164 containerd[2011]: 2025-02-13 19:50:58.292 [INFO][3554] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4 Feb 13 19:50:58.381164 containerd[2011]: 2025-02-13 19:50:58.307 [INFO][3554] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4" host="172.31.30.175" Feb 13 19:50:58.381164 containerd[2011]: 2025-02-13 19:50:58.332 [INFO][3554] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.59.2/26] block=192.168.59.0/26 
handle="k8s-pod-network.7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4" host="172.31.30.175" Feb 13 19:50:58.381164 containerd[2011]: 2025-02-13 19:50:58.332 [INFO][3554] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.59.2/26] handle="k8s-pod-network.7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4" host="172.31.30.175" Feb 13 19:50:58.381164 containerd[2011]: 2025-02-13 19:50:58.332 [INFO][3554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:58.381164 containerd[2011]: 2025-02-13 19:50:58.332 [INFO][3554] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.2/26] IPv6=[] ContainerID="7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4" HandleID="k8s-pod-network.7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4" Workload="172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0" Feb 13 19:50:58.382840 containerd[2011]: 2025-02-13 19:50:58.336 [INFO][3544] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4" Namespace="default" Pod="nginx-deployment-8587fbcb89-9gxc6" WorkloadEndpoint="172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"829d8796-b4ee-4162-b0e9-a58e535f19b4", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.175", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-9gxc6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.59.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali302b67e54db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:58.382840 containerd[2011]: 2025-02-13 19:50:58.336 [INFO][3544] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.59.2/32] ContainerID="7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4" Namespace="default" Pod="nginx-deployment-8587fbcb89-9gxc6" WorkloadEndpoint="172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0" Feb 13 19:50:58.382840 containerd[2011]: 2025-02-13 19:50:58.336 [INFO][3544] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali302b67e54db ContainerID="7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4" Namespace="default" Pod="nginx-deployment-8587fbcb89-9gxc6" WorkloadEndpoint="172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0" Feb 13 19:50:58.382840 containerd[2011]: 2025-02-13 19:50:58.347 [INFO][3544] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4" Namespace="default" Pod="nginx-deployment-8587fbcb89-9gxc6" WorkloadEndpoint="172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0" Feb 13 19:50:58.382840 containerd[2011]: 2025-02-13 19:50:58.348 [INFO][3544] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4" Namespace="default" Pod="nginx-deployment-8587fbcb89-9gxc6" 
WorkloadEndpoint="172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"829d8796-b4ee-4162-b0e9-a58e535f19b4", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.175", ContainerID:"7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4", Pod:"nginx-deployment-8587fbcb89-9gxc6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.59.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali302b67e54db", MAC:"b6:94:1e:51:c7:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:58.382840 containerd[2011]: 2025-02-13 19:50:58.374 [INFO][3544] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4" Namespace="default" Pod="nginx-deployment-8587fbcb89-9gxc6" WorkloadEndpoint="172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0" Feb 13 19:50:58.478941 containerd[2011]: time="2025-02-13T19:50:58.477286551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:58.478941 containerd[2011]: time="2025-02-13T19:50:58.477776895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:58.481091 containerd[2011]: time="2025-02-13T19:50:58.479084859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:58.481091 containerd[2011]: time="2025-02-13T19:50:58.479812779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:58.534856 systemd[1]: Started cri-containerd-7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4.scope - libcontainer container 7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4. Feb 13 19:50:58.632030 containerd[2011]: time="2025-02-13T19:50:58.631332747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9gxc6,Uid:829d8796-b4ee-4162-b0e9-a58e535f19b4,Namespace:default,Attempt:1,} returns sandbox id \"7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4\"" Feb 13 19:50:58.696323 kubelet[2468]: E0213 19:50:58.696259 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:58.781255 containerd[2011]: time="2025-02-13T19:50:58.779549560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:58.784036 containerd[2011]: time="2025-02-13T19:50:58.783966508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 19:50:58.787915 containerd[2011]: time="2025-02-13T19:50:58.787775008Z" level=info msg="ImageCreate event 
name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:58.798340 containerd[2011]: time="2025-02-13T19:50:58.798213400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:58.801116 containerd[2011]: time="2025-02-13T19:50:58.800225164Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.727249s" Feb 13 19:50:58.801116 containerd[2011]: time="2025-02-13T19:50:58.800310088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 19:50:58.803973 containerd[2011]: time="2025-02-13T19:50:58.803649052Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:50:58.805788 containerd[2011]: time="2025-02-13T19:50:58.805497556Z" level=info msg="CreateContainer within sandbox \"1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:50:58.836933 containerd[2011]: time="2025-02-13T19:50:58.836848816Z" level=info msg="CreateContainer within sandbox \"1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e40f0687e33f6d6bbebc4f9d854adcc1d6c8151308703fd5a582d1c7ca20cb19\"" Feb 13 19:50:58.839465 containerd[2011]: 
time="2025-02-13T19:50:58.838174444Z" level=info msg="StartContainer for \"e40f0687e33f6d6bbebc4f9d854adcc1d6c8151308703fd5a582d1c7ca20cb19\"" Feb 13 19:50:58.894744 systemd[1]: Started cri-containerd-e40f0687e33f6d6bbebc4f9d854adcc1d6c8151308703fd5a582d1c7ca20cb19.scope - libcontainer container e40f0687e33f6d6bbebc4f9d854adcc1d6c8151308703fd5a582d1c7ca20cb19. Feb 13 19:50:58.957536 containerd[2011]: time="2025-02-13T19:50:58.957279041Z" level=info msg="StartContainer for \"e40f0687e33f6d6bbebc4f9d854adcc1d6c8151308703fd5a582d1c7ca20cb19\" returns successfully" Feb 13 19:50:59.084857 kubelet[2468]: I0213 19:50:59.084675 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7wdmb" podStartSLOduration=30.750590642 podStartE2EDuration="34.084652922s" podCreationTimestamp="2025-02-13 19:50:25 +0000 UTC" firstStartedPulling="2025-02-13 19:50:55.468212088 +0000 UTC m=+33.265935251" lastFinishedPulling="2025-02-13 19:50:58.802274272 +0000 UTC m=+36.599997531" observedRunningTime="2025-02-13 19:50:59.084578774 +0000 UTC m=+36.882301961" watchObservedRunningTime="2025-02-13 19:50:59.084652922 +0000 UTC m=+36.882376085" Feb 13 19:50:59.697303 kubelet[2468]: E0213 19:50:59.697193 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:59.842179 kubelet[2468]: I0213 19:50:59.841902 2468 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:50:59.842179 kubelet[2468]: I0213 19:50:59.841950 2468 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:50:59.976005 update_engine[1995]: I20250213 19:50:59.975790 1995 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:51:00.081563 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3674) Feb 13 19:51:00.254712 systemd-networkd[1842]: cali302b67e54db: Gained IPv6LL Feb 13 19:51:00.664546 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3679) Feb 13 19:51:00.698225 kubelet[2468]: E0213 19:51:00.698109 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:01.699217 kubelet[2468]: E0213 19:51:01.699153 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:02.600604 ntpd[1990]: Listen normally on 9 calidba7f56a17e [fe80::ecee:eeff:feee:eeee%6]:123 Feb 13 19:51:02.600719 ntpd[1990]: Listen normally on 10 cali302b67e54db [fe80::ecee:eeff:feee:eeee%7]:123 Feb 13 19:51:02.601195 ntpd[1990]: 13 Feb 19:51:02 ntpd[1990]: Listen normally on 9 calidba7f56a17e [fe80::ecee:eeff:feee:eeee%6]:123 Feb 13 19:51:02.601195 ntpd[1990]: 13 Feb 19:51:02 ntpd[1990]: Listen normally on 10 cali302b67e54db [fe80::ecee:eeff:feee:eeee%7]:123 Feb 13 19:51:02.699470 kubelet[2468]: E0213 19:51:02.699312 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:02.797541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3723976.mount: Deactivated successfully. 
Feb 13 19:51:03.700517 kubelet[2468]: E0213 19:51:03.700440 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:04.303842 containerd[2011]: time="2025-02-13T19:51:04.303780992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:04.306825 containerd[2011]: time="2025-02-13T19:51:04.306759728Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086" Feb 13 19:51:04.309067 containerd[2011]: time="2025-02-13T19:51:04.308984456Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:04.321434 containerd[2011]: time="2025-02-13T19:51:04.319839032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:04.323951 containerd[2011]: time="2025-02-13T19:51:04.323881172Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 5.520158236s" Feb 13 19:51:04.324179 containerd[2011]: time="2025-02-13T19:51:04.324141224Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 19:51:04.330035 containerd[2011]: time="2025-02-13T19:51:04.329981048Z" level=info msg="CreateContainer within sandbox \"7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4\" for container 
&ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:51:04.358320 containerd[2011]: time="2025-02-13T19:51:04.358261208Z" level=info msg="CreateContainer within sandbox \"7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"22ed655255fc2c1032e9a2b943fe4fb3946db7ba263500ca99754dfef7de51cf\"" Feb 13 19:51:04.359268 containerd[2011]: time="2025-02-13T19:51:04.359223140Z" level=info msg="StartContainer for \"22ed655255fc2c1032e9a2b943fe4fb3946db7ba263500ca99754dfef7de51cf\"" Feb 13 19:51:04.412716 systemd[1]: Started cri-containerd-22ed655255fc2c1032e9a2b943fe4fb3946db7ba263500ca99754dfef7de51cf.scope - libcontainer container 22ed655255fc2c1032e9a2b943fe4fb3946db7ba263500ca99754dfef7de51cf. Feb 13 19:51:04.458014 containerd[2011]: time="2025-02-13T19:51:04.457906568Z" level=info msg="StartContainer for \"22ed655255fc2c1032e9a2b943fe4fb3946db7ba263500ca99754dfef7de51cf\" returns successfully" Feb 13 19:51:04.669030 kubelet[2468]: E0213 19:51:04.668966 2468 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:04.701447 kubelet[2468]: E0213 19:51:04.701354 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:05.117864 kubelet[2468]: I0213 19:51:05.117767 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-9gxc6" podStartSLOduration=16.430411839 podStartE2EDuration="22.11774738s" podCreationTimestamp="2025-02-13 19:50:43 +0000 UTC" firstStartedPulling="2025-02-13 19:50:58.639265071 +0000 UTC m=+36.436988234" lastFinishedPulling="2025-02-13 19:51:04.326600624 +0000 UTC m=+42.124323775" observedRunningTime="2025-02-13 19:51:05.117488708 +0000 UTC m=+42.915211871" watchObservedRunningTime="2025-02-13 19:51:05.11774738 +0000 UTC m=+42.915470543" Feb 13 19:51:05.701963 kubelet[2468]: 
E0213 19:51:05.701900 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:06.702969 kubelet[2468]: E0213 19:51:06.702905 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:07.703792 kubelet[2468]: E0213 19:51:07.703710 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:08.704071 kubelet[2468]: E0213 19:51:08.704012 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:09.704211 kubelet[2468]: E0213 19:51:09.704148 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:10.688621 systemd[1]: Created slice kubepods-besteffort-pod476563ee_119e_40e3_a6c8_149616e35b5f.slice - libcontainer container kubepods-besteffort-pod476563ee_119e_40e3_a6c8_149616e35b5f.slice. 
Feb 13 19:51:10.705209 kubelet[2468]: E0213 19:51:10.705124 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:10.726958 kubelet[2468]: I0213 19:51:10.726836 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnpq7\" (UniqueName: \"kubernetes.io/projected/476563ee-119e-40e3-a6c8-149616e35b5f-kube-api-access-qnpq7\") pod \"nfs-server-provisioner-0\" (UID: \"476563ee-119e-40e3-a6c8-149616e35b5f\") " pod="default/nfs-server-provisioner-0" Feb 13 19:51:10.726958 kubelet[2468]: I0213 19:51:10.726914 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/476563ee-119e-40e3-a6c8-149616e35b5f-data\") pod \"nfs-server-provisioner-0\" (UID: \"476563ee-119e-40e3-a6c8-149616e35b5f\") " pod="default/nfs-server-provisioner-0" Feb 13 19:51:10.994350 containerd[2011]: time="2025-02-13T19:51:10.993942905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:476563ee-119e-40e3-a6c8-149616e35b5f,Namespace:default,Attempt:0,}" Feb 13 19:51:11.238163 systemd-networkd[1842]: cali60e51b789ff: Link UP Feb 13 19:51:11.241242 systemd-networkd[1842]: cali60e51b789ff: Gained carrier Feb 13 19:51:11.245118 (udev-worker)[3984]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:51:11.265163 containerd[2011]: 2025-02-13 19:51:11.092 [INFO][3966] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.30.175-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 476563ee-119e-40e3-a6c8-149616e35b5f 1141 0 2025-02-13 19:51:10 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.30.175 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.175-k8s-nfs--server--provisioner--0-" Feb 13 19:51:11.265163 containerd[2011]: 2025-02-13 19:51:11.093 [INFO][3966] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.175-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:11.265163 containerd[2011]: 2025-02-13 19:51:11.150 [INFO][3978] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090" 
HandleID="k8s-pod-network.92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090" Workload="172.31.30.175-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:11.265163 containerd[2011]: 2025-02-13 19:51:11.171 [INFO][3978] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090" HandleID="k8s-pod-network.92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090" Workload="172.31.30.175-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030bcf0), Attrs:map[string]string{"namespace":"default", "node":"172.31.30.175", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 19:51:11.150680786 +0000 UTC"}, Hostname:"172.31.30.175", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:51:11.265163 containerd[2011]: 2025-02-13 19:51:11.171 [INFO][3978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:11.265163 containerd[2011]: 2025-02-13 19:51:11.171 [INFO][3978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:51:11.265163 containerd[2011]: 2025-02-13 19:51:11.171 [INFO][3978] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.30.175' Feb 13 19:51:11.265163 containerd[2011]: 2025-02-13 19:51:11.175 [INFO][3978] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090" host="172.31.30.175" Feb 13 19:51:11.265163 containerd[2011]: 2025-02-13 19:51:11.186 [INFO][3978] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.30.175" Feb 13 19:51:11.265163 containerd[2011]: 2025-02-13 19:51:11.198 [INFO][3978] ipam/ipam.go 489: Trying affinity for 192.168.59.0/26 host="172.31.30.175" Feb 13 19:51:11.265163 containerd[2011]: 2025-02-13 19:51:11.201 [INFO][3978] ipam/ipam.go 155: Attempting to load block cidr=192.168.59.0/26 host="172.31.30.175" Feb 13 19:51:11.265163 containerd[2011]: 2025-02-13 19:51:11.205 [INFO][3978] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="172.31.30.175" Feb 13 19:51:11.265163 containerd[2011]: 2025-02-13 19:51:11.205 [INFO][3978] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090" host="172.31.30.175" Feb 13 19:51:11.265163 containerd[2011]: 2025-02-13 19:51:11.207 [INFO][3978] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090 Feb 13 19:51:11.265163 containerd[2011]: 2025-02-13 19:51:11.215 [INFO][3978] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090" host="172.31.30.175" Feb 13 19:51:11.265163 containerd[2011]: 2025-02-13 19:51:11.228 [INFO][3978] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.59.3/26] block=192.168.59.0/26 
handle="k8s-pod-network.92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090" host="172.31.30.175" Feb 13 19:51:11.265163 containerd[2011]: 2025-02-13 19:51:11.228 [INFO][3978] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.59.3/26] handle="k8s-pod-network.92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090" host="172.31.30.175" Feb 13 19:51:11.265163 containerd[2011]: 2025-02-13 19:51:11.228 [INFO][3978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:51:11.265163 containerd[2011]: 2025-02-13 19:51:11.228 [INFO][3978] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.3/26] IPv6=[] ContainerID="92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090" HandleID="k8s-pod-network.92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090" Workload="172.31.30.175-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:11.268880 containerd[2011]: 2025-02-13 19:51:11.232 [INFO][3966] cni-plugin/k8s.go 386: Populated endpoint ContainerID="92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.175-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.175-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"476563ee-119e-40e3-a6c8-149616e35b5f", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 51, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.175", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.59.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:11.268880 containerd[2011]: 2025-02-13 19:51:11.232 [INFO][3966] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.59.3/32] ContainerID="92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.175-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:11.268880 containerd[2011]: 2025-02-13 19:51:11.232 [INFO][3966] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.175-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:11.268880 containerd[2011]: 2025-02-13 19:51:11.237 [INFO][3966] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.175-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:11.270586 containerd[2011]: 2025-02-13 19:51:11.239 [INFO][3966] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.175-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.175-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"476563ee-119e-40e3-a6c8-149616e35b5f", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 51, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.175", ContainerID:"92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.59.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"82:75:00:9d:0c:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:11.270586 containerd[2011]: 2025-02-13 19:51:11.262 [INFO][3966] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.175-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:11.309928 containerd[2011]: time="2025-02-13T19:51:11.309771854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:11.310963 containerd[2011]: time="2025-02-13T19:51:11.310869578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:11.311091 containerd[2011]: time="2025-02-13T19:51:11.310997930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:11.311536 containerd[2011]: time="2025-02-13T19:51:11.311433794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:11.354228 systemd[1]: Started cri-containerd-92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090.scope - libcontainer container 92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090. Feb 13 19:51:11.416381 containerd[2011]: time="2025-02-13T19:51:11.416322555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:476563ee-119e-40e3-a6c8-149616e35b5f,Namespace:default,Attempt:0,} returns sandbox id \"92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090\"" Feb 13 19:51:11.420010 containerd[2011]: time="2025-02-13T19:51:11.419956203Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 19:51:11.705716 kubelet[2468]: E0213 19:51:11.705651 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:12.706263 kubelet[2468]: E0213 19:51:12.706180 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:13.246105 systemd-networkd[1842]: cali60e51b789ff: Gained IPv6LL Feb 13 19:51:13.707246 kubelet[2468]: E0213 19:51:13.707082 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 19:51:13.957697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount639622862.mount: Deactivated successfully. Feb 13 19:51:14.707632 kubelet[2468]: E0213 19:51:14.707565 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:15.600816 ntpd[1990]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 19:51:15.601763 ntpd[1990]: 13 Feb 19:51:15 ntpd[1990]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 19:51:15.707793 kubelet[2468]: E0213 19:51:15.707748 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:16.709635 kubelet[2468]: E0213 19:51:16.709570 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:16.968102 containerd[2011]: time="2025-02-13T19:51:16.967760375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:16.970124 containerd[2011]: time="2025-02-13T19:51:16.970061735Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Feb 13 19:51:16.971183 containerd[2011]: time="2025-02-13T19:51:16.971079107Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:16.976532 containerd[2011]: time="2025-02-13T19:51:16.976471739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:16.985443 containerd[2011]: 
time="2025-02-13T19:51:16.983176187Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.5628428s" Feb 13 19:51:16.985443 containerd[2011]: time="2025-02-13T19:51:16.983269115Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 13 19:51:16.992357 containerd[2011]: time="2025-02-13T19:51:16.992293343Z" level=info msg="CreateContainer within sandbox \"92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 19:51:17.023249 containerd[2011]: time="2025-02-13T19:51:17.023189815Z" level=info msg="CreateContainer within sandbox \"92a1fff4fd67433a20d6cb73111ed0ce5b5e596bff32e9cfd93058d301938090\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"823767a21ff8cacbd7580de3a22c59167c25d7031a25315c24ed873481687e7d\"" Feb 13 19:51:17.024330 containerd[2011]: time="2025-02-13T19:51:17.024279367Z" level=info msg="StartContainer for \"823767a21ff8cacbd7580de3a22c59167c25d7031a25315c24ed873481687e7d\"" Feb 13 19:51:17.083758 systemd[1]: Started cri-containerd-823767a21ff8cacbd7580de3a22c59167c25d7031a25315c24ed873481687e7d.scope - libcontainer container 823767a21ff8cacbd7580de3a22c59167c25d7031a25315c24ed873481687e7d. 
Feb 13 19:51:17.131605 containerd[2011]: time="2025-02-13T19:51:17.131538079Z" level=info msg="StartContainer for \"823767a21ff8cacbd7580de3a22c59167c25d7031a25315c24ed873481687e7d\" returns successfully" Feb 13 19:51:17.710713 kubelet[2468]: E0213 19:51:17.710642 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:18.711510 kubelet[2468]: E0213 19:51:18.711428 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:19.711959 kubelet[2468]: E0213 19:51:19.711885 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:20.712086 kubelet[2468]: E0213 19:51:20.712020 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:21.712829 kubelet[2468]: E0213 19:51:21.712763 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:22.713621 kubelet[2468]: E0213 19:51:22.713556 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:23.714525 kubelet[2468]: E0213 19:51:23.714456 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:24.669017 kubelet[2468]: E0213 19:51:24.668937 2468 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:24.714478 containerd[2011]: time="2025-02-13T19:51:24.714384017Z" level=info msg="StopPodSandbox for \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\"" Feb 13 19:51:24.715081 kubelet[2468]: E0213 19:51:24.714951 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 19:51:24.845062 containerd[2011]: 2025-02-13 19:51:24.781 [WARNING][4143] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.175-k8s-csi--node--driver--7wdmb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7291b7b5-988a-4b1a-bff8-f08c641e7de1", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.175", ContainerID:"1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f", Pod:"csi-node-driver-7wdmb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidba7f56a17e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:24.845062 containerd[2011]: 2025-02-13 19:51:24.781 [INFO][4143] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4"
Feb 13 19:51:24.845062 containerd[2011]: 2025-02-13 19:51:24.782 [INFO][4143] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" iface="eth0" netns=""
Feb 13 19:51:24.845062 containerd[2011]: 2025-02-13 19:51:24.782 [INFO][4143] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4"
Feb 13 19:51:24.845062 containerd[2011]: 2025-02-13 19:51:24.782 [INFO][4143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4"
Feb 13 19:51:24.845062 containerd[2011]: 2025-02-13 19:51:24.823 [INFO][4149] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" HandleID="k8s-pod-network.7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" Workload="172.31.30.175-k8s-csi--node--driver--7wdmb-eth0"
Feb 13 19:51:24.845062 containerd[2011]: 2025-02-13 19:51:24.824 [INFO][4149] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:51:24.845062 containerd[2011]: 2025-02-13 19:51:24.824 [INFO][4149] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:51:24.845062 containerd[2011]: 2025-02-13 19:51:24.838 [WARNING][4149] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" HandleID="k8s-pod-network.7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" Workload="172.31.30.175-k8s-csi--node--driver--7wdmb-eth0"
Feb 13 19:51:24.845062 containerd[2011]: 2025-02-13 19:51:24.838 [INFO][4149] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" HandleID="k8s-pod-network.7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" Workload="172.31.30.175-k8s-csi--node--driver--7wdmb-eth0"
Feb 13 19:51:24.845062 containerd[2011]: 2025-02-13 19:51:24.840 [INFO][4149] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:51:24.845062 containerd[2011]: 2025-02-13 19:51:24.842 [INFO][4143] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4"
Feb 13 19:51:24.845966 containerd[2011]: time="2025-02-13T19:51:24.845156970Z" level=info msg="TearDown network for sandbox \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\" successfully"
Feb 13 19:51:24.845966 containerd[2011]: time="2025-02-13T19:51:24.845204094Z" level=info msg="StopPodSandbox for \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\" returns successfully"
Feb 13 19:51:24.846885 containerd[2011]: time="2025-02-13T19:51:24.846826434Z" level=info msg="RemovePodSandbox for \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\""
Feb 13 19:51:24.847107 containerd[2011]: time="2025-02-13T19:51:24.847013538Z" level=info msg="Forcibly stopping sandbox \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\""
Feb 13 19:51:24.995781 containerd[2011]: 2025-02-13 19:51:24.921 [WARNING][4167] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.175-k8s-csi--node--driver--7wdmb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7291b7b5-988a-4b1a-bff8-f08c641e7de1", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.175", ContainerID:"1b0f76804927109ee47a344a476861d89b44c344f6208e907b61f46a297c8c9f", Pod:"csi-node-driver-7wdmb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidba7f56a17e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:51:24.995781 containerd[2011]: 2025-02-13 19:51:24.921 [INFO][4167] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4"
Feb 13 19:51:24.995781 containerd[2011]: 2025-02-13 19:51:24.921 [INFO][4167] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" iface="eth0" netns=""
Feb 13 19:51:24.995781 containerd[2011]: 2025-02-13 19:51:24.921 [INFO][4167] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4"
Feb 13 19:51:24.995781 containerd[2011]: 2025-02-13 19:51:24.922 [INFO][4167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4"
Feb 13 19:51:24.995781 containerd[2011]: 2025-02-13 19:51:24.967 [INFO][4175] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" HandleID="k8s-pod-network.7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" Workload="172.31.30.175-k8s-csi--node--driver--7wdmb-eth0"
Feb 13 19:51:24.995781 containerd[2011]: 2025-02-13 19:51:24.967 [INFO][4175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:51:24.995781 containerd[2011]: 2025-02-13 19:51:24.967 [INFO][4175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:51:24.995781 containerd[2011]: 2025-02-13 19:51:24.980 [WARNING][4175] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" HandleID="k8s-pod-network.7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" Workload="172.31.30.175-k8s-csi--node--driver--7wdmb-eth0"
Feb 13 19:51:24.995781 containerd[2011]: 2025-02-13 19:51:24.980 [INFO][4175] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" HandleID="k8s-pod-network.7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4" Workload="172.31.30.175-k8s-csi--node--driver--7wdmb-eth0"
Feb 13 19:51:24.995781 containerd[2011]: 2025-02-13 19:51:24.984 [INFO][4175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:51:24.995781 containerd[2011]: 2025-02-13 19:51:24.990 [INFO][4167] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4"
Feb 13 19:51:24.995781 containerd[2011]: time="2025-02-13T19:51:24.993972942Z" level=info msg="TearDown network for sandbox \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\" successfully"
Feb 13 19:51:25.000413 containerd[2011]: time="2025-02-13T19:51:25.000097862Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:51:25.000413 containerd[2011]: time="2025-02-13T19:51:25.000189290Z" level=info msg="RemovePodSandbox \"7b8fdebad4c9efc32f8706873e4d9e94f979fce4955a09a07e3e062b263f3cf4\" returns successfully"
Feb 13 19:51:25.002286 containerd[2011]: time="2025-02-13T19:51:25.002179394Z" level=info msg="StopPodSandbox for \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\""
Feb 13 19:51:25.133347 containerd[2011]: 2025-02-13 19:51:25.072 [WARNING][4196] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"829d8796-b4ee-4162-b0e9-a58e535f19b4", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.175", ContainerID:"7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4", Pod:"nginx-deployment-8587fbcb89-9gxc6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.59.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali302b67e54db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:51:25.133347 containerd[2011]: 2025-02-13 19:51:25.073 [INFO][4196] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00"
Feb 13 19:51:25.133347 containerd[2011]: 2025-02-13 19:51:25.073 [INFO][4196] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" iface="eth0" netns=""
Feb 13 19:51:25.133347 containerd[2011]: 2025-02-13 19:51:25.073 [INFO][4196] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00"
Feb 13 19:51:25.133347 containerd[2011]: 2025-02-13 19:51:25.073 [INFO][4196] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00"
Feb 13 19:51:25.133347 containerd[2011]: 2025-02-13 19:51:25.113 [INFO][4202] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" HandleID="k8s-pod-network.7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" Workload="172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0"
Feb 13 19:51:25.133347 containerd[2011]: 2025-02-13 19:51:25.113 [INFO][4202] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:51:25.133347 containerd[2011]: 2025-02-13 19:51:25.113 [INFO][4202] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:51:25.133347 containerd[2011]: 2025-02-13 19:51:25.125 [WARNING][4202] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" HandleID="k8s-pod-network.7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" Workload="172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0"
Feb 13 19:51:25.133347 containerd[2011]: 2025-02-13 19:51:25.126 [INFO][4202] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" HandleID="k8s-pod-network.7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" Workload="172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0"
Feb 13 19:51:25.133347 containerd[2011]: 2025-02-13 19:51:25.128 [INFO][4202] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:51:25.133347 containerd[2011]: 2025-02-13 19:51:25.131 [INFO][4196] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00"
Feb 13 19:51:25.135748 containerd[2011]: time="2025-02-13T19:51:25.133458555Z" level=info msg="TearDown network for sandbox \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\" successfully"
Feb 13 19:51:25.135748 containerd[2011]: time="2025-02-13T19:51:25.133550751Z" level=info msg="StopPodSandbox for \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\" returns successfully"
Feb 13 19:51:25.135748 containerd[2011]: time="2025-02-13T19:51:25.134896935Z" level=info msg="RemovePodSandbox for \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\""
Feb 13 19:51:25.135748 containerd[2011]: time="2025-02-13T19:51:25.134952003Z" level=info msg="Forcibly stopping sandbox \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\""
Feb 13 19:51:25.261186 containerd[2011]: 2025-02-13 19:51:25.201 [WARNING][4220] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"829d8796-b4ee-4162-b0e9-a58e535f19b4", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.175", ContainerID:"7fcfea0d54adb09a3ff4b6e70778ef90e4a87050eca1c6ccc5cb38e7e4997fd4", Pod:"nginx-deployment-8587fbcb89-9gxc6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.59.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali302b67e54db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:51:25.261186 containerd[2011]: 2025-02-13 19:51:25.201 [INFO][4220] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00"
Feb 13 19:51:25.261186 containerd[2011]: 2025-02-13 19:51:25.201 [INFO][4220] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" iface="eth0" netns=""
Feb 13 19:51:25.261186 containerd[2011]: 2025-02-13 19:51:25.201 [INFO][4220] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00"
Feb 13 19:51:25.261186 containerd[2011]: 2025-02-13 19:51:25.201 [INFO][4220] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00"
Feb 13 19:51:25.261186 containerd[2011]: 2025-02-13 19:51:25.240 [INFO][4227] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" HandleID="k8s-pod-network.7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" Workload="172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0"
Feb 13 19:51:25.261186 containerd[2011]: 2025-02-13 19:51:25.240 [INFO][4227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:51:25.261186 containerd[2011]: 2025-02-13 19:51:25.240 [INFO][4227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:51:25.261186 containerd[2011]: 2025-02-13 19:51:25.253 [WARNING][4227] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" HandleID="k8s-pod-network.7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" Workload="172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0"
Feb 13 19:51:25.261186 containerd[2011]: 2025-02-13 19:51:25.253 [INFO][4227] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" HandleID="k8s-pod-network.7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00" Workload="172.31.30.175-k8s-nginx--deployment--8587fbcb89--9gxc6-eth0"
Feb 13 19:51:25.261186 containerd[2011]: 2025-02-13 19:51:25.256 [INFO][4227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:51:25.261186 containerd[2011]: 2025-02-13 19:51:25.258 [INFO][4220] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00"
Feb 13 19:51:25.263462 containerd[2011]: time="2025-02-13T19:51:25.262062892Z" level=info msg="TearDown network for sandbox \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\" successfully"
Feb 13 19:51:25.268050 containerd[2011]: time="2025-02-13T19:51:25.267986896Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:51:25.268314 containerd[2011]: time="2025-02-13T19:51:25.268276876Z" level=info msg="RemovePodSandbox \"7c23d5856258af7d065b97ca9fb4bd8590febe959197f17fc4eb76dc86bd1a00\" returns successfully"
Feb 13 19:51:25.715792 kubelet[2468]: E0213 19:51:25.715726 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:26.716198 kubelet[2468]: E0213 19:51:26.716133 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:27.716407 kubelet[2468]: E0213 19:51:27.716326 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:28.717164 kubelet[2468]: E0213 19:51:28.717105 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:29.717921 kubelet[2468]: E0213 19:51:29.717864 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:30.718948 kubelet[2468]: E0213 19:51:30.718884 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:31.719995 kubelet[2468]: E0213 19:51:31.719928 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:32.720139 kubelet[2468]: E0213 19:51:32.720070 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:33.720980 kubelet[2468]: E0213 19:51:33.720910 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:34.722084 kubelet[2468]: E0213 19:51:34.722005 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:35.723231 kubelet[2468]: E0213 19:51:35.723141 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:36.723796 kubelet[2468]: E0213 19:51:36.723692 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:37.724869 kubelet[2468]: E0213 19:51:37.724797 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:38.726022 kubelet[2468]: E0213 19:51:38.725953 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:39.726491 kubelet[2468]: E0213 19:51:39.726369 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:40.727715 kubelet[2468]: E0213 19:51:40.727582 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:41.618432 kubelet[2468]: I0213 19:51:41.618204 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=26.049829449 podStartE2EDuration="31.618183861s" podCreationTimestamp="2025-02-13 19:51:10 +0000 UTC" firstStartedPulling="2025-02-13 19:51:11.418962639 +0000 UTC m=+49.216685802" lastFinishedPulling="2025-02-13 19:51:16.987317063 +0000 UTC m=+54.785040214" observedRunningTime="2025-02-13 19:51:17.169781408 +0000 UTC m=+54.967504583" watchObservedRunningTime="2025-02-13 19:51:41.618183861 +0000 UTC m=+79.415907024"
Feb 13 19:51:41.629931 systemd[1]: Created slice kubepods-besteffort-pod581db4f7_4bef_4943_b51f_4e1dab61dc8d.slice - libcontainer container kubepods-besteffort-pod581db4f7_4bef_4943_b51f_4e1dab61dc8d.slice.
Feb 13 19:51:41.724953 kubelet[2468]: I0213 19:51:41.724881 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-81717fda-b67b-4bb2-bca8-218356ab6829\" (UniqueName: \"kubernetes.io/nfs/581db4f7-4bef-4943-b51f-4e1dab61dc8d-pvc-81717fda-b67b-4bb2-bca8-218356ab6829\") pod \"test-pod-1\" (UID: \"581db4f7-4bef-4943-b51f-4e1dab61dc8d\") " pod="default/test-pod-1"
Feb 13 19:51:41.724953 kubelet[2468]: I0213 19:51:41.724956 2468 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmnd5\" (UniqueName: \"kubernetes.io/projected/581db4f7-4bef-4943-b51f-4e1dab61dc8d-kube-api-access-wmnd5\") pod \"test-pod-1\" (UID: \"581db4f7-4bef-4943-b51f-4e1dab61dc8d\") " pod="default/test-pod-1"
Feb 13 19:51:41.728454 kubelet[2468]: E0213 19:51:41.728357 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:41.862440 kernel: FS-Cache: Loaded
Feb 13 19:51:41.905644 kernel: RPC: Registered named UNIX socket transport module.
Feb 13 19:51:41.905788 kernel: RPC: Registered udp transport module.
Feb 13 19:51:41.905834 kernel: RPC: Registered tcp transport module.
Feb 13 19:51:41.907489 kernel: RPC: Registered tcp-with-tls transport module.
Feb 13 19:51:41.908854 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 13 19:51:42.247979 kernel: NFS: Registering the id_resolver key type
Feb 13 19:51:42.248189 kernel: Key type id_resolver registered
Feb 13 19:51:42.248248 kernel: Key type id_legacy registered
Feb 13 19:51:42.285842 nfsidmap[4294]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 13 19:51:42.291989 nfsidmap[4295]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 13 19:51:42.537435 containerd[2011]: time="2025-02-13T19:51:42.536597782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:581db4f7-4bef-4943-b51f-4e1dab61dc8d,Namespace:default,Attempt:0,}"
Feb 13 19:51:42.729160 kubelet[2468]: E0213 19:51:42.729079 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:42.765458 (udev-worker)[4286]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:51:42.768529 systemd-networkd[1842]: cali5ec59c6bf6e: Link UP
Feb 13 19:51:42.771486 systemd-networkd[1842]: cali5ec59c6bf6e: Gained carrier
Feb 13 19:51:42.795900 containerd[2011]: 2025-02-13 19:51:42.637 [INFO][4296] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.30.175-k8s-test--pod--1-eth0 default 581db4f7-4bef-4943-b51f-4e1dab61dc8d 1243 0 2025-02-13 19:51:11 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.30.175 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.175-k8s-test--pod--1-"
Feb 13 19:51:42.795900 containerd[2011]: 2025-02-13 19:51:42.637 [INFO][4296] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.175-k8s-test--pod--1-eth0"
Feb 13 19:51:42.795900 containerd[2011]: 2025-02-13 19:51:42.690 [INFO][4307] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455" HandleID="k8s-pod-network.67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455" Workload="172.31.30.175-k8s-test--pod--1-eth0"
Feb 13 19:51:42.795900 containerd[2011]: 2025-02-13 19:51:42.707 [INFO][4307] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455" HandleID="k8s-pod-network.67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455" Workload="172.31.30.175-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038b4f0), Attrs:map[string]string{"namespace":"default", "node":"172.31.30.175", "pod":"test-pod-1", "timestamp":"2025-02-13 19:51:42.690154978 +0000 UTC"}, Hostname:"172.31.30.175", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 19:51:42.795900 containerd[2011]: 2025-02-13 19:51:42.707 [INFO][4307] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:51:42.795900 containerd[2011]: 2025-02-13 19:51:42.707 [INFO][4307] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:51:42.795900 containerd[2011]: 2025-02-13 19:51:42.707 [INFO][4307] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.30.175'
Feb 13 19:51:42.795900 containerd[2011]: 2025-02-13 19:51:42.711 [INFO][4307] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455" host="172.31.30.175"
Feb 13 19:51:42.795900 containerd[2011]: 2025-02-13 19:51:42.717 [INFO][4307] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.30.175"
Feb 13 19:51:42.795900 containerd[2011]: 2025-02-13 19:51:42.725 [INFO][4307] ipam/ipam.go 489: Trying affinity for 192.168.59.0/26 host="172.31.30.175"
Feb 13 19:51:42.795900 containerd[2011]: 2025-02-13 19:51:42.730 [INFO][4307] ipam/ipam.go 155: Attempting to load block cidr=192.168.59.0/26 host="172.31.30.175"
Feb 13 19:51:42.795900 containerd[2011]: 2025-02-13 19:51:42.736 [INFO][4307] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="172.31.30.175"
Feb 13 19:51:42.795900 containerd[2011]: 2025-02-13 19:51:42.736 [INFO][4307] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455" host="172.31.30.175"
Feb 13 19:51:42.795900 containerd[2011]: 2025-02-13 19:51:42.738 [INFO][4307] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455
Feb 13 19:51:42.795900 containerd[2011]: 2025-02-13 19:51:42.744 [INFO][4307] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455" host="172.31.30.175"
Feb 13 19:51:42.795900 containerd[2011]: 2025-02-13 19:51:42.758 [INFO][4307] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.59.4/26] block=192.168.59.0/26 handle="k8s-pod-network.67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455" host="172.31.30.175"
Feb 13 19:51:42.795900 containerd[2011]: 2025-02-13 19:51:42.758 [INFO][4307] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.59.4/26] handle="k8s-pod-network.67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455" host="172.31.30.175"
Feb 13 19:51:42.795900 containerd[2011]: 2025-02-13 19:51:42.758 [INFO][4307] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:51:42.795900 containerd[2011]: 2025-02-13 19:51:42.758 [INFO][4307] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.4/26] IPv6=[] ContainerID="67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455" HandleID="k8s-pod-network.67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455" Workload="172.31.30.175-k8s-test--pod--1-eth0"
Feb 13 19:51:42.795900 containerd[2011]: 2025-02-13 19:51:42.761 [INFO][4296] cni-plugin/k8s.go 386: Populated endpoint ContainerID="67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.175-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.175-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"581db4f7-4bef-4943-b51f-4e1dab61dc8d", ResourceVersion:"1243", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 51, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.175", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.59.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:51:42.799546 containerd[2011]: 2025-02-13 19:51:42.761 [INFO][4296] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.59.4/32] ContainerID="67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.175-k8s-test--pod--1-eth0"
Feb 13 19:51:42.799546 containerd[2011]: 2025-02-13 19:51:42.761 [INFO][4296] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.175-k8s-test--pod--1-eth0"
Feb 13 19:51:42.799546 containerd[2011]: 2025-02-13 19:51:42.770 [INFO][4296] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.175-k8s-test--pod--1-eth0"
Feb 13 19:51:42.799546 containerd[2011]: 2025-02-13 19:51:42.772 [INFO][4296] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.175-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.175-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"581db4f7-4bef-4943-b51f-4e1dab61dc8d", ResourceVersion:"1243", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 51, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.175", ContainerID:"67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.59.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"be:29:64:5b:1d:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:51:42.799546 containerd[2011]: 2025-02-13 19:51:42.789 [INFO][4296] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.175-k8s-test--pod--1-eth0"
Feb 13 19:51:42.898320 containerd[2011]: time="2025-02-13T19:51:42.897261767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:51:42.898320 containerd[2011]: time="2025-02-13T19:51:42.897599771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:51:42.898320 containerd[2011]: time="2025-02-13T19:51:42.897632675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:51:42.899694 containerd[2011]: time="2025-02-13T19:51:42.899521691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:51:42.939441 systemd[1]: run-containerd-runc-k8s.io-67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455-runc.AkFxdd.mount: Deactivated successfully.
Feb 13 19:51:42.952733 systemd[1]: Started cri-containerd-67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455.scope - libcontainer container 67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455.
Feb 13 19:51:43.012180 containerd[2011]: time="2025-02-13T19:51:43.012116732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:581db4f7-4bef-4943-b51f-4e1dab61dc8d,Namespace:default,Attempt:0,} returns sandbox id \"67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455\""
Feb 13 19:51:43.014959 containerd[2011]: time="2025-02-13T19:51:43.014867756Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 19:51:43.350450 containerd[2011]: time="2025-02-13T19:51:43.349862158Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:43.352002 containerd[2011]: time="2025-02-13T19:51:43.351920218Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 19:51:43.358147 containerd[2011]: time="2025-02-13T19:51:43.357955462Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 342.994682ms"
Feb 13 19:51:43.358147 containerd[2011]: time="2025-02-13T19:51:43.358028194Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\""
Feb 13 19:51:43.361436 containerd[2011]: time="2025-02-13T19:51:43.361357882Z" level=info msg="CreateContainer within sandbox \"67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 19:51:43.390048 containerd[2011]: time="2025-02-13T19:51:43.389985190Z" level=info msg="CreateContainer within sandbox \"67b32bc0b0c29314a73ff2666c34b8cd8537764786026c8d708a77898cd52455\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"bfd0215dd4af372257d864ee91e56ebb99c6573ad56203074eba6629fa249015\""
Feb 13 19:51:43.390980 containerd[2011]: time="2025-02-13T19:51:43.390912658Z" level=info msg="StartContainer for \"bfd0215dd4af372257d864ee91e56ebb99c6573ad56203074eba6629fa249015\""
Feb 13 19:51:43.435778 systemd[1]: Started cri-containerd-bfd0215dd4af372257d864ee91e56ebb99c6573ad56203074eba6629fa249015.scope - libcontainer container bfd0215dd4af372257d864ee91e56ebb99c6573ad56203074eba6629fa249015.
Feb 13 19:51:43.485012 containerd[2011]: time="2025-02-13T19:51:43.483783526Z" level=info msg="StartContainer for \"bfd0215dd4af372257d864ee91e56ebb99c6573ad56203074eba6629fa249015\" returns successfully"
Feb 13 19:51:43.729753 kubelet[2468]: E0213 19:51:43.729581 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:44.249170 kubelet[2468]: I0213 19:51:44.248771 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=32.903516924 podStartE2EDuration="33.248750194s" podCreationTimestamp="2025-02-13 19:51:11 +0000 UTC" firstStartedPulling="2025-02-13 19:51:43.014112068 +0000 UTC m=+80.811835231" lastFinishedPulling="2025-02-13 19:51:43.35934535 +0000 UTC m=+81.157068501" observedRunningTime="2025-02-13 19:51:44.248628166 +0000 UTC m=+82.046351353" watchObservedRunningTime="2025-02-13 19:51:44.248750194 +0000 UTC m=+82.046473345"
Feb 13 19:51:44.605740 systemd-networkd[1842]: cali5ec59c6bf6e: Gained IPv6LL
Feb 13 19:51:44.668938 kubelet[2468]: E0213 19:51:44.668868 2468 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:44.730647 kubelet[2468]: E0213 19:51:44.730588 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:45.731054 kubelet[2468]: E0213 19:51:45.730983 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:46.731652 kubelet[2468]: E0213 19:51:46.731581 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:47.600737 ntpd[1990]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Feb 13 19:51:47.601770 ntpd[1990]: 13 Feb 19:51:47 ntpd[1990]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Feb 13 19:51:47.731826 kubelet[2468]: E0213 19:51:47.731750 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:48.732794 kubelet[2468]: E0213 19:51:48.732732 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:49.733064 kubelet[2468]: E0213 19:51:49.733004 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:50.733641 kubelet[2468]: E0213 19:51:50.733583 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:51.734462 kubelet[2468]: E0213 19:51:51.734379 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:52.734923 kubelet[2468]: E0213 19:51:52.734848 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:53.735731 kubelet[2468]: E0213 19:51:53.735671 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:54.736609 kubelet[2468]: E0213 19:51:54.736538 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:55.736953 kubelet[2468]: E0213 19:51:55.736891 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:56.737971 kubelet[2468]: E0213 19:51:56.737912 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:57.738481 kubelet[2468]: E0213 19:51:57.738414 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:58.738934 kubelet[2468]: E0213 19:51:58.738863 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:59.739889 kubelet[2468]: E0213 19:51:59.739835 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:00.740213 kubelet[2468]: E0213 19:52:00.740148 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:01.740940 kubelet[2468]: E0213 19:52:01.740866 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:02.741997 kubelet[2468]: E0213 19:52:02.741923 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:03.742943 kubelet[2468]: E0213 19:52:03.742876 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:04.668718 kubelet[2468]: E0213 19:52:04.668661 2468 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:04.743626 kubelet[2468]: E0213 19:52:04.743573 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:05.744011 kubelet[2468]: E0213 19:52:05.743934 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:06.421001 kubelet[2468]: E0213 19:52:06.420934 2468 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.175?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:52:06.744197 kubelet[2468]: E0213 19:52:06.744063 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:07.744996 kubelet[2468]: E0213 19:52:07.744923 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:08.745568 kubelet[2468]: E0213 19:52:08.745500 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:09.746544 kubelet[2468]: E0213 19:52:09.746460 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:10.747011 kubelet[2468]: E0213 19:52:10.746937 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:11.747165 kubelet[2468]: E0213 19:52:11.747102 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:12.747513 kubelet[2468]: E0213 19:52:12.747441 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:13.747856 kubelet[2468]: E0213 19:52:13.747796 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:14.748627 kubelet[2468]: E0213 19:52:14.748573 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:15.749098 kubelet[2468]: E0213 19:52:15.749033 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:16.422045 kubelet[2468]: E0213 19:52:16.421976 2468 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.175?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:52:16.750286 kubelet[2468]: E0213 19:52:16.750130 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:17.751123 kubelet[2468]: E0213 19:52:17.751060 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:18.752208 kubelet[2468]: E0213 19:52:18.752149 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:19.752872 kubelet[2468]: E0213 19:52:19.752802 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:20.753763 kubelet[2468]: E0213 19:52:20.753705 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:21.754681 kubelet[2468]: E0213 19:52:21.754615 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:22.755534 kubelet[2468]: E0213 19:52:22.755467 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:23.755910 kubelet[2468]: E0213 19:52:23.755850 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:24.668231 kubelet[2468]: E0213 19:52:24.668175 2468 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:24.756648 kubelet[2468]: E0213 19:52:24.756583 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:25.757620 kubelet[2468]: E0213 19:52:25.757554 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:26.422307 kubelet[2468]: E0213 19:52:26.422238 2468 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.175?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:52:26.758102 kubelet[2468]: E0213 19:52:26.757956 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:27.758473 kubelet[2468]: E0213 19:52:27.758375 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:28.758798 kubelet[2468]: E0213 19:52:28.758732 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:29.759793 kubelet[2468]: E0213 19:52:29.759735 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:30.759982 kubelet[2468]: E0213 19:52:30.759895 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:31.760641 kubelet[2468]: E0213 19:52:31.760583 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:32.760904 kubelet[2468]: E0213 19:52:32.760836 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:33.761702 kubelet[2468]: E0213 19:52:33.761632 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:34.762081 kubelet[2468]: E0213 19:52:34.762019 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:35.762785 kubelet[2468]: E0213 19:52:35.762716 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:36.423443 kubelet[2468]: E0213 19:52:36.423307 2468 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.175?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:52:36.458472 kubelet[2468]: E0213 19:52:36.457578 2468 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.175?timeout=10s\": unexpected EOF"
Feb 13 19:52:36.458472 kubelet[2468]: I0213 19:52:36.457636 2468 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 13 19:52:36.763944 kubelet[2468]: E0213 19:52:36.763793 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:37.477913 kubelet[2468]: E0213 19:52:37.475094 2468 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.175?timeout=10s\": dial tcp 172.31.20.175:6443: connect: connection refused - error from a previous attempt: read tcp 172.31.30.175:60754->172.31.20.175:6443: read: connection reset by peer" interval="200ms"
Feb 13 19:52:37.764812 kubelet[2468]: E0213 19:52:37.764667 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:38.765233 kubelet[2468]: E0213 19:52:38.765171 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:39.765645 kubelet[2468]: E0213 19:52:39.765579 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:40.765823 kubelet[2468]: E0213 19:52:40.765760 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:41.766967 kubelet[2468]: E0213 19:52:41.766909 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:42.767275 kubelet[2468]: E0213 19:52:42.767217 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:43.767712 kubelet[2468]: E0213 19:52:43.767654 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:44.668609 kubelet[2468]: E0213 19:52:44.668556 2468 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:44.768669 kubelet[2468]: E0213 19:52:44.768616 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:45.769767 kubelet[2468]: E0213 19:52:45.769702 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:46.770514 kubelet[2468]: E0213 19:52:46.770455 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:47.677031 kubelet[2468]: E0213 19:52:47.676959 2468 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.30.175?timeout=10s\": dial tcp 172.31.20.175:6443: i/o timeout" interval="400ms"
Feb 13 19:52:47.770925 kubelet[2468]: E0213 19:52:47.770856 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:48.771756 kubelet[2468]: E0213 19:52:48.771693 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:49.772183 kubelet[2468]: E0213 19:52:49.772127 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:50.773201 kubelet[2468]: E0213 19:52:50.773144 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:51.773563 kubelet[2468]: E0213 19:52:51.773487 2468 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"