Feb 13 19:49:58.209693 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 19:49:58.209746 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 19:49:58.209774 kernel: KASLR disabled due to lack of seed
Feb 13 19:49:58.209792 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:49:58.209808 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Feb 13 19:49:58.209824 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:49:58.209843 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 19:49:58.209859 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 19:49:58.209876 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:49:58.209893 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 19:49:58.212682 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:49:58.212702 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 19:49:58.212719 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 19:49:58.212735 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 19:49:58.212754 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:49:58.212778 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 19:49:58.212796 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 19:49:58.212812 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 19:49:58.212829 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 19:49:58.212846 kernel: printk: bootconsole [uart0] enabled
Feb 13 19:49:58.212862 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:49:58.212879 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:49:58.212898 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 19:49:58.212937 kernel: Zone ranges:
Feb 13 19:49:58.212955 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 19:49:58.212972 kernel: DMA32 empty
Feb 13 19:49:58.212995 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 19:49:58.213012 kernel: Movable zone start for each node
Feb 13 19:49:58.213028 kernel: Early memory node ranges
Feb 13 19:49:58.213045 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 19:49:58.213061 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 19:49:58.213078 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 19:49:58.213094 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 19:49:58.213111 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 19:49:58.213127 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 19:49:58.213144 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 19:49:58.213161 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 19:49:58.213177 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:49:58.213198 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 19:49:58.213216 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:49:58.213240 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 19:49:58.213259 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:49:58.213277 kernel: psci: Trusted OS migration not required
Feb 13 19:49:58.213298 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:49:58.213317 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:49:58.213336 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:49:58.213354 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 19:49:58.213376 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:49:58.213395 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:49:58.213413 kernel: CPU features: detected: Spectre-v2
Feb 13 19:49:58.213430 kernel: CPU features: detected: Spectre-v3a
Feb 13 19:49:58.213448 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:49:58.213465 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 19:49:58.213482 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 19:49:58.213504 kernel: alternatives: applying boot alternatives
Feb 13 19:49:58.213525 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:49:58.213544 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:49:58.213563 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:49:58.213581 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:49:58.213598 kernel: Fallback order for Node 0: 0
Feb 13 19:49:58.213617 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 19:49:58.213636 kernel: Policy zone: Normal
Feb 13 19:49:58.213654 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:49:58.213675 kernel: software IO TLB: area num 2.
Feb 13 19:49:58.213695 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 19:49:58.213720 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Feb 13 19:49:58.213737 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:49:58.213755 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:49:58.213774 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:49:58.213792 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:49:58.213810 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:49:58.213828 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:49:58.213846 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:49:58.213863 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:49:58.213881 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:49:58.213976 kernel: GICv3: 96 SPIs implemented
Feb 13 19:49:58.214008 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:49:58.214026 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:49:58.214044 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 19:49:58.214061 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 19:49:58.214078 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 19:49:58.214096 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:49:58.214115 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:49:58.214132 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 19:49:58.214150 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 19:49:58.214168 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 19:49:58.214186 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:49:58.214204 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 19:49:58.214227 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 19:49:58.214246 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 19:49:58.214264 kernel: Console: colour dummy device 80x25
Feb 13 19:49:58.214282 kernel: printk: console [tty1] enabled
Feb 13 19:49:58.214300 kernel: ACPI: Core revision 20230628
Feb 13 19:49:58.214319 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 19:49:58.214337 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:49:58.214356 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:49:58.214374 kernel: landlock: Up and running.
Feb 13 19:49:58.214398 kernel: SELinux: Initializing.
Feb 13 19:49:58.214417 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:49:58.214435 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:49:58.214453 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:49:58.214471 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:49:58.214490 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:49:58.214508 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:49:58.214527 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 19:49:58.214545 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 19:49:58.214567 kernel: Remapping and enabling EFI services.
Feb 13 19:49:58.214586 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:49:58.214603 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:49:58.214621 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 19:49:58.214639 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 19:49:58.214658 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 19:49:58.214676 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:49:58.214696 kernel: SMP: Total of 2 processors activated.
Feb 13 19:49:58.214716 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:49:58.214743 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 19:49:58.214773 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:49:58.214802 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:49:58.214836 kernel: alternatives: applying system-wide alternatives
Feb 13 19:49:58.214861 kernel: devtmpfs: initialized
Feb 13 19:49:58.214880 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:49:58.221892 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:49:58.222141 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:49:58.222166 kernel: SMBIOS 3.0.0 present.
Feb 13 19:49:58.222186 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 19:49:58.222219 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:49:58.222239 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:49:58.222258 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:49:58.222278 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:49:58.222297 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:49:58.222317 kernel: audit: type=2000 audit(0.293:1): state=initialized audit_enabled=0 res=1
Feb 13 19:49:58.222336 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:49:58.222362 kernel: cpuidle: using governor menu
Feb 13 19:49:58.222381 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:49:58.222400 kernel: ASID allocator initialised with 65536 entries
Feb 13 19:49:58.222419 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:49:58.222439 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:49:58.222459 kernel: Modules: 17520 pages in range for non-PLT usage
Feb 13 19:49:58.222482 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 19:49:58.222502 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:49:58.222521 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:49:58.222546 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:49:58.222565 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:49:58.222584 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:49:58.222603 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:49:58.222622 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:49:58.222641 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:49:58.222660 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:49:58.222679 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:49:58.222698 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:49:58.222721 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:49:58.222741 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:49:58.222760 kernel: ACPI: Interpreter enabled
Feb 13 19:49:58.222779 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:49:58.222797 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:49:58.222816 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 19:49:58.228507 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:49:58.228815 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:49:58.229077 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:49:58.229385 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 19:49:58.229655 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 19:49:58.229688 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 19:49:58.229708 kernel: acpiphp: Slot [1] registered
Feb 13 19:49:58.229728 kernel: acpiphp: Slot [2] registered
Feb 13 19:49:58.229747 kernel: acpiphp: Slot [3] registered
Feb 13 19:49:58.229766 kernel: acpiphp: Slot [4] registered
Feb 13 19:49:58.229793 kernel: acpiphp: Slot [5] registered
Feb 13 19:49:58.229813 kernel: acpiphp: Slot [6] registered
Feb 13 19:49:58.229832 kernel: acpiphp: Slot [7] registered
Feb 13 19:49:58.229850 kernel: acpiphp: Slot [8] registered
Feb 13 19:49:58.229869 kernel: acpiphp: Slot [9] registered
Feb 13 19:49:58.229888 kernel: acpiphp: Slot [10] registered
Feb 13 19:49:58.229945 kernel: acpiphp: Slot [11] registered
Feb 13 19:49:58.229968 kernel: acpiphp: Slot [12] registered
Feb 13 19:49:58.229988 kernel: acpiphp: Slot [13] registered
Feb 13 19:49:58.230007 kernel: acpiphp: Slot [14] registered
Feb 13 19:49:58.230034 kernel: acpiphp: Slot [15] registered
Feb 13 19:49:58.230053 kernel: acpiphp: Slot [16] registered
Feb 13 19:49:58.230073 kernel: acpiphp: Slot [17] registered
Feb 13 19:49:58.230091 kernel: acpiphp: Slot [18] registered
Feb 13 19:49:58.230110 kernel: acpiphp: Slot [19] registered
Feb 13 19:49:58.230128 kernel: acpiphp: Slot [20] registered
Feb 13 19:49:58.230147 kernel: acpiphp: Slot [21] registered
Feb 13 19:49:58.230165 kernel: acpiphp: Slot [22] registered
Feb 13 19:49:58.230184 kernel: acpiphp: Slot [23] registered
Feb 13 19:49:58.230207 kernel: acpiphp: Slot [24] registered
Feb 13 19:49:58.230226 kernel: acpiphp: Slot [25] registered
Feb 13 19:49:58.230244 kernel: acpiphp: Slot [26] registered
Feb 13 19:49:58.230263 kernel: acpiphp: Slot [27] registered
Feb 13 19:49:58.230281 kernel: acpiphp: Slot [28] registered
Feb 13 19:49:58.230300 kernel: acpiphp: Slot [29] registered
Feb 13 19:49:58.230318 kernel: acpiphp: Slot [30] registered
Feb 13 19:49:58.230337 kernel: acpiphp: Slot [31] registered
Feb 13 19:49:58.230356 kernel: PCI host bridge to bus 0000:00
Feb 13 19:49:58.230598 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 19:49:58.230803 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:49:58.231106 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:49:58.231301 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 19:49:58.231554 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 19:49:58.231788 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 19:49:58.232049 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 19:49:58.232317 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:49:58.232535 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 19:49:58.232741 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:49:58.233157 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:49:58.233382 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 19:49:58.233592 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 19:49:58.233805 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 19:49:58.234045 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:49:58.234251 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 19:49:58.234455 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 19:49:58.234669 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 19:49:58.234878 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 19:49:58.235168 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 19:49:58.235366 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 19:49:58.235554 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:49:58.235741 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:49:58.235769 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:49:58.235789 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:49:58.235809 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:49:58.235828 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:49:58.235847 kernel: iommu: Default domain type: Translated
Feb 13 19:49:58.235867 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:49:58.235894 kernel: efivars: Registered efivars operations
Feb 13 19:49:58.235940 kernel: vgaarb: loaded
Feb 13 19:49:58.235960 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:49:58.235980 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:49:58.235999 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:49:58.236018 kernel: pnp: PnP ACPI init
Feb 13 19:49:58.236323 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 19:49:58.236364 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:49:58.236396 kernel: NET: Registered PF_INET protocol family
Feb 13 19:49:58.236416 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:49:58.236437 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:49:58.236456 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:49:58.236476 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:49:58.236495 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:49:58.236514 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:49:58.236533 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:49:58.236553 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:49:58.236577 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:49:58.236597 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:49:58.236619 kernel: kvm [1]: HYP mode not available
Feb 13 19:49:58.236638 kernel: Initialise system trusted keyrings
Feb 13 19:49:58.236659 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:49:58.236680 kernel: Key type asymmetric registered
Feb 13 19:49:58.236700 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:49:58.236720 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:49:58.236743 kernel: io scheduler mq-deadline registered
Feb 13 19:49:58.236770 kernel: io scheduler kyber registered
Feb 13 19:49:58.236791 kernel: io scheduler bfq registered
Feb 13 19:49:58.237185 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 19:49:58.237229 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:49:58.237249 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:49:58.237269 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 19:49:58.237288 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 19:49:58.237308 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:49:58.237339 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 19:49:58.237578 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 19:49:58.237608 kernel: printk: console [ttyS0] disabled
Feb 13 19:49:58.237627 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 19:49:58.237647 kernel: printk: console [ttyS0] enabled
Feb 13 19:49:58.237665 kernel: printk: bootconsole [uart0] disabled
Feb 13 19:49:58.237684 kernel: thunder_xcv, ver 1.0
Feb 13 19:49:58.237702 kernel: thunder_bgx, ver 1.0
Feb 13 19:49:58.237720 kernel: nicpf, ver 1.0
Feb 13 19:49:58.237745 kernel: nicvf, ver 1.0
Feb 13 19:49:58.238017 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:49:58.238245 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:49:57 UTC (1739476197)
Feb 13 19:49:58.238274 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:49:58.238293 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 19:49:58.238314 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:49:58.238333 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:49:58.238352 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:49:58.238380 kernel: Segment Routing with IPv6
Feb 13 19:49:58.238399 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:49:58.238419 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:49:58.238438 kernel: Key type dns_resolver registered
Feb 13 19:49:58.238456 kernel: registered taskstats version 1
Feb 13 19:49:58.238475 kernel: Loading compiled-in X.509 certificates
Feb 13 19:49:58.238494 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 19:49:58.238512 kernel: Key type .fscrypt registered
Feb 13 19:49:58.238530 kernel: Key type fscrypt-provisioning registered
Feb 13 19:49:58.238553 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:49:58.238572 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:49:58.238591 kernel: ima: No architecture policies found
Feb 13 19:49:58.238610 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:49:58.238629 kernel: clk: Disabling unused clocks
Feb 13 19:49:58.238648 kernel: Freeing unused kernel memory: 39360K
Feb 13 19:49:58.238666 kernel: Run /init as init process
Feb 13 19:49:58.238685 kernel: with arguments:
Feb 13 19:49:58.238703 kernel: /init
Feb 13 19:49:58.238721 kernel: with environment:
Feb 13 19:49:58.238744 kernel: HOME=/
Feb 13 19:49:58.238762 kernel: TERM=linux
Feb 13 19:49:58.238781 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:49:58.238804 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:49:58.238829 systemd[1]: Detected virtualization amazon.
Feb 13 19:49:58.238850 systemd[1]: Detected architecture arm64.
Feb 13 19:49:58.238870 systemd[1]: Running in initrd.
Feb 13 19:49:58.238895 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:49:58.241550 systemd[1]: Hostname set to .
Feb 13 19:49:58.241576 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:49:58.241598 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:49:58.241620 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:49:58.241641 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:49:58.241663 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:49:58.241685 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:49:58.241719 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:49:58.241741 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:49:58.241766 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:49:58.241788 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:49:58.241810 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:49:58.241831 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:49:58.241852 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:49:58.241878 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:49:58.241928 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:49:58.241953 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:49:58.241974 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:49:58.241995 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:49:58.242016 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:49:58.242037 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:49:58.242058 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:49:58.242079 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:49:58.242107 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:49:58.242128 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:49:58.242149 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:49:58.242170 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:49:58.242191 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:49:58.242211 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:49:58.242233 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:49:58.242254 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:49:58.242280 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:49:58.242366 systemd-journald[251]: Collecting audit messages is disabled.
Feb 13 19:49:58.242414 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:49:58.242436 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:49:58.242464 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:49:58.242487 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:49:58.242508 systemd-journald[251]: Journal started
Feb 13 19:49:58.242550 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2c2ad80a69c05a4d0d31ad6a846e41) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:49:58.223017 systemd-modules-load[252]: Inserted module 'overlay'
Feb 13 19:49:58.259545 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:49:58.263924 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:49:58.265420 systemd-modules-load[252]: Inserted module 'br_netfilter'
Feb 13 19:49:58.273204 kernel: Bridge firewalling registered
Feb 13 19:49:58.267024 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:49:58.278821 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:49:58.291267 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:49:58.302015 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:49:58.317231 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:49:58.325034 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:49:58.341507 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:49:58.353060 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:49:58.366272 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:49:58.373977 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:49:58.386212 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:49:58.400254 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:49:58.402638 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:49:58.438137 dracut-cmdline[284]: dracut-dracut-053
Feb 13 19:49:58.443622 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:49:58.481375 systemd-resolved[285]: Positive Trust Anchors:
Feb 13 19:49:58.481408 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:49:58.481472 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:49:58.577927 kernel: SCSI subsystem initialized
Feb 13 19:49:58.584943 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:49:58.597949 kernel: iscsi: registered transport (tcp)
Feb 13 19:49:58.619939 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:49:58.620012 kernel: QLogic iSCSI HBA Driver
Feb 13 19:49:58.708958 kernel: random: crng init done
Feb 13 19:49:58.709278 systemd-resolved[285]: Defaulting to hostname 'linux'.
Feb 13 19:49:58.712868 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:49:58.715261 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:49:58.739998 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:49:58.748181 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:49:58.786287 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:49:58.786361 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:49:58.788071 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:49:58.853970 kernel: raid6: neonx8 gen() 6659 MB/s Feb 13 19:49:58.870949 kernel: raid6: neonx4 gen() 6486 MB/s Feb 13 19:49:58.887949 kernel: raid6: neonx2 gen() 5408 MB/s Feb 13 19:49:58.904952 kernel: raid6: neonx1 gen() 3924 MB/s Feb 13 19:49:58.921934 kernel: raid6: int64x8 gen() 3791 MB/s Feb 13 19:49:58.938959 kernel: raid6: int64x4 gen() 3716 MB/s Feb 13 19:49:58.955956 kernel: raid6: int64x2 gen() 3577 MB/s Feb 13 19:49:58.973797 kernel: raid6: int64x1 gen() 2750 MB/s Feb 13 19:49:58.973867 kernel: raid6: using algorithm neonx8 gen() 6659 MB/s Feb 13 19:49:58.991747 kernel: raid6: .... xor() 4878 MB/s, rmw enabled Feb 13 19:49:58.991832 kernel: raid6: using neon recovery algorithm Feb 13 19:49:58.999954 kernel: xor: measuring software checksum speed Feb 13 19:49:59.002030 kernel: 8regs : 10154 MB/sec Feb 13 19:49:59.002097 kernel: 32regs : 11901 MB/sec Feb 13 19:49:59.003214 kernel: arm64_neon : 9485 MB/sec Feb 13 19:49:59.003273 kernel: xor: using function: 32regs (11901 MB/sec) Feb 13 19:49:59.092968 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:49:59.114655 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:49:59.124228 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:49:59.163416 systemd-udevd[469]: Using default interface naming scheme 'v255'. 
Feb 13 19:49:59.172251 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:49:59.191502 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:49:59.234289 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation Feb 13 19:49:59.296642 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:49:59.307246 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:49:59.438883 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:49:59.465099 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:49:59.510454 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:49:59.517798 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:49:59.522183 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:49:59.526548 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:49:59.535196 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:49:59.582430 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:49:59.632366 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 19:49:59.633002 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Feb 13 19:49:59.654599 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 13 19:49:59.654865 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 13 19:49:59.655149 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:1b:f2:1a:c9:31 Feb 13 19:49:59.654088 (udev-worker)[525]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:49:59.662023 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 13 19:49:59.662167 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:49:59.666933 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:49:59.671012 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:49:59.671144 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:49:59.674094 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:49:59.701949 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 19:49:59.702249 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:49:59.707357 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 13 19:49:59.717944 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 13 19:49:59.725263 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:49:59.725332 kernel: GPT:9289727 != 16777215 Feb 13 19:49:59.725359 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:49:59.727731 kernel: GPT:9289727 != 16777215 Feb 13 19:49:59.727769 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:49:59.727796 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:49:59.730662 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:49:59.741244 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:49:59.772965 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 19:49:59.836946 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (523) Feb 13 19:49:59.863515 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (525) Feb 13 19:49:59.927187 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Feb 13 19:49:59.959662 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 19:49:59.977889 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Feb 13 19:50:00.005237 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Feb 13 19:50:00.010828 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Feb 13 19:50:00.020266 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:50:00.035931 disk-uuid[659]: Primary Header is updated. Feb 13 19:50:00.035931 disk-uuid[659]: Secondary Entries is updated. Feb 13 19:50:00.035931 disk-uuid[659]: Secondary Header is updated. Feb 13 19:50:00.046000 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:50:00.056949 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:50:01.061974 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:50:01.064474 disk-uuid[660]: The operation has completed successfully. Feb 13 19:50:01.242314 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:50:01.242946 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:50:01.307162 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Feb 13 19:50:01.315672 sh[919]: Success Feb 13 19:50:01.340958 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 19:50:01.455057 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:50:01.473126 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:50:01.480001 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:50:01.517200 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 Feb 13 19:50:01.517276 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:50:01.517318 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:50:01.518888 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:50:01.520161 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:50:01.640945 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 19:50:01.669082 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:50:01.670579 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:50:01.692348 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:50:01.698215 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:50:01.728440 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:50:01.728504 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:50:01.728542 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:50:01.736993 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:50:01.754990 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Feb 13 19:50:01.758209 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:50:01.769744 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:50:01.782348 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:50:01.886787 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:50:01.913311 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:50:01.962623 systemd-networkd[1112]: lo: Link UP Feb 13 19:50:01.962983 systemd-networkd[1112]: lo: Gained carrier Feb 13 19:50:01.967799 systemd-networkd[1112]: Enumeration completed Feb 13 19:50:01.968358 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:50:01.969855 systemd-networkd[1112]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:50:01.969862 systemd-networkd[1112]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:50:01.980169 systemd[1]: Reached target network.target - Network. Feb 13 19:50:01.982878 systemd-networkd[1112]: eth0: Link UP Feb 13 19:50:01.982885 systemd-networkd[1112]: eth0: Gained carrier Feb 13 19:50:01.982924 systemd-networkd[1112]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 19:50:02.004048 systemd-networkd[1112]: eth0: DHCPv4 address 172.31.17.39/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 19:50:02.213030 ignition[1021]: Ignition 2.19.0 Feb 13 19:50:02.213053 ignition[1021]: Stage: fetch-offline Feb 13 19:50:02.213632 ignition[1021]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:02.213657 ignition[1021]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:02.215167 ignition[1021]: Ignition finished successfully Feb 13 19:50:02.224358 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:50:02.233195 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 19:50:02.265437 ignition[1123]: Ignition 2.19.0 Feb 13 19:50:02.265459 ignition[1123]: Stage: fetch Feb 13 19:50:02.266126 ignition[1123]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:02.266152 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:02.266573 ignition[1123]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:02.289037 ignition[1123]: PUT result: OK Feb 13 19:50:02.292343 ignition[1123]: parsed url from cmdline: "" Feb 13 19:50:02.292364 ignition[1123]: no config URL provided Feb 13 19:50:02.292380 ignition[1123]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:50:02.292431 ignition[1123]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:50:02.292464 ignition[1123]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:02.296346 ignition[1123]: PUT result: OK Feb 13 19:50:02.298134 ignition[1123]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 13 19:50:02.304404 ignition[1123]: GET result: OK Feb 13 19:50:02.304512 ignition[1123]: parsing config with SHA512: 23ba1f06a2c17fc0ea144ae632e9791ee310887391db7ac19928a74de7d0742b53c4c9118046dba04bd012b0fe4e4781bf19dbca02e9a6f1393826df45d559df Feb 13 19:50:02.312798 unknown[1123]: fetched base config from "system"
Feb 13 19:50:02.312832 unknown[1123]: fetched base config from "system" Feb 13 19:50:02.314310 ignition[1123]: fetch: fetch complete Feb 13 19:50:02.312846 unknown[1123]: fetched user config from "aws" Feb 13 19:50:02.314322 ignition[1123]: fetch: fetch passed Feb 13 19:50:02.319792 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 19:50:02.314410 ignition[1123]: Ignition finished successfully Feb 13 19:50:02.341432 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:50:02.376076 ignition[1129]: Ignition 2.19.0 Feb 13 19:50:02.376096 ignition[1129]: Stage: kargs Feb 13 19:50:02.377351 ignition[1129]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:02.377379 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:02.377585 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:02.380865 ignition[1129]: PUT result: OK Feb 13 19:50:02.389125 ignition[1129]: kargs: kargs passed Feb 13 19:50:02.389239 ignition[1129]: Ignition finished successfully Feb 13 19:50:02.395306 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:50:02.404270 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:50:02.438283 ignition[1135]: Ignition 2.19.0 Feb 13 19:50:02.438792 ignition[1135]: Stage: disks Feb 13 19:50:02.439459 ignition[1135]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:02.439484 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:02.439665 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:02.443948 ignition[1135]: PUT result: OK Feb 13 19:50:02.453121 ignition[1135]: disks: disks passed Feb 13 19:50:02.453796 ignition[1135]: Ignition finished successfully Feb 13 19:50:02.458307 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:50:02.463146 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:50:02.466441 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:50:02.472635 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:50:02.474651 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:50:02.478791 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:50:02.495376 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:50:02.538257 systemd-fsck[1143]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:50:02.543983 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:50:02.557634 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:50:02.641973 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none. Feb 13 19:50:02.643112 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:50:02.647086 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:50:02.662067 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:50:02.678112 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:50:02.683570 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:50:02.683651 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:50:02.683701 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:50:02.688391 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:50:02.694086 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 19:50:02.730682 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1162) Feb 13 19:50:02.730749 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:50:02.730776 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:50:02.733362 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:50:02.744955 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:50:02.747722 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:50:03.063568 initrd-setup-root[1186]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:50:03.084215 initrd-setup-root[1193]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:50:03.105754 initrd-setup-root[1200]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:50:03.114812 initrd-setup-root[1207]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:50:03.434256 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:50:03.449595 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:50:03.456202 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:50:03.474223 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Feb 13 19:50:03.476081 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:50:03.525870 ignition[1275]: INFO : Ignition 2.19.0 Feb 13 19:50:03.525870 ignition[1275]: INFO : Stage: mount Feb 13 19:50:03.529724 ignition[1275]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:03.529724 ignition[1275]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:03.529724 ignition[1275]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:03.534128 ignition[1275]: INFO : PUT result: OK Feb 13 19:50:03.529736 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:50:03.545290 ignition[1275]: INFO : mount: mount passed Feb 13 19:50:03.545290 ignition[1275]: INFO : Ignition finished successfully Feb 13 19:50:03.550325 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:50:03.563099 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:50:03.651391 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:50:03.683973 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1287) Feb 13 19:50:03.687981 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:50:03.688062 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:50:03.689272 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:50:03.694937 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:50:03.699129 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:50:03.707240 systemd-networkd[1112]: eth0: Gained IPv6LL Feb 13 19:50:03.747377 ignition[1304]: INFO : Ignition 2.19.0 Feb 13 19:50:03.750481 ignition[1304]: INFO : Stage: files Feb 13 19:50:03.750481 ignition[1304]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:03.750481 ignition[1304]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:03.750481 ignition[1304]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:03.759823 ignition[1304]: INFO : PUT result: OK Feb 13 19:50:03.763216 ignition[1304]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:50:03.766136 ignition[1304]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:50:03.766136 ignition[1304]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:50:03.794624 ignition[1304]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:50:03.797487 ignition[1304]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:50:03.800893 unknown[1304]: wrote ssh authorized keys file for user: core Feb 13 19:50:03.803239 ignition[1304]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:50:03.817201 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 19:50:03.820576 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 19:50:03.820576 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:50:03.820576 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:50:03.820576 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:50:03.820576 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:50:03.820576 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:50:03.842488 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:50:03.842488 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:50:03.842488 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Feb 13 19:50:04.334523 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 13 19:50:04.741965 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:50:04.741965 ignition[1304]: INFO : files: op(8): [started] processing unit "containerd.service" Feb 13 19:50:04.748665 ignition[1304]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 19:50:04.748665 ignition[1304]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 19:50:04.748665 ignition[1304]: INFO : files: op(8): [finished] processing unit "containerd.service"
Feb 13 19:50:04.748665 ignition[1304]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:50:04.748665 ignition[1304]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:50:04.748665 ignition[1304]: INFO : files: files passed Feb 13 19:50:04.748665 ignition[1304]: INFO : Ignition finished successfully Feb 13 19:50:04.757241 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:50:04.779589 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:50:04.789215 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:50:04.797764 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:50:04.798114 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:50:04.823731 initrd-setup-root-after-ignition[1332]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:50:04.828109 initrd-setup-root-after-ignition[1336]: grep: Feb 13 19:50:04.828109 initrd-setup-root-after-ignition[1332]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:50:04.833072 initrd-setup-root-after-ignition[1336]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:50:04.840762 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:50:04.846127 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:50:04.864372 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:50:04.923708 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:50:04.926198 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:50:04.932859 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:50:04.937062 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:50:04.941072 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:50:04.952372 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:50:04.988659 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:50:05.006238 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:50:05.032896 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:50:05.037853 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:50:05.042742 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:50:05.045304 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:50:05.045727 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:50:05.053700 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:50:05.057401 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:50:05.062105 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:50:05.064477 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:50:05.067464 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:50:05.076933 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:50:05.079965 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:50:05.086926 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:50:05.089838 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Feb 13 19:50:05.093852 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:50:05.096993 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:50:05.097272 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:50:05.102883 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:50:05.108706 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:50:05.111081 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:50:05.115155 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:50:05.118800 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:50:05.119238 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:50:05.129926 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:50:05.130541 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:50:05.137458 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:50:05.137979 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:50:05.154305 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:50:05.161408 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:50:05.165970 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:50:05.168043 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:50:05.178795 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:50:05.179103 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:50:05.199035 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:50:05.203503 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Feb 13 19:50:05.217959 ignition[1356]: INFO : Ignition 2.19.0 Feb 13 19:50:05.217959 ignition[1356]: INFO : Stage: umount Feb 13 19:50:05.225953 ignition[1356]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:05.225953 ignition[1356]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:05.225953 ignition[1356]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:05.225953 ignition[1356]: INFO : PUT result: OK Feb 13 19:50:05.242083 ignition[1356]: INFO : umount: umount passed Feb 13 19:50:05.234732 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:50:05.250794 ignition[1356]: INFO : Ignition finished successfully Feb 13 19:50:05.242880 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:50:05.243263 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:50:05.254426 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:50:05.254645 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:50:05.264652 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:50:05.264847 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:50:05.272967 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:50:05.273607 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:50:05.275192 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 19:50:05.275303 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 19:50:05.276501 systemd[1]: Stopped target network.target - Network. Feb 13 19:50:05.289141 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:50:05.289462 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:50:05.295830 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:50:05.299049 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Feb 13 19:50:05.303227 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:50:05.305874 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:50:05.307675 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:50:05.309590 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:50:05.309683 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:50:05.311620 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:50:05.311714 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:50:05.313690 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:50:05.313806 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:50:05.315757 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:50:05.315868 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:50:05.317977 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:50:05.318093 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:50:05.320499 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:50:05.327808 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:50:05.331150 systemd-networkd[1112]: eth0: DHCPv6 lease lost Feb 13 19:50:05.341769 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:50:05.342120 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:50:05.361571 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:50:05.364427 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:50:05.377351 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:50:05.377507 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Feb 13 19:50:05.396121 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:50:05.400283 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:50:05.400424 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:50:05.405261 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:50:05.405379 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:50:05.419519 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:50:05.419850 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:50:05.426464 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:50:05.426591 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:50:05.432674 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:50:05.466373 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:50:05.466813 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:50:05.471634 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:50:05.472100 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:50:05.478632 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:50:05.479151 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:50:05.482409 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:50:05.482489 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:50:05.482892 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:50:05.483174 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:50:05.488524 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:50:05.488652 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:50:05.490227 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:50:05.490367 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:50:05.521435 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:50:05.524137 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:50:05.524307 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:50:05.529785 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:50:05.531209 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:50:05.534176 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:50:05.534305 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:50:05.550414 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:50:05.550548 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:50:05.565140 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:50:05.565895 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:50:05.574314 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:50:05.586332 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:50:05.628762 systemd[1]: Switching root.
Feb 13 19:50:05.666012 systemd-journald[251]: Journal stopped
Feb 13 19:50:08.417834 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:50:08.418019 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:50:08.418080 kernel: SELinux: policy capability open_perms=1
Feb 13 19:50:08.418128 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:50:08.418170 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:50:08.418204 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:50:08.418234 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:50:08.418268 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:50:08.418297 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:50:08.418329 kernel: audit: type=1403 audit(1739476206.446:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:50:08.418376 systemd[1]: Successfully loaded SELinux policy in 60.907ms.
Feb 13 19:50:08.418423 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.085ms.
Feb 13 19:50:08.418465 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:50:08.418507 systemd[1]: Detected virtualization amazon.
Feb 13 19:50:08.418550 systemd[1]: Detected architecture arm64.
Feb 13 19:50:08.418588 systemd[1]: Detected first boot.
Feb 13 19:50:08.418631 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:50:08.418670 zram_generator::config[1415]: No configuration found.
Feb 13 19:50:08.418713 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:50:08.418752 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:50:08.418789 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 19:50:08.418829 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:50:08.418859 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:50:08.418891 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:50:08.421017 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:50:08.421059 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:50:08.421095 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:50:08.421139 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:50:08.421175 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:50:08.421209 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:50:08.421241 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:50:08.421276 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:50:08.421310 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:50:08.421350 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:50:08.421385 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:50:08.421424 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:50:08.421456 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:50:08.421486 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:50:08.421519 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:50:08.421552 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:50:08.421584 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:50:08.421615 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:50:08.421648 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:50:08.421685 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:50:08.421717 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:50:08.421762 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:50:08.421793 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:50:08.421824 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:50:08.421856 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:50:08.421887 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:50:08.421960 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:50:08.421996 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:50:08.422030 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:50:08.422071 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:50:08.422111 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:50:08.422147 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:50:08.422179 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:50:08.422214 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:50:08.422249 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:50:08.422282 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:50:08.422317 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:50:08.422358 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:50:08.422391 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:50:08.422426 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:50:08.422467 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:50:08.422502 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:50:08.422550 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 13 19:50:08.422585 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Feb 13 19:50:08.422617 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:50:08.422647 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:50:08.422685 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:50:08.422718 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:50:08.422748 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:50:08.422782 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:50:08.422814 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:50:08.422846 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:50:08.422876 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:50:08.425655 kernel: loop: module loaded
Feb 13 19:50:08.425727 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:50:08.425779 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:50:08.425814 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:50:08.425845 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:50:08.425877 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:50:08.425987 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:50:08.426026 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:50:08.427626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:50:08.427660 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:50:08.427763 systemd-journald[1514]: Collecting audit messages is disabled.
Feb 13 19:50:08.427831 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:50:08.427863 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:50:08.427895 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:50:08.428003 systemd-journald[1514]: Journal started
Feb 13 19:50:08.428053 systemd-journald[1514]: Runtime Journal (/run/log/journal/ec2c2ad80a69c05a4d0d31ad6a846e41) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:50:08.447977 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:50:08.455058 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:50:08.461117 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:50:08.472790 kernel: fuse: init (API version 7.39)
Feb 13 19:50:08.480951 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:50:08.481417 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:50:08.495542 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:50:08.516719 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:50:08.530100 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:50:08.536621 kernel: ACPI: bus type drm_connector registered
Feb 13 19:50:08.548320 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:50:08.553129 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:50:08.574306 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:50:08.588244 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:50:08.592121 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:50:08.597228 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:50:08.604239 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:50:08.618235 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:50:08.634964 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:50:08.642289 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:50:08.658495 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:50:08.661634 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:50:08.674241 systemd-journald[1514]: Time spent on flushing to /var/log/journal/ec2c2ad80a69c05a4d0d31ad6a846e41 is 81.213ms for 879 entries.
Feb 13 19:50:08.674241 systemd-journald[1514]: System Journal (/var/log/journal/ec2c2ad80a69c05a4d0d31ad6a846e41) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:50:08.774405 systemd-journald[1514]: Received client request to flush runtime journal.
Feb 13 19:50:08.676750 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:50:08.691442 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:50:08.695614 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:50:08.710658 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:50:08.725406 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:50:08.751831 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:50:08.782455 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:50:08.790279 udevadm[1576]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 19:50:08.808351 systemd-tmpfiles[1566]: ACLs are not supported, ignoring.
Feb 13 19:50:08.808883 systemd-tmpfiles[1566]: ACLs are not supported, ignoring.
Feb 13 19:50:08.820766 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:50:08.832284 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:50:08.897692 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:50:08.910285 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:50:08.945068 systemd-tmpfiles[1590]: ACLs are not supported, ignoring.
Feb 13 19:50:08.945722 systemd-tmpfiles[1590]: ACLs are not supported, ignoring.
Feb 13 19:50:08.958125 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:50:09.695566 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:50:09.706358 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:50:09.766833 systemd-udevd[1596]: Using default interface naming scheme 'v255'.
Feb 13 19:50:09.860740 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:50:09.874269 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:50:09.928208 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:50:09.991639 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Feb 13 19:50:10.070044 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:50:10.088775 (udev-worker)[1607]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:50:10.256756 systemd-networkd[1601]: lo: Link UP
Feb 13 19:50:10.256774 systemd-networkd[1601]: lo: Gained carrier
Feb 13 19:50:10.261153 systemd-networkd[1601]: Enumeration completed
Feb 13 19:50:10.261592 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:50:10.263946 systemd-networkd[1601]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:50:10.264101 systemd-networkd[1601]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:50:10.266410 systemd-networkd[1601]: eth0: Link UP
Feb 13 19:50:10.267486 systemd-networkd[1601]: eth0: Gained carrier
Feb 13 19:50:10.269536 systemd-networkd[1601]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:50:10.273237 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:50:10.284411 systemd-networkd[1601]: eth0: DHCPv4 address 172.31.17.39/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:50:10.290124 systemd-networkd[1601]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:50:10.357078 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1607)
Feb 13 19:50:10.395177 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:50:10.584811 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:50:10.615635 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:50:10.618845 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:50:10.628303 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:50:10.675936 lvm[1725]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:50:10.719651 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:50:10.722669 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:50:10.735265 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:50:10.754774 lvm[1728]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:50:10.798157 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:50:10.801294 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:50:10.804068 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:50:10.804314 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:50:10.806601 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:50:10.811099 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:50:10.821216 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:50:10.834965 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:50:10.837205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:50:10.842266 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:50:10.856319 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:50:10.866297 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:50:10.871823 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:50:10.901526 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:50:10.909027 kernel: loop0: detected capacity change from 0 to 194096
Feb 13 19:50:10.902890 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:50:10.919312 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:50:10.995959 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:50:11.042038 kernel: loop1: detected capacity change from 0 to 114432
Feb 13 19:50:11.145966 kernel: loop2: detected capacity change from 0 to 114328
Feb 13 19:50:11.232963 kernel: loop3: detected capacity change from 0 to 52536
Feb 13 19:50:11.283958 kernel: loop4: detected capacity change from 0 to 194096
Feb 13 19:50:11.328114 kernel: loop5: detected capacity change from 0 to 114432
Feb 13 19:50:11.340029 kernel: loop6: detected capacity change from 0 to 114328
Feb 13 19:50:11.356969 kernel: loop7: detected capacity change from 0 to 52536
Feb 13 19:50:11.374093 (sd-merge)[1750]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 19:50:11.375242 (sd-merge)[1750]: Merged extensions into '/usr'.
Feb 13 19:50:11.382544 systemd[1]: Reloading requested from client PID 1736 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:50:11.382819 systemd[1]: Reloading...
Feb 13 19:50:11.510944 zram_generator::config[1781]: No configuration found.
Feb 13 19:50:11.768097 systemd-networkd[1601]: eth0: Gained IPv6LL
Feb 13 19:50:11.812406 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:50:11.964293 systemd[1]: Reloading finished in 580 ms.
Feb 13 19:50:11.990034 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 19:50:11.993505 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:50:12.016191 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:50:12.024249 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:50:12.030895 systemd[1]: Reloading requested from client PID 1837 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:50:12.032999 systemd[1]: Reloading...
Feb 13 19:50:12.095890 systemd-tmpfiles[1838]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:50:12.097774 systemd-tmpfiles[1838]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:50:12.099889 systemd-tmpfiles[1838]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:50:12.100703 systemd-tmpfiles[1838]: ACLs are not supported, ignoring.
Feb 13 19:50:12.101107 systemd-tmpfiles[1838]: ACLs are not supported, ignoring.
Feb 13 19:50:12.111023 systemd-tmpfiles[1838]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:50:12.111046 systemd-tmpfiles[1838]: Skipping /boot
Feb 13 19:50:12.139366 systemd-tmpfiles[1838]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:50:12.139572 systemd-tmpfiles[1838]: Skipping /boot
Feb 13 19:50:12.225997 zram_generator::config[1869]: No configuration found.
Feb 13 19:50:12.359696 ldconfig[1732]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:50:12.486972 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:50:12.631174 systemd[1]: Reloading finished in 597 ms.
Feb 13 19:50:12.656363 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:50:12.670315 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:50:12.689365 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 19:50:12.697251 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:50:12.708363 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:50:12.724972 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:50:12.729610 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:50:12.757222 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:50:12.764581 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:50:12.784346 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:50:12.808939 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:50:12.812227 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:50:12.833471 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:50:12.833861 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:50:12.846150 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:50:12.864642 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:50:12.868276 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:50:12.869842 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:50:12.876313 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:50:12.881988 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:50:12.882415 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:50:12.890067 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:50:12.891730 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:50:12.914954 augenrules[1960]: No rules
Feb 13 19:50:12.919204 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 19:50:12.935165 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:50:12.936488 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:50:12.943815 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:50:12.952566 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:50:12.968439 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:50:12.989345 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:50:12.992825 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:50:12.993293 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:50:13.009177 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:50:13.019391 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:50:13.021842 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:50:13.027637 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:50:13.029051 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:50:13.036034 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:50:13.036481 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:50:13.051747 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:50:13.065395 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:50:13.065533 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:50:13.077728 systemd-resolved[1933]: Positive Trust Anchors:
Feb 13 19:50:13.077780 systemd-resolved[1933]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:50:13.077846 systemd-resolved[1933]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:50:13.083872 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:50:13.092157 systemd-resolved[1933]: Defaulting to hostname 'linux'.
Feb 13 19:50:13.096382 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:50:13.098641 systemd[1]: Reached target network.target - Network.
Feb 13 19:50:13.100448 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 19:50:13.102571 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:50:13.138243 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:50:13.142169 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:50:13.142286 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:50:13.145114 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:50:13.147821 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:50:13.151028 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:50:13.153292 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:50:13.155640 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:50:13.158006 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:50:13.158060 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:50:13.159750 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:50:13.163013 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:50:13.168058 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:50:13.173534 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:50:13.183060 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:50:13.185317 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:50:13.188084 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:50:13.190624 systemd[1]: System is tainted: cgroupsv1
Feb 13 19:50:13.190726 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:50:13.190775 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:50:13.202115 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:50:13.208311 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 19:50:13.225445 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:50:13.232106 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:50:13.240365 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:50:13.245095 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:50:13.258167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:50:13.267126 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:50:13.285166 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 19:50:13.296019 jq[1996]: false
Feb 13 19:50:13.293167 dbus-daemon[1995]: [system] SELinux support is enabled
Feb 13 19:50:13.303756 dbus-daemon[1995]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1601 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 19:50:13.319377 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 19:50:13.327234 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 19:50:13.339236 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:50:13.364225 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:50:13.370897 extend-filesystems[1997]: Found loop4
Feb 13 19:50:13.375280 extend-filesystems[1997]: Found loop5
Feb 13 19:50:13.375280 extend-filesystems[1997]: Found loop6
Feb 13 19:50:13.375280 extend-filesystems[1997]: Found loop7
Feb 13 19:50:13.375280 extend-filesystems[1997]: Found nvme0n1
Feb 13 19:50:13.375280 extend-filesystems[1997]: Found nvme0n1p1
Feb 13 19:50:13.375280 extend-filesystems[1997]: Found nvme0n1p2
Feb 13 19:50:13.375280 extend-filesystems[1997]: Found nvme0n1p3
Feb 13 19:50:13.375280 extend-filesystems[1997]: Found usr
Feb 13 19:50:13.375280 extend-filesystems[1997]: Found nvme0n1p4
Feb 13 19:50:13.375280 extend-filesystems[1997]: Found nvme0n1p6
Feb 13 19:50:13.375280 extend-filesystems[1997]: Found nvme0n1p7
Feb 13 19:50:13.375280 extend-filesystems[1997]: Found nvme0n1p9
Feb 13 19:50:13.375280 extend-filesystems[1997]: Checking size of /dev/nvme0n1p9
Feb 13 19:50:13.409200 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:50:13.414509 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:50:13.432282 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:50:13.448091 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:50:13.453211 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:50:13.484291 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:50:13.484842 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:50:13.502400 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:50:13.502943 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:50:13.527851 jq[2024]: true
Feb 13 19:50:13.530893 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:50:13.539043 ntpd[2000]: 13 Feb 19:50:13 ntpd[2000]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:35:09 UTC 2025 (1): Starting
Feb 13 19:50:13.539043 ntpd[2000]: 13 Feb 19:50:13 ntpd[2000]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:50:13.539043 ntpd[2000]: 13 Feb 19:50:13 ntpd[2000]: ----------------------------------------------------
Feb 13 19:50:13.539043 ntpd[2000]: 13 Feb 19:50:13 ntpd[2000]: ntp-4 is maintained by Network Time Foundation,
Feb 13 19:50:13.539043 ntpd[2000]: 13 Feb 19:50:13 ntpd[2000]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 19:50:13.539043 ntpd[2000]: 13 Feb 19:50:13 ntpd[2000]: corporation. Support and training for ntp-4 are
Feb 13 19:50:13.539043 ntpd[2000]: 13 Feb 19:50:13 ntpd[2000]: available at https://www.nwtime.org/support
Feb 13 19:50:13.539043 ntpd[2000]: 13 Feb 19:50:13 ntpd[2000]: ----------------------------------------------------
Feb 13 19:50:13.536938 ntpd[2000]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:35:09 UTC 2025 (1): Starting
Feb 13 19:50:13.531459 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:50:13.537011 ntpd[2000]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:50:13.537032 ntpd[2000]: ----------------------------------------------------
Feb 13 19:50:13.537053 ntpd[2000]: ntp-4 is maintained by Network Time Foundation,
Feb 13 19:50:13.537071 ntpd[2000]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 19:50:13.537090 ntpd[2000]: corporation. Support and training for ntp-4 are
Feb 13 19:50:13.537109 ntpd[2000]: available at https://www.nwtime.org/support
Feb 13 19:50:13.537127 ntpd[2000]: ----------------------------------------------------
Feb 13 19:50:13.546729 ntpd[2000]: proto: precision = 0.096 usec (-23)
Feb 13 19:50:13.564068 ntpd[2000]: 13 Feb 19:50:13 ntpd[2000]: proto: precision = 0.096 usec (-23)
Feb 13 19:50:13.564068 ntpd[2000]: 13 Feb 19:50:13 ntpd[2000]: basedate set to 2025-02-01
Feb 13 19:50:13.564068 ntpd[2000]: 13 Feb 19:50:13 ntpd[2000]: gps base set to 2025-02-02 (week 2352)
Feb 13 19:50:13.554623 ntpd[2000]: basedate set to 2025-02-01
Feb 13 19:50:13.554656 ntpd[2000]: gps base set to 2025-02-02 (week 2352)
Feb 13 19:50:13.566345 ntpd[2000]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 19:50:13.567223 ntpd[2000]: 13 Feb 19:50:13 ntpd[2000]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 19:50:13.567223 ntpd[2000]: 13 Feb 19:50:13 ntpd[2000]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 19:50:13.567223 ntpd[2000]: 13 Feb 19:50:13 ntpd[2000]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 19:50:13.567223 ntpd[2000]: 13 Feb 19:50:13 ntpd[2000]: Listen normally on 3 eth0 172.31.17.39:123
Feb 13 19:50:13.567223 ntpd[2000]: 13 Feb 19:50:13 ntpd[2000]: Listen normally on 4 lo [::1]:123
Feb 13 19:50:13.566452 ntpd[2000]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 19:50:13.566712 ntpd[2000]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 19:50:13.566773 ntpd[2000]: Listen normally on 3 eth0 172.31.17.39:123
Feb 13 19:50:13.566840 ntpd[2000]: Listen normally on 4 lo [::1]:123
Feb 13 19:50:13.569995 ntpd[2000]: Listen normally on 5 eth0 [fe80::41b:f2ff:fe1a:c931%2]:123
Feb 13 19:50:13.570469 ntpd[2000]: 13 Feb 19:50:13 ntpd[2000]: Listen normally on 5 eth0 [fe80::41b:f2ff:fe1a:c931%2]:123
Feb 13 19:50:13.570469 ntpd[2000]: 13 Feb 19:50:13 ntpd[2000]: Listening on routing socket on fd #22 for interface updates
Feb 13 19:50:13.570121 ntpd[2000]: Listening on routing socket on fd #22 for interface updates
Feb 13 19:50:13.588226 update_engine[2023]: I20250213 19:50:13.587972 2023 main.cc:92] Flatcar Update Engine starting
Feb 13 19:50:13.627746 update_engine[2023]: I20250213 19:50:13.601186 2023 update_check_scheduler.cc:74] Next update check in 8m45s
Feb 13 19:50:13.627855 extend-filesystems[1997]: Resized partition /dev/nvme0n1p9
Feb 13 19:50:13.631160 ntpd[2000]: 13 Feb 19:50:13 ntpd[2000]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:50:13.631160 ntpd[2000]: 13 Feb 19:50:13 ntpd[2000]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:50:13.601757 ntpd[2000]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:50:13.601813 ntpd[2000]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:50:13.638056 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 19:50:13.646160 extend-filesystems[2047]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:50:13.664634 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 13 19:50:13.666266 coreos-metadata[1994]: Feb 13 19:50:13.666 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 19:50:13.666266 coreos-metadata[1994]: Feb 13 19:50:13.666 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 13 19:50:13.674086 coreos-metadata[1994]: Feb 13 19:50:13.672 INFO Fetch successful
Feb 13 19:50:13.674086 coreos-metadata[1994]: Feb 13 19:50:13.673 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 13 19:50:13.695366 coreos-metadata[1994]: Feb 13 19:50:13.684 INFO Fetch successful
Feb 13 19:50:13.695366 coreos-metadata[1994]: Feb 13 19:50:13.685 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 13 19:50:13.695366 coreos-metadata[1994]: Feb 13 19:50:13.693 INFO Fetch successful
Feb 13 19:50:13.695366 coreos-metadata[1994]: Feb 13 19:50:13.693 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 13 19:50:13.687221 dbus-daemon[1995]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 19:50:13.699008 jq[2045]: true
Feb 13 19:50:13.712852 coreos-metadata[1994]: Feb 13 19:50:13.701 INFO Fetch successful
Feb 13 19:50:13.712852 coreos-metadata[1994]: Feb 13 19:50:13.701 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Feb 13 19:50:13.712852 coreos-metadata[1994]: Feb 13 19:50:13.706 INFO Fetch failed with 404: resource not found
Feb 13 19:50:13.712852 coreos-metadata[1994]: Feb 13 19:50:13.709 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Feb 13 19:50:13.722928 coreos-metadata[1994]: Feb 13 19:50:13.717 INFO Fetch successful
Feb 13 19:50:13.722928 coreos-metadata[1994]: Feb 13 19:50:13.717 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Feb 13 19:50:13.736213 coreos-metadata[1994]: Feb 13 19:50:13.731 INFO Fetch successful
Feb 13 19:50:13.736213 coreos-metadata[1994]: Feb 13 19:50:13.731 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Feb 13 19:50:13.731384 (ntainerd)[2050]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:50:13.746244 coreos-metadata[1994]: Feb 13 19:50:13.743 INFO Fetch successful
Feb 13 19:50:13.746244 coreos-metadata[1994]: Feb 13 19:50:13.743 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 13 19:50:13.746244 coreos-metadata[1994]: Feb 13 19:50:13.743 INFO Fetch successful
Feb 13 19:50:13.746244 coreos-metadata[1994]: Feb 13 19:50:13.744 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Feb 13 19:50:13.746244 coreos-metadata[1994]: Feb 13 19:50:13.744 INFO Fetch successful
Feb 13 19:50:13.777888 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:50:13.785675 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 19:50:13.785960 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:50:13.804225 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 19:50:13.807014 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 19:50:13.807061 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 19:50:13.810517 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:50:13.828690 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:50:13.833703 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 19:50:13.855068 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Feb 13 19:50:13.888722 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 13 19:50:13.908934 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 19:50:13.913580 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 19:50:13.923316 systemd-logind[2015]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 13 19:50:13.923363 systemd-logind[2015]: Watching system buttons on /dev/input/event1 (Sleep Button)
Feb 13 19:50:13.927511 systemd-logind[2015]: New seat seat0.
Feb 13 19:50:13.931080 extend-filesystems[2047]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 13 19:50:13.931080 extend-filesystems[2047]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 19:50:13.931080 extend-filesystems[2047]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 13 19:50:13.948645 extend-filesystems[1997]: Resized filesystem in /dev/nvme0n1p9
Feb 13 19:50:13.941353 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 19:50:13.942555 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 19:50:13.956083 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 19:50:14.089492 bash[2113]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:50:14.095524 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 19:50:14.117868 systemd[1]: Starting sshkeys.service...
Feb 13 19:50:14.161924 amazon-ssm-agent[2074]: Initializing new seelog logger
Feb 13 19:50:14.161924 amazon-ssm-agent[2074]: New Seelog Logger Creation Complete
Feb 13 19:50:14.161924 amazon-ssm-agent[2074]: 2025/02/13 19:50:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:50:14.161924 amazon-ssm-agent[2074]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:50:14.161924 amazon-ssm-agent[2074]: 2025/02/13 19:50:14 processing appconfig overrides
Feb 13 19:50:14.163254 amazon-ssm-agent[2074]: 2025/02/13 19:50:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:50:14.163385 amazon-ssm-agent[2074]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:50:14.163714 amazon-ssm-agent[2074]: 2025/02/13 19:50:14 processing appconfig overrides
Feb 13 19:50:14.166804 amazon-ssm-agent[2074]: 2025/02/13 19:50:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:50:14.167363 amazon-ssm-agent[2074]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:50:14.167707 amazon-ssm-agent[2074]: 2025/02/13 19:50:14 processing appconfig overrides
Feb 13 19:50:14.168044 amazon-ssm-agent[2074]: 2025-02-13 19:50:14 INFO Proxy environment variables:
Feb 13 19:50:14.174338 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 19:50:14.182810 amazon-ssm-agent[2074]: 2025/02/13 19:50:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:50:14.182810 amazon-ssm-agent[2074]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:50:14.182810 amazon-ssm-agent[2074]: 2025/02/13 19:50:14 processing appconfig overrides
Feb 13 19:50:14.184560 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 19:50:14.203936 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2100)
Feb 13 19:50:14.283947 amazon-ssm-agent[2074]: 2025-02-13 19:50:14 INFO https_proxy:
Feb 13 19:50:14.385313 amazon-ssm-agent[2074]: 2025-02-13 19:50:14 INFO http_proxy:
Feb 13 19:50:14.402122 locksmithd[2071]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 19:50:14.476086 dbus-daemon[1995]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 19:50:14.476423 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 19:50:14.484779 dbus-daemon[1995]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2070 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 19:50:14.490263 amazon-ssm-agent[2074]: 2025-02-13 19:50:14 INFO no_proxy:
Feb 13 19:50:14.515982 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 19:50:14.553522 sshd_keygen[2046]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 19:50:14.587120 amazon-ssm-agent[2074]: 2025-02-13 19:50:14 INFO Checking if agent identity type OnPrem can be assumed
Feb 13 19:50:14.594259 polkitd[2174]: Started polkitd version 121
Feb 13 19:50:14.641095 containerd[2050]: time="2025-02-13T19:50:14.639121645Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Feb 13 19:50:14.670510 coreos-metadata[2123]: Feb 13 19:50:14.669 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 19:50:14.674744 coreos-metadata[2123]: Feb 13 19:50:14.671 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Feb 13 19:50:14.675045 coreos-metadata[2123]: Feb 13 19:50:14.674 INFO Fetch successful
Feb 13 19:50:14.675045 coreos-metadata[2123]: Feb 13 19:50:14.674 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 13 19:50:14.676750 coreos-metadata[2123]: Feb 13 19:50:14.675 INFO Fetch successful
Feb 13 19:50:14.677302 polkitd[2174]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 19:50:14.677419 polkitd[2174]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 19:50:14.684954 unknown[2123]: wrote ssh authorized keys file for user: core
Feb 13 19:50:14.691567 amazon-ssm-agent[2074]: 2025-02-13 19:50:14 INFO Checking if agent identity type EC2 can be assumed
Feb 13 19:50:14.693463 polkitd[2174]: Finished loading, compiling and executing 2 rules
Feb 13 19:50:14.708298 dbus-daemon[1995]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 19:50:14.712794 polkitd[2174]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 19:50:14.743515 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 19:50:14.758187 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 19:50:14.790979 amazon-ssm-agent[2074]: 2025-02-13 19:50:14 INFO Agent will take identity from EC2
Feb 13 19:50:14.792462 update-ssh-keys[2217]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:50:14.793923 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 19:50:14.809265 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 19:50:14.831712 systemd[1]: Finished sshkeys.service.
Feb 13 19:50:14.855637 systemd-hostnamed[2070]: Hostname set to (transient)
Feb 13 19:50:14.856889 systemd-resolved[1933]: System hostname changed to 'ip-172-31-17-39'.
Feb 13 19:50:14.889535 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 19:50:14.890085 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 19:50:14.894983 amazon-ssm-agent[2074]: 2025-02-13 19:50:14 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 19:50:14.909421 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 19:50:14.925070 containerd[2050]: time="2025-02-13T19:50:14.921626330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:50:14.950881 containerd[2050]: time="2025-02-13T19:50:14.941820914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:50:14.950881 containerd[2050]: time="2025-02-13T19:50:14.941945186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:50:14.950881 containerd[2050]: time="2025-02-13T19:50:14.942023714Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:50:14.950881 containerd[2050]: time="2025-02-13T19:50:14.943142006Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:50:14.950881 containerd[2050]: time="2025-02-13T19:50:14.943215434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:50:14.950881 containerd[2050]: time="2025-02-13T19:50:14.943392110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:50:14.950881 containerd[2050]: time="2025-02-13T19:50:14.943424726Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:50:14.951725 containerd[2050]: time="2025-02-13T19:50:14.951659966Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:50:14.951876 containerd[2050]: time="2025-02-13T19:50:14.951845702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:50:14.952018 containerd[2050]: time="2025-02-13T19:50:14.951987818Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:50:14.952366 containerd[2050]: time="2025-02-13T19:50:14.952141070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:50:14.955935 containerd[2050]: time="2025-02-13T19:50:14.954926774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:50:14.955935 containerd[2050]: time="2025-02-13T19:50:14.955431734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:50:14.955935 containerd[2050]: time="2025-02-13T19:50:14.955755098Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:50:14.955935 containerd[2050]: time="2025-02-13T19:50:14.955792658Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:50:14.961211 containerd[2050]: time="2025-02-13T19:50:14.960637598Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:50:14.961211 containerd[2050]: time="2025-02-13T19:50:14.960967514Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:50:14.989672 containerd[2050]: time="2025-02-13T19:50:14.989557250Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:50:14.990643 containerd[2050]: time="2025-02-13T19:50:14.990003470Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:50:14.990643 containerd[2050]: time="2025-02-13T19:50:14.990064166Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:50:14.990643 containerd[2050]: time="2025-02-13T19:50:14.990099722Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:50:14.990643 containerd[2050]: time="2025-02-13T19:50:14.990138650Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:50:14.990643 containerd[2050]: time="2025-02-13T19:50:14.990427706Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:50:14.994349 containerd[2050]: time="2025-02-13T19:50:14.993248090Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:50:14.998395 containerd[2050]: time="2025-02-13T19:50:14.994846022Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:50:14.998395 containerd[2050]: time="2025-02-13T19:50:14.994917950Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:50:14.998395 containerd[2050]: time="2025-02-13T19:50:14.994954934Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:50:14.998395 containerd[2050]: time="2025-02-13T19:50:14.994988798Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:50:14.998395 containerd[2050]: time="2025-02-13T19:50:14.995026142Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:50:14.998395 containerd[2050]: time="2025-02-13T19:50:14.995070998Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:50:14.998395 containerd[2050]: time="2025-02-13T19:50:14.995105858Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:50:14.998395 containerd[2050]: time="2025-02-13T19:50:14.995139134Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:50:14.998395 containerd[2050]: time="2025-02-13T19:50:14.995169110Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:50:14.998395 containerd[2050]: time="2025-02-13T19:50:14.995199122Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:50:14.998395 containerd[2050]: time="2025-02-13T19:50:14.995227826Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:50:14.998395 containerd[2050]: time="2025-02-13T19:50:14.995277866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:50:14.998395 containerd[2050]: time="2025-02-13T19:50:14.995313530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:50:14.998395 containerd[2050]: time="2025-02-13T19:50:14.995369918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 19:50:14.999091 amazon-ssm-agent[2074]: 2025-02-13 19:50:14 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 19:50:14.999164 containerd[2050]: time="2025-02-13T19:50:14.995407358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 19:50:14.999164 containerd[2050]: time="2025-02-13T19:50:14.995437178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 19:50:14.999164 containerd[2050]: time="2025-02-13T19:50:14.995477666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 19:50:14.999164 containerd[2050]: time="2025-02-13T19:50:14.995512958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 19:50:14.999164 containerd[2050]: time="2025-02-13T19:50:14.995548118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 19:50:14.999164 containerd[2050]: time="2025-02-13T19:50:14.995578922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 19:50:14.999164 containerd[2050]: time="2025-02-13T19:50:14.995628494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 19:50:14.999164 containerd[2050]: time="2025-02-13T19:50:14.995659286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:50:14.999164 containerd[2050]: time="2025-02-13T19:50:14.995687642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 19:50:14.999164 containerd[2050]: time="2025-02-13T19:50:14.995718830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 19:50:14.999164 containerd[2050]: time="2025-02-13T19:50:14.995757482Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 19:50:14.999164 containerd[2050]: time="2025-02-13T19:50:14.995802254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 19:50:14.999164 containerd[2050]: time="2025-02-13T19:50:14.995845010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 19:50:14.999164 containerd[2050]: time="2025-02-13T19:50:14.995873294Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 19:50:14.999728 containerd[2050]: time="2025-02-13T19:50:14.995997734Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 19:50:14.999728 containerd[2050]: time="2025-02-13T19:50:14.996035942Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 19:50:14.999728 containerd[2050]: time="2025-02-13T19:50:14.996065042Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 19:50:14.999728 containerd[2050]: time="2025-02-13T19:50:14.996095666Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 19:50:14.999728 containerd[2050]: time="2025-02-13T19:50:14.996144518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 19:50:14.999728 containerd[2050]: time="2025-02-13T19:50:14.996189626Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 19:50:14.999728 containerd[2050]: time="2025-02-13T19:50:14.996215090Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 19:50:14.999728 containerd[2050]: time="2025-02-13T19:50:14.996240290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 19:50:15.006360 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 19:50:15.019296 containerd[2050]: time="2025-02-13T19:50:15.014170174Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:50:15.019296 containerd[2050]: time="2025-02-13T19:50:15.014330758Z" level=info msg="Connect containerd service" Feb 13 19:50:15.019296 containerd[2050]: time="2025-02-13T19:50:15.014412454Z" level=info msg="using legacy CRI server" Feb 13 19:50:15.019296 containerd[2050]: time="2025-02-13T19:50:15.014432254Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:50:15.019296 containerd[2050]: time="2025-02-13T19:50:15.014638654Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:50:15.022935 containerd[2050]: time="2025-02-13T19:50:15.020128655Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:50:15.022935 containerd[2050]: time="2025-02-13T19:50:15.022095443Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:50:15.022935 containerd[2050]: time="2025-02-13T19:50:15.022212659Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 13 19:50:15.031637 containerd[2050]: time="2025-02-13T19:50:15.025090163Z" level=info msg="Start subscribing containerd event" Feb 13 19:50:15.032031 containerd[2050]: time="2025-02-13T19:50:15.031971683Z" level=info msg="Start recovering state" Feb 13 19:50:15.032188 containerd[2050]: time="2025-02-13T19:50:15.032151371Z" level=info msg="Start event monitor" Feb 13 19:50:15.032248 containerd[2050]: time="2025-02-13T19:50:15.032186891Z" level=info msg="Start snapshots syncer" Feb 13 19:50:15.032248 containerd[2050]: time="2025-02-13T19:50:15.032213747Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:50:15.032248 containerd[2050]: time="2025-02-13T19:50:15.032233031Z" level=info msg="Start streaming server" Feb 13 19:50:15.032413 containerd[2050]: time="2025-02-13T19:50:15.032394263Z" level=info msg="containerd successfully booted in 0.410831s" Feb 13 19:50:15.036755 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:50:15.043293 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:50:15.047093 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:50:15.051809 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:50:15.096093 amazon-ssm-agent[2074]: 2025-02-13 19:50:14 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:50:15.197137 amazon-ssm-agent[2074]: 2025-02-13 19:50:14 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:50:15.297536 amazon-ssm-agent[2074]: 2025-02-13 19:50:14 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 19:50:15.397796 amazon-ssm-agent[2074]: 2025-02-13 19:50:14 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 19:50:15.497852 amazon-ssm-agent[2074]: 2025-02-13 19:50:14 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Feb 13 19:50:15.548072 amazon-ssm-agent[2074]: 2025-02-13 19:50:14 INFO [Registrar] Starting registrar module Feb 13 19:50:15.548310 amazon-ssm-agent[2074]: 2025-02-13 19:50:14 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 19:50:15.548310 amazon-ssm-agent[2074]: 2025-02-13 19:50:15 INFO [EC2Identity] EC2 registration was successful. Feb 13 19:50:15.548673 amazon-ssm-agent[2074]: 2025-02-13 19:50:15 INFO [CredentialRefresher] credentialRefresher has started Feb 13 19:50:15.548673 amazon-ssm-agent[2074]: 2025-02-13 19:50:15 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 19:50:15.548673 amazon-ssm-agent[2074]: 2025-02-13 19:50:15 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 19:50:15.598196 amazon-ssm-agent[2074]: 2025-02-13 19:50:15 INFO [CredentialRefresher] Next credential rotation will be in 31.408312844133334 minutes Feb 13 19:50:15.856258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:50:15.860330 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:50:15.865079 systemd[1]: Startup finished in 9.884s (kernel) + 9.476s (userspace) = 19.361s. 
Feb 13 19:50:15.868955 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:50:16.575320 amazon-ssm-agent[2074]: 2025-02-13 19:50:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 19:50:16.676615 amazon-ssm-agent[2074]: 2025-02-13 19:50:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2285) started Feb 13 19:50:16.778741 amazon-ssm-agent[2074]: 2025-02-13 19:50:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 19:50:16.876343 kubelet[2273]: E0213 19:50:16.876158 2273 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:50:16.881802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:50:16.882795 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:50:20.794962 systemd-resolved[1933]: Clock change detected. Flushing caches. Feb 13 19:50:21.174486 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:50:21.185521 systemd[1]: Started sshd@0-172.31.17.39:22-139.178.89.65:38538.service - OpenSSH per-connection server daemon (139.178.89.65:38538). Feb 13 19:50:21.407338 sshd[2298]: Accepted publickey for core from 139.178.89.65 port 38538 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:21.411394 sshd[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:21.428909 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Feb 13 19:50:21.438517 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:50:21.445068 systemd-logind[2015]: New session 1 of user core. Feb 13 19:50:21.470421 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:50:21.482726 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:50:21.494472 (systemd)[2304]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:50:21.717090 systemd[2304]: Queued start job for default target default.target. Feb 13 19:50:21.717786 systemd[2304]: Created slice app.slice - User Application Slice. Feb 13 19:50:21.717838 systemd[2304]: Reached target paths.target - Paths. Feb 13 19:50:21.717884 systemd[2304]: Reached target timers.target - Timers. Feb 13 19:50:21.729171 systemd[2304]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:50:21.741735 systemd[2304]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:50:21.741853 systemd[2304]: Reached target sockets.target - Sockets. Feb 13 19:50:21.741888 systemd[2304]: Reached target basic.target - Basic System. Feb 13 19:50:21.741981 systemd[2304]: Reached target default.target - Main User Target. Feb 13 19:50:21.742221 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:50:21.744747 systemd[2304]: Startup finished in 237ms. Feb 13 19:50:21.760773 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:50:21.920565 systemd[1]: Started sshd@1-172.31.17.39:22-139.178.89.65:38552.service - OpenSSH per-connection server daemon (139.178.89.65:38552). 
Feb 13 19:50:22.099678 sshd[2316]: Accepted publickey for core from 139.178.89.65 port 38552 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:22.102192 sshd[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:22.110753 systemd-logind[2015]: New session 2 of user core. Feb 13 19:50:22.122533 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:50:22.251372 sshd[2316]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:22.256877 systemd-logind[2015]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:50:22.258352 systemd[1]: sshd@1-172.31.17.39:22-139.178.89.65:38552.service: Deactivated successfully. Feb 13 19:50:22.264863 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:50:22.269472 systemd-logind[2015]: Removed session 2. Feb 13 19:50:22.287476 systemd[1]: Started sshd@2-172.31.17.39:22-139.178.89.65:38568.service - OpenSSH per-connection server daemon (139.178.89.65:38568). Feb 13 19:50:22.454818 sshd[2324]: Accepted publickey for core from 139.178.89.65 port 38568 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:22.457665 sshd[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:22.465112 systemd-logind[2015]: New session 3 of user core. Feb 13 19:50:22.472812 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:50:22.596375 sshd[2324]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:22.601634 systemd-logind[2015]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:50:22.603990 systemd[1]: sshd@2-172.31.17.39:22-139.178.89.65:38568.service: Deactivated successfully. Feb 13 19:50:22.610914 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:50:22.613604 systemd-logind[2015]: Removed session 3. 
Feb 13 19:50:22.635524 systemd[1]: Started sshd@3-172.31.17.39:22-139.178.89.65:38570.service - OpenSSH per-connection server daemon (139.178.89.65:38570). Feb 13 19:50:22.807949 sshd[2332]: Accepted publickey for core from 139.178.89.65 port 38570 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:22.810874 sshd[2332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:22.821458 systemd-logind[2015]: New session 4 of user core. Feb 13 19:50:22.832602 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:50:22.965465 sshd[2332]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:22.972110 systemd-logind[2015]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:50:22.973757 systemd[1]: sshd@3-172.31.17.39:22-139.178.89.65:38570.service: Deactivated successfully. Feb 13 19:50:22.979945 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:50:22.981507 systemd-logind[2015]: Removed session 4. Feb 13 19:50:22.994535 systemd[1]: Started sshd@4-172.31.17.39:22-139.178.89.65:38578.service - OpenSSH per-connection server daemon (139.178.89.65:38578). Feb 13 19:50:23.173310 sshd[2340]: Accepted publickey for core from 139.178.89.65 port 38578 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:23.175210 sshd[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:23.185398 systemd-logind[2015]: New session 5 of user core. Feb 13 19:50:23.188572 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 19:50:23.328995 sudo[2344]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:50:23.330406 sudo[2344]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:50:23.350074 sudo[2344]: pam_unix(sudo:session): session closed for user root Feb 13 19:50:23.374504 sshd[2340]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:23.381524 systemd[1]: sshd@4-172.31.17.39:22-139.178.89.65:38578.service: Deactivated successfully. Feb 13 19:50:23.388104 systemd-logind[2015]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:50:23.389550 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:50:23.392573 systemd-logind[2015]: Removed session 5. Feb 13 19:50:23.407518 systemd[1]: Started sshd@5-172.31.17.39:22-139.178.89.65:38590.service - OpenSSH per-connection server daemon (139.178.89.65:38590). Feb 13 19:50:23.573911 sshd[2349]: Accepted publickey for core from 139.178.89.65 port 38590 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:23.576780 sshd[2349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:23.583951 systemd-logind[2015]: New session 6 of user core. Feb 13 19:50:23.593713 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 19:50:23.701992 sudo[2354]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:50:23.703363 sudo[2354]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:50:23.710592 sudo[2354]: pam_unix(sudo:session): session closed for user root Feb 13 19:50:23.721324 sudo[2353]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 19:50:23.722010 sudo[2353]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:50:23.752664 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 19:50:23.755708 auditctl[2357]: No rules Feb 13 19:50:23.756707 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:50:23.757358 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 19:50:23.774730 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:50:23.822084 augenrules[2376]: No rules Feb 13 19:50:23.823825 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:50:23.829937 sudo[2353]: pam_unix(sudo:session): session closed for user root Feb 13 19:50:23.854391 sshd[2349]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:23.862697 systemd[1]: sshd@5-172.31.17.39:22-139.178.89.65:38590.service: Deactivated successfully. Feb 13 19:50:23.864366 systemd-logind[2015]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:50:23.869813 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:50:23.871456 systemd-logind[2015]: Removed session 6. Feb 13 19:50:23.886550 systemd[1]: Started sshd@6-172.31.17.39:22-139.178.89.65:38596.service - OpenSSH per-connection server daemon (139.178.89.65:38596). 
Feb 13 19:50:24.059004 sshd[2385]: Accepted publickey for core from 139.178.89.65 port 38596 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:24.061756 sshd[2385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:24.071637 systemd-logind[2015]: New session 7 of user core. Feb 13 19:50:24.078592 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:50:24.189128 sudo[2389]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:50:24.190494 sudo[2389]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:50:25.388900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:50:25.403510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:50:25.451478 systemd[1]: Reloading requested from client PID 2427 ('systemctl') (unit session-7.scope)... Feb 13 19:50:25.451506 systemd[1]: Reloading... Feb 13 19:50:25.688066 zram_generator::config[2467]: No configuration found. Feb 13 19:50:25.970765 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:50:26.135894 systemd[1]: Reloading finished in 683 ms. Feb 13 19:50:26.240262 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:50:26.240498 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:50:26.241235 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:50:26.250745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:50:26.580381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:50:26.593787 (kubelet)[2542]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:50:26.680601 kubelet[2542]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:50:26.680601 kubelet[2542]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:50:26.681174 kubelet[2542]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:50:26.682433 kubelet[2542]: I0213 19:50:26.682323 2542 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:50:28.221060 kubelet[2542]: I0213 19:50:28.219382 2542 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:50:28.221060 kubelet[2542]: I0213 19:50:28.219433 2542 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:50:28.221060 kubelet[2542]: I0213 19:50:28.219757 2542 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:50:28.254845 kubelet[2542]: I0213 19:50:28.254786 2542 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:50:28.270521 kubelet[2542]: I0213 19:50:28.270459 2542 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:50:28.271413 kubelet[2542]: I0213 19:50:28.271341 2542 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:50:28.271699 kubelet[2542]: I0213 19:50:28.271410 2542 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.17.39","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:50:28.271891 kubelet[2542]: I0213 19:50:28.271745 2542 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 
19:50:28.271891 kubelet[2542]: I0213 19:50:28.271769 2542 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:50:28.272098 kubelet[2542]: I0213 19:50:28.272062 2542 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:50:28.273771 kubelet[2542]: I0213 19:50:28.273702 2542 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:50:28.273771 kubelet[2542]: I0213 19:50:28.273759 2542 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:50:28.273937 kubelet[2542]: I0213 19:50:28.273862 2542 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:50:28.273937 kubelet[2542]: I0213 19:50:28.273932 2542 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:50:28.276804 kubelet[2542]: E0213 19:50:28.275885 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:28.276804 kubelet[2542]: E0213 19:50:28.275983 2542 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:28.277736 kubelet[2542]: I0213 19:50:28.277694 2542 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:50:28.278204 kubelet[2542]: I0213 19:50:28.278159 2542 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:50:28.278307 kubelet[2542]: W0213 19:50:28.278274 2542 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 19:50:28.279651 kubelet[2542]: I0213 19:50:28.279526 2542 server.go:1264] "Started kubelet" Feb 13 19:50:28.283986 kubelet[2542]: I0213 19:50:28.283367 2542 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:50:28.289106 kubelet[2542]: I0213 19:50:28.288744 2542 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:50:28.291967 kubelet[2542]: I0213 19:50:28.290858 2542 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:50:28.292717 kubelet[2542]: I0213 19:50:28.292613 2542 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:50:28.293102 kubelet[2542]: I0213 19:50:28.293047 2542 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:50:28.300382 kubelet[2542]: I0213 19:50:28.300315 2542 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:50:28.301616 kubelet[2542]: E0213 19:50:28.301383 2542 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.17.39.1823dc69d9d58f99 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.17.39,UID:172.31.17.39,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.17.39,},FirstTimestamp:2025-02-13 19:50:28.279472025 +0000 UTC m=+1.679319586,LastTimestamp:2025-02-13 19:50:28.279472025 +0000 UTC m=+1.679319586,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.17.39,}" Feb 13 19:50:28.302355 kubelet[2542]: I0213 19:50:28.302302 2542 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 
19:50:28.302649 kubelet[2542]: I0213 19:50:28.302467 2542 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:50:28.310335 kubelet[2542]: I0213 19:50:28.310250 2542 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:50:28.310624 kubelet[2542]: I0213 19:50:28.310462 2542 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:50:28.315001 kubelet[2542]: I0213 19:50:28.314936 2542 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:50:28.316317 kubelet[2542]: E0213 19:50:28.316254 2542 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:50:28.329846 kubelet[2542]: W0213 19:50:28.328398 2542 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:50:28.329846 kubelet[2542]: E0213 19:50:28.328473 2542 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:50:28.329846 kubelet[2542]: W0213 19:50:28.328541 2542 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:50:28.329846 kubelet[2542]: E0213 19:50:28.328567 2542 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User 
"system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:50:28.329846 kubelet[2542]: E0213 19:50:28.328625 2542 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.17.39\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 19:50:28.336085 kubelet[2542]: E0213 19:50:28.335722 2542 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.17.39.1823dc69dc06784d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.17.39,UID:172.31.17.39,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.17.39,},FirstTimestamp:2025-02-13 19:50:28.316231757 +0000 UTC m=+1.716079330,LastTimestamp:2025-02-13 19:50:28.316231757 +0000 UTC m=+1.716079330,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.17.39,}" Feb 13 19:50:28.341106 kubelet[2542]: W0213 19:50:28.339413 2542 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.17.39" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:50:28.341106 kubelet[2542]: E0213 19:50:28.339493 2542 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.17.39" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:50:28.386376 kubelet[2542]: I0213 19:50:28.386323 2542 
cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:50:28.386376 kubelet[2542]: I0213 19:50:28.386365 2542 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:50:28.386659 kubelet[2542]: I0213 19:50:28.386399 2542 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:50:28.387512 kubelet[2542]: E0213 19:50:28.387278 2542 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.17.39.1823dc69e00ff955 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.17.39,UID:172.31.17.39,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.17.39 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.17.39,},FirstTimestamp:2025-02-13 19:50:28.383963477 +0000 UTC m=+1.783811026,LastTimestamp:2025-02-13 19:50:28.383963477 +0000 UTC m=+1.783811026,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.17.39,}" Feb 13 19:50:28.389134 kubelet[2542]: I0213 19:50:28.389070 2542 policy_none.go:49] "None policy: Start" Feb 13 19:50:28.390990 kubelet[2542]: I0213 19:50:28.390950 2542 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:50:28.391518 kubelet[2542]: I0213 19:50:28.391440 2542 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:50:28.403056 kubelet[2542]: I0213 19:50:28.401996 2542 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:50:28.403056 kubelet[2542]: I0213 19:50:28.402360 2542 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:50:28.406126 kubelet[2542]: 
I0213 19:50:28.406085 2542 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:50:28.411845 kubelet[2542]: I0213 19:50:28.411734 2542 kubelet_node_status.go:73] "Attempting to register node" node="172.31.17.39" Feb 13 19:50:28.417578 kubelet[2542]: E0213 19:50:28.417344 2542 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.17.39\" not found" Feb 13 19:50:28.423647 kubelet[2542]: I0213 19:50:28.423569 2542 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:50:28.427592 kubelet[2542]: I0213 19:50:28.427518 2542 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:50:28.427722 kubelet[2542]: I0213 19:50:28.427614 2542 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:50:28.427722 kubelet[2542]: I0213 19:50:28.427655 2542 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:50:28.427862 kubelet[2542]: E0213 19:50:28.427728 2542 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 19:50:28.431639 kubelet[2542]: I0213 19:50:28.431398 2542 kubelet_node_status.go:76] "Successfully registered node" node="172.31.17.39" Feb 13 19:50:28.507467 kubelet[2542]: E0213 19:50:28.507323 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.39\" not found" Feb 13 19:50:28.608488 kubelet[2542]: E0213 19:50:28.608385 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.39\" not found" Feb 13 19:50:28.629749 sudo[2389]: pam_unix(sudo:session): session closed for user root Feb 13 19:50:28.654413 sshd[2385]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:28.662373 systemd[1]: sshd@6-172.31.17.39:22-139.178.89.65:38596.service: Deactivated successfully. Feb 13 19:50:28.668845 systemd[1]: session-7.scope: Deactivated successfully. 
Feb 13 19:50:28.670821 systemd-logind[2015]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:50:28.674423 systemd-logind[2015]: Removed session 7. Feb 13 19:50:28.709097 kubelet[2542]: E0213 19:50:28.709003 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.39\" not found" Feb 13 19:50:28.809354 kubelet[2542]: E0213 19:50:28.809207 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.39\" not found" Feb 13 19:50:28.909910 kubelet[2542]: E0213 19:50:28.909840 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.39\" not found" Feb 13 19:50:29.010653 kubelet[2542]: E0213 19:50:29.010571 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.39\" not found" Feb 13 19:50:29.111287 kubelet[2542]: E0213 19:50:29.111133 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.39\" not found" Feb 13 19:50:29.211965 kubelet[2542]: E0213 19:50:29.211897 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.39\" not found" Feb 13 19:50:29.236214 kubelet[2542]: I0213 19:50:29.236153 2542 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 19:50:29.236841 kubelet[2542]: W0213 19:50:29.236399 2542 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:50:29.276881 kubelet[2542]: E0213 19:50:29.276807 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:29.313036 kubelet[2542]: E0213 19:50:29.312972 2542 kubelet_node_status.go:462] "Error getting the 
current node from lister" err="node \"172.31.17.39\" not found" Feb 13 19:50:29.414103 kubelet[2542]: E0213 19:50:29.413935 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.39\" not found" Feb 13 19:50:29.515074 kubelet[2542]: E0213 19:50:29.514989 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.39\" not found" Feb 13 19:50:29.615829 kubelet[2542]: E0213 19:50:29.615766 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.39\" not found" Feb 13 19:50:29.717449 kubelet[2542]: I0213 19:50:29.716833 2542 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 19:50:29.717712 containerd[2050]: time="2025-02-13T19:50:29.717668552Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:50:29.718730 kubelet[2542]: I0213 19:50:29.718675 2542 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 19:50:30.275793 kubelet[2542]: I0213 19:50:30.275719 2542 apiserver.go:52] "Watching apiserver" Feb 13 19:50:30.277123 kubelet[2542]: E0213 19:50:30.277079 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:30.283953 kubelet[2542]: I0213 19:50:30.283857 2542 topology_manager.go:215] "Topology Admit Handler" podUID="4a1b44e4-849f-4ce8-96e3-db557a4b4fc6" podNamespace="calico-system" podName="calico-node-hkjt7" Feb 13 19:50:30.286046 kubelet[2542]: I0213 19:50:30.284101 2542 topology_manager.go:215] "Topology Admit Handler" podUID="7ac55b41-af13-4faf-9c88-6fe38b62f075" podNamespace="calico-system" podName="csi-node-driver-wzg8v" Feb 13 19:50:30.286046 kubelet[2542]: I0213 19:50:30.284328 2542 topology_manager.go:215] "Topology Admit Handler" podUID="e370b56d-2e1f-4a73-9ed9-1009a112b08a" 
podNamespace="kube-system" podName="kube-proxy-vhdsg" Feb 13 19:50:30.286046 kubelet[2542]: E0213 19:50:30.285996 2542 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzg8v" podUID="7ac55b41-af13-4faf-9c88-6fe38b62f075" Feb 13 19:50:30.303333 kubelet[2542]: I0213 19:50:30.303295 2542 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:50:30.315767 kubelet[2542]: I0213 19:50:30.315716 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7ac55b41-af13-4faf-9c88-6fe38b62f075-socket-dir\") pod \"csi-node-driver-wzg8v\" (UID: \"7ac55b41-af13-4faf-9c88-6fe38b62f075\") " pod="calico-system/csi-node-driver-wzg8v" Feb 13 19:50:30.316055 kubelet[2542]: I0213 19:50:30.316007 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7ac55b41-af13-4faf-9c88-6fe38b62f075-registration-dir\") pod \"csi-node-driver-wzg8v\" (UID: \"7ac55b41-af13-4faf-9c88-6fe38b62f075\") " pod="calico-system/csi-node-driver-wzg8v" Feb 13 19:50:30.316259 kubelet[2542]: I0213 19:50:30.316216 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k49c9\" (UniqueName: \"kubernetes.io/projected/7ac55b41-af13-4faf-9c88-6fe38b62f075-kube-api-access-k49c9\") pod \"csi-node-driver-wzg8v\" (UID: \"7ac55b41-af13-4faf-9c88-6fe38b62f075\") " pod="calico-system/csi-node-driver-wzg8v" Feb 13 19:50:30.316433 kubelet[2542]: I0213 19:50:30.316392 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/e370b56d-2e1f-4a73-9ed9-1009a112b08a-lib-modules\") pod \"kube-proxy-vhdsg\" (UID: \"e370b56d-2e1f-4a73-9ed9-1009a112b08a\") " pod="kube-system/kube-proxy-vhdsg" Feb 13 19:50:30.316624 kubelet[2542]: I0213 19:50:30.316587 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4a1b44e4-849f-4ce8-96e3-db557a4b4fc6-policysync\") pod \"calico-node-hkjt7\" (UID: \"4a1b44e4-849f-4ce8-96e3-db557a4b4fc6\") " pod="calico-system/calico-node-hkjt7" Feb 13 19:50:30.316758 kubelet[2542]: I0213 19:50:30.316736 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4a1b44e4-849f-4ce8-96e3-db557a4b4fc6-var-lib-calico\") pod \"calico-node-hkjt7\" (UID: \"4a1b44e4-849f-4ce8-96e3-db557a4b4fc6\") " pod="calico-system/calico-node-hkjt7" Feb 13 19:50:30.316903 kubelet[2542]: I0213 19:50:30.316881 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4a1b44e4-849f-4ce8-96e3-db557a4b4fc6-cni-bin-dir\") pod \"calico-node-hkjt7\" (UID: \"4a1b44e4-849f-4ce8-96e3-db557a4b4fc6\") " pod="calico-system/calico-node-hkjt7" Feb 13 19:50:30.317068 kubelet[2542]: I0213 19:50:30.317047 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4a1b44e4-849f-4ce8-96e3-db557a4b4fc6-cni-log-dir\") pod \"calico-node-hkjt7\" (UID: \"4a1b44e4-849f-4ce8-96e3-db557a4b4fc6\") " pod="calico-system/calico-node-hkjt7" Feb 13 19:50:30.317230 kubelet[2542]: I0213 19:50:30.317208 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a1b44e4-849f-4ce8-96e3-db557a4b4fc6-xtables-lock\") pod 
\"calico-node-hkjt7\" (UID: \"4a1b44e4-849f-4ce8-96e3-db557a4b4fc6\") " pod="calico-system/calico-node-hkjt7" Feb 13 19:50:30.317409 kubelet[2542]: I0213 19:50:30.317371 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a1b44e4-849f-4ce8-96e3-db557a4b4fc6-tigera-ca-bundle\") pod \"calico-node-hkjt7\" (UID: \"4a1b44e4-849f-4ce8-96e3-db557a4b4fc6\") " pod="calico-system/calico-node-hkjt7" Feb 13 19:50:30.317543 kubelet[2542]: I0213 19:50:30.317521 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4a1b44e4-849f-4ce8-96e3-db557a4b4fc6-node-certs\") pod \"calico-node-hkjt7\" (UID: \"4a1b44e4-849f-4ce8-96e3-db557a4b4fc6\") " pod="calico-system/calico-node-hkjt7" Feb 13 19:50:30.317687 kubelet[2542]: I0213 19:50:30.317665 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4a1b44e4-849f-4ce8-96e3-db557a4b4fc6-flexvol-driver-host\") pod \"calico-node-hkjt7\" (UID: \"4a1b44e4-849f-4ce8-96e3-db557a4b4fc6\") " pod="calico-system/calico-node-hkjt7" Feb 13 19:50:30.317830 kubelet[2542]: I0213 19:50:30.317808 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e370b56d-2e1f-4a73-9ed9-1009a112b08a-kube-proxy\") pod \"kube-proxy-vhdsg\" (UID: \"e370b56d-2e1f-4a73-9ed9-1009a112b08a\") " pod="kube-system/kube-proxy-vhdsg" Feb 13 19:50:30.317974 kubelet[2542]: I0213 19:50:30.317951 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlpn8\" (UniqueName: \"kubernetes.io/projected/e370b56d-2e1f-4a73-9ed9-1009a112b08a-kube-api-access-xlpn8\") pod \"kube-proxy-vhdsg\" (UID: 
\"e370b56d-2e1f-4a73-9ed9-1009a112b08a\") " pod="kube-system/kube-proxy-vhdsg" Feb 13 19:50:30.318162 kubelet[2542]: I0213 19:50:30.318138 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a1b44e4-849f-4ce8-96e3-db557a4b4fc6-lib-modules\") pod \"calico-node-hkjt7\" (UID: \"4a1b44e4-849f-4ce8-96e3-db557a4b4fc6\") " pod="calico-system/calico-node-hkjt7" Feb 13 19:50:30.318307 kubelet[2542]: I0213 19:50:30.318285 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4a1b44e4-849f-4ce8-96e3-db557a4b4fc6-var-run-calico\") pod \"calico-node-hkjt7\" (UID: \"4a1b44e4-849f-4ce8-96e3-db557a4b4fc6\") " pod="calico-system/calico-node-hkjt7" Feb 13 19:50:30.318452 kubelet[2542]: I0213 19:50:30.318428 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7ac55b41-af13-4faf-9c88-6fe38b62f075-varrun\") pod \"csi-node-driver-wzg8v\" (UID: \"7ac55b41-af13-4faf-9c88-6fe38b62f075\") " pod="calico-system/csi-node-driver-wzg8v" Feb 13 19:50:30.318598 kubelet[2542]: I0213 19:50:30.318573 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ac55b41-af13-4faf-9c88-6fe38b62f075-kubelet-dir\") pod \"csi-node-driver-wzg8v\" (UID: \"7ac55b41-af13-4faf-9c88-6fe38b62f075\") " pod="calico-system/csi-node-driver-wzg8v" Feb 13 19:50:30.318765 kubelet[2542]: I0213 19:50:30.318721 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4a1b44e4-849f-4ce8-96e3-db557a4b4fc6-cni-net-dir\") pod \"calico-node-hkjt7\" (UID: \"4a1b44e4-849f-4ce8-96e3-db557a4b4fc6\") " pod="calico-system/calico-node-hkjt7" Feb 
13 19:50:30.318932 kubelet[2542]: I0213 19:50:30.318888 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg8t7\" (UniqueName: \"kubernetes.io/projected/4a1b44e4-849f-4ce8-96e3-db557a4b4fc6-kube-api-access-vg8t7\") pod \"calico-node-hkjt7\" (UID: \"4a1b44e4-849f-4ce8-96e3-db557a4b4fc6\") " pod="calico-system/calico-node-hkjt7" Feb 13 19:50:30.319127 kubelet[2542]: I0213 19:50:30.319069 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e370b56d-2e1f-4a73-9ed9-1009a112b08a-xtables-lock\") pod \"kube-proxy-vhdsg\" (UID: \"e370b56d-2e1f-4a73-9ed9-1009a112b08a\") " pod="kube-system/kube-proxy-vhdsg" Feb 13 19:50:30.425979 kubelet[2542]: E0213 19:50:30.425819 2542 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:30.425979 kubelet[2542]: W0213 19:50:30.425866 2542 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:30.425979 kubelet[2542]: E0213 19:50:30.425907 2542 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:30.438452 kubelet[2542]: E0213 19:50:30.438368 2542 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:30.438452 kubelet[2542]: W0213 19:50:30.438402 2542 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:30.439215 kubelet[2542]: E0213 19:50:30.438853 2542 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:30.462645 kubelet[2542]: E0213 19:50:30.460449 2542 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:30.462645 kubelet[2542]: W0213 19:50:30.460490 2542 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:30.462645 kubelet[2542]: E0213 19:50:30.460587 2542 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:30.473215 kubelet[2542]: E0213 19:50:30.473179 2542 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:30.473413 kubelet[2542]: W0213 19:50:30.473385 2542 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:30.473620 kubelet[2542]: E0213 19:50:30.473595 2542 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:30.480592 kubelet[2542]: E0213 19:50:30.480547 2542 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:30.480592 kubelet[2542]: W0213 19:50:30.480584 2542 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:30.480803 kubelet[2542]: E0213 19:50:30.480615 2542 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:30.591528 containerd[2050]: time="2025-02-13T19:50:30.591257444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vhdsg,Uid:e370b56d-2e1f-4a73-9ed9-1009a112b08a,Namespace:kube-system,Attempt:0,}" Feb 13 19:50:30.594099 containerd[2050]: time="2025-02-13T19:50:30.593618600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hkjt7,Uid:4a1b44e4-849f-4ce8-96e3-db557a4b4fc6,Namespace:calico-system,Attempt:0,}" Feb 13 19:50:31.172952 containerd[2050]: time="2025-02-13T19:50:31.172885747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:50:31.175462 containerd[2050]: time="2025-02-13T19:50:31.175250191Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:50:31.177230 containerd[2050]: time="2025-02-13T19:50:31.177125083Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 19:50:31.177230 containerd[2050]: time="2025-02-13T19:50:31.177196351Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:50:31.178114 containerd[2050]: time="2025-02-13T19:50:31.178044475Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:50:31.183979 containerd[2050]: time="2025-02-13T19:50:31.183897211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:50:31.186079 containerd[2050]: time="2025-02-13T19:50:31.185708635Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.822987ms" Feb 13 19:50:31.190075 containerd[2050]: time="2025-02-13T19:50:31.189980143Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 598.587159ms" Feb 13 19:50:31.277557 kubelet[2542]: E0213 19:50:31.277460 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:31.391064 containerd[2050]: time="2025-02-13T19:50:31.390357848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:31.391064 containerd[2050]: time="2025-02-13T19:50:31.390944936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:31.391985 containerd[2050]: time="2025-02-13T19:50:31.391105604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:31.393179 containerd[2050]: time="2025-02-13T19:50:31.392995316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:31.394753 containerd[2050]: time="2025-02-13T19:50:31.394487768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:31.394753 containerd[2050]: time="2025-02-13T19:50:31.394650320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:31.395263 containerd[2050]: time="2025-02-13T19:50:31.395134640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:31.396586 containerd[2050]: time="2025-02-13T19:50:31.396483476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:31.441917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2070454202.mount: Deactivated successfully. Feb 13 19:50:31.535094 systemd[1]: run-containerd-runc-k8s.io-7bfcbc08d447886334fd95c8c76b361fadca9fa50d8fef781f90069697d186ab-runc.jVAA9S.mount: Deactivated successfully. 
Feb 13 19:50:31.601539 containerd[2050]: time="2025-02-13T19:50:31.601490901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vhdsg,Uid:e370b56d-2e1f-4a73-9ed9-1009a112b08a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e62a496627fd7d44e212586543461120ee3bea15dd55488b6a0a088b760e3b52\"" Feb 13 19:50:31.609840 containerd[2050]: time="2025-02-13T19:50:31.609402765Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:50:31.613364 containerd[2050]: time="2025-02-13T19:50:31.613298037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hkjt7,Uid:4a1b44e4-849f-4ce8-96e3-db557a4b4fc6,Namespace:calico-system,Attempt:0,} returns sandbox id \"7bfcbc08d447886334fd95c8c76b361fadca9fa50d8fef781f90069697d186ab\"" Feb 13 19:50:32.279482 kubelet[2542]: E0213 19:50:32.279418 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:32.428353 kubelet[2542]: E0213 19:50:32.428169 2542 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzg8v" podUID="7ac55b41-af13-4faf-9c88-6fe38b62f075" Feb 13 19:50:32.902088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2925891951.mount: Deactivated successfully. 
Feb 13 19:50:33.280994 kubelet[2542]: E0213 19:50:33.280831 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:33.417119 containerd[2050]: time="2025-02-13T19:50:33.416111206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:33.418731 containerd[2050]: time="2025-02-13T19:50:33.418642126Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663370" Feb 13 19:50:33.420471 containerd[2050]: time="2025-02-13T19:50:33.420386518Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:33.428902 containerd[2050]: time="2025-02-13T19:50:33.428846458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:33.433006 containerd[2050]: time="2025-02-13T19:50:33.432883546Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.823417109s" Feb 13 19:50:33.435008 containerd[2050]: time="2025-02-13T19:50:33.434783878Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:50:33.439247 containerd[2050]: time="2025-02-13T19:50:33.439152406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:50:33.441666 
containerd[2050]: time="2025-02-13T19:50:33.441406930Z" level=info msg="CreateContainer within sandbox \"e62a496627fd7d44e212586543461120ee3bea15dd55488b6a0a088b760e3b52\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:50:33.466504 containerd[2050]: time="2025-02-13T19:50:33.466429486Z" level=info msg="CreateContainer within sandbox \"e62a496627fd7d44e212586543461120ee3bea15dd55488b6a0a088b760e3b52\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7a361ab370a96fc03af945e72b3c6ee74e2cd67e7d4c14771b6a49198a7bcd05\"" Feb 13 19:50:33.468510 containerd[2050]: time="2025-02-13T19:50:33.468255202Z" level=info msg="StartContainer for \"7a361ab370a96fc03af945e72b3c6ee74e2cd67e7d4c14771b6a49198a7bcd05\"" Feb 13 19:50:33.573857 containerd[2050]: time="2025-02-13T19:50:33.573522455Z" level=info msg="StartContainer for \"7a361ab370a96fc03af945e72b3c6ee74e2cd67e7d4c14771b6a49198a7bcd05\" returns successfully" Feb 13 19:50:34.281224 kubelet[2542]: E0213 19:50:34.281149 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:34.430575 kubelet[2542]: E0213 19:50:34.429405 2542 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzg8v" podUID="7ac55b41-af13-4faf-9c88-6fe38b62f075" Feb 13 19:50:34.507451 kubelet[2542]: I0213 19:50:34.507326 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vhdsg" podStartSLOduration=4.677712191 podStartE2EDuration="6.507301752s" podCreationTimestamp="2025-02-13 19:50:28 +0000 UTC" firstStartedPulling="2025-02-13 19:50:31.608118837 +0000 UTC m=+5.007966398" lastFinishedPulling="2025-02-13 19:50:33.437708398 +0000 UTC m=+6.837555959" observedRunningTime="2025-02-13 
19:50:34.507109296 +0000 UTC m=+7.906956893" watchObservedRunningTime="2025-02-13 19:50:34.507301752 +0000 UTC m=+7.907149337" Feb 13 19:50:34.535510 kubelet[2542]: E0213 19:50:34.534970 2542 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.535510 kubelet[2542]: W0213 19:50:34.535079 2542 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.535510 kubelet[2542]: E0213 19:50:34.535116 2542 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.536607 kubelet[2542]: E0213 19:50:34.536451 2542 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.536607 kubelet[2542]: W0213 19:50:34.536485 2542 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.536607 kubelet[2542]: E0213 19:50:34.536543 2542 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:34.537723 kubelet[2542]: E0213 19:50:34.537462 2542 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.537723 kubelet[2542]: W0213 19:50:34.537495 2542 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.537723 kubelet[2542]: E0213 19:50:34.537525 2542 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.538460 kubelet[2542]: E0213 19:50:34.538418 2542 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.538604 kubelet[2542]: W0213 19:50:34.538459 2542 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.538604 kubelet[2542]: E0213 19:50:34.538496 2542 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:50:34.560168 kubelet[2542]: E0213 19:50:34.560131 2542 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.560168 kubelet[2542]: W0213 19:50:34.560167 2542 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.560465 kubelet[2542]: E0213 19:50:34.560344 2542 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.560559 kubelet[2542]: E0213 19:50:34.560533 2542 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:50:34.560618 kubelet[2542]: W0213 19:50:34.560559 2542 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:50:34.560618 kubelet[2542]: E0213 19:50:34.560581 2542 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:50:34.680475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2647136535.mount: Deactivated successfully. 
Feb 13 19:50:34.807907 containerd[2050]: time="2025-02-13T19:50:34.807382621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:34.810186 containerd[2050]: time="2025-02-13T19:50:34.809973913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Feb 13 19:50:34.812342 containerd[2050]: time="2025-02-13T19:50:34.811435093Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:34.817152 containerd[2050]: time="2025-02-13T19:50:34.817072009Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.377832939s" Feb 13 19:50:34.817355 containerd[2050]: time="2025-02-13T19:50:34.817322113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 19:50:34.817589 containerd[2050]: time="2025-02-13T19:50:34.817079485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:34.822069 containerd[2050]: time="2025-02-13T19:50:34.821973589Z" level=info msg="CreateContainer within sandbox \"7bfcbc08d447886334fd95c8c76b361fadca9fa50d8fef781f90069697d186ab\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 
19:50:34.844830 containerd[2050]: time="2025-02-13T19:50:34.844772209Z" level=info msg="CreateContainer within sandbox \"7bfcbc08d447886334fd95c8c76b361fadca9fa50d8fef781f90069697d186ab\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b2e0ad171347aa32c456419384e39342af6c12d7b098cc29c97a124801cc2cb5\"" Feb 13 19:50:34.847075 containerd[2050]: time="2025-02-13T19:50:34.846419113Z" level=info msg="StartContainer for \"b2e0ad171347aa32c456419384e39342af6c12d7b098cc29c97a124801cc2cb5\"" Feb 13 19:50:34.956829 containerd[2050]: time="2025-02-13T19:50:34.955988978Z" level=info msg="StartContainer for \"b2e0ad171347aa32c456419384e39342af6c12d7b098cc29c97a124801cc2cb5\" returns successfully" Feb 13 19:50:35.273169 containerd[2050]: time="2025-02-13T19:50:35.272893751Z" level=info msg="shim disconnected" id=b2e0ad171347aa32c456419384e39342af6c12d7b098cc29c97a124801cc2cb5 namespace=k8s.io Feb 13 19:50:35.273169 containerd[2050]: time="2025-02-13T19:50:35.272992499Z" level=warning msg="cleaning up after shim disconnected" id=b2e0ad171347aa32c456419384e39342af6c12d7b098cc29c97a124801cc2cb5 namespace=k8s.io Feb 13 19:50:35.273169 containerd[2050]: time="2025-02-13T19:50:35.273043391Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:50:35.281704 kubelet[2542]: E0213 19:50:35.281598 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:35.494862 containerd[2050]: time="2025-02-13T19:50:35.494512885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:50:35.626918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2e0ad171347aa32c456419384e39342af6c12d7b098cc29c97a124801cc2cb5-rootfs.mount: Deactivated successfully. 
Feb 13 19:50:36.282097 kubelet[2542]: E0213 19:50:36.282038 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:36.428760 kubelet[2542]: E0213 19:50:36.427992 2542 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzg8v" podUID="7ac55b41-af13-4faf-9c88-6fe38b62f075" Feb 13 19:50:37.282492 kubelet[2542]: E0213 19:50:37.282416 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:38.283115 kubelet[2542]: E0213 19:50:38.283004 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:38.428388 kubelet[2542]: E0213 19:50:38.428316 2542 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzg8v" podUID="7ac55b41-af13-4faf-9c88-6fe38b62f075" Feb 13 19:50:39.014847 containerd[2050]: time="2025-02-13T19:50:39.014777930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:39.016842 containerd[2050]: time="2025-02-13T19:50:39.016760594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 19:50:39.018339 containerd[2050]: time="2025-02-13T19:50:39.018285866Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:39.022790 
containerd[2050]: time="2025-02-13T19:50:39.022691942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:39.024757 containerd[2050]: time="2025-02-13T19:50:39.024553082Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.529974401s" Feb 13 19:50:39.024757 containerd[2050]: time="2025-02-13T19:50:39.024620306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 19:50:39.029167 containerd[2050]: time="2025-02-13T19:50:39.029075030Z" level=info msg="CreateContainer within sandbox \"7bfcbc08d447886334fd95c8c76b361fadca9fa50d8fef781f90069697d186ab\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:50:39.050141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3621785572.mount: Deactivated successfully. 
Feb 13 19:50:39.056379 containerd[2050]: time="2025-02-13T19:50:39.056286242Z" level=info msg="CreateContainer within sandbox \"7bfcbc08d447886334fd95c8c76b361fadca9fa50d8fef781f90069697d186ab\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b323b5878850eba8199cd5980528497ebc4af354dff6c640c991143bd92fd052\"" Feb 13 19:50:39.057377 containerd[2050]: time="2025-02-13T19:50:39.057297494Z" level=info msg="StartContainer for \"b323b5878850eba8199cd5980528497ebc4af354dff6c640c991143bd92fd052\"" Feb 13 19:50:39.158715 containerd[2050]: time="2025-02-13T19:50:39.158635011Z" level=info msg="StartContainer for \"b323b5878850eba8199cd5980528497ebc4af354dff6c640c991143bd92fd052\" returns successfully" Feb 13 19:50:39.283945 kubelet[2542]: E0213 19:50:39.283728 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:40.074243 containerd[2050]: time="2025-02-13T19:50:40.074157711Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:50:40.113320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b323b5878850eba8199cd5980528497ebc4af354dff6c640c991143bd92fd052-rootfs.mount: Deactivated successfully. 
Feb 13 19:50:40.163470 kubelet[2542]: I0213 19:50:40.162221 2542 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:50:40.284108 kubelet[2542]: E0213 19:50:40.283997 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:40.435092 containerd[2050]: time="2025-02-13T19:50:40.434849957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzg8v,Uid:7ac55b41-af13-4faf-9c88-6fe38b62f075,Namespace:calico-system,Attempt:0,}" Feb 13 19:50:40.915080 containerd[2050]: time="2025-02-13T19:50:40.914827003Z" level=error msg="Failed to destroy network for sandbox \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:40.916903 containerd[2050]: time="2025-02-13T19:50:40.916802755Z" level=error msg="encountered an error cleaning up failed sandbox \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:40.917104 containerd[2050]: time="2025-02-13T19:50:40.916929163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzg8v,Uid:7ac55b41-af13-4faf-9c88-6fe38b62f075,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:40.918883 kubelet[2542]: E0213 
19:50:40.918326 2542 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:40.918883 kubelet[2542]: E0213 19:50:40.918434 2542 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wzg8v" Feb 13 19:50:40.918883 kubelet[2542]: E0213 19:50:40.918474 2542 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wzg8v" Feb 13 19:50:40.919197 kubelet[2542]: E0213 19:50:40.918554 2542 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wzg8v_calico-system(7ac55b41-af13-4faf-9c88-6fe38b62f075)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wzg8v_calico-system(7ac55b41-af13-4faf-9c88-6fe38b62f075)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wzg8v" podUID="7ac55b41-af13-4faf-9c88-6fe38b62f075" Feb 13 19:50:40.919835 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c-shm.mount: Deactivated successfully. Feb 13 19:50:40.984080 containerd[2050]: time="2025-02-13T19:50:40.983917976Z" level=info msg="shim disconnected" id=b323b5878850eba8199cd5980528497ebc4af354dff6c640c991143bd92fd052 namespace=k8s.io Feb 13 19:50:40.984340 containerd[2050]: time="2025-02-13T19:50:40.984079100Z" level=warning msg="cleaning up after shim disconnected" id=b323b5878850eba8199cd5980528497ebc4af354dff6c640c991143bd92fd052 namespace=k8s.io Feb 13 19:50:40.984340 containerd[2050]: time="2025-02-13T19:50:40.984109544Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:50:41.285577 kubelet[2542]: E0213 19:50:41.284571 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:41.354417 kubelet[2542]: I0213 19:50:41.354342 2542 topology_manager.go:215] "Topology Admit Handler" podUID="480ba374-341b-4809-b7c3-f4cc0bc92a8a" podNamespace="default" podName="nginx-deployment-85f456d6dd-2p4lz" Feb 13 19:50:41.404443 kubelet[2542]: I0213 19:50:41.404375 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wsd4\" (UniqueName: \"kubernetes.io/projected/480ba374-341b-4809-b7c3-f4cc0bc92a8a-kube-api-access-2wsd4\") pod \"nginx-deployment-85f456d6dd-2p4lz\" (UID: \"480ba374-341b-4809-b7c3-f4cc0bc92a8a\") " pod="default/nginx-deployment-85f456d6dd-2p4lz" Feb 13 19:50:41.534211 containerd[2050]: time="2025-02-13T19:50:41.534092995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:50:41.544886 kubelet[2542]: I0213 19:50:41.542215 2542 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Feb 13 19:50:41.545401 containerd[2050]: time="2025-02-13T19:50:41.544683307Z" level=info msg="StopPodSandbox for \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\"" Feb 13 19:50:41.545401 containerd[2050]: time="2025-02-13T19:50:41.545071279Z" level=info msg="Ensure that sandbox bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c in task-service has been cleanup successfully" Feb 13 19:50:41.599107 containerd[2050]: time="2025-02-13T19:50:41.598929811Z" level=error msg="StopPodSandbox for \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\" failed" error="failed to destroy network for sandbox \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:41.599540 kubelet[2542]: E0213 19:50:41.599478 2542 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Feb 13 19:50:41.599951 kubelet[2542]: E0213 19:50:41.599764 2542 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c"} Feb 13 19:50:41.599951 kubelet[2542]: E0213 19:50:41.599865 2542 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"7ac55b41-af13-4faf-9c88-6fe38b62f075\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:50:41.599951 kubelet[2542]: E0213 19:50:41.599904 2542 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ac55b41-af13-4faf-9c88-6fe38b62f075\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wzg8v" podUID="7ac55b41-af13-4faf-9c88-6fe38b62f075" Feb 13 19:50:41.660976 containerd[2050]: time="2025-02-13T19:50:41.660896899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-2p4lz,Uid:480ba374-341b-4809-b7c3-f4cc0bc92a8a,Namespace:default,Attempt:0,}" Feb 13 19:50:41.766669 containerd[2050]: time="2025-02-13T19:50:41.766566476Z" level=error msg="Failed to destroy network for sandbox \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:41.769599 containerd[2050]: time="2025-02-13T19:50:41.769512848Z" level=error msg="encountered an error cleaning up failed sandbox \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:41.769725 containerd[2050]: time="2025-02-13T19:50:41.769621280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-2p4lz,Uid:480ba374-341b-4809-b7c3-f4cc0bc92a8a,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:41.771163 kubelet[2542]: E0213 19:50:41.770557 2542 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:41.771163 kubelet[2542]: E0213 19:50:41.770646 2542 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-2p4lz" Feb 13 19:50:41.771163 kubelet[2542]: E0213 19:50:41.770684 2542 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-2p4lz" Feb 13 19:50:41.771438 kubelet[2542]: E0213 19:50:41.770791 2542 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-2p4lz_default(480ba374-341b-4809-b7c3-f4cc0bc92a8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-2p4lz_default(480ba374-341b-4809-b7c3-f4cc0bc92a8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-2p4lz" podUID="480ba374-341b-4809-b7c3-f4cc0bc92a8a" Feb 13 19:50:41.771952 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e-shm.mount: Deactivated successfully. 
Feb 13 19:50:42.285325 kubelet[2542]: E0213 19:50:42.285255 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:42.546637 kubelet[2542]: I0213 19:50:42.546505 2542 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Feb 13 19:50:42.548131 containerd[2050]: time="2025-02-13T19:50:42.547826432Z" level=info msg="StopPodSandbox for \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\"" Feb 13 19:50:42.548694 containerd[2050]: time="2025-02-13T19:50:42.548170532Z" level=info msg="Ensure that sandbox 9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e in task-service has been cleanup successfully" Feb 13 19:50:42.603047 containerd[2050]: time="2025-02-13T19:50:42.602956904Z" level=error msg="StopPodSandbox for \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\" failed" error="failed to destroy network for sandbox \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:42.603346 kubelet[2542]: E0213 19:50:42.603277 2542 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Feb 13 19:50:42.603510 kubelet[2542]: E0213 19:50:42.603353 2542 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e"} Feb 13 19:50:42.603510 kubelet[2542]: E0213 19:50:42.603412 2542 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"480ba374-341b-4809-b7c3-f4cc0bc92a8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:50:42.603510 kubelet[2542]: E0213 19:50:42.603452 2542 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"480ba374-341b-4809-b7c3-f4cc0bc92a8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-2p4lz" podUID="480ba374-341b-4809-b7c3-f4cc0bc92a8a" Feb 13 19:50:43.285538 kubelet[2542]: E0213 19:50:43.285486 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:44.287096 kubelet[2542]: E0213 19:50:44.287037 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:45.130432 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Feb 13 19:50:45.287585 kubelet[2542]: E0213 19:50:45.287421 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:46.288379 kubelet[2542]: E0213 19:50:46.288218 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:47.289479 kubelet[2542]: E0213 19:50:47.289381 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:47.838826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3524524178.mount: Deactivated successfully. Feb 13 19:50:47.906505 containerd[2050]: time="2025-02-13T19:50:47.906422258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:47.908108 containerd[2050]: time="2025-02-13T19:50:47.907892546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 19:50:47.909209 containerd[2050]: time="2025-02-13T19:50:47.909095654Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:47.912968 containerd[2050]: time="2025-02-13T19:50:47.912853706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:47.914715 containerd[2050]: time="2025-02-13T19:50:47.914461694Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.380251675s" Feb 13 19:50:47.914715 containerd[2050]: time="2025-02-13T19:50:47.914533994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 19:50:47.940956 containerd[2050]: time="2025-02-13T19:50:47.940881218Z" level=info msg="CreateContainer within sandbox \"7bfcbc08d447886334fd95c8c76b361fadca9fa50d8fef781f90069697d186ab\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:50:47.961243 containerd[2050]: time="2025-02-13T19:50:47.961011110Z" level=info msg="CreateContainer within sandbox \"7bfcbc08d447886334fd95c8c76b361fadca9fa50d8fef781f90069697d186ab\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7012aea8fbc9677b8d0158d738ea8907eb58c851e5b753958ab87b0d3cf7ab64\"" Feb 13 19:50:47.964082 containerd[2050]: time="2025-02-13T19:50:47.963156614Z" level=info msg="StartContainer for \"7012aea8fbc9677b8d0158d738ea8907eb58c851e5b753958ab87b0d3cf7ab64\"" Feb 13 19:50:48.066842 containerd[2050]: time="2025-02-13T19:50:48.066330395Z" level=info msg="StartContainer for \"7012aea8fbc9677b8d0158d738ea8907eb58c851e5b753958ab87b0d3cf7ab64\" returns successfully" Feb 13 19:50:48.178822 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:50:48.179327 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 13 19:50:48.275293 kubelet[2542]: E0213 19:50:48.274574 2542 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:48.290210 kubelet[2542]: E0213 19:50:48.290121 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:48.597627 kubelet[2542]: I0213 19:50:48.597326 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hkjt7" podStartSLOduration=4.296567229 podStartE2EDuration="20.597281138s" podCreationTimestamp="2025-02-13 19:50:28 +0000 UTC" firstStartedPulling="2025-02-13 19:50:31.615660021 +0000 UTC m=+5.015507570" lastFinishedPulling="2025-02-13 19:50:47.91637393 +0000 UTC m=+21.316221479" observedRunningTime="2025-02-13 19:50:48.597215714 +0000 UTC m=+21.997063299" watchObservedRunningTime="2025-02-13 19:50:48.597281138 +0000 UTC m=+21.997128711" Feb 13 19:50:49.290964 kubelet[2542]: E0213 19:50:49.290904 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:50.001185 kernel: bpftool[3338]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:50:50.291884 kubelet[2542]: E0213 19:50:50.291724 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:50.316189 (udev-worker)[3167]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:50:50.318524 systemd-networkd[1601]: vxlan.calico: Link UP Feb 13 19:50:50.318532 systemd-networkd[1601]: vxlan.calico: Gained carrier Feb 13 19:50:50.356793 (udev-worker)[3166]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:50:51.292182 kubelet[2542]: E0213 19:50:51.292073 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:51.705423 systemd-networkd[1601]: vxlan.calico: Gained IPv6LL Feb 13 19:50:52.293066 kubelet[2542]: E0213 19:50:52.292948 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:53.293752 kubelet[2542]: E0213 19:50:53.293644 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:53.794842 ntpd[2000]: Listen normally on 6 vxlan.calico 192.168.101.128:123 Feb 13 19:50:53.796245 ntpd[2000]: 13 Feb 19:50:53 ntpd[2000]: Listen normally on 6 vxlan.calico 192.168.101.128:123 Feb 13 19:50:53.796245 ntpd[2000]: 13 Feb 19:50:53 ntpd[2000]: Listen normally on 7 vxlan.calico [fe80::644e:6dff:fed2:edbf%3]:123 Feb 13 19:50:53.795688 ntpd[2000]: Listen normally on 7 vxlan.calico [fe80::644e:6dff:fed2:edbf%3]:123 Feb 13 19:50:54.294884 kubelet[2542]: E0213 19:50:54.294801 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:54.431101 containerd[2050]: time="2025-02-13T19:50:54.430556359Z" level=info msg="StopPodSandbox for \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\"" Feb 13 19:50:54.587243 containerd[2050]: 2025-02-13 19:50:54.513 [INFO][3436] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Feb 13 19:50:54.587243 containerd[2050]: 2025-02-13 19:50:54.513 [INFO][3436] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" iface="eth0" netns="/var/run/netns/cni-f5e13c62-a22c-4c0a-c3a2-5c8cae09163d" Feb 13 19:50:54.587243 containerd[2050]: 2025-02-13 19:50:54.514 [INFO][3436] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" iface="eth0" netns="/var/run/netns/cni-f5e13c62-a22c-4c0a-c3a2-5c8cae09163d" Feb 13 19:50:54.587243 containerd[2050]: 2025-02-13 19:50:54.514 [INFO][3436] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" iface="eth0" netns="/var/run/netns/cni-f5e13c62-a22c-4c0a-c3a2-5c8cae09163d" Feb 13 19:50:54.587243 containerd[2050]: 2025-02-13 19:50:54.514 [INFO][3436] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Feb 13 19:50:54.587243 containerd[2050]: 2025-02-13 19:50:54.514 [INFO][3436] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Feb 13 19:50:54.587243 containerd[2050]: 2025-02-13 19:50:54.561 [INFO][3442] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" HandleID="k8s-pod-network.9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Workload="172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0" Feb 13 19:50:54.587243 containerd[2050]: 2025-02-13 19:50:54.562 [INFO][3442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:54.587243 containerd[2050]: 2025-02-13 19:50:54.562 [INFO][3442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:54.587243 containerd[2050]: 2025-02-13 19:50:54.574 [WARNING][3442] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" HandleID="k8s-pod-network.9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Workload="172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0" Feb 13 19:50:54.587243 containerd[2050]: 2025-02-13 19:50:54.574 [INFO][3442] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" HandleID="k8s-pod-network.9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Workload="172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0" Feb 13 19:50:54.587243 containerd[2050]: 2025-02-13 19:50:54.577 [INFO][3442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:54.587243 containerd[2050]: 2025-02-13 19:50:54.581 [INFO][3436] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Feb 13 19:50:54.587243 containerd[2050]: time="2025-02-13T19:50:54.584834551Z" level=info msg="TearDown network for sandbox \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\" successfully" Feb 13 19:50:54.587243 containerd[2050]: time="2025-02-13T19:50:54.584877607Z" level=info msg="StopPodSandbox for \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\" returns successfully" Feb 13 19:50:54.589834 containerd[2050]: time="2025-02-13T19:50:54.589165219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-2p4lz,Uid:480ba374-341b-4809-b7c3-f4cc0bc92a8a,Namespace:default,Attempt:1,}" Feb 13 19:50:54.590981 systemd[1]: run-netns-cni\x2df5e13c62\x2da22c\x2d4c0a\x2dc3a2\x2d5c8cae09163d.mount: Deactivated successfully. 
Feb 13 19:50:54.786085 systemd-networkd[1601]: cali3b6b31d5518: Link UP Feb 13 19:50:54.787761 systemd-networkd[1601]: cali3b6b31d5518: Gained carrier Feb 13 19:50:54.793165 (udev-worker)[3467]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:50:54.806102 containerd[2050]: 2025-02-13 19:50:54.674 [INFO][3450] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0 nginx-deployment-85f456d6dd- default 480ba374-341b-4809-b7c3-f4cc0bc92a8a 996 0 2025-02-13 19:50:41 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.17.39 nginx-deployment-85f456d6dd-2p4lz eth0 default [] [] [kns.default ksa.default.default] cali3b6b31d5518 [] []}} ContainerID="5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63" Namespace="default" Pod="nginx-deployment-85f456d6dd-2p4lz" WorkloadEndpoint="172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-" Feb 13 19:50:54.806102 containerd[2050]: 2025-02-13 19:50:54.675 [INFO][3450] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63" Namespace="default" Pod="nginx-deployment-85f456d6dd-2p4lz" WorkloadEndpoint="172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0" Feb 13 19:50:54.806102 containerd[2050]: 2025-02-13 19:50:54.723 [INFO][3460] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63" HandleID="k8s-pod-network.5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63" Workload="172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0" Feb 13 19:50:54.806102 containerd[2050]: 2025-02-13 19:50:54.740 [INFO][3460] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63" HandleID="k8s-pod-network.5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63" Workload="172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001f8990), Attrs:map[string]string{"namespace":"default", "node":"172.31.17.39", "pod":"nginx-deployment-85f456d6dd-2p4lz", "timestamp":"2025-02-13 19:50:54.723391088 +0000 UTC"}, Hostname:"172.31.17.39", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:50:54.806102 containerd[2050]: 2025-02-13 19:50:54.740 [INFO][3460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:54.806102 containerd[2050]: 2025-02-13 19:50:54.740 [INFO][3460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:54.806102 containerd[2050]: 2025-02-13 19:50:54.740 [INFO][3460] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.39' Feb 13 19:50:54.806102 containerd[2050]: 2025-02-13 19:50:54.742 [INFO][3460] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63" host="172.31.17.39" Feb 13 19:50:54.806102 containerd[2050]: 2025-02-13 19:50:54.748 [INFO][3460] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.39" Feb 13 19:50:54.806102 containerd[2050]: 2025-02-13 19:50:54.755 [INFO][3460] ipam/ipam.go 489: Trying affinity for 192.168.101.128/26 host="172.31.17.39" Feb 13 19:50:54.806102 containerd[2050]: 2025-02-13 19:50:54.757 [INFO][3460] ipam/ipam.go 155: Attempting to load block cidr=192.168.101.128/26 host="172.31.17.39" Feb 13 19:50:54.806102 containerd[2050]: 2025-02-13 19:50:54.760 [INFO][3460] ipam/ipam.go 232: Affinity is confirmed and block has 
been loaded cidr=192.168.101.128/26 host="172.31.17.39" Feb 13 19:50:54.806102 containerd[2050]: 2025-02-13 19:50:54.760 [INFO][3460] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63" host="172.31.17.39" Feb 13 19:50:54.806102 containerd[2050]: 2025-02-13 19:50:54.763 [INFO][3460] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63 Feb 13 19:50:54.806102 containerd[2050]: 2025-02-13 19:50:54.769 [INFO][3460] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63" host="172.31.17.39" Feb 13 19:50:54.806102 containerd[2050]: 2025-02-13 19:50:54.777 [INFO][3460] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.101.129/26] block=192.168.101.128/26 handle="k8s-pod-network.5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63" host="172.31.17.39" Feb 13 19:50:54.806102 containerd[2050]: 2025-02-13 19:50:54.777 [INFO][3460] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.101.129/26] handle="k8s-pod-network.5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63" host="172.31.17.39" Feb 13 19:50:54.806102 containerd[2050]: 2025-02-13 19:50:54.777 [INFO][3460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:50:54.806102 containerd[2050]: 2025-02-13 19:50:54.778 [INFO][3460] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.129/26] IPv6=[] ContainerID="5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63" HandleID="k8s-pod-network.5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63" Workload="172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0" Feb 13 19:50:54.814264 containerd[2050]: 2025-02-13 19:50:54.780 [INFO][3450] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63" Namespace="default" Pod="nginx-deployment-85f456d6dd-2p4lz" WorkloadEndpoint="172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"480ba374-341b-4809-b7c3-f4cc0bc92a8a", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.39", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-2p4lz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali3b6b31d5518", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:54.814264 containerd[2050]: 2025-02-13 19:50:54.781 [INFO][3450] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.101.129/32] ContainerID="5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63" Namespace="default" Pod="nginx-deployment-85f456d6dd-2p4lz" WorkloadEndpoint="172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0" Feb 13 19:50:54.814264 containerd[2050]: 2025-02-13 19:50:54.781 [INFO][3450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b6b31d5518 ContainerID="5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63" Namespace="default" Pod="nginx-deployment-85f456d6dd-2p4lz" WorkloadEndpoint="172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0" Feb 13 19:50:54.814264 containerd[2050]: 2025-02-13 19:50:54.788 [INFO][3450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63" Namespace="default" Pod="nginx-deployment-85f456d6dd-2p4lz" WorkloadEndpoint="172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0" Feb 13 19:50:54.814264 containerd[2050]: 2025-02-13 19:50:54.790 [INFO][3450] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63" Namespace="default" Pod="nginx-deployment-85f456d6dd-2p4lz" WorkloadEndpoint="172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"480ba374-341b-4809-b7c3-f4cc0bc92a8a", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 41, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.39", ContainerID:"5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63", Pod:"nginx-deployment-85f456d6dd-2p4lz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali3b6b31d5518", MAC:"a6:19:58:63:65:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:54.814264 containerd[2050]: 2025-02-13 19:50:54.800 [INFO][3450] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63" Namespace="default" Pod="nginx-deployment-85f456d6dd-2p4lz" WorkloadEndpoint="172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0" Feb 13 19:50:54.856117 containerd[2050]: time="2025-02-13T19:50:54.855090441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:54.856117 containerd[2050]: time="2025-02-13T19:50:54.855222429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:54.856994 containerd[2050]: time="2025-02-13T19:50:54.855259617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:54.856994 containerd[2050]: time="2025-02-13T19:50:54.855454509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:54.900261 systemd[1]: run-containerd-runc-k8s.io-5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63-runc.6pHL3l.mount: Deactivated successfully. Feb 13 19:50:54.953728 containerd[2050]: time="2025-02-13T19:50:54.953664405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-2p4lz,Uid:480ba374-341b-4809-b7c3-f4cc0bc92a8a,Namespace:default,Attempt:1,} returns sandbox id \"5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63\"" Feb 13 19:50:54.956776 containerd[2050]: time="2025-02-13T19:50:54.956670717Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:50:55.297154 kubelet[2542]: E0213 19:50:55.295220 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:55.993682 systemd-networkd[1601]: cali3b6b31d5518: Gained IPv6LL Feb 13 19:50:56.296421 kubelet[2542]: E0213 19:50:56.296239 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:56.433488 containerd[2050]: time="2025-02-13T19:50:56.432963861Z" level=info msg="StopPodSandbox for \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\"" Feb 13 19:50:56.629787 containerd[2050]: 2025-02-13 19:50:56.557 [INFO][3545] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Feb 13 19:50:56.629787 containerd[2050]: 2025-02-13 19:50:56.559 [INFO][3545] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" iface="eth0" netns="/var/run/netns/cni-821bbd19-7ebd-c63e-aad5-0652cf987e52" Feb 13 19:50:56.629787 containerd[2050]: 2025-02-13 19:50:56.560 [INFO][3545] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" iface="eth0" netns="/var/run/netns/cni-821bbd19-7ebd-c63e-aad5-0652cf987e52" Feb 13 19:50:56.629787 containerd[2050]: 2025-02-13 19:50:56.560 [INFO][3545] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" iface="eth0" netns="/var/run/netns/cni-821bbd19-7ebd-c63e-aad5-0652cf987e52" Feb 13 19:50:56.629787 containerd[2050]: 2025-02-13 19:50:56.561 [INFO][3545] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Feb 13 19:50:56.629787 containerd[2050]: 2025-02-13 19:50:56.561 [INFO][3545] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Feb 13 19:50:56.629787 containerd[2050]: 2025-02-13 19:50:56.605 [INFO][3551] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" HandleID="k8s-pod-network.bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Workload="172.31.17.39-k8s-csi--node--driver--wzg8v-eth0" Feb 13 19:50:56.629787 containerd[2050]: 2025-02-13 19:50:56.605 [INFO][3551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:56.629787 containerd[2050]: 2025-02-13 19:50:56.605 [INFO][3551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:56.629787 containerd[2050]: 2025-02-13 19:50:56.619 [WARNING][3551] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" HandleID="k8s-pod-network.bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Workload="172.31.17.39-k8s-csi--node--driver--wzg8v-eth0" Feb 13 19:50:56.629787 containerd[2050]: 2025-02-13 19:50:56.619 [INFO][3551] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" HandleID="k8s-pod-network.bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Workload="172.31.17.39-k8s-csi--node--driver--wzg8v-eth0" Feb 13 19:50:56.629787 containerd[2050]: 2025-02-13 19:50:56.624 [INFO][3551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:56.629787 containerd[2050]: 2025-02-13 19:50:56.627 [INFO][3545] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Feb 13 19:50:56.634088 containerd[2050]: time="2025-02-13T19:50:56.631156798Z" level=info msg="TearDown network for sandbox \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\" successfully" Feb 13 19:50:56.634088 containerd[2050]: time="2025-02-13T19:50:56.631207906Z" level=info msg="StopPodSandbox for \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\" returns successfully" Feb 13 19:50:56.634088 containerd[2050]: time="2025-02-13T19:50:56.632789794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzg8v,Uid:7ac55b41-af13-4faf-9c88-6fe38b62f075,Namespace:calico-system,Attempt:1,}" Feb 13 19:50:56.636299 systemd[1]: run-netns-cni\x2d821bbd19\x2d7ebd\x2dc63e\x2daad5\x2d0652cf987e52.mount: Deactivated successfully. 
Feb 13 19:50:56.935263 systemd-networkd[1601]: cali0be3999e8d5: Link UP Feb 13 19:50:56.937555 systemd-networkd[1601]: cali0be3999e8d5: Gained carrier Feb 13 19:50:56.967310 containerd[2050]: 2025-02-13 19:50:56.775 [INFO][3558] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.39-k8s-csi--node--driver--wzg8v-eth0 csi-node-driver- calico-system 7ac55b41-af13-4faf-9c88-6fe38b62f075 1005 0 2025-02-13 19:50:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.17.39 csi-node-driver-wzg8v eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0be3999e8d5 [] []}} ContainerID="493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428" Namespace="calico-system" Pod="csi-node-driver-wzg8v" WorkloadEndpoint="172.31.17.39-k8s-csi--node--driver--wzg8v-" Feb 13 19:50:56.967310 containerd[2050]: 2025-02-13 19:50:56.775 [INFO][3558] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428" Namespace="calico-system" Pod="csi-node-driver-wzg8v" WorkloadEndpoint="172.31.17.39-k8s-csi--node--driver--wzg8v-eth0" Feb 13 19:50:56.967310 containerd[2050]: 2025-02-13 19:50:56.840 [INFO][3571] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428" HandleID="k8s-pod-network.493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428" Workload="172.31.17.39-k8s-csi--node--driver--wzg8v-eth0" Feb 13 19:50:56.967310 containerd[2050]: 2025-02-13 19:50:56.864 [INFO][3571] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428" HandleID="k8s-pod-network.493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428" Workload="172.31.17.39-k8s-csi--node--driver--wzg8v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000318ca0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.17.39", "pod":"csi-node-driver-wzg8v", "timestamp":"2025-02-13 19:50:56.840055763 +0000 UTC"}, Hostname:"172.31.17.39", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:50:56.967310 containerd[2050]: 2025-02-13 19:50:56.864 [INFO][3571] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:56.967310 containerd[2050]: 2025-02-13 19:50:56.865 [INFO][3571] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:56.967310 containerd[2050]: 2025-02-13 19:50:56.866 [INFO][3571] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.39' Feb 13 19:50:56.967310 containerd[2050]: 2025-02-13 19:50:56.870 [INFO][3571] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428" host="172.31.17.39" Feb 13 19:50:56.967310 containerd[2050]: 2025-02-13 19:50:56.877 [INFO][3571] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.39" Feb 13 19:50:56.967310 containerd[2050]: 2025-02-13 19:50:56.888 [INFO][3571] ipam/ipam.go 489: Trying affinity for 192.168.101.128/26 host="172.31.17.39" Feb 13 19:50:56.967310 containerd[2050]: 2025-02-13 19:50:56.894 [INFO][3571] ipam/ipam.go 155: Attempting to load block cidr=192.168.101.128/26 host="172.31.17.39" Feb 13 19:50:56.967310 containerd[2050]: 2025-02-13 19:50:56.899 [INFO][3571] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.101.128/26 host="172.31.17.39" Feb 13 19:50:56.967310 containerd[2050]: 2025-02-13 19:50:56.899 [INFO][3571] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428" host="172.31.17.39" Feb 13 19:50:56.967310 containerd[2050]: 2025-02-13 19:50:56.901 [INFO][3571] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428 Feb 13 19:50:56.967310 containerd[2050]: 2025-02-13 19:50:56.909 [INFO][3571] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428" host="172.31.17.39" Feb 13 19:50:56.967310 containerd[2050]: 2025-02-13 19:50:56.920 [INFO][3571] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.101.130/26] block=192.168.101.128/26 handle="k8s-pod-network.493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428" host="172.31.17.39" Feb 13 19:50:56.967310 containerd[2050]: 2025-02-13 19:50:56.920 [INFO][3571] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.101.130/26] handle="k8s-pod-network.493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428" host="172.31.17.39" Feb 13 19:50:56.967310 containerd[2050]: 2025-02-13 19:50:56.921 [INFO][3571] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:50:56.967310 containerd[2050]: 2025-02-13 19:50:56.921 [INFO][3571] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.130/26] IPv6=[] ContainerID="493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428" HandleID="k8s-pod-network.493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428" Workload="172.31.17.39-k8s-csi--node--driver--wzg8v-eth0" Feb 13 19:50:56.968371 containerd[2050]: 2025-02-13 19:50:56.927 [INFO][3558] cni-plugin/k8s.go 386: Populated endpoint ContainerID="493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428" Namespace="calico-system" Pod="csi-node-driver-wzg8v" WorkloadEndpoint="172.31.17.39-k8s-csi--node--driver--wzg8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.39-k8s-csi--node--driver--wzg8v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ac55b41-af13-4faf-9c88-6fe38b62f075", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.39", ContainerID:"", Pod:"csi-node-driver-wzg8v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.101.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0be3999e8d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:56.968371 containerd[2050]: 2025-02-13 19:50:56.928 [INFO][3558] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.101.130/32] ContainerID="493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428" Namespace="calico-system" Pod="csi-node-driver-wzg8v" WorkloadEndpoint="172.31.17.39-k8s-csi--node--driver--wzg8v-eth0" Feb 13 19:50:56.968371 containerd[2050]: 2025-02-13 19:50:56.928 [INFO][3558] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0be3999e8d5 ContainerID="493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428" Namespace="calico-system" Pod="csi-node-driver-wzg8v" WorkloadEndpoint="172.31.17.39-k8s-csi--node--driver--wzg8v-eth0" Feb 13 19:50:56.968371 containerd[2050]: 2025-02-13 19:50:56.939 [INFO][3558] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428" Namespace="calico-system" Pod="csi-node-driver-wzg8v" WorkloadEndpoint="172.31.17.39-k8s-csi--node--driver--wzg8v-eth0" Feb 13 19:50:56.968371 containerd[2050]: 2025-02-13 19:50:56.942 [INFO][3558] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428" Namespace="calico-system" Pod="csi-node-driver-wzg8v" WorkloadEndpoint="172.31.17.39-k8s-csi--node--driver--wzg8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.39-k8s-csi--node--driver--wzg8v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ac55b41-af13-4faf-9c88-6fe38b62f075", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, 
time.February, 13, 19, 50, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.39", ContainerID:"493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428", Pod:"csi-node-driver-wzg8v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.101.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0be3999e8d5", MAC:"a6:52:d9:29:88:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:56.968371 containerd[2050]: 2025-02-13 19:50:56.957 [INFO][3558] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428" Namespace="calico-system" Pod="csi-node-driver-wzg8v" WorkloadEndpoint="172.31.17.39-k8s-csi--node--driver--wzg8v-eth0" Feb 13 19:50:57.041516 containerd[2050]: time="2025-02-13T19:50:57.040564424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:57.041516 containerd[2050]: time="2025-02-13T19:50:57.040663520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:57.041516 containerd[2050]: time="2025-02-13T19:50:57.040699472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:57.041516 containerd[2050]: time="2025-02-13T19:50:57.040873088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:57.136474 containerd[2050]: time="2025-02-13T19:50:57.136221188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzg8v,Uid:7ac55b41-af13-4faf-9c88-6fe38b62f075,Namespace:calico-system,Attempt:1,} returns sandbox id \"493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428\"" Feb 13 19:50:57.297571 kubelet[2542]: E0213 19:50:57.297381 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:58.298240 kubelet[2542]: E0213 19:50:58.298141 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:58.465471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount880625930.mount: Deactivated successfully. Feb 13 19:50:58.617870 systemd-networkd[1601]: cali0be3999e8d5: Gained IPv6LL Feb 13 19:50:58.771074 update_engine[2023]: I20250213 19:50:58.770285 2023 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:50:58.864119 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3648) Feb 13 19:50:59.273185 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3648) Feb 13 19:50:59.298755 kubelet[2542]: E0213 19:50:59.298698 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:00.300067 kubelet[2542]: E0213 19:51:00.299420 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:00.381449 containerd[2050]: time="2025-02-13T19:51:00.381387756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:00.382944 containerd[2050]: time="2025-02-13T19:51:00.381969312Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086" Feb 13 19:51:00.383969 containerd[2050]: time="2025-02-13T19:51:00.383864352Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:00.389289 containerd[2050]: time="2025-02-13T19:51:00.389188488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:00.391621 containerd[2050]: time="2025-02-13T19:51:00.391438224Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 5.434708743s" Feb 13 19:51:00.391621 
containerd[2050]: time="2025-02-13T19:51:00.391491084Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 19:51:00.394315 containerd[2050]: time="2025-02-13T19:51:00.393987324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:51:00.395987 containerd[2050]: time="2025-02-13T19:51:00.395935884Z" level=info msg="CreateContainer within sandbox \"5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:51:00.416458 containerd[2050]: time="2025-02-13T19:51:00.416379960Z" level=info msg="CreateContainer within sandbox \"5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4d4b94bd7f9d9111df260d2cf805fb2e73cfc0db289314f77717be8b0c3c1a41\"" Feb 13 19:51:00.418008 containerd[2050]: time="2025-02-13T19:51:00.417770280Z" level=info msg="StartContainer for \"4d4b94bd7f9d9111df260d2cf805fb2e73cfc0db289314f77717be8b0c3c1a41\"" Feb 13 19:51:00.478940 systemd[1]: run-containerd-runc-k8s.io-4d4b94bd7f9d9111df260d2cf805fb2e73cfc0db289314f77717be8b0c3c1a41-runc.2RVF9m.mount: Deactivated successfully. 
Feb 13 19:51:00.528061 containerd[2050]: time="2025-02-13T19:51:00.527860177Z" level=info msg="StartContainer for \"4d4b94bd7f9d9111df260d2cf805fb2e73cfc0db289314f77717be8b0c3c1a41\" returns successfully" Feb 13 19:51:00.794783 ntpd[2000]: Listen normally on 8 cali3b6b31d5518 [fe80::ecee:eeff:feee:eeee%6]:123 Feb 13 19:51:00.794899 ntpd[2000]: Listen normally on 9 cali0be3999e8d5 [fe80::ecee:eeff:feee:eeee%7]:123 Feb 13 19:51:00.795477 ntpd[2000]: 13 Feb 19:51:00 ntpd[2000]: Listen normally on 8 cali3b6b31d5518 [fe80::ecee:eeff:feee:eeee%6]:123 Feb 13 19:51:00.795477 ntpd[2000]: 13 Feb 19:51:00 ntpd[2000]: Listen normally on 9 cali0be3999e8d5 [fe80::ecee:eeff:feee:eeee%7]:123 Feb 13 19:51:01.299670 kubelet[2542]: E0213 19:51:01.299605 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:01.870681 containerd[2050]: time="2025-02-13T19:51:01.870600844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:01.872433 containerd[2050]: time="2025-02-13T19:51:01.872368576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 19:51:01.873720 containerd[2050]: time="2025-02-13T19:51:01.873650176Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:01.877180 containerd[2050]: time="2025-02-13T19:51:01.877130404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:01.878788 containerd[2050]: time="2025-02-13T19:51:01.878575660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id 
\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.48449704s" Feb 13 19:51:01.878788 containerd[2050]: time="2025-02-13T19:51:01.878622532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 19:51:01.883207 containerd[2050]: time="2025-02-13T19:51:01.883048288Z" level=info msg="CreateContainer within sandbox \"493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:51:01.906457 containerd[2050]: time="2025-02-13T19:51:01.904122520Z" level=info msg="CreateContainer within sandbox \"493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2c20b99ebc4f94111deaeef8ab7060adaa66b8bfe5be13cd28b6001fc392e247\"" Feb 13 19:51:01.906457 containerd[2050]: time="2025-02-13T19:51:01.906363952Z" level=info msg="StartContainer for \"2c20b99ebc4f94111deaeef8ab7060adaa66b8bfe5be13cd28b6001fc392e247\"" Feb 13 19:51:02.008823 containerd[2050]: time="2025-02-13T19:51:02.008712060Z" level=info msg="StartContainer for \"2c20b99ebc4f94111deaeef8ab7060adaa66b8bfe5be13cd28b6001fc392e247\" returns successfully" Feb 13 19:51:02.012231 containerd[2050]: time="2025-02-13T19:51:02.012157356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:51:02.300506 kubelet[2542]: E0213 19:51:02.300338 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:03.300728 kubelet[2542]: E0213 19:51:03.300650 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 19:51:03.574181 containerd[2050]: time="2025-02-13T19:51:03.573307504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:03.576077 containerd[2050]: time="2025-02-13T19:51:03.575995432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 19:51:03.578210 containerd[2050]: time="2025-02-13T19:51:03.578154820Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:03.584621 containerd[2050]: time="2025-02-13T19:51:03.584092684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:03.587157 containerd[2050]: time="2025-02-13T19:51:03.587087512Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.574707556s" Feb 13 19:51:03.588004 containerd[2050]: time="2025-02-13T19:51:03.587950228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 19:51:03.595083 containerd[2050]: time="2025-02-13T19:51:03.594754108Z" level=info msg="CreateContainer within sandbox \"493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:51:03.618693 containerd[2050]: time="2025-02-13T19:51:03.617927512Z" level=info msg="CreateContainer within sandbox \"493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"40442467cbb3b82da3059e1d1dbcc5f52106c45473f2ef8f4fcf03374758bd81\"" Feb 13 19:51:03.633301 containerd[2050]: time="2025-02-13T19:51:03.626352052Z" level=info msg="StartContainer for \"40442467cbb3b82da3059e1d1dbcc5f52106c45473f2ef8f4fcf03374758bd81\"" Feb 13 19:51:03.632535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2390636490.mount: Deactivated successfully. Feb 13 19:51:03.673951 kubelet[2542]: I0213 19:51:03.673750 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-2p4lz" podStartSLOduration=17.236406801 podStartE2EDuration="22.673724452s" podCreationTimestamp="2025-02-13 19:50:41 +0000 UTC" firstStartedPulling="2025-02-13 19:50:54.956115525 +0000 UTC m=+28.355963086" lastFinishedPulling="2025-02-13 19:51:00.393433176 +0000 UTC m=+33.793280737" observedRunningTime="2025-02-13 19:51:00.631582153 +0000 UTC m=+34.031429726" watchObservedRunningTime="2025-02-13 19:51:03.673724452 +0000 UTC m=+37.073572013" Feb 13 19:51:03.756281 containerd[2050]: time="2025-02-13T19:51:03.756098981Z" level=info msg="StartContainer for \"40442467cbb3b82da3059e1d1dbcc5f52106c45473f2ef8f4fcf03374758bd81\" returns successfully" Feb 13 19:51:04.301446 kubelet[2542]: E0213 19:51:04.301376 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:04.431713 kubelet[2542]: I0213 19:51:04.431405 2542 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:51:04.431713 
kubelet[2542]: I0213 19:51:04.431469 2542 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:51:05.302476 kubelet[2542]: E0213 19:51:05.302404 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:06.303300 kubelet[2542]: E0213 19:51:06.303228 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:07.303905 kubelet[2542]: E0213 19:51:07.303821 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:08.274311 kubelet[2542]: E0213 19:51:08.274242 2542 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:08.305082 kubelet[2542]: E0213 19:51:08.304988 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:08.953679 kubelet[2542]: I0213 19:51:08.953593 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wzg8v" podStartSLOduration=34.503083323 podStartE2EDuration="40.953572979s" podCreationTimestamp="2025-02-13 19:50:28 +0000 UTC" firstStartedPulling="2025-02-13 19:50:57.139932512 +0000 UTC m=+30.539780073" lastFinishedPulling="2025-02-13 19:51:03.590422168 +0000 UTC m=+36.990269729" observedRunningTime="2025-02-13 19:51:04.692142198 +0000 UTC m=+38.091989771" watchObservedRunningTime="2025-02-13 19:51:08.953572979 +0000 UTC m=+42.353420564" Feb 13 19:51:08.953976 kubelet[2542]: I0213 19:51:08.953924 2542 topology_manager.go:215] "Topology Admit Handler" podUID="cee313d9-6547-447e-b2a6-b341228864b4" podNamespace="default" podName="nfs-server-provisioner-0" Feb 13 19:51:08.999841 kubelet[2542]: I0213 19:51:08.999791 2542 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/cee313d9-6547-447e-b2a6-b341228864b4-data\") pod \"nfs-server-provisioner-0\" (UID: \"cee313d9-6547-447e-b2a6-b341228864b4\") " pod="default/nfs-server-provisioner-0" Feb 13 19:51:08.999841 kubelet[2542]: I0213 19:51:08.999866 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d5pw\" (UniqueName: \"kubernetes.io/projected/cee313d9-6547-447e-b2a6-b341228864b4-kube-api-access-7d5pw\") pod \"nfs-server-provisioner-0\" (UID: \"cee313d9-6547-447e-b2a6-b341228864b4\") " pod="default/nfs-server-provisioner-0" Feb 13 19:51:09.259495 containerd[2050]: time="2025-02-13T19:51:09.259341716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:cee313d9-6547-447e-b2a6-b341228864b4,Namespace:default,Attempt:0,}" Feb 13 19:51:09.305743 kubelet[2542]: E0213 19:51:09.305638 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:09.471362 systemd-networkd[1601]: cali60e51b789ff: Link UP Feb 13 19:51:09.473663 systemd-networkd[1601]: cali60e51b789ff: Gained carrier Feb 13 19:51:09.478409 (udev-worker)[4023]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:51:09.499945 containerd[2050]: 2025-02-13 19:51:09.344 [INFO][4006] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.39-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default cee313d9-6547-447e-b2a6-b341228864b4 1092 0 2025-02-13 19:51:08 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.17.39 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.39-k8s-nfs--server--provisioner--0-" Feb 13 19:51:09.499945 containerd[2050]: 2025-02-13 19:51:09.344 [INFO][4006] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.39-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:09.499945 containerd[2050]: 2025-02-13 19:51:09.393 [INFO][4016] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7" 
HandleID="k8s-pod-network.40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7" Workload="172.31.17.39-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:09.499945 containerd[2050]: 2025-02-13 19:51:09.412 [INFO][4016] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7" HandleID="k8s-pod-network.40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7" Workload="172.31.17.39-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000263290), Attrs:map[string]string{"namespace":"default", "node":"172.31.17.39", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 19:51:09.393266385 +0000 UTC"}, Hostname:"172.31.17.39", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:51:09.499945 containerd[2050]: 2025-02-13 19:51:09.412 [INFO][4016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:09.499945 containerd[2050]: 2025-02-13 19:51:09.412 [INFO][4016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:51:09.499945 containerd[2050]: 2025-02-13 19:51:09.412 [INFO][4016] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.39' Feb 13 19:51:09.499945 containerd[2050]: 2025-02-13 19:51:09.415 [INFO][4016] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7" host="172.31.17.39" Feb 13 19:51:09.499945 containerd[2050]: 2025-02-13 19:51:09.424 [INFO][4016] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.39" Feb 13 19:51:09.499945 containerd[2050]: 2025-02-13 19:51:09.433 [INFO][4016] ipam/ipam.go 489: Trying affinity for 192.168.101.128/26 host="172.31.17.39" Feb 13 19:51:09.499945 containerd[2050]: 2025-02-13 19:51:09.437 [INFO][4016] ipam/ipam.go 155: Attempting to load block cidr=192.168.101.128/26 host="172.31.17.39" Feb 13 19:51:09.499945 containerd[2050]: 2025-02-13 19:51:09.440 [INFO][4016] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="172.31.17.39" Feb 13 19:51:09.499945 containerd[2050]: 2025-02-13 19:51:09.440 [INFO][4016] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7" host="172.31.17.39" Feb 13 19:51:09.499945 containerd[2050]: 2025-02-13 19:51:09.442 [INFO][4016] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7 Feb 13 19:51:09.499945 containerd[2050]: 2025-02-13 19:51:09.451 [INFO][4016] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7" host="172.31.17.39" Feb 13 19:51:09.499945 containerd[2050]: 2025-02-13 19:51:09.464 [INFO][4016] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.101.131/26] block=192.168.101.128/26 
handle="k8s-pod-network.40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7" host="172.31.17.39" Feb 13 19:51:09.499945 containerd[2050]: 2025-02-13 19:51:09.464 [INFO][4016] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.101.131/26] handle="k8s-pod-network.40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7" host="172.31.17.39" Feb 13 19:51:09.499945 containerd[2050]: 2025-02-13 19:51:09.464 [INFO][4016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:51:09.499945 containerd[2050]: 2025-02-13 19:51:09.464 [INFO][4016] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.131/26] IPv6=[] ContainerID="40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7" HandleID="k8s-pod-network.40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7" Workload="172.31.17.39-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:09.501085 containerd[2050]: 2025-02-13 19:51:09.467 [INFO][4006] cni-plugin/k8s.go 386: Populated endpoint ContainerID="40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.39-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.39-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"cee313d9-6547-447e-b2a6-b341228864b4", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 51, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.39", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.101.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:09.501085 containerd[2050]: 2025-02-13 19:51:09.467 [INFO][4006] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.101.131/32] ContainerID="40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.39-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:09.501085 containerd[2050]: 2025-02-13 19:51:09.467 [INFO][4006] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.39-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:09.501085 containerd[2050]: 2025-02-13 19:51:09.473 [INFO][4006] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.39-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:09.501398 containerd[2050]: 2025-02-13 19:51:09.474 [INFO][4006] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.39-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.39-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"cee313d9-6547-447e-b2a6-b341228864b4", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 51, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.39", ContainerID:"40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.101.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"6a:64:98:81:51:9e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:09.501398 containerd[2050]: 2025-02-13 19:51:09.497 [INFO][4006] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.39-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:09.550856 containerd[2050]: time="2025-02-13T19:51:09.549577318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:09.550856 containerd[2050]: time="2025-02-13T19:51:09.549688678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:09.550856 containerd[2050]: time="2025-02-13T19:51:09.549725518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:09.550856 containerd[2050]: time="2025-02-13T19:51:09.550034062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:09.642931 containerd[2050]: time="2025-02-13T19:51:09.642837262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:cee313d9-6547-447e-b2a6-b341228864b4,Namespace:default,Attempt:0,} returns sandbox id \"40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7\"" Feb 13 19:51:09.645785 containerd[2050]: time="2025-02-13T19:51:09.645724054Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 19:51:10.311095 kubelet[2542]: E0213 19:51:10.309769 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:10.906351 systemd-networkd[1601]: cali60e51b789ff: Gained IPv6LL Feb 13 19:51:11.312069 kubelet[2542]: E0213 19:51:11.311905 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:12.216691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3604943084.mount: Deactivated successfully. 
Feb 13 19:51:12.312385 kubelet[2542]: E0213 19:51:12.312313 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:13.313466 kubelet[2542]: E0213 19:51:13.313353 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:13.794952 ntpd[2000]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 19:51:13.795577 ntpd[2000]: 13 Feb 19:51:13 ntpd[2000]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 19:51:14.314423 kubelet[2542]: E0213 19:51:14.314353 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:15.003742 containerd[2050]: time="2025-02-13T19:51:15.003662113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:15.005763 containerd[2050]: time="2025-02-13T19:51:15.005693653Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Feb 13 19:51:15.006694 containerd[2050]: time="2025-02-13T19:51:15.006610573Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:15.011881 containerd[2050]: time="2025-02-13T19:51:15.011799121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:15.014219 containerd[2050]: time="2025-02-13T19:51:15.013991965Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id 
\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.368204899s" Feb 13 19:51:15.014219 containerd[2050]: time="2025-02-13T19:51:15.014079529Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 13 19:51:15.019360 containerd[2050]: time="2025-02-13T19:51:15.019304929Z" level=info msg="CreateContainer within sandbox \"40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 19:51:15.040761 containerd[2050]: time="2025-02-13T19:51:15.040685401Z" level=info msg="CreateContainer within sandbox \"40856edfe173617ff8ca1e8e20fffce220325755660f2dbca48fba0152c5e1a7\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"45676104d6209b9565d038a66aae51eb9d5397b395ae216b2af8beb9e3dd10e6\"" Feb 13 19:51:15.041962 containerd[2050]: time="2025-02-13T19:51:15.041912545Z" level=info msg="StartContainer for \"45676104d6209b9565d038a66aae51eb9d5397b395ae216b2af8beb9e3dd10e6\"" Feb 13 19:51:15.097405 systemd[1]: run-containerd-runc-k8s.io-45676104d6209b9565d038a66aae51eb9d5397b395ae216b2af8beb9e3dd10e6-runc.HhJkbI.mount: Deactivated successfully. 
Feb 13 19:51:15.152735 containerd[2050]: time="2025-02-13T19:51:15.152574241Z" level=info msg="StartContainer for \"45676104d6209b9565d038a66aae51eb9d5397b395ae216b2af8beb9e3dd10e6\" returns successfully" Feb 13 19:51:15.315753 kubelet[2542]: E0213 19:51:15.315079 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:15.714529 kubelet[2542]: I0213 19:51:15.714419 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.343223169 podStartE2EDuration="7.714377584s" podCreationTimestamp="2025-02-13 19:51:08 +0000 UTC" firstStartedPulling="2025-02-13 19:51:09.64484833 +0000 UTC m=+43.044695891" lastFinishedPulling="2025-02-13 19:51:15.016002745 +0000 UTC m=+48.415850306" observedRunningTime="2025-02-13 19:51:15.714099556 +0000 UTC m=+49.113947117" watchObservedRunningTime="2025-02-13 19:51:15.714377584 +0000 UTC m=+49.114225169" Feb 13 19:51:16.316146 kubelet[2542]: E0213 19:51:16.316066 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:17.317058 kubelet[2542]: E0213 19:51:17.316984 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:18.317342 kubelet[2542]: E0213 19:51:18.317276 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:19.317490 kubelet[2542]: E0213 19:51:19.317414 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:20.318631 kubelet[2542]: E0213 19:51:20.318567 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:21.319452 kubelet[2542]: E0213 19:51:21.319398 2542 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:22.320675 kubelet[2542]: E0213 19:51:22.320604 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:23.321384 kubelet[2542]: E0213 19:51:23.321310 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:24.322182 kubelet[2542]: E0213 19:51:24.322124 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:25.323123 kubelet[2542]: E0213 19:51:25.323055 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:26.324043 kubelet[2542]: E0213 19:51:26.323978 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:27.324928 kubelet[2542]: E0213 19:51:27.324867 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:28.274409 kubelet[2542]: E0213 19:51:28.274350 2542 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:28.315754 containerd[2050]: time="2025-02-13T19:51:28.315684015Z" level=info msg="StopPodSandbox for \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\"" Feb 13 19:51:28.325706 kubelet[2542]: E0213 19:51:28.325645 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:28.443289 containerd[2050]: 2025-02-13 19:51:28.378 [WARNING][4202] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.39-k8s-csi--node--driver--wzg8v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ac55b41-af13-4faf-9c88-6fe38b62f075", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.39", ContainerID:"493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428", Pod:"csi-node-driver-wzg8v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.101.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0be3999e8d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:28.443289 containerd[2050]: 2025-02-13 19:51:28.378 [INFO][4202] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Feb 13 19:51:28.443289 containerd[2050]: 2025-02-13 19:51:28.378 [INFO][4202] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" iface="eth0" netns="" Feb 13 19:51:28.443289 containerd[2050]: 2025-02-13 19:51:28.378 [INFO][4202] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Feb 13 19:51:28.443289 containerd[2050]: 2025-02-13 19:51:28.378 [INFO][4202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Feb 13 19:51:28.443289 containerd[2050]: 2025-02-13 19:51:28.416 [INFO][4209] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" HandleID="k8s-pod-network.bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Workload="172.31.17.39-k8s-csi--node--driver--wzg8v-eth0" Feb 13 19:51:28.443289 containerd[2050]: 2025-02-13 19:51:28.416 [INFO][4209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:28.443289 containerd[2050]: 2025-02-13 19:51:28.416 [INFO][4209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:51:28.443289 containerd[2050]: 2025-02-13 19:51:28.430 [WARNING][4209] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" HandleID="k8s-pod-network.bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Workload="172.31.17.39-k8s-csi--node--driver--wzg8v-eth0" Feb 13 19:51:28.443289 containerd[2050]: 2025-02-13 19:51:28.430 [INFO][4209] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" HandleID="k8s-pod-network.bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Workload="172.31.17.39-k8s-csi--node--driver--wzg8v-eth0" Feb 13 19:51:28.443289 containerd[2050]: 2025-02-13 19:51:28.433 [INFO][4209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:51:28.443289 containerd[2050]: 2025-02-13 19:51:28.439 [INFO][4202] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Feb 13 19:51:28.443289 containerd[2050]: time="2025-02-13T19:51:28.442775752Z" level=info msg="TearDown network for sandbox \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\" successfully" Feb 13 19:51:28.443289 containerd[2050]: time="2025-02-13T19:51:28.442814860Z" level=info msg="StopPodSandbox for \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\" returns successfully" Feb 13 19:51:28.444292 containerd[2050]: time="2025-02-13T19:51:28.443972932Z" level=info msg="RemovePodSandbox for \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\"" Feb 13 19:51:28.444292 containerd[2050]: time="2025-02-13T19:51:28.444095404Z" level=info msg="Forcibly stopping sandbox \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\"" Feb 13 19:51:28.578172 containerd[2050]: 2025-02-13 19:51:28.513 [WARNING][4229] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.39-k8s-csi--node--driver--wzg8v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ac55b41-af13-4faf-9c88-6fe38b62f075", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.39", ContainerID:"493059200bcd64b8bcffe41c9da4844096968f32f1d8dd89a648276510688428", Pod:"csi-node-driver-wzg8v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.101.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0be3999e8d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:28.578172 containerd[2050]: 2025-02-13 19:51:28.514 [INFO][4229] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Feb 13 19:51:28.578172 containerd[2050]: 2025-02-13 19:51:28.514 [INFO][4229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" iface="eth0" netns="" Feb 13 19:51:28.578172 containerd[2050]: 2025-02-13 19:51:28.514 [INFO][4229] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Feb 13 19:51:28.578172 containerd[2050]: 2025-02-13 19:51:28.514 [INFO][4229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Feb 13 19:51:28.578172 containerd[2050]: 2025-02-13 19:51:28.552 [INFO][4236] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" HandleID="k8s-pod-network.bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Workload="172.31.17.39-k8s-csi--node--driver--wzg8v-eth0" Feb 13 19:51:28.578172 containerd[2050]: 2025-02-13 19:51:28.553 [INFO][4236] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:28.578172 containerd[2050]: 2025-02-13 19:51:28.553 [INFO][4236] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:51:28.578172 containerd[2050]: 2025-02-13 19:51:28.569 [WARNING][4236] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" HandleID="k8s-pod-network.bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Workload="172.31.17.39-k8s-csi--node--driver--wzg8v-eth0" Feb 13 19:51:28.578172 containerd[2050]: 2025-02-13 19:51:28.569 [INFO][4236] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" HandleID="k8s-pod-network.bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Workload="172.31.17.39-k8s-csi--node--driver--wzg8v-eth0" Feb 13 19:51:28.578172 containerd[2050]: 2025-02-13 19:51:28.571 [INFO][4236] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:51:28.578172 containerd[2050]: 2025-02-13 19:51:28.573 [INFO][4229] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c" Feb 13 19:51:28.578172 containerd[2050]: time="2025-02-13T19:51:28.576375304Z" level=info msg="TearDown network for sandbox \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\" successfully" Feb 13 19:51:28.582545 containerd[2050]: time="2025-02-13T19:51:28.581939956Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:51:28.582545 containerd[2050]: time="2025-02-13T19:51:28.582066508Z" level=info msg="RemovePodSandbox \"bc75c4ac85217f870e7347574f30edd43c071d92eba05ea336ca477e96d2443c\" returns successfully" Feb 13 19:51:28.583482 containerd[2050]: time="2025-02-13T19:51:28.582871996Z" level=info msg="StopPodSandbox for \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\"" Feb 13 19:51:28.720856 containerd[2050]: 2025-02-13 19:51:28.663 [WARNING][4254] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"480ba374-341b-4809-b7c3-f4cc0bc92a8a", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.39", ContainerID:"5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63", Pod:"nginx-deployment-85f456d6dd-2p4lz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali3b6b31d5518", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:28.720856 containerd[2050]: 2025-02-13 19:51:28.664 [INFO][4254] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Feb 13 19:51:28.720856 containerd[2050]: 2025-02-13 19:51:28.664 [INFO][4254] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" iface="eth0" netns="" Feb 13 19:51:28.720856 containerd[2050]: 2025-02-13 19:51:28.664 [INFO][4254] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Feb 13 19:51:28.720856 containerd[2050]: 2025-02-13 19:51:28.664 [INFO][4254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Feb 13 19:51:28.720856 containerd[2050]: 2025-02-13 19:51:28.702 [INFO][4261] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" HandleID="k8s-pod-network.9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Workload="172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0" Feb 13 19:51:28.720856 containerd[2050]: 2025-02-13 19:51:28.702 [INFO][4261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:28.720856 containerd[2050]: 2025-02-13 19:51:28.702 [INFO][4261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:51:28.720856 containerd[2050]: 2025-02-13 19:51:28.714 [WARNING][4261] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" HandleID="k8s-pod-network.9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Workload="172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0" Feb 13 19:51:28.720856 containerd[2050]: 2025-02-13 19:51:28.714 [INFO][4261] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" HandleID="k8s-pod-network.9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Workload="172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0" Feb 13 19:51:28.720856 containerd[2050]: 2025-02-13 19:51:28.716 [INFO][4261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:51:28.720856 containerd[2050]: 2025-02-13 19:51:28.718 [INFO][4254] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Feb 13 19:51:28.722444 containerd[2050]: time="2025-02-13T19:51:28.721220585Z" level=info msg="TearDown network for sandbox \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\" successfully" Feb 13 19:51:28.722444 containerd[2050]: time="2025-02-13T19:51:28.721260665Z" level=info msg="StopPodSandbox for \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\" returns successfully" Feb 13 19:51:28.722444 containerd[2050]: time="2025-02-13T19:51:28.721854713Z" level=info msg="RemovePodSandbox for \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\"" Feb 13 19:51:28.722444 containerd[2050]: time="2025-02-13T19:51:28.721900013Z" level=info msg="Forcibly stopping sandbox \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\"" Feb 13 19:51:28.848209 containerd[2050]: 2025-02-13 19:51:28.787 [WARNING][4279] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"480ba374-341b-4809-b7c3-f4cc0bc92a8a", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.39", ContainerID:"5654db2daa6bec155a10eac80e3f81acf1c69328deb33cd5cd636f173349de63", Pod:"nginx-deployment-85f456d6dd-2p4lz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali3b6b31d5518", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:28.848209 containerd[2050]: 2025-02-13 19:51:28.788 [INFO][4279] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Feb 13 19:51:28.848209 containerd[2050]: 2025-02-13 19:51:28.788 [INFO][4279] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" iface="eth0" netns="" Feb 13 19:51:28.848209 containerd[2050]: 2025-02-13 19:51:28.788 [INFO][4279] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Feb 13 19:51:28.848209 containerd[2050]: 2025-02-13 19:51:28.788 [INFO][4279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Feb 13 19:51:28.848209 containerd[2050]: 2025-02-13 19:51:28.829 [INFO][4285] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" HandleID="k8s-pod-network.9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Workload="172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0" Feb 13 19:51:28.848209 containerd[2050]: 2025-02-13 19:51:28.829 [INFO][4285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:28.848209 containerd[2050]: 2025-02-13 19:51:28.829 [INFO][4285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:51:28.848209 containerd[2050]: 2025-02-13 19:51:28.841 [WARNING][4285] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" HandleID="k8s-pod-network.9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Workload="172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0" Feb 13 19:51:28.848209 containerd[2050]: 2025-02-13 19:51:28.841 [INFO][4285] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" HandleID="k8s-pod-network.9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Workload="172.31.17.39-k8s-nginx--deployment--85f456d6dd--2p4lz-eth0" Feb 13 19:51:28.848209 containerd[2050]: 2025-02-13 19:51:28.843 [INFO][4285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:51:28.848209 containerd[2050]: 2025-02-13 19:51:28.845 [INFO][4279] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e" Feb 13 19:51:28.848209 containerd[2050]: time="2025-02-13T19:51:28.848194746Z" level=info msg="TearDown network for sandbox \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\" successfully" Feb 13 19:51:28.851610 containerd[2050]: time="2025-02-13T19:51:28.851534370Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:51:28.851610 containerd[2050]: time="2025-02-13T19:51:28.851632962Z" level=info msg="RemovePodSandbox \"9f4703b0f7df664c08eb5093462ee3e7afe5c3fd39ed0ed129dcb351274dad1e\" returns successfully" Feb 13 19:51:29.326456 kubelet[2542]: E0213 19:51:29.326286 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:30.326948 kubelet[2542]: E0213 19:51:30.326890 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:31.327858 kubelet[2542]: E0213 19:51:31.327795 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:32.328743 kubelet[2542]: E0213 19:51:32.328667 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:33.329501 kubelet[2542]: E0213 19:51:33.329427 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:33.520778 systemd[1]: run-containerd-runc-k8s.io-7012aea8fbc9677b8d0158d738ea8907eb58c851e5b753958ab87b0d3cf7ab64-runc.Ulw2A8.mount: Deactivated successfully. 
Feb 13 19:51:34.329983 kubelet[2542]: E0213 19:51:34.329925 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:35.330785 kubelet[2542]: E0213 19:51:35.330705 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:36.331298 kubelet[2542]: E0213 19:51:36.331214 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:37.331876 kubelet[2542]: E0213 19:51:37.331802 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:38.332568 kubelet[2542]: E0213 19:51:38.332487 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:39.333360 kubelet[2542]: E0213 19:51:39.333227 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:40.012951 kubelet[2542]: I0213 19:51:40.012881 2542 topology_manager.go:215] "Topology Admit Handler" podUID="c5cfbfd3-2547-458f-8f52-70f09b9d07fb" podNamespace="default" podName="test-pod-1" Feb 13 19:51:40.091677 kubelet[2542]: I0213 19:51:40.091604 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4663c4c6-bcd4-42bd-9780-dd07b4080aa4\" (UniqueName: \"kubernetes.io/nfs/c5cfbfd3-2547-458f-8f52-70f09b9d07fb-pvc-4663c4c6-bcd4-42bd-9780-dd07b4080aa4\") pod \"test-pod-1\" (UID: \"c5cfbfd3-2547-458f-8f52-70f09b9d07fb\") " pod="default/test-pod-1" Feb 13 19:51:40.091825 kubelet[2542]: I0213 19:51:40.091689 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zx54\" (UniqueName: \"kubernetes.io/projected/c5cfbfd3-2547-458f-8f52-70f09b9d07fb-kube-api-access-9zx54\") pod 
\"test-pod-1\" (UID: \"c5cfbfd3-2547-458f-8f52-70f09b9d07fb\") " pod="default/test-pod-1" Feb 13 19:51:40.231062 kernel: FS-Cache: Loaded Feb 13 19:51:40.275983 kernel: RPC: Registered named UNIX socket transport module. Feb 13 19:51:40.276139 kernel: RPC: Registered udp transport module. Feb 13 19:51:40.276185 kernel: RPC: Registered tcp transport module. Feb 13 19:51:40.279838 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 19:51:40.280278 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 19:51:40.334413 kubelet[2542]: E0213 19:51:40.334343 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:40.622131 kernel: NFS: Registering the id_resolver key type Feb 13 19:51:40.622268 kernel: Key type id_resolver registered Feb 13 19:51:40.623169 kernel: Key type id_legacy registered Feb 13 19:51:40.661259 nfsidmap[4339]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 13 19:51:40.668495 nfsidmap[4340]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 13 19:51:40.919575 containerd[2050]: time="2025-02-13T19:51:40.919293437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c5cfbfd3-2547-458f-8f52-70f09b9d07fb,Namespace:default,Attempt:0,}" Feb 13 19:51:41.153407 systemd-networkd[1601]: cali5ec59c6bf6e: Link UP Feb 13 19:51:41.154991 systemd-networkd[1601]: cali5ec59c6bf6e: Gained carrier Feb 13 19:51:41.156322 (udev-worker)[4327]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:51:41.179140 containerd[2050]: 2025-02-13 19:51:41.017 [INFO][4342] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.39-k8s-test--pod--1-eth0 default c5cfbfd3-2547-458f-8f52-70f09b9d07fb 1195 0 2025-02-13 19:51:09 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.17.39 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.39-k8s-test--pod--1-" Feb 13 19:51:41.179140 containerd[2050]: 2025-02-13 19:51:41.017 [INFO][4342] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.39-k8s-test--pod--1-eth0" Feb 13 19:51:41.179140 containerd[2050]: 2025-02-13 19:51:41.068 [INFO][4352] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da" HandleID="k8s-pod-network.a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da" Workload="172.31.17.39-k8s-test--pod--1-eth0" Feb 13 19:51:41.179140 containerd[2050]: 2025-02-13 19:51:41.091 [INFO][4352] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da" HandleID="k8s-pod-network.a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da" Workload="172.31.17.39-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002627c0), Attrs:map[string]string{"namespace":"default", "node":"172.31.17.39", "pod":"test-pod-1", "timestamp":"2025-02-13 19:51:41.068550446 +0000 UTC"}, Hostname:"172.31.17.39", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:51:41.179140 containerd[2050]: 2025-02-13 19:51:41.091 [INFO][4352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:41.179140 containerd[2050]: 2025-02-13 19:51:41.091 [INFO][4352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:51:41.179140 containerd[2050]: 2025-02-13 19:51:41.091 [INFO][4352] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.39' Feb 13 19:51:41.179140 containerd[2050]: 2025-02-13 19:51:41.094 [INFO][4352] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da" host="172.31.17.39" Feb 13 19:51:41.179140 containerd[2050]: 2025-02-13 19:51:41.102 [INFO][4352] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.39" Feb 13 19:51:41.179140 containerd[2050]: 2025-02-13 19:51:41.109 [INFO][4352] ipam/ipam.go 489: Trying affinity for 192.168.101.128/26 host="172.31.17.39" Feb 13 19:51:41.179140 containerd[2050]: 2025-02-13 19:51:41.112 [INFO][4352] ipam/ipam.go 155: Attempting to load block cidr=192.168.101.128/26 host="172.31.17.39" Feb 13 19:51:41.179140 containerd[2050]: 2025-02-13 19:51:41.116 [INFO][4352] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="172.31.17.39" Feb 13 19:51:41.179140 containerd[2050]: 2025-02-13 19:51:41.116 [INFO][4352] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da" host="172.31.17.39" Feb 13 19:51:41.179140 containerd[2050]: 2025-02-13 19:51:41.118 [INFO][4352] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da Feb 13 
19:51:41.179140 containerd[2050]: 2025-02-13 19:51:41.127 [INFO][4352] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da" host="172.31.17.39" Feb 13 19:51:41.179140 containerd[2050]: 2025-02-13 19:51:41.143 [INFO][4352] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.101.132/26] block=192.168.101.128/26 handle="k8s-pod-network.a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da" host="172.31.17.39" Feb 13 19:51:41.179140 containerd[2050]: 2025-02-13 19:51:41.143 [INFO][4352] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.101.132/26] handle="k8s-pod-network.a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da" host="172.31.17.39" Feb 13 19:51:41.179140 containerd[2050]: 2025-02-13 19:51:41.143 [INFO][4352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:51:41.179140 containerd[2050]: 2025-02-13 19:51:41.143 [INFO][4352] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.132/26] IPv6=[] ContainerID="a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da" HandleID="k8s-pod-network.a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da" Workload="172.31.17.39-k8s-test--pod--1-eth0" Feb 13 19:51:41.179140 containerd[2050]: 2025-02-13 19:51:41.146 [INFO][4342] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.39-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.39-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"c5cfbfd3-2547-458f-8f52-70f09b9d07fb", ResourceVersion:"1195", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 51, 9, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.39", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:41.184566 containerd[2050]: 2025-02-13 19:51:41.146 [INFO][4342] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.101.132/32] ContainerID="a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.39-k8s-test--pod--1-eth0" Feb 13 19:51:41.184566 containerd[2050]: 2025-02-13 19:51:41.146 [INFO][4342] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.39-k8s-test--pod--1-eth0" Feb 13 19:51:41.184566 containerd[2050]: 2025-02-13 19:51:41.157 [INFO][4342] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.39-k8s-test--pod--1-eth0" Feb 13 19:51:41.184566 containerd[2050]: 2025-02-13 19:51:41.157 [INFO][4342] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.39-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.39-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"c5cfbfd3-2547-458f-8f52-70f09b9d07fb", ResourceVersion:"1195", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 51, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.39", ContainerID:"a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"6e:9b:fb:65:b7:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:41.184566 containerd[2050]: 2025-02-13 19:51:41.171 [INFO][4342] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.39-k8s-test--pod--1-eth0" Feb 13 19:51:41.227154 containerd[2050]: time="2025-02-13T19:51:41.226729851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:41.227154 containerd[2050]: time="2025-02-13T19:51:41.226839327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:41.227154 containerd[2050]: time="2025-02-13T19:51:41.226877187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:41.228046 containerd[2050]: time="2025-02-13T19:51:41.227868291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:41.326910 containerd[2050]: time="2025-02-13T19:51:41.326810584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c5cfbfd3-2547-458f-8f52-70f09b9d07fb,Namespace:default,Attempt:0,} returns sandbox id \"a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da\"" Feb 13 19:51:41.331467 containerd[2050]: time="2025-02-13T19:51:41.331375960Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:51:41.335469 kubelet[2542]: E0213 19:51:41.335394 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:41.690570 containerd[2050]: time="2025-02-13T19:51:41.690472685Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:41.694271 containerd[2050]: time="2025-02-13T19:51:41.694201217Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 19:51:41.700339 containerd[2050]: time="2025-02-13T19:51:41.700262117Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest 
\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 368.795209ms" Feb 13 19:51:41.700339 containerd[2050]: time="2025-02-13T19:51:41.700337105Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 19:51:41.704373 containerd[2050]: time="2025-02-13T19:51:41.704317229Z" level=info msg="CreateContainer within sandbox \"a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 19:51:41.736979 containerd[2050]: time="2025-02-13T19:51:41.736835226Z" level=info msg="CreateContainer within sandbox \"a590e5539b4205a46b4bf19c0effba2b1c323b6569f724b141e3a7d5dd6011da\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"3d990b19e3d96e1c098277c7e5b4b03c98fa253e2bf2011ae633459b0c4c2d05\"" Feb 13 19:51:41.738070 containerd[2050]: time="2025-02-13T19:51:41.737809482Z" level=info msg="StartContainer for \"3d990b19e3d96e1c098277c7e5b4b03c98fa253e2bf2011ae633459b0c4c2d05\"" Feb 13 19:51:41.836182 containerd[2050]: time="2025-02-13T19:51:41.836108382Z" level=info msg="StartContainer for \"3d990b19e3d96e1c098277c7e5b4b03c98fa253e2bf2011ae633459b0c4c2d05\" returns successfully" Feb 13 19:51:42.336677 kubelet[2542]: E0213 19:51:42.336597 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:42.790650 kubelet[2542]: I0213 19:51:42.790118 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=33.418350642 podStartE2EDuration="33.790092379s" podCreationTimestamp="2025-02-13 19:51:09 +0000 UTC" firstStartedPulling="2025-02-13 19:51:41.330154372 +0000 UTC m=+74.730001933" lastFinishedPulling="2025-02-13 19:51:41.701896121 +0000 UTC m=+75.101743670" observedRunningTime="2025-02-13 
19:51:42.789909883 +0000 UTC m=+76.189757456" watchObservedRunningTime="2025-02-13 19:51:42.790092379 +0000 UTC m=+76.189939952" Feb 13 19:51:42.841398 systemd-networkd[1601]: cali5ec59c6bf6e: Gained IPv6LL Feb 13 19:51:43.337181 kubelet[2542]: E0213 19:51:43.337107 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:44.337788 kubelet[2542]: E0213 19:51:44.337712 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:45.338288 kubelet[2542]: E0213 19:51:45.338211 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:45.794907 ntpd[2000]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 19:51:45.797720 ntpd[2000]: 13 Feb 19:51:45 ntpd[2000]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 19:51:46.338982 kubelet[2542]: E0213 19:51:46.338889 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:47.339583 kubelet[2542]: E0213 19:51:47.339516 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:48.274288 kubelet[2542]: E0213 19:51:48.274219 2542 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:48.340136 kubelet[2542]: E0213 19:51:48.340085 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:49.341240 kubelet[2542]: E0213 19:51:49.341165 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:50.342486 kubelet[2542]: E0213 19:51:50.342403 2542 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:51.343174 kubelet[2542]: E0213 19:51:51.343103 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:52.343871 kubelet[2542]: E0213 19:51:52.343797 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:53.344201 kubelet[2542]: E0213 19:51:53.344131 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:54.344458 kubelet[2542]: E0213 19:51:54.344399 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:55.345480 kubelet[2542]: E0213 19:51:55.345410 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:56.346453 kubelet[2542]: E0213 19:51:56.346389 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:57.346593 kubelet[2542]: E0213 19:51:57.346533 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:58.347145 kubelet[2542]: E0213 19:51:58.347090 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:59.347533 kubelet[2542]: E0213 19:51:59.347468 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:59.975823 kubelet[2542]: E0213 19:51:59.975745 2542 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.39?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Feb 13 19:52:00.348719 kubelet[2542]: E0213 19:52:00.348642 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:01.349099 kubelet[2542]: E0213 19:52:01.349003 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:02.349447 kubelet[2542]: E0213 19:52:02.349389 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:03.349561 kubelet[2542]: E0213 19:52:03.349497 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:04.349785 kubelet[2542]: E0213 19:52:04.349729 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:05.350299 kubelet[2542]: E0213 19:52:05.350239 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:06.351264 kubelet[2542]: E0213 19:52:06.351203 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:07.352029 kubelet[2542]: E0213 19:52:07.351951 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:08.273965 kubelet[2542]: E0213 19:52:08.273903 2542 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:08.352594 kubelet[2542]: E0213 19:52:08.352543 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:09.353489 kubelet[2542]: E0213 19:52:09.353424 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 19:52:09.976839 kubelet[2542]: E0213 19:52:09.976721 2542 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.39?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 19:52:10.354617 kubelet[2542]: E0213 19:52:10.354547 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:11.355573 kubelet[2542]: E0213 19:52:11.355510 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:12.356808 kubelet[2542]: E0213 19:52:12.356700 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:13.357398 kubelet[2542]: E0213 19:52:13.357323 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:14.358153 kubelet[2542]: E0213 19:52:14.358067 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:15.358630 kubelet[2542]: E0213 19:52:15.358559 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:16.358794 kubelet[2542]: E0213 19:52:16.358710 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:17.359658 kubelet[2542]: E0213 19:52:17.359587 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:18.359774 kubelet[2542]: E0213 19:52:18.359711 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:19.360247 
kubelet[2542]: E0213 19:52:19.360186 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:19.977335 kubelet[2542]: E0213 19:52:19.977253 2542 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.39?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 19:52:20.361079 kubelet[2542]: E0213 19:52:20.360988 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:21.361592 kubelet[2542]: E0213 19:52:21.361526 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:22.362336 kubelet[2542]: E0213 19:52:22.362274 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:23.362640 kubelet[2542]: E0213 19:52:23.362574 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:24.363371 kubelet[2542]: E0213 19:52:24.363300 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:25.364129 kubelet[2542]: E0213 19:52:25.364059 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:26.365210 kubelet[2542]: E0213 19:52:26.365147 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:27.366194 kubelet[2542]: E0213 19:52:27.366131 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:28.071284 kubelet[2542]: E0213 19:52:28.071207 2542 
controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.39?timeout=10s\": unexpected EOF" Feb 13 19:52:28.090046 kubelet[2542]: E0213 19:52:28.088960 2542 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.39?timeout=10s\": read tcp 172.31.17.39:53788->172.31.22.232:6443: read: connection reset by peer" Feb 13 19:52:28.090046 kubelet[2542]: I0213 19:52:28.089038 2542 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 13 19:52:28.090046 kubelet[2542]: E0213 19:52:28.089619 2542 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.39?timeout=10s\": dial tcp 172.31.22.232:6443: connect: connection refused" interval="200ms" Feb 13 19:52:28.274008 kubelet[2542]: E0213 19:52:28.273939 2542 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:28.291317 kubelet[2542]: E0213 19:52:28.291252 2542 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.39?timeout=10s\": dial tcp 172.31.22.232:6443: connect: connection refused" interval="400ms" Feb 13 19:52:28.367106 kubelet[2542]: E0213 19:52:28.367034 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:28.692658 kubelet[2542]: E0213 19:52:28.692486 2542 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.39?timeout=10s\": dial tcp 
172.31.22.232:6443: connect: connection refused" interval="800ms" Feb 13 19:52:29.367958 kubelet[2542]: E0213 19:52:29.367883 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:29.494286 kubelet[2542]: E0213 19:52:29.494210 2542 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.39?timeout=10s\": dial tcp 172.31.22.232:6443: connect: connection refused" interval="1.6s" Feb 13 19:52:30.305590 kubelet[2542]: E0213 19:52:30.305530 2542 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.17.39\": Get \"https://172.31.22.232:6443/api/v1/nodes/172.31.17.39?resourceVersion=0&timeout=10s\": dial tcp 172.31.22.232:6443: connect: connection refused" Feb 13 19:52:30.307490 kubelet[2542]: E0213 19:52:30.307428 2542 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.17.39\": Get \"https://172.31.22.232:6443/api/v1/nodes/172.31.17.39?timeout=10s\": dial tcp 172.31.22.232:6443: connect: connection refused" Feb 13 19:52:30.308050 kubelet[2542]: E0213 19:52:30.307989 2542 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.17.39\": Get \"https://172.31.22.232:6443/api/v1/nodes/172.31.17.39?timeout=10s\": dial tcp 172.31.22.232:6443: connect: connection refused" Feb 13 19:52:30.310307 kubelet[2542]: E0213 19:52:30.310251 2542 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.17.39\": Get \"https://172.31.22.232:6443/api/v1/nodes/172.31.17.39?timeout=10s\": dial tcp 172.31.22.232:6443: connect: connection refused" Feb 13 19:52:30.310865 kubelet[2542]: E0213 19:52:30.310804 2542 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node 
\"172.31.17.39\": Get \"https://172.31.22.232:6443/api/v1/nodes/172.31.17.39?timeout=10s\": dial tcp 172.31.22.232:6443: connect: connection refused" Feb 13 19:52:30.310865 kubelet[2542]: E0213 19:52:30.310848 2542 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count" Feb 13 19:52:30.368728 kubelet[2542]: E0213 19:52:30.368667 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:31.095624 kubelet[2542]: E0213 19:52:31.095491 2542 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.39?timeout=10s\": dial tcp 172.31.22.232:6443: connect: connection refused" interval="3.2s" Feb 13 19:52:31.370436 kubelet[2542]: E0213 19:52:31.370286 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:32.370675 kubelet[2542]: E0213 19:52:32.370609 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:33.371399 kubelet[2542]: E0213 19:52:33.371329 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:34.372190 kubelet[2542]: E0213 19:52:34.372114 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:35.372351 kubelet[2542]: E0213 19:52:35.372289 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:36.373445 kubelet[2542]: E0213 19:52:36.373384 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:37.373612 kubelet[2542]: E0213 19:52:37.373546 2542 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:38.374782 kubelet[2542]: E0213 19:52:38.374708 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:39.375100 kubelet[2542]: E0213 19:52:39.375005 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:40.375707 kubelet[2542]: E0213 19:52:40.375635 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:41.376877 kubelet[2542]: E0213 19:52:41.376800 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:42.377250 kubelet[2542]: E0213 19:52:42.377178 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:43.377698 kubelet[2542]: E0213 19:52:43.377631 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:44.297336 kubelet[2542]: E0213 19:52:44.297258 2542 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.39?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Feb 13 19:52:44.378561 kubelet[2542]: E0213 19:52:44.378506 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:45.378796 kubelet[2542]: E0213 19:52:45.378718 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:46.379264 kubelet[2542]: E0213 19:52:46.379196 2542 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:47.379416 kubelet[2542]: E0213 19:52:47.379329 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:48.274821 kubelet[2542]: E0213 19:52:48.274763 2542 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:48.379727 kubelet[2542]: E0213 19:52:48.379675 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:49.380742 kubelet[2542]: E0213 19:52:49.380678 2542 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"