Jul 12 00:07:34.216451 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jul 12 00:07:34.216595 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jul 11 22:42:11 -00 2025
Jul 12 00:07:34.216622 kernel: KASLR disabled due to lack of seed
Jul 12 00:07:34.216639 kernel: efi: EFI v2.7 by EDK II
Jul 12 00:07:34.216655 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18
Jul 12 00:07:34.216671 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:07:34.216689 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jul 12 00:07:34.216704 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 12 00:07:34.216720 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 12 00:07:34.216736 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jul 12 00:07:34.216756 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 12 00:07:34.216772 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jul 12 00:07:34.216788 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jul 12 00:07:34.216803 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jul 12 00:07:34.216822 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 12 00:07:34.216843 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jul 12 00:07:34.216860 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jul 12 00:07:34.216877 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jul 12 00:07:34.216893 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jul 12 00:07:34.216910 kernel: printk: bootconsole [uart0] enabled
Jul 12 00:07:34.216927 kernel: NUMA: Failed to initialise from firmware
Jul 12 00:07:34.216943 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 12 00:07:34.216960 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jul 12 00:07:34.216976 kernel: Zone ranges:
Jul 12 00:07:34.216992 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 12 00:07:34.217009 kernel: DMA32 empty
Jul 12 00:07:34.217030 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jul 12 00:07:34.217047 kernel: Movable zone start for each node
Jul 12 00:07:34.217063 kernel: Early memory node ranges
Jul 12 00:07:34.217079 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jul 12 00:07:34.217096 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jul 12 00:07:34.217112 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jul 12 00:07:34.217129 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jul 12 00:07:34.217145 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jul 12 00:07:34.217162 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jul 12 00:07:34.217178 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jul 12 00:07:34.217194 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jul 12 00:07:34.217211 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 12 00:07:34.217231 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jul 12 00:07:34.217248 kernel: psci: probing for conduit method from ACPI.
Jul 12 00:07:34.217273 kernel: psci: PSCIv1.0 detected in firmware.
Jul 12 00:07:34.217291 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 00:07:34.217308 kernel: psci: Trusted OS migration not required
Jul 12 00:07:34.217331 kernel: psci: SMC Calling Convention v1.1
Jul 12 00:07:34.217350 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jul 12 00:07:34.217367 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 12 00:07:34.217385 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 12 00:07:34.217402 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 12 00:07:34.217420 kernel: Detected PIPT I-cache on CPU0
Jul 12 00:07:34.217437 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 00:07:34.217455 kernel: CPU features: detected: Spectre-v2
Jul 12 00:07:34.217778 kernel: CPU features: detected: Spectre-v3a
Jul 12 00:07:34.217804 kernel: CPU features: detected: Spectre-BHB
Jul 12 00:07:34.217822 kernel: CPU features: detected: ARM erratum 1742098
Jul 12 00:07:34.217849 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jul 12 00:07:34.217867 kernel: alternatives: applying boot alternatives
Jul 12 00:07:34.217887 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:07:34.217907 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:07:34.217924 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:07:34.217942 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:07:34.217959 kernel: Fallback order for Node 0: 0
Jul 12 00:07:34.217977 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jul 12 00:07:34.217994 kernel: Policy zone: Normal
Jul 12 00:07:34.218011 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:07:34.218028 kernel: software IO TLB: area num 2.
Jul 12 00:07:34.218050 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jul 12 00:07:34.218069 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved)
Jul 12 00:07:34.218086 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 12 00:07:34.218104 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:07:34.218122 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:07:34.218140 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 12 00:07:34.218158 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:07:34.218175 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:07:34.218193 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:07:34.218210 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 12 00:07:34.218228 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 00:07:34.218249 kernel: GICv3: 96 SPIs implemented
Jul 12 00:07:34.218267 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 00:07:34.218285 kernel: Root IRQ handler: gic_handle_irq
Jul 12 00:07:34.218302 kernel: GICv3: GICv3 features: 16 PPIs
Jul 12 00:07:34.218319 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jul 12 00:07:34.218336 kernel: ITS [mem 0x10080000-0x1009ffff]
Jul 12 00:07:34.218353 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jul 12 00:07:34.218371 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jul 12 00:07:34.218389 kernel: GICv3: using LPI property table @0x00000004000d0000
Jul 12 00:07:34.218406 kernel: ITS: Using hypervisor restricted LPI range [128]
Jul 12 00:07:34.218424 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jul 12 00:07:34.218441 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 12 00:07:34.218480 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jul 12 00:07:34.218505 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jul 12 00:07:34.218524 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jul 12 00:07:34.218541 kernel: Console: colour dummy device 80x25
Jul 12 00:07:34.218560 kernel: printk: console [tty1] enabled
Jul 12 00:07:34.218578 kernel: ACPI: Core revision 20230628
Jul 12 00:07:34.218596 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jul 12 00:07:34.218614 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:07:34.218632 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 12 00:07:34.218681 kernel: landlock: Up and running.
Jul 12 00:07:34.218701 kernel: SELinux: Initializing.
Jul 12 00:07:34.218719 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:07:34.218737 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:07:34.218755 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 12 00:07:34.218773 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 12 00:07:34.218791 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:07:34.218809 kernel: rcu: Max phase no-delay instances is 400.
Jul 12 00:07:34.218827 kernel: Platform MSI: ITS@0x10080000 domain created
Jul 12 00:07:34.218866 kernel: PCI/MSI: ITS@0x10080000 domain created
Jul 12 00:07:34.218887 kernel: Remapping and enabling EFI services.
Jul 12 00:07:34.218905 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:07:34.218922 kernel: Detected PIPT I-cache on CPU1
Jul 12 00:07:34.218940 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jul 12 00:07:34.218958 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jul 12 00:07:34.218976 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jul 12 00:07:34.218994 kernel: smp: Brought up 1 node, 2 CPUs
Jul 12 00:07:34.219011 kernel: SMP: Total of 2 processors activated.
Jul 12 00:07:34.219029 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 00:07:34.219053 kernel: CPU features: detected: 32-bit EL1 Support
Jul 12 00:07:34.219071 kernel: CPU features: detected: CRC32 instructions
Jul 12 00:07:34.219100 kernel: CPU: All CPU(s) started at EL1
Jul 12 00:07:34.219123 kernel: alternatives: applying system-wide alternatives
Jul 12 00:07:34.219141 kernel: devtmpfs: initialized
Jul 12 00:07:34.219160 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:07:34.219178 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 12 00:07:34.219197 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:07:34.219216 kernel: SMBIOS 3.0.0 present.
Jul 12 00:07:34.219239 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jul 12 00:07:34.219257 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:07:34.219276 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 00:07:34.219295 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 00:07:34.219314 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 00:07:34.219333 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:07:34.219351 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1
Jul 12 00:07:34.219374 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:07:34.219393 kernel: cpuidle: using governor menu
Jul 12 00:07:34.219411 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 00:07:34.219430 kernel: ASID allocator initialised with 65536 entries
Jul 12 00:07:34.219448 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:07:34.219482 kernel: Serial: AMBA PL011 UART driver
Jul 12 00:07:34.219528 kernel: Modules: 17488 pages in range for non-PLT usage
Jul 12 00:07:34.219550 kernel: Modules: 509008 pages in range for PLT usage
Jul 12 00:07:34.219569 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:07:34.219593 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 12 00:07:34.219613 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 00:07:34.219631 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 12 00:07:34.219650 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:07:34.219668 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 12 00:07:34.219687 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 00:07:34.219706 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 12 00:07:34.219724 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:07:34.219742 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:07:34.219765 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:07:34.219784 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:07:34.219802 kernel: ACPI: Interpreter enabled
Jul 12 00:07:34.219821 kernel: ACPI: Using GIC for interrupt routing
Jul 12 00:07:34.219839 kernel: ACPI: MCFG table detected, 1 entries
Jul 12 00:07:34.219857 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jul 12 00:07:34.220164 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 00:07:34.220405 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 12 00:07:34.220640 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 12 00:07:34.220837 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jul 12 00:07:34.221032 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jul 12 00:07:34.221058 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jul 12 00:07:34.221078 kernel: acpiphp: Slot [1] registered
Jul 12 00:07:34.221096 kernel: acpiphp: Slot [2] registered
Jul 12 00:07:34.221115 kernel: acpiphp: Slot [3] registered
Jul 12 00:07:34.221133 kernel: acpiphp: Slot [4] registered
Jul 12 00:07:34.221179 kernel: acpiphp: Slot [5] registered
Jul 12 00:07:34.221204 kernel: acpiphp: Slot [6] registered
Jul 12 00:07:34.221223 kernel: acpiphp: Slot [7] registered
Jul 12 00:07:34.221241 kernel: acpiphp: Slot [8] registered
Jul 12 00:07:34.221259 kernel: acpiphp: Slot [9] registered
Jul 12 00:07:34.221278 kernel: acpiphp: Slot [10] registered
Jul 12 00:07:34.221296 kernel: acpiphp: Slot [11] registered
Jul 12 00:07:34.221314 kernel: acpiphp: Slot [12] registered
Jul 12 00:07:34.221332 kernel: acpiphp: Slot [13] registered
Jul 12 00:07:34.222551 kernel: acpiphp: Slot [14] registered
Jul 12 00:07:34.222582 kernel: acpiphp: Slot [15] registered
Jul 12 00:07:34.222601 kernel: acpiphp: Slot [16] registered
Jul 12 00:07:34.222619 kernel: acpiphp: Slot [17] registered
Jul 12 00:07:34.222637 kernel: acpiphp: Slot [18] registered
Jul 12 00:07:34.222656 kernel: acpiphp: Slot [19] registered
Jul 12 00:07:34.222674 kernel: acpiphp: Slot [20] registered
Jul 12 00:07:34.222692 kernel: acpiphp: Slot [21] registered
Jul 12 00:07:34.222710 kernel: acpiphp: Slot [22] registered
Jul 12 00:07:34.222729 kernel: acpiphp: Slot [23] registered
Jul 12 00:07:34.222751 kernel: acpiphp: Slot [24] registered
Jul 12 00:07:34.222770 kernel: acpiphp: Slot [25] registered
Jul 12 00:07:34.222789 kernel: acpiphp: Slot [26] registered
Jul 12 00:07:34.222807 kernel: acpiphp: Slot [27] registered
Jul 12 00:07:34.222825 kernel: acpiphp: Slot [28] registered
Jul 12 00:07:34.222859 kernel: acpiphp: Slot [29] registered
Jul 12 00:07:34.222883 kernel: acpiphp: Slot [30] registered
Jul 12 00:07:34.222902 kernel: acpiphp: Slot [31] registered
Jul 12 00:07:34.222921 kernel: PCI host bridge to bus 0000:00
Jul 12 00:07:34.223168 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jul 12 00:07:34.223366 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 12 00:07:34.224646 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jul 12 00:07:34.224861 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jul 12 00:07:34.225122 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jul 12 00:07:34.225386 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jul 12 00:07:34.227899 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jul 12 00:07:34.228163 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 12 00:07:34.228371 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jul 12 00:07:34.228689 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 12 00:07:34.228926 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 12 00:07:34.229128 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jul 12 00:07:34.229328 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jul 12 00:07:34.232297 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jul 12 00:07:34.232543 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 12 00:07:34.232750 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jul 12 00:07:34.232954 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jul 12 00:07:34.233169 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jul 12 00:07:34.233384 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jul 12 00:07:34.233634 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jul 12 00:07:34.233836 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jul 12 00:07:34.234036 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 12 00:07:34.234226 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jul 12 00:07:34.234252 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 12 00:07:34.234272 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 12 00:07:34.234291 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 12 00:07:34.234311 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 12 00:07:34.234330 kernel: iommu: Default domain type: Translated
Jul 12 00:07:34.234349 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 00:07:34.234374 kernel: efivars: Registered efivars operations
Jul 12 00:07:34.234395 kernel: vgaarb: loaded
Jul 12 00:07:34.234415 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 00:07:34.234434 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:07:34.234454 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:07:34.235542 kernel: pnp: PnP ACPI init
Jul 12 00:07:34.235834 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jul 12 00:07:34.235863 kernel: pnp: PnP ACPI: found 1 devices
Jul 12 00:07:34.235892 kernel: NET: Registered PF_INET protocol family
Jul 12 00:07:34.235911 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:07:34.235930 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:07:34.235949 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:07:34.235968 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:07:34.235987 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 12 00:07:34.236005 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:07:34.236025 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:07:34.236043 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:07:34.236067 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:07:34.236086 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:07:34.236104 kernel: kvm [1]: HYP mode not available
Jul 12 00:07:34.236123 kernel: Initialise system trusted keyrings
Jul 12 00:07:34.236143 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:07:34.236193 kernel: Key type asymmetric registered
Jul 12 00:07:34.236233 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:07:34.236256 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 12 00:07:34.236276 kernel: io scheduler mq-deadline registered
Jul 12 00:07:34.236301 kernel: io scheduler kyber registered
Jul 12 00:07:34.236320 kernel: io scheduler bfq registered
Jul 12 00:07:34.236582 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jul 12 00:07:34.236613 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 12 00:07:34.236633 kernel: ACPI: button: Power Button [PWRB]
Jul 12 00:07:34.236653 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jul 12 00:07:34.236672 kernel: ACPI: button: Sleep Button [SLPB]
Jul 12 00:07:34.236691 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:07:34.236719 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 12 00:07:34.236927 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jul 12 00:07:34.236955 kernel: printk: console [ttyS0] disabled
Jul 12 00:07:34.236975 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jul 12 00:07:34.236993 kernel: printk: console [ttyS0] enabled
Jul 12 00:07:34.237012 kernel: printk: bootconsole [uart0] disabled
Jul 12 00:07:34.237031 kernel: thunder_xcv, ver 1.0
Jul 12 00:07:34.237049 kernel: thunder_bgx, ver 1.0
Jul 12 00:07:34.237067 kernel: nicpf, ver 1.0
Jul 12 00:07:34.237090 kernel: nicvf, ver 1.0
Jul 12 00:07:34.237302 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 12 00:07:34.240613 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:07:33 UTC (1752278853)
Jul 12 00:07:34.240663 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 12 00:07:34.240684 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jul 12 00:07:34.240704 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 12 00:07:34.240722 kernel: watchdog: Hard watchdog permanently disabled
Jul 12 00:07:34.240741 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:07:34.240771 kernel: Segment Routing with IPv6
Jul 12 00:07:34.240790 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:07:34.240809 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:07:34.240827 kernel: Key type dns_resolver registered
Jul 12 00:07:34.240846 kernel: registered taskstats version 1
Jul 12 00:07:34.240864 kernel: Loading compiled-in X.509 certificates
Jul 12 00:07:34.240883 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: ed6b382df707adbd5942eaa048a1031fe26cbf15'
Jul 12 00:07:34.240901 kernel: Key type .fscrypt registered
Jul 12 00:07:34.240919 kernel: Key type fscrypt-provisioning registered
Jul 12 00:07:34.240938 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:07:34.240961 kernel: ima: Allocated hash algorithm: sha1
Jul 12 00:07:34.240980 kernel: ima: No architecture policies found
Jul 12 00:07:34.240998 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 12 00:07:34.241017 kernel: clk: Disabling unused clocks
Jul 12 00:07:34.241035 kernel: Freeing unused kernel memory: 39424K
Jul 12 00:07:34.241054 kernel: Run /init as init process
Jul 12 00:07:34.241072 kernel: with arguments:
Jul 12 00:07:34.241090 kernel: /init
Jul 12 00:07:34.241108 kernel: with environment:
Jul 12 00:07:34.241131 kernel: HOME=/
Jul 12 00:07:34.241150 kernel: TERM=linux
Jul 12 00:07:34.241168 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 00:07:34.241192 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 12 00:07:34.241216 systemd[1]: Detected virtualization amazon.
Jul 12 00:07:34.241237 systemd[1]: Detected architecture arm64.
Jul 12 00:07:34.241257 systemd[1]: Running in initrd.
Jul 12 00:07:34.241281 systemd[1]: No hostname configured, using default hostname.
Jul 12 00:07:34.241302 systemd[1]: Hostname set to .
Jul 12 00:07:34.241322 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:07:34.241342 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:07:34.241363 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:07:34.241384 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:07:34.241405 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 12 00:07:34.241426 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:07:34.241451 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 12 00:07:34.241494 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 12 00:07:34.241521 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 12 00:07:34.241542 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 12 00:07:34.241563 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:07:34.241584 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:07:34.241604 systemd[1]: Reached target paths.target - Path Units.
Jul 12 00:07:34.241631 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:07:34.241652 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:07:34.241672 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 00:07:34.241692 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:07:34.241712 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:07:34.241733 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 12 00:07:34.241753 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 12 00:07:34.241773 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:07:34.241794 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:07:34.241819 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:07:34.241840 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:07:34.241860 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 12 00:07:34.241881 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:07:34.241901 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 12 00:07:34.241921 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:07:34.241941 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:07:34.241962 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:07:34.241986 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:07:34.242007 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 12 00:07:34.242027 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:07:34.242089 systemd-journald[251]: Collecting audit messages is disabled.
Jul 12 00:07:34.242138 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:07:34.242161 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 00:07:34.242182 systemd-journald[251]: Journal started
Jul 12 00:07:34.242224 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2565c67a691fec34abf63999cd9245) is 8.0M, max 75.3M, 67.3M free.
Jul 12 00:07:34.228601 systemd-modules-load[252]: Inserted module 'overlay'
Jul 12 00:07:34.247507 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:07:34.256316 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:07:34.275072 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:07:34.277387 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jul 12 00:07:34.282081 kernel: Bridge firewalling registered
Jul 12 00:07:34.283059 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:07:34.295811 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:07:34.296419 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:07:34.296862 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:07:34.318801 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:07:34.330142 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:07:34.356617 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:07:34.363626 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:07:34.371020 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:07:34.380738 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 12 00:07:34.391360 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:07:34.403749 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:07:34.426303 dracut-cmdline[286]: dracut-dracut-053
Jul 12 00:07:34.434321 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:07:34.491519 systemd-resolved[290]: Positive Trust Anchors:
Jul 12 00:07:34.493682 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:07:34.496865 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:07:34.578517 kernel: SCSI subsystem initialized
Jul 12 00:07:34.586505 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:07:34.599525 kernel: iscsi: registered transport (tcp)
Jul 12 00:07:34.621935 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:07:34.622008 kernel: QLogic iSCSI HBA Driver
Jul 12 00:07:34.709505 kernel: random: crng init done
Jul 12 00:07:34.709859 systemd-resolved[290]: Defaulting to hostname 'linux'.
Jul 12 00:07:34.714134 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:07:34.722099 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:07:34.739952 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:07:34.747737 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 12 00:07:34.791273 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:07:34.791348 kernel: device-mapper: uevent: version 1.0.3
Jul 12 00:07:34.791376 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 12 00:07:34.858516 kernel: raid6: neonx8 gen() 6766 MB/s
Jul 12 00:07:34.875501 kernel: raid6: neonx4 gen() 6581 MB/s
Jul 12 00:07:34.892500 kernel: raid6: neonx2 gen() 5460 MB/s
Jul 12 00:07:34.909501 kernel: raid6: neonx1 gen() 3974 MB/s
Jul 12 00:07:34.926500 kernel: raid6: int64x8 gen() 3818 MB/s
Jul 12 00:07:34.943501 kernel: raid6: int64x4 gen() 3720 MB/s
Jul 12 00:07:34.960500 kernel: raid6: int64x2 gen() 3600 MB/s
Jul 12 00:07:34.978469 kernel: raid6: int64x1 gen() 2753 MB/s
Jul 12 00:07:34.978501 kernel: raid6: using algorithm neonx8 gen() 6766 MB/s
Jul 12 00:07:34.996456 kernel: raid6: .... xor() 4777 MB/s, rmw enabled
Jul 12 00:07:34.996513 kernel: raid6: using neon recovery algorithm
Jul 12 00:07:35.005419 kernel: xor: measuring software checksum speed
Jul 12 00:07:35.005495 kernel: 8regs : 10970 MB/sec
Jul 12 00:07:35.007842 kernel: 32regs : 11415 MB/sec
Jul 12 00:07:35.007879 kernel: arm64_neon : 9164 MB/sec
Jul 12 00:07:35.007905 kernel: xor: using function: 32regs (11415 MB/sec)
Jul 12 00:07:35.093536 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 12 00:07:35.112668 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:07:35.122797 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:07:35.167192 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jul 12 00:07:35.175197 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:07:35.193760 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 12 00:07:35.228799 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Jul 12 00:07:35.285121 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:07:35.296770 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:07:35.414785 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:07:35.429031 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 12 00:07:35.468544 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:07:35.476052 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:07:35.485613 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:07:35.488801 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:07:35.502956 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 12 00:07:35.548601 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:07:35.622420 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 12 00:07:35.622520 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jul 12 00:07:35.625379 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:07:35.627956 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:07:35.636271 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 12 00:07:35.636612 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 12 00:07:35.634045 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:07:35.642233 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:07:35.660051 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:cc:ee:67:d4:79
Jul 12 00:07:35.642548 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:07:35.645162 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:07:35.675289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:07:35.681184 (udev-worker)[542]: Network interface NamePolicy= disabled on kernel command line.
Jul 12 00:07:35.718554 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:07:35.729499 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jul 12 00:07:35.731506 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 12 00:07:35.738884 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:07:35.746505 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 12 00:07:35.760631 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 12 00:07:35.760699 kernel: GPT:9289727 != 16777215
Jul 12 00:07:35.760737 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 12 00:07:35.760764 kernel: GPT:9289727 != 16777215
Jul 12 00:07:35.762385 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 12 00:07:35.764293 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 12 00:07:35.777229 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:07:35.869920 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (533)
Jul 12 00:07:35.879559 kernel: BTRFS: device fsid 394cecf3-1fd4-438a-991e-dc2b4121da0c devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (542)
Jul 12 00:07:35.963388 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 12 00:07:35.986229 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 12 00:07:36.015551 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 12 00:07:36.031830 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 12 00:07:36.031982 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 12 00:07:36.054824 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 12 00:07:36.068720 disk-uuid[662]: Primary Header is updated.
Jul 12 00:07:36.068720 disk-uuid[662]: Secondary Entries is updated.
Jul 12 00:07:36.068720 disk-uuid[662]: Secondary Header is updated.
Jul 12 00:07:36.080500 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 12 00:07:36.088515 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 12 00:07:36.108514 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 12 00:07:37.100575 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 12 00:07:37.102962 disk-uuid[663]: The operation has completed successfully.
Jul 12 00:07:37.274685 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 12 00:07:37.274882 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 12 00:07:37.334790 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 12 00:07:37.347410 sh[1006]: Success
Jul 12 00:07:37.373542 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 12 00:07:37.488218 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 12 00:07:37.500676 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 12 00:07:37.513541 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 12 00:07:37.542340 kernel: BTRFS info (device dm-0): first mount of filesystem 394cecf3-1fd4-438a-991e-dc2b4121da0c
Jul 12 00:07:37.542401 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:07:37.544372 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 12 00:07:37.544408 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 12 00:07:37.546920 kernel: BTRFS info (device dm-0): using free space tree
Jul 12 00:07:37.660509 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 12 00:07:37.696634 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 12 00:07:37.701021 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 12 00:07:37.713710 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 12 00:07:37.724852 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 12 00:07:37.744843 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:07:37.744903 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:07:37.744936 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 12 00:07:37.764423 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 12 00:07:37.780580 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 12 00:07:37.786030 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:07:37.795814 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 12 00:07:37.806819 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 12 00:07:37.906217 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:07:37.918814 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:07:37.978091 systemd-networkd[1210]: lo: Link UP
Jul 12 00:07:37.978604 systemd-networkd[1210]: lo: Gained carrier
Jul 12 00:07:37.981178 systemd-networkd[1210]: Enumeration completed
Jul 12 00:07:37.981591 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:07:37.983838 systemd-networkd[1210]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:07:37.983845 systemd-networkd[1210]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:07:37.988376 systemd[1]: Reached target network.target - Network.
Jul 12 00:07:37.993977 systemd-networkd[1210]: eth0: Link UP
Jul 12 00:07:37.993985 systemd-networkd[1210]: eth0: Gained carrier
Jul 12 00:07:37.994003 systemd-networkd[1210]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:07:38.030564 systemd-networkd[1210]: eth0: DHCPv4 address 172.31.28.146/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 12 00:07:38.288199 ignition[1131]: Ignition 2.19.0
Jul 12 00:07:38.288770 ignition[1131]: Stage: fetch-offline
Jul 12 00:07:38.291236 ignition[1131]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:38.291260 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 12 00:07:38.291866 ignition[1131]: Ignition finished successfully
Jul 12 00:07:38.300007 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:07:38.310764 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 12 00:07:38.343888 ignition[1221]: Ignition 2.19.0
Jul 12 00:07:38.345875 ignition[1221]: Stage: fetch
Jul 12 00:07:38.346581 ignition[1221]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:38.346608 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 12 00:07:38.346756 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 12 00:07:38.363026 ignition[1221]: PUT result: OK
Jul 12 00:07:38.366197 ignition[1221]: parsed url from cmdline: ""
Jul 12 00:07:38.366325 ignition[1221]: no config URL provided
Jul 12 00:07:38.366345 ignition[1221]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:07:38.366962 ignition[1221]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:07:38.367009 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 12 00:07:38.372511 ignition[1221]: PUT result: OK
Jul 12 00:07:38.372593 ignition[1221]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 12 00:07:38.378039 ignition[1221]: GET result: OK
Jul 12 00:07:38.378215 ignition[1221]: parsing config with SHA512: a8e577b95374e09e1432c62c88c03e70ea71cdec584cbf4cfccb29029653d0d89f63271ed4d6b95c26ea6b009c411862e847be257780bc62544e6dbe27e14294
Jul 12 00:07:38.387558 unknown[1221]: fetched base config from "system"
Jul 12 00:07:38.388966 unknown[1221]: fetched base config from "system"
Jul 12 00:07:38.390149 ignition[1221]: fetch: fetch complete
Jul 12 00:07:38.389347 unknown[1221]: fetched user config from "aws"
Jul 12 00:07:38.390162 ignition[1221]: fetch: fetch passed
Jul 12 00:07:38.395301 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 12 00:07:38.390247 ignition[1221]: Ignition finished successfully
Jul 12 00:07:38.410774 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 12 00:07:38.436774 ignition[1227]: Ignition 2.19.0
Jul 12 00:07:38.436803 ignition[1227]: Stage: kargs
Jul 12 00:07:38.437432 ignition[1227]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:38.437459 ignition[1227]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 12 00:07:38.437675 ignition[1227]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 12 00:07:38.443493 ignition[1227]: PUT result: OK
Jul 12 00:07:38.452369 ignition[1227]: kargs: kargs passed
Jul 12 00:07:38.452711 ignition[1227]: Ignition finished successfully
Jul 12 00:07:38.462531 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 12 00:07:38.472945 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 12 00:07:38.497869 ignition[1233]: Ignition 2.19.0
Jul 12 00:07:38.497891 ignition[1233]: Stage: disks
Jul 12 00:07:38.499296 ignition[1233]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:38.499322 ignition[1233]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 12 00:07:38.499544 ignition[1233]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 12 00:07:38.508615 ignition[1233]: PUT result: OK
Jul 12 00:07:38.519888 ignition[1233]: disks: disks passed
Jul 12 00:07:38.520223 ignition[1233]: Ignition finished successfully
Jul 12 00:07:38.529169 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 12 00:07:38.534751 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 12 00:07:38.537337 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 12 00:07:38.540046 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:07:38.544428 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:07:38.544560 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:07:38.559806 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 12 00:07:38.605108 systemd-fsck[1241]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 12 00:07:38.611387 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 12 00:07:38.632246 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 12 00:07:38.718504 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 44c8362f-9431-4909-bc9a-f90e514bd0e9 r/w with ordered data mode. Quota mode: none.
Jul 12 00:07:38.719798 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 12 00:07:38.723855 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:07:38.741684 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:07:38.746685 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 12 00:07:38.755428 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 12 00:07:38.755552 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 12 00:07:38.763967 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:07:38.778496 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1260)
Jul 12 00:07:38.783116 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:07:38.783182 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:07:38.784568 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 12 00:07:38.785420 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 12 00:07:38.795755 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 12 00:07:38.809514 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 12 00:07:38.811813 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:07:39.264894 initrd-setup-root[1284]: cut: /sysroot/etc/passwd: No such file or directory
Jul 12 00:07:39.286842 initrd-setup-root[1291]: cut: /sysroot/etc/group: No such file or directory
Jul 12 00:07:39.295999 initrd-setup-root[1298]: cut: /sysroot/etc/shadow: No such file or directory
Jul 12 00:07:39.305008 initrd-setup-root[1305]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 12 00:07:39.399604 systemd-networkd[1210]: eth0: Gained IPv6LL
Jul 12 00:07:39.695967 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 12 00:07:39.706712 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 12 00:07:39.718897 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 12 00:07:39.730433 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 12 00:07:39.734176 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:07:39.785582 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 12 00:07:39.787432 ignition[1373]: INFO : Ignition 2.19.0
Jul 12 00:07:39.789450 ignition[1373]: INFO : Stage: mount
Jul 12 00:07:39.789450 ignition[1373]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:39.789450 ignition[1373]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 12 00:07:39.789450 ignition[1373]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 12 00:07:39.802650 ignition[1373]: INFO : PUT result: OK
Jul 12 00:07:39.807340 ignition[1373]: INFO : mount: mount passed
Jul 12 00:07:39.807340 ignition[1373]: INFO : Ignition finished successfully
Jul 12 00:07:39.811046 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 12 00:07:39.822753 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 12 00:07:39.847833 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:07:39.867508 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1385)
Jul 12 00:07:39.871430 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:07:39.871490 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:07:39.871520 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 12 00:07:39.878509 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 12 00:07:39.881635 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:07:39.924372 ignition[1402]: INFO : Ignition 2.19.0
Jul 12 00:07:39.924372 ignition[1402]: INFO : Stage: files
Jul 12 00:07:39.932623 ignition[1402]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:07:39.932623 ignition[1402]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 12 00:07:39.932623 ignition[1402]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 12 00:07:39.940248 ignition[1402]: INFO : PUT result: OK
Jul 12 00:07:39.944970 ignition[1402]: DEBUG : files: compiled without relabeling support, skipping
Jul 12 00:07:39.947896 ignition[1402]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 12 00:07:39.947896 ignition[1402]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 12 00:07:39.980281 ignition[1402]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 12 00:07:39.983383 ignition[1402]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 12 00:07:39.983383 ignition[1402]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 12 00:07:39.982495 unknown[1402]: wrote ssh authorized keys file for user: core
Jul 12 00:07:39.992919 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 12 00:07:39.992919 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 12 00:07:40.081106 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 12 00:07:40.216536 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 12 00:07:40.216536 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 00:07:40.226125 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 00:07:40.226125 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:07:40.226125 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:07:40.226125 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:07:40.226125 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:07:40.226125 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:07:40.226125 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:07:40.252881 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:07:40.252881 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:07:40.252881 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 12 00:07:40.252881 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 12 00:07:40.252881 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 12 00:07:40.252881 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 12 00:07:40.688805 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 12 00:07:41.065427 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 12 00:07:41.070361 ignition[1402]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 12 00:07:41.070361 ignition[1402]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:07:41.070361 ignition[1402]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:07:41.070361 ignition[1402]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 12 00:07:41.070361 ignition[1402]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 00:07:41.070361 ignition[1402]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 00:07:41.070361 ignition[1402]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:07:41.070361 ignition[1402]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:07:41.070361 ignition[1402]: INFO : files: files passed
Jul 12 00:07:41.070361 ignition[1402]: INFO : Ignition finished successfully
Jul 12 00:07:41.101454 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 12 00:07:41.115850 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 12 00:07:41.123743 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 12 00:07:41.137801 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 00:07:41.140147 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 00:07:41.161569 initrd-setup-root-after-ignition[1430]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:07:41.161569 initrd-setup-root-after-ignition[1430]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:07:41.169662 initrd-setup-root-after-ignition[1434]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:07:41.175587 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:07:41.178895 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 00:07:41.195753 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 00:07:41.246603 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 12 00:07:41.246898 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 12 00:07:41.253690 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 12 00:07:41.257603 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 12 00:07:41.262386 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 12 00:07:41.275831 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 12 00:07:41.322326 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 12 00:07:41.336789 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 12 00:07:41.368264 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:07:41.373659 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:07:41.376563 systemd[1]: Stopped target timers.target - Timer Units. Jul 12 00:07:41.381275 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 12 00:07:41.381535 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 12 00:07:41.392996 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 12 00:07:41.395753 systemd[1]: Stopped target basic.target - Basic System. Jul 12 00:07:41.399396 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 12 00:07:41.402005 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 00:07:41.410791 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 12 00:07:41.413590 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 12 00:07:41.420334 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 00:07:41.423345 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 12 00:07:41.430115 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 12 00:07:41.434513 systemd[1]: Stopped target swap.target - Swaps. Jul 12 00:07:41.436455 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 12 00:07:41.436705 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 12 00:07:41.445269 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:07:41.447771 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:07:41.451237 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 12 00:07:41.452512 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:07:41.455851 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 12 00:07:41.456438 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 12 00:07:41.466605 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 12 00:07:41.466869 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 00:07:41.471249 systemd[1]: ignition-files.service: Deactivated successfully. Jul 12 00:07:41.471452 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 12 00:07:41.486200 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 12 00:07:41.493054 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
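[Editor's note] The Ignition "files" stage above is driven entirely by the instance's user data: Ignition 2.19.0 authenticates to the metadata service, provisions the "core" user and its SSH keys, fetches remote payloads, and installs and presets prepare-helm.service. A rough Butane sketch that would compile (via butane) into Ignition JSON producing this run; only the paths and URLs are taken from the log, while the key, unit body, and update.conf contents are hypothetical, and install.sh plus the three manifest files are elided:

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA...            # hypothetical placeholder key
    storage:
      files:
        - path: /opt/helm-v3.17.3-linux-arm64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw
          contents:
            source: https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw
        - path: /etc/flatcar/update.conf     # contents not visible in the log
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true                      # matches "setting preset to enabled"
          contents: |
            [Unit]
            Description=Unpack helm to /opt/bin
            ConditionPathExists=!/opt/bin/helm
            [Service]
            Type=oneshot
            RemainAfterExit=true
            ExecStartPre=/usr/bin/mkdir -p /opt/bin
            ExecStart=/usr/bin/tar -v --strip-components=1 -C /opt/bin -xf /opt/helm-v3.17.3-linux-arm64.tar.gz
            [Install]
            WantedBy=multi-user.target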
Jul 12 00:07:41.499043 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 12 00:07:41.499338 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:07:41.505567 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 12 00:07:41.505911 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 12 00:07:41.530835 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 12 00:07:41.532792 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 12 00:07:41.550901 ignition[1454]: INFO : Ignition 2.19.0 Jul 12 00:07:41.550901 ignition[1454]: INFO : Stage: umount Jul 12 00:07:41.550901 ignition[1454]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:07:41.550901 ignition[1454]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:07:41.550901 ignition[1454]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:07:41.563697 ignition[1454]: INFO : PUT result: OK Jul 12 00:07:41.561829 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 12 00:07:41.576193 ignition[1454]: INFO : umount: umount passed Jul 12 00:07:41.576193 ignition[1454]: INFO : Ignition finished successfully Jul 12 00:07:41.582875 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 12 00:07:41.584456 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 12 00:07:41.589460 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 12 00:07:41.589673 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 12 00:07:41.596628 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 12 00:07:41.596777 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 12 00:07:41.604312 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 12 00:07:41.604415 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 12 00:07:41.607095 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 12 00:07:41.607176 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 12 00:07:41.610943 systemd[1]: Stopped target network.target - Network. Jul 12 00:07:41.614740 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 12 00:07:41.614846 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 00:07:41.617703 systemd[1]: Stopped target paths.target - Path Units. Jul 12 00:07:41.621226 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 12 00:07:41.623527 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:07:41.627893 systemd[1]: Stopped target slices.target - Slice Units. Jul 12 00:07:41.627973 systemd[1]: Stopped target sockets.target - Socket Units. Jul 12 00:07:41.628368 systemd[1]: iscsid.socket: Deactivated successfully. Jul 12 00:07:41.628443 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 00:07:41.629056 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 12 00:07:41.629123 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 00:07:41.629387 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 12 00:07:41.629662 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 12 00:07:41.642015 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 12 00:07:41.642188 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
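[Editor's note] Both Ignition stages ("files" earlier, "umount" here) open with PUT http://169.254.169.254/latest/api/token: the IMDSv2 handshake, in which a session token must be obtained with a PUT before any metadata can be read. The same exchange by hand (the 6-hour TTL is just a common choice, not something the log shows):

    # IMDSv2: fetch a session token, then present it on every metadata read
    TOKEN=$(curl -sX PUT http://169.254.169.254/latest/api/token \
            -H 'X-aws-ec2-metadata-token-ttl-seconds: 21600')
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
            http://169.254.169.254/latest/meta-data/instance-id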
Jul 12 00:07:41.653298 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 12 00:07:41.653531 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 12 00:07:41.676247 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 12 00:07:41.678639 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 12 00:07:41.693584 systemd-networkd[1210]: eth0: DHCPv6 lease lost Jul 12 00:07:41.697419 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 12 00:07:41.699605 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 12 00:07:41.704412 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 12 00:07:41.704641 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 12 00:07:41.714457 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 12 00:07:41.715604 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:07:41.726756 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 12 00:07:41.730826 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 12 00:07:41.731077 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 00:07:41.742100 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:07:41.743106 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:07:41.746598 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 12 00:07:41.746702 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 12 00:07:41.749136 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 12 00:07:41.749224 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:07:41.753663 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:07:41.783116 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 00:07:41.783636 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:07:41.793296 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 12 00:07:41.793407 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 12 00:07:41.795535 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 12 00:07:41.795816 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:07:41.796096 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 00:07:41.796181 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 12 00:07:41.796854 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 12 00:07:41.796930 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 12 00:07:41.802667 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:07:41.807117 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:07:41.815055 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 12 00:07:41.815859 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 12 00:07:41.818628 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:07:41.824584 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Jul 12 00:07:41.824894 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 00:07:41.825537 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 12 00:07:41.829879 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:07:41.833596 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:07:41.833716 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:07:41.860808 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 12 00:07:41.861024 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 12 00:07:41.867912 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 12 00:07:41.868249 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 12 00:07:41.874335 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 12 00:07:41.884824 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 12 00:07:41.911794 systemd[1]: Switching root. Jul 12 00:07:41.958894 systemd-journald[251]: Journal stopped Jul 12 00:07:44.548097 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Jul 12 00:07:44.548248 kernel: SELinux: policy capability network_peer_controls=1 Jul 12 00:07:44.548292 kernel: SELinux: policy capability open_perms=1 Jul 12 00:07:44.548323 kernel: SELinux: policy capability extended_socket_class=1 Jul 12 00:07:44.548352 kernel: SELinux: policy capability always_check_network=0 Jul 12 00:07:44.548381 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 12 00:07:44.548412 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 12 00:07:44.548442 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 12 00:07:44.548515 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 12 00:07:44.548549 kernel: audit: type=1403 audit(1752278862.501:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 12 00:07:44.548583 systemd[1]: Successfully loaded SELinux policy in 87.957ms. Jul 12 00:07:44.548628 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.928ms. Jul 12 00:07:44.548664 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 12 00:07:44.548696 systemd[1]: Detected virtualization amazon. Jul 12 00:07:44.548728 systemd[1]: Detected architecture arm64. Jul 12 00:07:44.548758 systemd[1]: Detected first boot. Jul 12 00:07:44.548787 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:07:44.548825 zram_generator::config[1497]: No configuration found. Jul 12 00:07:44.548859 systemd[1]: Populated /etc with preset unit settings. Jul 12 00:07:44.548892 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 12 00:07:44.548924 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 12 00:07:44.548959 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 12 00:07:44.548995 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 12 00:07:44.549025 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
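[Editor's note] systemd 255 reports "Detected first boot" because /etc/machine-id is still unpopulated at this point, and "Initializing machine ID from VM UUID" because, under a recognized hypervisor, it typically derives the ID from the SMBIOS product UUID rather than generating a random one. A quick way to compare the two after boot (assuming the DMI path is exposed on this aarch64 instance):

    # as root: the persisted machine ID and the VM UUID it was seeded from
    cat /etc/machine-id
    cat /sys/class/dmi/id/product_uuid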
Jul 12 00:07:44.549058 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 12 00:07:44.549093 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 12 00:07:44.549127 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 12 00:07:44.549158 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 12 00:07:44.549188 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 12 00:07:44.549228 systemd[1]: Created slice user.slice - User and Session Slice. Jul 12 00:07:44.549261 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:07:44.549293 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:07:44.549322 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 12 00:07:44.549352 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 12 00:07:44.549388 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 12 00:07:44.549418 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 00:07:44.549450 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 12 00:07:44.552570 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:07:44.552619 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 12 00:07:44.552653 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 12 00:07:44.552684 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 12 00:07:44.552735 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 12 00:07:44.552770 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:07:44.552804 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 00:07:44.552837 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:07:44.552870 systemd[1]: Reached target swap.target - Swaps. Jul 12 00:07:44.552901 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 12 00:07:44.552931 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 12 00:07:44.552966 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:07:44.553001 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:07:44.553033 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:07:44.553073 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 12 00:07:44.553107 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 12 00:07:44.553141 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 12 00:07:44.553172 systemd[1]: Mounting media.mount - External Media Directory... Jul 12 00:07:44.553205 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 12 00:07:44.553241 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 12 00:07:44.553273 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jul 12 00:07:44.553307 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 12 00:07:44.553356 systemd[1]: Reached target machines.target - Containers. Jul 12 00:07:44.553389 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 12 00:07:44.553423 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:07:44.553510 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:07:44.553553 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 12 00:07:44.553583 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:07:44.553613 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:07:44.553643 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:07:44.553673 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 12 00:07:44.553710 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:07:44.553740 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 12 00:07:44.553770 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 12 00:07:44.553798 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 12 00:07:44.553827 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 12 00:07:44.553857 systemd[1]: Stopped systemd-fsck-usr.service. Jul 12 00:07:44.553886 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 00:07:44.553918 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 00:07:44.553947 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 12 00:07:44.553980 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 12 00:07:44.554011 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 00:07:44.554042 systemd[1]: verity-setup.service: Deactivated successfully. Jul 12 00:07:44.554072 systemd[1]: Stopped verity-setup.service. Jul 12 00:07:44.554102 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 12 00:07:44.554133 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 12 00:07:44.554162 systemd[1]: Mounted media.mount - External Media Directory. Jul 12 00:07:44.554193 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 12 00:07:44.554228 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 12 00:07:44.554257 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 12 00:07:44.554286 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 12 00:07:44.554315 kernel: fuse: init (API version 7.39) Jul 12 00:07:44.554346 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:07:44.554379 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 00:07:44.554409 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 12 00:07:44.554441 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
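[Editor's note] The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop jobs are all instances of systemd's modprobe@.service template, which maps the instance name to a modprobe invocation. The upstream template is roughly (paraphrased sketch, not the exact unit shipped here):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %I

So starting modprobe@fuse.service runs "modprobe -abq fuse"; the leading "-" makes a missing module non-fatal, which is why each instance simply reports "Deactivated successfully" once the oneshot exits.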
Jul 12 00:07:44.556539 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:07:44.556606 kernel: loop: module loaded Jul 12 00:07:44.556642 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:07:44.556675 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:07:44.556707 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 00:07:44.556737 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 12 00:07:44.556776 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:07:44.556810 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:07:44.556840 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 00:07:44.556872 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 00:07:44.556902 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 12 00:07:44.556936 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 12 00:07:44.556969 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 12 00:07:44.557053 systemd-journald[1589]: Collecting audit messages is disabled. Jul 12 00:07:44.557108 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 12 00:07:44.557138 systemd-journald[1589]: Journal started Jul 12 00:07:44.557185 systemd-journald[1589]: Runtime Journal (/run/log/journal/ec2565c67a691fec34abf63999cd9245) is 8.0M, max 75.3M, 67.3M free. Jul 12 00:07:43.881616 systemd[1]: Queued start job for default target multi-user.target. Jul 12 00:07:43.937710 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 12 00:07:43.938492 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 12 00:07:44.568309 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 00:07:44.568389 kernel: ACPI: bus type drm_connector registered Jul 12 00:07:44.568425 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 00:07:44.578246 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 12 00:07:44.594522 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 12 00:07:44.607831 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 12 00:07:44.611513 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:07:44.625533 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 12 00:07:44.637847 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:07:44.654919 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 12 00:07:44.655008 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:07:44.672533 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:07:44.689335 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
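[Editor's note] The runtime journal above is opened in /run/log/journal with an 8.0M footprint against a 75.3M cap, both computed automatically from the size of the backing filesystem. The caps can be pinned instead with a journald drop-in; the values below are hypothetical:

    # /etc/systemd/journald.conf.d/50-size.conf (hypothetical drop-in)
    [Journal]
    RuntimeMaxUse=64M     # cap for the volatile journal in /run
    SystemMaxUse=195M     # cap for /var/log/journal once the journal is flushed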
Jul 12 00:07:44.703127 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 12 00:07:44.718342 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 00:07:44.712371 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:07:44.712817 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:07:44.716971 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 12 00:07:44.720009 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 12 00:07:44.724666 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 12 00:07:44.729604 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 12 00:07:44.768342 kernel: loop0: detected capacity change from 0 to 114432 Jul 12 00:07:44.785451 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 12 00:07:44.804015 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 12 00:07:44.815811 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 12 00:07:44.855650 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:07:44.859278 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:07:44.872602 systemd-journald[1589]: Time spent on flushing to /var/log/journal/ec2565c67a691fec34abf63999cd9245 is 45.750ms for 914 entries. Jul 12 00:07:44.872602 systemd-journald[1589]: System Journal (/var/log/journal/ec2565c67a691fec34abf63999cd9245) is 8.0M, max 195.6M, 187.6M free. Jul 12 00:07:44.928720 systemd-journald[1589]: Received client request to flush runtime journal. Jul 12 00:07:44.874940 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 12 00:07:44.887007 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 12 00:07:44.891909 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 12 00:07:44.906886 systemd-tmpfiles[1609]: ACLs are not supported, ignoring. Jul 12 00:07:44.906911 systemd-tmpfiles[1609]: ACLs are not supported, ignoring. Jul 12 00:07:44.938211 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 12 00:07:44.944322 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 00:07:44.951020 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 12 00:07:44.956833 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 12 00:07:44.974048 udevadm[1638]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 12 00:07:45.000677 kernel: loop1: detected capacity change from 0 to 114328 Jul 12 00:07:45.039137 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 12 00:07:45.053711 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 00:07:45.098286 systemd-tmpfiles[1648]: ACLs are not supported, ignoring. Jul 12 00:07:45.098342 systemd-tmpfiles[1648]: ACLs are not supported, ignoring. Jul 12 00:07:45.106541 kernel: loop2: detected capacity change from 0 to 52536 Jul 12 00:07:45.111210 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
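[Editor's note] "ACLs are not supported, ignoring" is systemd-tmpfiles noticing that this systemd build was compiled without ACL support (the feature string in the systemd 255 banner above includes -ACL), so tmpfiles.d entries that would set POSIX ACLs are skipped rather than failing the unit. A hypothetical fragment of the kind that triggers the message:

    # /etc/tmpfiles.d/journal-acl.conf (hypothetical)
    # Type Path             Mode UID  GID             Age Argument
    d      /var/log/journal 2755 root systemd-journal -   -
    a+     /var/log/journal -    -    -               -   d:group:adm:r-x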
Jul 12 00:07:45.227662 kernel: loop3: detected capacity change from 0 to 211168 Jul 12 00:07:45.381507 kernel: loop4: detected capacity change from 0 to 114432 Jul 12 00:07:45.396514 kernel: loop5: detected capacity change from 0 to 114328 Jul 12 00:07:45.412554 kernel: loop6: detected capacity change from 0 to 52536 Jul 12 00:07:45.431352 kernel: loop7: detected capacity change from 0 to 211168 Jul 12 00:07:45.456576 (sd-merge)[1654]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 12 00:07:45.457580 (sd-merge)[1654]: Merged extensions into '/usr'. Jul 12 00:07:45.464241 systemd[1]: Reloading requested from client PID 1608 ('systemd-sysext') (unit systemd-sysext.service)... Jul 12 00:07:45.464267 systemd[1]: Reloading... Jul 12 00:07:45.676578 zram_generator::config[1680]: No configuration found. Jul 12 00:07:46.020870 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:07:46.132031 systemd[1]: Reloading finished in 666 ms. Jul 12 00:07:46.172101 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 12 00:07:46.178159 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 12 00:07:46.192825 systemd[1]: Starting ensure-sysext.service... Jul 12 00:07:46.203847 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:07:46.223846 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:07:46.231939 systemd[1]: Reloading requested from client PID 1732 ('systemctl') (unit ensure-sysext.service)... Jul 12 00:07:46.231972 systemd[1]: Reloading... Jul 12 00:07:46.292574 ldconfig[1604]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 12 00:07:46.292045 systemd-tmpfiles[1733]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 12 00:07:46.292771 systemd-tmpfiles[1733]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 12 00:07:46.294595 systemd-tmpfiles[1733]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 00:07:46.295166 systemd-tmpfiles[1733]: ACLs are not supported, ignoring. Jul 12 00:07:46.295301 systemd-tmpfiles[1733]: ACLs are not supported, ignoring. Jul 12 00:07:46.308844 systemd-tmpfiles[1733]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:07:46.308872 systemd-tmpfiles[1733]: Skipping /boot Jul 12 00:07:46.353308 systemd-tmpfiles[1733]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:07:46.353341 systemd-tmpfiles[1733]: Skipping /boot Jul 12 00:07:46.359741 systemd-udevd[1734]: Using default interface naming scheme 'v255'. Jul 12 00:07:46.445496 zram_generator::config[1764]: No configuration found. Jul 12 00:07:46.580153 (udev-worker)[1794]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:07:46.827277 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
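[Editor's note] Two things of note in this reload: sd-merge overlays the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-ami' sysext images into /usr, and systemd warns that line 6 of docker.socket still points into the legacy /var/run tree, rewriting it to /run/docker.sock at load time. The lasting fix lives in the unit's [Socket] stanza; a sketch of the relevant lines, not the exact file shipped here:

    [Socket]
    ListenStream=/run/docker.sock   # was /var/run/docker.sock; /var/run is only a symlink to /run
    SocketMode=0660
    SocketUser=root
    SocketGroup=docker

Once boot completes, "systemd-sysext status" would list the four merged extension images.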
Jul 12 00:07:46.859520 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1774) Jul 12 00:07:47.014598 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 12 00:07:47.015086 systemd[1]: Reloading finished in 782 ms. Jul 12 00:07:47.047118 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:07:47.054363 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 12 00:07:47.066462 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:07:47.168407 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 12 00:07:47.174017 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 12 00:07:47.187061 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:07:47.206898 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 12 00:07:47.209937 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:07:47.226967 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 12 00:07:47.235571 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:07:47.244212 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:07:47.253062 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:07:47.255663 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:07:47.260999 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 12 00:07:47.268033 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 12 00:07:47.278146 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:07:47.294139 lvm[1933]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:07:47.295681 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 00:07:47.305981 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 12 00:07:47.314014 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:07:47.335391 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:07:47.346009 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:07:47.348623 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:07:47.348991 systemd[1]: Reached target time-set.target - System Time Set. Jul 12 00:07:47.384561 systemd[1]: Finished ensure-sysext.service. Jul 12 00:07:47.387635 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:07:47.387915 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:07:47.395814 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 12 00:07:47.400058 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jul 12 00:07:47.409179 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 12 00:07:47.413118 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:07:47.438958 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 12 00:07:47.442567 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:07:47.442978 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:07:47.471043 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:07:47.471497 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:07:47.474272 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:07:47.485301 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:07:47.486993 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:07:47.492082 lvm[1957]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:07:47.493731 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 12 00:07:47.499414 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:07:47.521587 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 12 00:07:47.537575 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 12 00:07:47.556815 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 12 00:07:47.570752 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 12 00:07:47.587453 augenrules[1974]: No rules Jul 12 00:07:47.588117 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 12 00:07:47.598355 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:07:47.623791 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 12 00:07:47.628613 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 12 00:07:47.679030 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:07:47.764305 systemd-networkd[1944]: lo: Link UP Jul 12 00:07:47.764327 systemd-networkd[1944]: lo: Gained carrier Jul 12 00:07:47.767063 systemd-networkd[1944]: Enumeration completed Jul 12 00:07:47.767237 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 00:07:47.770135 systemd-networkd[1944]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:07:47.770144 systemd-networkd[1944]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:07:47.772781 systemd-networkd[1944]: eth0: Link UP Jul 12 00:07:47.773074 systemd-networkd[1944]: eth0: Gained carrier Jul 12 00:07:47.773108 systemd-networkd[1944]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:07:47.777845 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
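[Editor's note] audit-rules.service finishes with augenrules reporting "No rules": the kernel audit subsystem is up, but an empty ruleset was compiled from /etc/audit/rules.d. A hypothetical rules file that would populate it on the next run:

    # /etc/audit/rules.d/10-exec.rules (hypothetical)
    -D                                                 # flush any existing rules
    -a always,exit -F arch=b64 -S execve -k exec-log   # record every execve() under key exec-log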
Jul 12 00:07:47.781294 systemd-resolved[1945]: Positive Trust Anchors: Jul 12 00:07:47.781331 systemd-resolved[1945]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:07:47.781394 systemd-resolved[1945]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:07:47.785624 systemd-networkd[1944]: eth0: DHCPv4 address 172.31.28.146/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 12 00:07:47.799848 systemd-resolved[1945]: Defaulting to hostname 'linux'. Jul 12 00:07:47.803208 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:07:47.805872 systemd[1]: Reached target network.target - Network. Jul 12 00:07:47.808922 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:07:47.811524 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:07:47.814130 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 12 00:07:47.816809 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 12 00:07:47.819985 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 12 00:07:47.822487 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 12 00:07:47.825192 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 12 00:07:47.827929 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 00:07:47.828107 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:07:47.830138 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:07:47.832895 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 12 00:07:47.838001 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 12 00:07:47.849651 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 12 00:07:47.852910 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 12 00:07:47.855722 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:07:47.857790 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:07:47.859813 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:07:47.859866 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:07:47.863207 systemd[1]: Starting containerd.service - containerd container runtime... Jul 12 00:07:47.870866 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 12 00:07:47.886817 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 12 00:07:47.905356 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 12 00:07:47.912098 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
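[Editor's note] eth0 matched /usr/lib/systemd/network/zz-default.network, Flatcar's lowest-priority catch-all (the "potentially unpredictable interface name" note just flags that it matched on the kernel-assigned name), and then acquired 172.31.28.146/20 from the subnet's DHCP server at 172.31.16.1. The stock file behaves roughly like this simplified sketch:

    # /usr/lib/systemd/network/zz-default.network (simplified sketch)
    [Match]
    Name=*

    [Network]
    DHCP=yes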
Jul 12 00:07:47.914617 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 12 00:07:47.931765 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 12 00:07:47.943635 jq[1998]: false Jul 12 00:07:47.940739 systemd[1]: Started ntpd.service - Network Time Service. Jul 12 00:07:47.946888 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 12 00:07:47.953192 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 12 00:07:47.966733 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 12 00:07:47.974846 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 12 00:07:47.984449 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 12 00:07:47.988457 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 00:07:47.991423 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 00:07:47.999766 systemd[1]: Starting update-engine.service - Update Engine... Jul 12 00:07:48.007715 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 12 00:07:48.013173 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 00:07:48.016559 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 12 00:07:48.027880 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 00:07:48.028414 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 12 00:07:48.069909 jq[2010]: true Jul 12 00:07:48.076875 dbus-daemon[1997]: [system] SELinux support is enabled Jul 12 00:07:48.084987 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 12 00:07:48.092832 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 00:07:48.092899 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 12 00:07:48.095744 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 00:07:48.095779 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 12 00:07:48.110423 dbus-daemon[1997]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1944 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 12 00:07:48.112184 dbus-daemon[1997]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 12 00:07:48.132099 jq[2027]: true Jul 12 00:07:48.128917 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 12 00:07:48.139719 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 00:07:48.141672 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 12 00:07:48.166186 tar[2021]: linux-arm64/LICENSE Jul 12 00:07:48.166186 tar[2021]: linux-arm64/helm Jul 12 00:07:48.202410 (ntainerd)[2033]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 12 00:07:48.223647 extend-filesystems[1999]: Found loop4 Jul 12 00:07:48.226502 extend-filesystems[1999]: Found loop5 Jul 12 00:07:48.230672 extend-filesystems[1999]: Found loop6 Jul 12 00:07:48.232625 extend-filesystems[1999]: Found loop7 Jul 12 00:07:48.232625 extend-filesystems[1999]: Found nvme0n1 Jul 12 00:07:48.240871 extend-filesystems[1999]: Found nvme0n1p1 Jul 12 00:07:48.240871 extend-filesystems[1999]: Found nvme0n1p2 Jul 12 00:07:48.240871 extend-filesystems[1999]: Found nvme0n1p3 Jul 12 00:07:48.240871 extend-filesystems[1999]: Found usr Jul 12 00:07:48.240871 extend-filesystems[1999]: Found nvme0n1p4 Jul 12 00:07:48.240871 extend-filesystems[1999]: Found nvme0n1p6 Jul 12 00:07:48.240871 extend-filesystems[1999]: Found nvme0n1p7 Jul 12 00:07:48.240871 extend-filesystems[1999]: Found nvme0n1p9 Jul 12 00:07:48.240871 extend-filesystems[1999]: Checking size of /dev/nvme0n1p9 Jul 12 00:07:48.275608 ntpd[2001]: ntpd 4.2.8p17@1.4004-o Fri Jul 11 22:05:17 UTC 2025 (1): Starting Jul 12 00:07:48.278563 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: ntpd 4.2.8p17@1.4004-o Fri Jul 11 22:05:17 UTC 2025 (1): Starting Jul 12 00:07:48.278563 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 12 00:07:48.278563 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: ---------------------------------------------------- Jul 12 00:07:48.278563 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: ntp-4 is maintained by Network Time Foundation, Jul 12 00:07:48.278563 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 12 00:07:48.278563 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: corporation. Support and training for ntp-4 are Jul 12 00:07:48.278563 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: available at https://www.nwtime.org/support Jul 12 00:07:48.278563 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: ---------------------------------------------------- Jul 12 00:07:48.275663 ntpd[2001]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 12 00:07:48.275684 ntpd[2001]: ---------------------------------------------------- Jul 12 00:07:48.275703 ntpd[2001]: ntp-4 is maintained by Network Time Foundation, Jul 12 00:07:48.275723 ntpd[2001]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 12 00:07:48.275743 ntpd[2001]: corporation. 
Support and training for ntp-4 are Jul 12 00:07:48.275763 ntpd[2001]: available at https://www.nwtime.org/support Jul 12 00:07:48.275782 ntpd[2001]: ---------------------------------------------------- Jul 12 00:07:48.285128 update_engine[2009]: I20250712 00:07:48.278100 2009 main.cc:92] Flatcar Update Engine starting Jul 12 00:07:48.285636 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: proto: precision = 0.096 usec (-23) Jul 12 00:07:48.284893 ntpd[2001]: proto: precision = 0.096 usec (-23) Jul 12 00:07:48.286714 ntpd[2001]: basedate set to 2025-06-29 Jul 12 00:07:48.291839 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: basedate set to 2025-06-29 Jul 12 00:07:48.291839 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: gps base set to 2025-06-29 (week 2373) Jul 12 00:07:48.291839 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: Listen and drop on 0 v6wildcard [::]:123 Jul 12 00:07:48.291839 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 12 00:07:48.286755 ntpd[2001]: gps base set to 2025-06-29 (week 2373) Jul 12 00:07:48.292074 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: Listen normally on 2 lo 127.0.0.1:123 Jul 12 00:07:48.292074 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: Listen normally on 3 eth0 172.31.28.146:123 Jul 12 00:07:48.292074 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: Listen normally on 4 lo [::1]:123 Jul 12 00:07:48.292074 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: bind(21) AF_INET6 fe80::4cc:eeff:fe67:d479%2#123 flags 0x11 failed: Cannot assign requested address Jul 12 00:07:48.291459 ntpd[2001]: Listen and drop on 0 v6wildcard [::]:123 Jul 12 00:07:48.292286 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: unable to create socket on eth0 (5) for fe80::4cc:eeff:fe67:d479%2#123 Jul 12 00:07:48.292286 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: failed to init interface for address fe80::4cc:eeff:fe67:d479%2 Jul 12 00:07:48.292286 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: Listening on routing socket on fd #21 for interface updates Jul 12 00:07:48.291562 ntpd[2001]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 12 00:07:48.291846 ntpd[2001]: Listen normally on 2 lo 127.0.0.1:123 Jul 12 00:07:48.291919 ntpd[2001]: Listen normally on 3 eth0 172.31.28.146:123 Jul 12 00:07:48.291986 ntpd[2001]: Listen normally on 4 lo [::1]:123 Jul 12 00:07:48.292059 ntpd[2001]: bind(21) AF_INET6 fe80::4cc:eeff:fe67:d479%2#123 flags 0x11 failed: Cannot assign requested address Jul 12 00:07:48.292098 ntpd[2001]: unable to create socket on eth0 (5) for fe80::4cc:eeff:fe67:d479%2#123 Jul 12 00:07:48.292126 ntpd[2001]: failed to init interface for address fe80::4cc:eeff:fe67:d479%2 Jul 12 00:07:48.292177 ntpd[2001]: Listening on routing socket on fd #21 for interface updates Jul 12 00:07:48.302185 systemd[1]: Started update-engine.service - Update Engine. Jul 12 00:07:48.311869 update_engine[2009]: I20250712 00:07:48.311772 2009 update_check_scheduler.cc:74] Next update check in 2m17s Jul 12 00:07:48.322756 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 12 00:07:48.339183 ntpd[2001]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 12 00:07:48.339828 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 12 00:07:48.339828 ntpd[2001]: 12 Jul 00:07:48 ntpd[2001]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 12 00:07:48.339249 ntpd[2001]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 12 00:07:48.342250 systemd[1]: Finished setup-oem.service - Setup OEM. 
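[Editor's note] The bind(21) failure on fe80::4cc:eeff:fe67:d479 is a benign ordering issue: ntpd starts before eth0's IPv6 link-local address has finished duplicate-address detection (networkd only reports "Gained IPv6LL" about a second later in this log), and because ntpd listens on the routing socket it retries the bind once the address becomes usable. To verify after boot:

    # the link-local address ntpd was trying to bind
    ip -6 addr show dev eth0 scope link
    # sockets ntpd currently holds on port 123
    ss -ulpn | grep ':123'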
Jul 12 00:07:48.389669 extend-filesystems[1999]: Resized partition /dev/nvme0n1p9 Jul 12 00:07:48.397170 coreos-metadata[1996]: Jul 12 00:07:48.396 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 12 00:07:48.399776 extend-filesystems[2062]: resize2fs 1.47.1 (20-May-2024) Jul 12 00:07:48.418504 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 12 00:07:48.428351 coreos-metadata[1996]: Jul 12 00:07:48.427 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 12 00:07:48.428351 coreos-metadata[1996]: Jul 12 00:07:48.427 INFO Fetch successful Jul 12 00:07:48.428351 coreos-metadata[1996]: Jul 12 00:07:48.427 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 12 00:07:48.435897 coreos-metadata[1996]: Jul 12 00:07:48.435 INFO Fetch successful Jul 12 00:07:48.436238 coreos-metadata[1996]: Jul 12 00:07:48.435 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 12 00:07:48.440398 coreos-metadata[1996]: Jul 12 00:07:48.440 INFO Fetch successful Jul 12 00:07:48.440398 coreos-metadata[1996]: Jul 12 00:07:48.440 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 12 00:07:48.443265 coreos-metadata[1996]: Jul 12 00:07:48.442 INFO Fetch successful Jul 12 00:07:48.443265 coreos-metadata[1996]: Jul 12 00:07:48.442 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 12 00:07:48.444340 coreos-metadata[1996]: Jul 12 00:07:48.444 INFO Fetch failed with 404: resource not found Jul 12 00:07:48.444340 coreos-metadata[1996]: Jul 12 00:07:48.444 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 12 00:07:48.454859 coreos-metadata[1996]: Jul 12 00:07:48.454 INFO Fetch successful Jul 12 00:07:48.454859 coreos-metadata[1996]: Jul 12 00:07:48.454 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 12 00:07:48.525912 coreos-metadata[1996]: Jul 12 00:07:48.456 INFO Fetch successful Jul 12 00:07:48.525912 coreos-metadata[1996]: Jul 12 00:07:48.456 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 12 00:07:48.525912 coreos-metadata[1996]: Jul 12 00:07:48.464 INFO Fetch successful Jul 12 00:07:48.525912 coreos-metadata[1996]: Jul 12 00:07:48.464 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 12 00:07:48.525912 coreos-metadata[1996]: Jul 12 00:07:48.468 INFO Fetch successful Jul 12 00:07:48.525912 coreos-metadata[1996]: Jul 12 00:07:48.468 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 12 00:07:48.525912 coreos-metadata[1996]: Jul 12 00:07:48.474 INFO Fetch successful Jul 12 00:07:48.576573 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 12 00:07:48.558278 systemd-logind[2008]: Watching system buttons on /dev/input/event0 (Power Button) Jul 12 00:07:48.558320 systemd-logind[2008]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 12 00:07:48.560880 systemd-logind[2008]: New seat seat0. Jul 12 00:07:48.576713 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 12 00:07:48.583403 extend-filesystems[2062]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 12 00:07:48.583403 extend-filesystems[2062]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 12 00:07:48.583403 extend-filesystems[2062]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 12 00:07:48.598974 extend-filesystems[1999]: Resized filesystem in /dev/nvme0n1p9 Jul 12 00:07:48.603313 bash[2066]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:07:48.588773 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 00:07:48.591657 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 12 00:07:48.605246 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 12 00:07:48.633640 systemd[1]: Starting sshkeys.service... Jul 12 00:07:48.637309 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 12 00:07:48.644387 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 12 00:07:48.693657 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 12 00:07:48.737656 dbus-daemon[1997]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 12 00:07:48.741444 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 12 00:07:48.745458 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 12 00:07:48.752376 dbus-daemon[1997]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2029 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 12 00:07:48.760139 systemd[1]: Starting polkit.service - Authorization Manager... Jul 12 00:07:48.843523 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1778) Jul 12 00:07:48.875335 locksmithd[2048]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:07:48.889712 polkitd[2091]: Started polkitd version 121 Jul 12 00:07:48.961018 polkitd[2091]: Loading rules from directory /etc/polkit-1/rules.d Jul 12 00:07:48.961133 polkitd[2091]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 12 00:07:48.974064 polkitd[2091]: Finished loading, compiling and executing 2 rules Jul 12 00:07:48.984448 dbus-daemon[1997]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 12 00:07:48.985313 systemd[1]: Started polkit.service - Authorization Manager. Jul 12 00:07:48.988819 polkitd[2091]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 12 00:07:49.031161 systemd-resolved[1945]: System hostname changed to 'ip-172-31-28-146'. 
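[Editor's note] extend-filesystems grew the root ext4 on nvme0n1p9 online, from 553472 to 1489915 4 KiB blocks (roughly 2.1 GiB to 5.7 GiB), so the filesystem now fills the partition. The manual equivalent would be something like the sketch below; growpart comes from cloud-utils and is an assumption here, since the log only shows the resize2fs half:

    growpart /dev/nvme0n1 9     # extend partition 9 to the end of the disk, if not already done
    resize2fs /dev/nvme0n1p9    # ext4 grows online while mounted at /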
Jul 12 00:07:49.031707 systemd-hostnamed[2029]: Hostname set to (transient) Jul 12 00:07:49.080726 coreos-metadata[2090]: Jul 12 00:07:49.080 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 12 00:07:49.083072 coreos-metadata[2090]: Jul 12 00:07:49.082 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 12 00:07:49.084862 coreos-metadata[2090]: Jul 12 00:07:49.084 INFO Fetch successful Jul 12 00:07:49.084862 coreos-metadata[2090]: Jul 12 00:07:49.084 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 12 00:07:49.090017 coreos-metadata[2090]: Jul 12 00:07:49.089 INFO Fetch successful Jul 12 00:07:49.108757 unknown[2090]: wrote ssh authorized keys file for user: core Jul 12 00:07:49.189752 update-ssh-keys[2177]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:07:49.200164 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 12 00:07:49.209341 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 12 00:07:49.228372 systemd[1]: Finished sshkeys.service. Jul 12 00:07:49.276827 ntpd[2001]: bind(24) AF_INET6 fe80::4cc:eeff:fe67:d479%2#123 flags 0x11 failed: Cannot assign requested address Jul 12 00:07:49.278083 ntpd[2001]: 12 Jul 00:07:49 ntpd[2001]: bind(24) AF_INET6 fe80::4cc:eeff:fe67:d479%2#123 flags 0x11 failed: Cannot assign requested address Jul 12 00:07:49.278083 ntpd[2001]: 12 Jul 00:07:49 ntpd[2001]: unable to create socket on eth0 (6) for fe80::4cc:eeff:fe67:d479%2#123 Jul 12 00:07:49.278083 ntpd[2001]: 12 Jul 00:07:49 ntpd[2001]: failed to init interface for address fe80::4cc:eeff:fe67:d479%2 Jul 12 00:07:49.277840 ntpd[2001]: unable to create socket on eth0 (6) for fe80::4cc:eeff:fe67:d479%2#123 Jul 12 00:07:49.277876 ntpd[2001]: failed to init interface for address fe80::4cc:eeff:fe67:d479%2 Jul 12 00:07:49.315108 containerd[2033]: time="2025-07-12T00:07:49.312860110Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 12 00:07:49.383673 systemd-networkd[1944]: eth0: Gained IPv6LL Jul 12 00:07:49.391163 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 12 00:07:49.395636 systemd[1]: Reached target network-online.target - Network is Online. Jul 12 00:07:49.409427 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 12 00:07:49.420860 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:49.427166 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 12 00:07:49.498842 containerd[2033]: time="2025-07-12T00:07:49.498748343Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:49.511117 containerd[2033]: time="2025-07-12T00:07:49.510162143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:07:49.511117 containerd[2033]: time="2025-07-12T00:07:49.510244631Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 12 00:07:49.511117 containerd[2033]: time="2025-07-12T00:07:49.510282407Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jul 12 00:07:49.511117 containerd[2033]: time="2025-07-12T00:07:49.510625895Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 12 00:07:49.511117 containerd[2033]: time="2025-07-12T00:07:49.510662939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:49.511117 containerd[2033]: time="2025-07-12T00:07:49.510806843Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:07:49.511117 containerd[2033]: time="2025-07-12T00:07:49.510839723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:49.514508 containerd[2033]: time="2025-07-12T00:07:49.512791895Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:07:49.514508 containerd[2033]: time="2025-07-12T00:07:49.512854847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:49.514508 containerd[2033]: time="2025-07-12T00:07:49.512890619Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:07:49.514508 containerd[2033]: time="2025-07-12T00:07:49.512918447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:49.514508 containerd[2033]: time="2025-07-12T00:07:49.513153731Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:49.514508 containerd[2033]: time="2025-07-12T00:07:49.513601199Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:49.516048 containerd[2033]: time="2025-07-12T00:07:49.515984051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:07:49.516140 containerd[2033]: time="2025-07-12T00:07:49.516048623Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 12 00:07:49.516524 containerd[2033]: time="2025-07-12T00:07:49.516269087Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 12 00:07:49.516524 containerd[2033]: time="2025-07-12T00:07:49.516387167Z" level=info msg="metadata content store policy set" policy=shared Jul 12 00:07:49.524990 containerd[2033]: time="2025-07-12T00:07:49.524914151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 12 00:07:49.525135 containerd[2033]: time="2025-07-12T00:07:49.525032735Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 12 00:07:49.525135 containerd[2033]: time="2025-07-12T00:07:49.525074087Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jul 12 00:07:49.525135 containerd[2033]: time="2025-07-12T00:07:49.525111611Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 12 00:07:49.525259 containerd[2033]: time="2025-07-12T00:07:49.525160919Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 12 00:07:49.528423 containerd[2033]: time="2025-07-12T00:07:49.528346715Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 12 00:07:49.534576 containerd[2033]: time="2025-07-12T00:07:49.532070339Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 12 00:07:49.534576 containerd[2033]: time="2025-07-12T00:07:49.532504115Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 12 00:07:49.534576 containerd[2033]: time="2025-07-12T00:07:49.532543715Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 12 00:07:49.534576 containerd[2033]: time="2025-07-12T00:07:49.532618847Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 12 00:07:49.534576 containerd[2033]: time="2025-07-12T00:07:49.532906007Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 12 00:07:49.534576 containerd[2033]: time="2025-07-12T00:07:49.532945223Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 12 00:07:49.534576 containerd[2033]: time="2025-07-12T00:07:49.533002091Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 12 00:07:49.534576 containerd[2033]: time="2025-07-12T00:07:49.533042063Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 12 00:07:49.534576 containerd[2033]: time="2025-07-12T00:07:49.533103203Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 12 00:07:49.535809 containerd[2033]: time="2025-07-12T00:07:49.533135351Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 12 00:07:49.535809 containerd[2033]: time="2025-07-12T00:07:49.535627751Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 12 00:07:49.535809 containerd[2033]: time="2025-07-12T00:07:49.535691183Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 12 00:07:49.535809 containerd[2033]: time="2025-07-12T00:07:49.535775723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 12 00:07:49.536024 containerd[2033]: time="2025-07-12T00:07:49.535811579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 12 00:07:49.536024 containerd[2033]: time="2025-07-12T00:07:49.535870319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 12 00:07:49.536024 containerd[2033]: time="2025-07-12T00:07:49.535905587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jul 12 00:07:49.536024 containerd[2033]: time="2025-07-12T00:07:49.535962539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 12 00:07:49.536204 containerd[2033]: time="2025-07-12T00:07:49.535995815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 12 00:07:49.536204 containerd[2033]: time="2025-07-12T00:07:49.536050943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 12 00:07:49.537462 containerd[2033]: time="2025-07-12T00:07:49.536083295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 12 00:07:49.537462 containerd[2033]: time="2025-07-12T00:07:49.537505427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 12 00:07:49.537733 containerd[2033]: time="2025-07-12T00:07:49.537553307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 12 00:07:49.537733 containerd[2033]: time="2025-07-12T00:07:49.537616367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 12 00:07:49.537733 containerd[2033]: time="2025-07-12T00:07:49.537675707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 12 00:07:49.537733 containerd[2033]: time="2025-07-12T00:07:49.537712379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 12 00:07:49.537895 containerd[2033]: time="2025-07-12T00:07:49.537774995Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 12 00:07:49.537895 containerd[2033]: time="2025-07-12T00:07:49.537851963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 12 00:07:49.537895 containerd[2033]: time="2025-07-12T00:07:49.537885683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 12 00:07:49.540671 containerd[2033]: time="2025-07-12T00:07:49.538241903Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 12 00:07:49.540671 containerd[2033]: time="2025-07-12T00:07:49.538750859Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 12 00:07:49.540671 containerd[2033]: time="2025-07-12T00:07:49.538829303Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 12 00:07:49.540671 containerd[2033]: time="2025-07-12T00:07:49.538861151Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 12 00:07:49.540671 containerd[2033]: time="2025-07-12T00:07:49.538920659Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 12 00:07:49.540671 containerd[2033]: time="2025-07-12T00:07:49.538948703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 12 00:07:49.540671 containerd[2033]: time="2025-07-12T00:07:49.539248559Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jul 12 00:07:49.540671 containerd[2033]: time="2025-07-12T00:07:49.539275091Z" level=info msg="NRI interface is disabled by configuration." Jul 12 00:07:49.540671 containerd[2033]: time="2025-07-12T00:07:49.539327147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 12 00:07:49.542964 containerd[2033]: time="2025-07-12T00:07:49.542736887Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 12 00:07:49.543229 containerd[2033]: time="2025-07-12T00:07:49.542948555Z" level=info msg="Connect containerd service" Jul 12 00:07:49.543229 containerd[2033]: time="2025-07-12T00:07:49.543039791Z" level=info msg="using legacy CRI server" Jul 12 00:07:49.543319 containerd[2033]: time="2025-07-12T00:07:49.543058607Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 12 00:07:49.544517 containerd[2033]: time="2025-07-12T00:07:49.543427007Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 12 00:07:49.549319 
containerd[2033]: time="2025-07-12T00:07:49.547983011Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:07:49.549319 containerd[2033]: time="2025-07-12T00:07:49.548329343Z" level=info msg="Start subscribing containerd event" Jul 12 00:07:49.549319 containerd[2033]: time="2025-07-12T00:07:49.548415467Z" level=info msg="Start recovering state" Jul 12 00:07:49.549319 containerd[2033]: time="2025-07-12T00:07:49.548569403Z" level=info msg="Start event monitor" Jul 12 00:07:49.549319 containerd[2033]: time="2025-07-12T00:07:49.548597771Z" level=info msg="Start snapshots syncer" Jul 12 00:07:49.549319 containerd[2033]: time="2025-07-12T00:07:49.548645939Z" level=info msg="Start cni network conf syncer for default" Jul 12 00:07:49.549319 containerd[2033]: time="2025-07-12T00:07:49.548665355Z" level=info msg="Start streaming server" Jul 12 00:07:49.551671 containerd[2033]: time="2025-07-12T00:07:49.551460815Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 00:07:49.551671 containerd[2033]: time="2025-07-12T00:07:49.551637431Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 12 00:07:49.553639 containerd[2033]: time="2025-07-12T00:07:49.552434039Z" level=info msg="containerd successfully booted in 0.245969s" Jul 12 00:07:49.552589 systemd[1]: Started containerd.service - containerd container runtime. Jul 12 00:07:49.566139 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 12 00:07:49.647566 amazon-ssm-agent[2200]: Initializing new seelog logger Jul 12 00:07:49.647566 amazon-ssm-agent[2200]: New Seelog Logger Creation Complete Jul 12 00:07:49.647566 amazon-ssm-agent[2200]: 2025/07/12 00:07:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:49.647566 amazon-ssm-agent[2200]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:49.649320 amazon-ssm-agent[2200]: 2025/07/12 00:07:49 processing appconfig overrides Jul 12 00:07:49.649521 amazon-ssm-agent[2200]: 2025/07/12 00:07:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:49.651519 amazon-ssm-agent[2200]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:49.651519 amazon-ssm-agent[2200]: 2025/07/12 00:07:49 processing appconfig overrides Jul 12 00:07:49.651519 amazon-ssm-agent[2200]: 2025/07/12 00:07:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:49.651519 amazon-ssm-agent[2200]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:49.651519 amazon-ssm-agent[2200]: 2025/07/12 00:07:49 processing appconfig overrides Jul 12 00:07:49.653281 amazon-ssm-agent[2200]: 2025-07-12 00:07:49 INFO Proxy environment variables: Jul 12 00:07:49.659859 amazon-ssm-agent[2200]: 2025/07/12 00:07:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:49.659859 amazon-ssm-agent[2200]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
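[Note] The containerd error above ("no network config found in /etc/cni/net.d") is expected on first boot: the CRI plugin stays degraded until some agent installs a CNI configuration. A hypothetical minimal bridge conflist dropped into the watched directory would clear it; every value below (network name, subnet, plugin choice) is illustrative, not taken from this system:

    import json
    import pathlib

    conf = {
        "cniVersion": "0.4.0",
        "name": "example-net",
        "plugins": [{
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        }],
    }

    path = pathlib.Path("/etc/cni/net.d/10-example.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conf, indent=2))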
Jul 12 00:07:49.659859 amazon-ssm-agent[2200]: 2025/07/12 00:07:49 processing appconfig overrides Jul 12 00:07:49.754794 amazon-ssm-agent[2200]: 2025-07-12 00:07:49 INFO no_proxy: Jul 12 00:07:49.854527 amazon-ssm-agent[2200]: 2025-07-12 00:07:49 INFO https_proxy: Jul 12 00:07:49.885948 sshd_keygen[2043]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:07:49.954050 amazon-ssm-agent[2200]: 2025-07-12 00:07:49 INFO http_proxy: Jul 12 00:07:49.960183 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 12 00:07:49.973617 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 12 00:07:49.986009 systemd[1]: Started sshd@0-172.31.28.146:22-139.178.89.65:59346.service - OpenSSH per-connection server daemon (139.178.89.65:59346). Jul 12 00:07:50.041025 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:07:50.042728 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 12 00:07:50.053017 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 12 00:07:50.056588 amazon-ssm-agent[2200]: 2025-07-12 00:07:49 INFO Checking if agent identity type OnPrem can be assumed Jul 12 00:07:50.112346 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 12 00:07:50.127323 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 12 00:07:50.139670 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 12 00:07:50.142412 systemd[1]: Reached target getty.target - Login Prompts. Jul 12 00:07:50.156024 amazon-ssm-agent[2200]: 2025-07-12 00:07:49 INFO Checking if agent identity type EC2 can be assumed Jul 12 00:07:50.253647 amazon-ssm-agent[2200]: 2025-07-12 00:07:49 INFO Agent will take identity from EC2 Jul 12 00:07:50.260002 sshd[2226]: Accepted publickey for core from 139.178.89.65 port 59346 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:07:50.265313 tar[2021]: linux-arm64/README.md Jul 12 00:07:50.272131 sshd[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:50.309553 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 12 00:07:50.316784 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 12 00:07:50.329352 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 12 00:07:50.343128 systemd-logind[2008]: New session 1 of user core. Jul 12 00:07:50.353660 amazon-ssm-agent[2200]: 2025-07-12 00:07:49 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 12 00:07:50.374029 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 12 00:07:50.390657 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 12 00:07:50.414226 (systemd)[2243]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:07:50.454489 amazon-ssm-agent[2200]: 2025-07-12 00:07:49 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 12 00:07:50.552212 amazon-ssm-agent[2200]: 2025-07-12 00:07:49 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 12 00:07:50.651565 amazon-ssm-agent[2200]: 2025-07-12 00:07:49 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 12 00:07:50.679425 systemd[2243]: Queued start job for default target default.target. Jul 12 00:07:50.688866 systemd[2243]: Created slice app.slice - User Application Slice. Jul 12 00:07:50.689065 systemd[2243]: Reached target paths.target - Paths. 
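[Note] The "SHA256:rqVc07Z..." value sshd logs for each accepted key above is the SHA-256 digest of the raw public-key blob, base64-encoded with the padding stripped. A small sketch for matching an authorized_keys entry against such a log line:

    import base64
    import hashlib

    def ssh_fingerprint(authorized_keys_line: str) -> str:
        # Line format: "<keytype> <base64-blob> [comment]"; the fingerprint
        # covers only the decoded blob.
        blob = base64.b64decode(authorized_keys_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")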
Jul 12 00:07:50.689217 systemd[2243]: Reached target timers.target - Timers. Jul 12 00:07:50.697756 systemd[2243]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 12 00:07:50.718070 systemd[2243]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 12 00:07:50.718181 systemd[2243]: Reached target sockets.target - Sockets. Jul 12 00:07:50.718213 systemd[2243]: Reached target basic.target - Basic System. Jul 12 00:07:50.718296 systemd[2243]: Reached target default.target - Main User Target. Jul 12 00:07:50.718358 systemd[2243]: Startup finished in 290ms. Jul 12 00:07:50.718571 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 12 00:07:50.728352 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 12 00:07:50.752655 amazon-ssm-agent[2200]: 2025-07-12 00:07:49 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 12 00:07:50.853514 amazon-ssm-agent[2200]: 2025-07-12 00:07:49 INFO [amazon-ssm-agent] Starting Core Agent Jul 12 00:07:50.906858 systemd[1]: Started sshd@1-172.31.28.146:22-139.178.89.65:59350.service - OpenSSH per-connection server daemon (139.178.89.65:59350). Jul 12 00:07:50.952986 amazon-ssm-agent[2200]: 2025-07-12 00:07:49 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jul 12 00:07:51.053662 amazon-ssm-agent[2200]: 2025-07-12 00:07:49 INFO [Registrar] Starting registrar module Jul 12 00:07:51.132272 sshd[2254]: Accepted publickey for core from 139.178.89.65 port 59350 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:07:51.135552 sshd[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:51.144285 systemd-logind[2008]: New session 2 of user core. Jul 12 00:07:51.147268 amazon-ssm-agent[2200]: 2025-07-12 00:07:49 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 12 00:07:51.147268 amazon-ssm-agent[2200]: 2025-07-12 00:07:51 INFO [EC2Identity] EC2 registration was successful. Jul 12 00:07:51.147268 amazon-ssm-agent[2200]: 2025-07-12 00:07:51 INFO [CredentialRefresher] credentialRefresher has started Jul 12 00:07:51.147268 amazon-ssm-agent[2200]: 2025-07-12 00:07:51 INFO [CredentialRefresher] Starting credentials refresher loop Jul 12 00:07:51.147268 amazon-ssm-agent[2200]: 2025-07-12 00:07:51 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 12 00:07:51.151779 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 12 00:07:51.155292 amazon-ssm-agent[2200]: 2025-07-12 00:07:51 INFO [CredentialRefresher] Next credential rotation will be in 30.883312734233332 minutes Jul 12 00:07:51.281670 sshd[2254]: pam_unix(sshd:session): session closed for user core Jul 12 00:07:51.286890 systemd[1]: sshd@1-172.31.28.146:22-139.178.89.65:59350.service: Deactivated successfully. Jul 12 00:07:51.289577 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 00:07:51.293730 systemd-logind[2008]: Session 2 logged out. Waiting for processes to exit. Jul 12 00:07:51.295868 systemd-logind[2008]: Removed session 2. Jul 12 00:07:51.323677 systemd[1]: Started sshd@2-172.31.28.146:22-139.178.89.65:59352.service - OpenSSH per-connection server daemon (139.178.89.65:59352). 
Jul 12 00:07:51.501313 sshd[2261]: Accepted publickey for core from 139.178.89.65 port 59352 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:07:51.504799 sshd[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:51.515586 systemd-logind[2008]: New session 3 of user core. Jul 12 00:07:51.522779 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 12 00:07:51.652080 sshd[2261]: pam_unix(sshd:session): session closed for user core Jul 12 00:07:51.658904 systemd[1]: sshd@2-172.31.28.146:22-139.178.89.65:59352.service: Deactivated successfully. Jul 12 00:07:51.663032 systemd[1]: session-3.scope: Deactivated successfully. Jul 12 00:07:51.665304 systemd-logind[2008]: Session 3 logged out. Waiting for processes to exit. Jul 12 00:07:51.668233 systemd-logind[2008]: Removed session 3. Jul 12 00:07:52.174796 amazon-ssm-agent[2200]: 2025-07-12 00:07:52 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 12 00:07:52.230726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:52.234144 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 12 00:07:52.236840 systemd[1]: Startup finished in 1.175s (kernel) + 8.653s (initrd) + 9.823s (userspace) = 19.652s. Jul 12 00:07:52.258645 (kubelet)[2277]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:07:52.276714 ntpd[2001]: Listen normally on 7 eth0 [fe80::4cc:eeff:fe67:d479%2]:123 Jul 12 00:07:52.278156 amazon-ssm-agent[2200]: 2025-07-12 00:07:52 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2268) started Jul 12 00:07:52.279808 ntpd[2001]: 12 Jul 00:07:52 ntpd[2001]: Listen normally on 7 eth0 [fe80::4cc:eeff:fe67:d479%2]:123 Jul 12 00:07:52.378497 amazon-ssm-agent[2200]: 2025-07-12 00:07:52 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 12 00:07:53.592633 kubelet[2277]: E0712 00:07:53.592543 2277 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:07:53.598048 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:07:53.598397 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:07:53.598961 systemd[1]: kubelet.service: Consumed 1.401s CPU time. Jul 12 00:07:55.713033 systemd-resolved[1945]: Clock change detected. Flushing caches. Jul 12 00:08:02.128119 systemd[1]: Started sshd@3-172.31.28.146:22-139.178.89.65:46556.service - OpenSSH per-connection server daemon (139.178.89.65:46556). Jul 12 00:08:02.295417 sshd[2295]: Accepted publickey for core from 139.178.89.65 port 46556 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:08:02.298076 sshd[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:02.305470 systemd-logind[2008]: New session 4 of user core. Jul 12 00:08:02.316857 systemd[1]: Started session-4.scope - Session 4 of User core. 
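[Note] The "Listen normally on 7 eth0 [fe80::4cc:eeff:fe67:d479%2]:123" line above resolves the bind failures ntpd logged at 00:07:49, before the link-local address was assigned. Binding a link-local IPv6 address requires the interface scope id (the %2 suffix); a sketch, assuming eth0 and root privileges for port 123:

    import socket

    addr = "fe80::4cc:eeff:fe67:d479"        # link-local address from the log
    scope = socket.if_nametoindex("eth0")    # the %2 suffix is this index

    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    # AF_INET6 bind takes (addr, port, flowinfo, scope_id); until the address
    # is actually assigned this fails with EADDRNOTAVAIL, i.e. the
    # "Cannot assign requested address" error ntpd logged earlier.
    sock.bind((addr, 123, 0, scope))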
Jul 12 00:08:02.443914 sshd[2295]: pam_unix(sshd:session): session closed for user core Jul 12 00:08:02.450485 systemd[1]: sshd@3-172.31.28.146:22-139.178.89.65:46556.service: Deactivated successfully. Jul 12 00:08:02.454244 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:08:02.455872 systemd-logind[2008]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:08:02.457563 systemd-logind[2008]: Removed session 4. Jul 12 00:08:02.493109 systemd[1]: Started sshd@4-172.31.28.146:22-139.178.89.65:46566.service - OpenSSH per-connection server daemon (139.178.89.65:46566). Jul 12 00:08:02.662118 sshd[2302]: Accepted publickey for core from 139.178.89.65 port 46566 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:08:02.664720 sshd[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:02.674688 systemd-logind[2008]: New session 5 of user core. Jul 12 00:08:02.680891 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 12 00:08:02.800662 sshd[2302]: pam_unix(sshd:session): session closed for user core Jul 12 00:08:02.807002 systemd[1]: sshd@4-172.31.28.146:22-139.178.89.65:46566.service: Deactivated successfully. Jul 12 00:08:02.810274 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:08:02.812013 systemd-logind[2008]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:08:02.813661 systemd-logind[2008]: Removed session 5. Jul 12 00:08:02.844071 systemd[1]: Started sshd@5-172.31.28.146:22-139.178.89.65:46576.service - OpenSSH per-connection server daemon (139.178.89.65:46576). Jul 12 00:08:03.005459 sshd[2309]: Accepted publickey for core from 139.178.89.65 port 46576 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:08:03.008048 sshd[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:03.015455 systemd-logind[2008]: New session 6 of user core. Jul 12 00:08:03.027856 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 12 00:08:03.153953 sshd[2309]: pam_unix(sshd:session): session closed for user core Jul 12 00:08:03.160106 systemd[1]: sshd@5-172.31.28.146:22-139.178.89.65:46576.service: Deactivated successfully. Jul 12 00:08:03.163313 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:08:03.165062 systemd-logind[2008]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:08:03.166693 systemd-logind[2008]: Removed session 6. Jul 12 00:08:03.192147 systemd[1]: Started sshd@6-172.31.28.146:22-139.178.89.65:46592.service - OpenSSH per-connection server daemon (139.178.89.65:46592). Jul 12 00:08:03.360789 sshd[2316]: Accepted publickey for core from 139.178.89.65 port 46592 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:08:03.363434 sshd[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:03.372924 systemd-logind[2008]: New session 7 of user core. Jul 12 00:08:03.375921 systemd[1]: Started session-7.scope - Session 7 of User core. 
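[Note] The block above is a burst of short-lived sessions (4 through 6) from the same source address, each opened with the same public key and closed within seconds, typical of a provisioning tool running one command per connection. A quick parse for tallying such logins out of a captured journal (the regex matches the Accepted-publickey format of these lines; the input filename is hypothetical):

    import re
    from collections import Counter

    pattern = re.compile(r"Accepted publickey for (\S+) from (\S+) port \d+")
    counts = Counter()
    with open("boot.log") as fh:      # hypothetical capture of this journal
        for line in fh:
            if (m := pattern.search(line)):
                counts[(m.group(1), m.group(2))] += 1
    for (user, addr), n in counts.most_common():
        print(f"{user}@{addr}: {n} logins")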
Jul 12 00:08:03.494896 sudo[2319]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 12 00:08:03.495545 sudo[2319]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:08:03.511636 sudo[2319]: pam_unix(sudo:session): session closed for user root Jul 12 00:08:03.535465 sshd[2316]: pam_unix(sshd:session): session closed for user core Jul 12 00:08:03.542522 systemd[1]: sshd@6-172.31.28.146:22-139.178.89.65:46592.service: Deactivated successfully. Jul 12 00:08:03.546378 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:08:03.547753 systemd-logind[2008]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:08:03.549711 systemd-logind[2008]: Removed session 7. Jul 12 00:08:03.573146 systemd[1]: Started sshd@7-172.31.28.146:22-139.178.89.65:46604.service - OpenSSH per-connection server daemon (139.178.89.65:46604). Jul 12 00:08:03.749462 sshd[2324]: Accepted publickey for core from 139.178.89.65 port 46604 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:08:03.752173 sshd[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:03.759494 systemd-logind[2008]: New session 8 of user core. Jul 12 00:08:03.767850 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 12 00:08:03.872435 sudo[2328]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 12 00:08:03.873092 sudo[2328]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:08:03.878972 sudo[2328]: pam_unix(sudo:session): session closed for user root Jul 12 00:08:03.889019 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 12 00:08:03.889666 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:08:03.910126 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 12 00:08:03.925419 auditctl[2331]: No rules Jul 12 00:08:03.926206 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:08:03.926587 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 12 00:08:03.934388 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:08:03.989152 augenrules[2350]: No rules Jul 12 00:08:03.991754 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:08:03.993626 sudo[2327]: pam_unix(sudo:session): session closed for user root Jul 12 00:08:04.017393 sshd[2324]: pam_unix(sshd:session): session closed for user core Jul 12 00:08:04.023419 systemd-logind[2008]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:08:04.024972 systemd[1]: sshd@7-172.31.28.146:22-139.178.89.65:46604.service: Deactivated successfully. Jul 12 00:08:04.028159 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:08:04.030219 systemd-logind[2008]: Removed session 8. Jul 12 00:08:04.050255 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 00:08:04.062974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:08:04.066751 systemd[1]: Started sshd@8-172.31.28.146:22-139.178.89.65:46610.service - OpenSSH per-connection server daemon (139.178.89.65:46610). 
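[Note] The audit-rules restart above reports "No rules" twice because the preceding sudo commands deleted the drop-in rule files: on stop the loaded kernel rule set is flushed, and on start augenrules recompiles whatever remains under /etc/audit/rules.d (now nothing). The two steps that sequence typically amounts to, sketched:

    import subprocess

    subprocess.run(["auditctl", "-D"], check=True)        # flush loaded rules
    subprocess.run(["augenrules", "--load"], check=True)  # rebuild from rules.d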
Jul 12 00:08:04.249284 sshd[2359]: Accepted publickey for core from 139.178.89.65 port 46610 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:08:04.252513 sshd[2359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:04.265832 systemd-logind[2008]: New session 9 of user core. Jul 12 00:08:04.272892 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 12 00:08:04.380666 sudo[2364]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:08:04.381337 sudo[2364]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:08:04.579052 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:08:04.589464 (kubelet)[2378]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:08:04.681449 kubelet[2378]: E0712 00:08:04.681356 2378 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:08:04.689334 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:08:04.689706 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:08:04.908125 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 12 00:08:04.921357 (dockerd)[2392]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 12 00:08:05.331644 dockerd[2392]: time="2025-07-12T00:08:05.331232075Z" level=info msg="Starting up" Jul 12 00:08:05.471873 systemd[1]: var-lib-docker-metacopy\x2dcheck2998384505-merged.mount: Deactivated successfully. Jul 12 00:08:05.485108 dockerd[2392]: time="2025-07-12T00:08:05.485045904Z" level=info msg="Loading containers: start." Jul 12 00:08:05.635642 kernel: Initializing XFRM netlink socket Jul 12 00:08:05.668782 (udev-worker)[2414]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:08:05.756363 systemd-networkd[1944]: docker0: Link UP Jul 12 00:08:05.775067 dockerd[2392]: time="2025-07-12T00:08:05.774988921Z" level=info msg="Loading containers: done." Jul 12 00:08:05.798871 dockerd[2392]: time="2025-07-12T00:08:05.798782450Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 00:08:05.799092 dockerd[2392]: time="2025-07-12T00:08:05.798952994Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 12 00:08:05.799166 dockerd[2392]: time="2025-07-12T00:08:05.799140842Z" level=info msg="Daemon has completed initialization" Jul 12 00:08:05.863766 dockerd[2392]: time="2025-07-12T00:08:05.861242510Z" level=info msg="API listen on /run/docker.sock" Jul 12 00:08:05.863126 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 12 00:08:06.445503 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1103871701-merged.mount: Deactivated successfully. 
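[Note] dockerd above finishes with "API listen on /run/docker.sock"; the daemon API is plain HTTP over that unix socket, so it can be probed with nothing but the standard library. A sketch querying the version endpoint:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # http.client normally dials TCP; override connect() to speak HTTP
        # over the docker unix socket instead.
        def __init__(self, socket_path: str):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    print(json.loads(conn.getresponse().read())["Version"])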
Jul 12 00:08:06.867518 containerd[2033]: time="2025-07-12T00:08:06.867445863Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 12 00:08:07.465283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3708292371.mount: Deactivated successfully. Jul 12 00:08:08.767635 containerd[2033]: time="2025-07-12T00:08:08.765922312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:08.768199 containerd[2033]: time="2025-07-12T00:08:08.767987968Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351716" Jul 12 00:08:08.769465 containerd[2033]: time="2025-07-12T00:08:08.769393972Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:08.774632 containerd[2033]: time="2025-07-12T00:08:08.774531376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:08.776978 containerd[2033]: time="2025-07-12T00:08:08.776925532Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.909415529s" Jul 12 00:08:08.777479 containerd[2033]: time="2025-07-12T00:08:08.777119020Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 12 00:08:08.780353 containerd[2033]: time="2025-07-12T00:08:08.780298456Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 12 00:08:10.384239 containerd[2033]: time="2025-07-12T00:08:10.384147808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:10.386346 containerd[2033]: time="2025-07-12T00:08:10.386276920Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537623" Jul 12 00:08:10.387797 containerd[2033]: time="2025-07-12T00:08:10.386856244Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:10.392659 containerd[2033]: time="2025-07-12T00:08:10.392584228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:10.395104 containerd[2033]: time="2025-07-12T00:08:10.395053360Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.614695696s" Jul 12 
00:08:10.395277 containerd[2033]: time="2025-07-12T00:08:10.395246416Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 12 00:08:10.396151 containerd[2033]: time="2025-07-12T00:08:10.396111820Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 12 00:08:11.620432 containerd[2033]: time="2025-07-12T00:08:11.620358403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:11.622492 containerd[2033]: time="2025-07-12T00:08:11.622436359Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293515" Jul 12 00:08:11.623391 containerd[2033]: time="2025-07-12T00:08:11.622908379Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:11.629711 containerd[2033]: time="2025-07-12T00:08:11.629633131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:11.633715 containerd[2033]: time="2025-07-12T00:08:11.633442783Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.237139371s" Jul 12 00:08:11.633715 containerd[2033]: time="2025-07-12T00:08:11.633508855Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 12 00:08:11.634750 containerd[2033]: time="2025-07-12T00:08:11.634692175Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 12 00:08:13.096958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1305524509.mount: Deactivated successfully. 
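[Note] Each "Pulled image" line above pairs a byte size with a wall-clock duration, so pull throughput falls straight out of the log; over the three control-plane images so far it works out to roughly 14-16 MB/s:

    # Byte sizes and durations copied from the Pulled-image lines above.
    pulls = {
        "kube-apiserver:v1.33.2":          (27348516, 1.909415529),
        "kube-controller-manager:v1.33.2": (25092541, 1.614695696),
        "kube-scheduler:v1.33.2":          (19848451, 1.237139371),
    }
    for image, (size, secs) in pulls.items():
        print(f"{image}: {size / secs / 1e6:.1f} MB/s")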
Jul 12 00:08:13.642506 containerd[2033]: time="2025-07-12T00:08:13.642327945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:13.644159 containerd[2033]: time="2025-07-12T00:08:13.644091129Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199472" Jul 12 00:08:13.645578 containerd[2033]: time="2025-07-12T00:08:13.645507213Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:13.649145 containerd[2033]: time="2025-07-12T00:08:13.649082673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:13.651361 containerd[2033]: time="2025-07-12T00:08:13.650850489Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 2.016099802s" Jul 12 00:08:13.651361 containerd[2033]: time="2025-07-12T00:08:13.650909097Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 12 00:08:13.651963 containerd[2033]: time="2025-07-12T00:08:13.651925545Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 12 00:08:14.240176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1561567812.mount: Deactivated successfully. Jul 12 00:08:14.760802 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 12 00:08:14.770131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:08:15.137513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:08:15.150482 (kubelet)[2663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:08:15.252887 kubelet[2663]: E0712 00:08:15.252571 2663 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:08:15.259174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:08:15.259489 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
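[Note] This is the same kubelet failure as the earlier attempts: /var/lib/kubelet/config.yaml does not exist yet, so every scheduled restart exits immediately. On a kubeadm-style node that file is written during init/join, which makes the crash loop expected until bootstrap completes. Purely to illustrate the file the error names, a minimal KubeletConfiguration (field values are assumptions, not recovered from this system):

    import pathlib

    MINIMAL = "\n".join([
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "cgroupDriver: systemd",      # matches SystemdCgroup:true in the
    ]) + "\n"                         # containerd runc options logged earlier

    cfg = pathlib.Path("/var/lib/kubelet/config.yaml")
    cfg.parent.mkdir(parents=True, exist_ok=True)
    cfg.write_text(MINIMAL)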
Jul 12 00:08:15.689073 containerd[2033]: time="2025-07-12T00:08:15.688973375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:15.691383 containerd[2033]: time="2025-07-12T00:08:15.691310963Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Jul 12 00:08:15.692290 containerd[2033]: time="2025-07-12T00:08:15.691780679Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:15.697952 containerd[2033]: time="2025-07-12T00:08:15.697899611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:15.700812 containerd[2033]: time="2025-07-12T00:08:15.700471319Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.048395174s" Jul 12 00:08:15.700812 containerd[2033]: time="2025-07-12T00:08:15.700530911Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 12 00:08:15.703035 containerd[2033]: time="2025-07-12T00:08:15.702674495Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:08:16.216801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3607426065.mount: Deactivated successfully. 
Jul 12 00:08:16.230577 containerd[2033]: time="2025-07-12T00:08:16.229050537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:16.233291 containerd[2033]: time="2025-07-12T00:08:16.233252517Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 12 00:08:16.235814 containerd[2033]: time="2025-07-12T00:08:16.235752069Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:16.242339 containerd[2033]: time="2025-07-12T00:08:16.242287029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:16.243951 containerd[2033]: time="2025-07-12T00:08:16.243890613Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 541.143194ms" Jul 12 00:08:16.244107 containerd[2033]: time="2025-07-12T00:08:16.243950109Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 12 00:08:16.244743 containerd[2033]: time="2025-07-12T00:08:16.244574169Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 12 00:08:16.908760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1418683762.mount: Deactivated successfully. Jul 12 00:08:19.427801 containerd[2033]: time="2025-07-12T00:08:19.427714777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:19.473120 containerd[2033]: time="2025-07-12T00:08:19.472402838Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599" Jul 12 00:08:19.499452 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jul 12 00:08:19.519431 containerd[2033]: time="2025-07-12T00:08:19.519112154Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:19.575445 containerd[2033]: time="2025-07-12T00:08:19.575340158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:19.578294 containerd[2033]: time="2025-07-12T00:08:19.578055914Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.333394829s" Jul 12 00:08:19.578294 containerd[2033]: time="2025-07-12T00:08:19.578134058Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 12 00:08:25.260793 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 12 00:08:25.269103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:08:25.615056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:08:25.618111 (kubelet)[2761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:08:25.692354 kubelet[2761]: E0712 00:08:25.692268 2761 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:08:25.697593 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:08:25.698703 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:08:28.107793 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:08:28.124095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:08:28.184317 systemd[1]: Reloading requested from client PID 2775 ('systemctl') (unit session-9.scope)... Jul 12 00:08:28.184363 systemd[1]: Reloading... Jul 12 00:08:28.430682 zram_generator::config[2816]: No configuration found. Jul 12 00:08:28.666180 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:08:28.837508 systemd[1]: Reloading finished in 652 ms. Jul 12 00:08:28.930884 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:08:28.931287 (kubelet)[2869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:08:28.942969 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:08:28.944295 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:08:28.946675 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
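[Note] The "Reloading requested from client PID 2775 ('systemctl')" sequence above looks like a daemon-reload issued from the interactive session-9, followed by an explicit stop and restart of kubelet.service; the zram_generator and docker.socket messages are just units being re-evaluated during the reload. The driving commands, sketched as a provisioning script would issue them:

    import subprocess

    for cmd in (["systemctl", "daemon-reload"],
                ["systemctl", "restart", "kubelet.service"]):
        subprocess.run(cmd, check=True)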
Jul 12 00:08:28.955222 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:08:29.271426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:08:29.289127 (kubelet)[2885]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:08:29.363127 kubelet[2885]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:08:29.363127 kubelet[2885]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 00:08:29.363127 kubelet[2885]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:08:29.363717 kubelet[2885]: I0712 00:08:29.363219 2885 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:08:30.863244 kubelet[2885]: I0712 00:08:30.863174 2885 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 12 00:08:30.863244 kubelet[2885]: I0712 00:08:30.863223 2885 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:08:30.863976 kubelet[2885]: I0712 00:08:30.863588 2885 server.go:956] "Client rotation is on, will bootstrap in background" Jul 12 00:08:30.903242 kubelet[2885]: E0712 00:08:30.903190 2885 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.146:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.146:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 12 00:08:30.906313 kubelet[2885]: I0712 00:08:30.906065 2885 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:08:30.927059 kubelet[2885]: E0712 00:08:30.926969 2885 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:08:30.927059 kubelet[2885]: I0712 00:08:30.927048 2885 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:08:30.932301 kubelet[2885]: I0712 00:08:30.932250 2885 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:08:30.932940 kubelet[2885]: I0712 00:08:30.932893 2885 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:08:30.933242 kubelet[2885]: I0712 00:08:30.932943 2885 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-146","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:08:30.933402 kubelet[2885]: I0712 00:08:30.933376 2885 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:08:30.933402 kubelet[2885]: I0712 00:08:30.933398 2885 container_manager_linux.go:303] "Creating device plugin manager" Jul 12 00:08:30.933809 kubelet[2885]: I0712 00:08:30.933779 2885 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:08:30.940067 kubelet[2885]: I0712 00:08:30.939999 2885 kubelet.go:480] "Attempting to sync node with API server" Jul 12 00:08:30.940067 kubelet[2885]: I0712 00:08:30.940049 2885 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:08:30.941190 kubelet[2885]: I0712 00:08:30.941054 2885 kubelet.go:386] "Adding apiserver pod source" Jul 12 00:08:30.943532 kubelet[2885]: I0712 00:08:30.943501 2885 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:08:30.947638 kubelet[2885]: I0712 00:08:30.946977 2885 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:08:30.948292 kubelet[2885]: I0712 00:08:30.948246 2885 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 12 00:08:30.948527 kubelet[2885]: W0712 00:08:30.948494 2885 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 12 00:08:30.955068 kubelet[2885]: I0712 00:08:30.954995 2885 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:08:30.955068 kubelet[2885]: I0712 00:08:30.955070 2885 server.go:1289] "Started kubelet" Jul 12 00:08:30.955435 kubelet[2885]: E0712 00:08:30.955352 2885 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-146&limit=500&resourceVersion=0\": dial tcp 172.31.28.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 12 00:08:30.960970 kubelet[2885]: E0712 00:08:30.959641 2885 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 12 00:08:30.960970 kubelet[2885]: I0712 00:08:30.959850 2885 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:08:30.962664 kubelet[2885]: I0712 00:08:30.962494 2885 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:08:30.964648 kubelet[2885]: I0712 00:08:30.963117 2885 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:08:30.964648 kubelet[2885]: I0712 00:08:30.964534 2885 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:08:30.971585 kubelet[2885]: E0712 00:08:30.969058 2885 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.146:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.146:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-146.18515862b2e1f95f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-146,UID:ip-172-31-28-146,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-146,},FirstTimestamp:2025-07-12 00:08:30.955026783 +0000 UTC m=+1.658602078,LastTimestamp:2025-07-12 00:08:30.955026783 +0000 UTC m=+1.658602078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-146,}" Jul 12 00:08:30.975240 kubelet[2885]: I0712 00:08:30.975190 2885 server.go:317] "Adding debug handlers to kubelet server" Jul 12 00:08:30.977744 kubelet[2885]: I0712 00:08:30.977691 2885 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:08:30.977919 kubelet[2885]: I0712 00:08:30.977712 2885 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:08:30.978352 kubelet[2885]: E0712 00:08:30.978320 2885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-146\" not found" Jul 12 00:08:30.978989 kubelet[2885]: I0712 00:08:30.978961 2885 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:08:30.979174 kubelet[2885]: I0712 00:08:30.979155 2885 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:08:30.983164 kubelet[2885]: E0712 00:08:30.983098 2885 reflector.go:200] "Failed 
to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 12 00:08:30.983287 kubelet[2885]: E0712 00:08:30.983257 2885 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-146?timeout=10s\": dial tcp 172.31.28.146:6443: connect: connection refused" interval="200ms" Jul 12 00:08:30.983415 kubelet[2885]: E0712 00:08:30.983374 2885 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:08:30.984722 kubelet[2885]: I0712 00:08:30.984674 2885 factory.go:223] Registration of the systemd container factory successfully Jul 12 00:08:30.984925 kubelet[2885]: I0712 00:08:30.984825 2885 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:08:30.987185 kubelet[2885]: I0712 00:08:30.987112 2885 factory.go:223] Registration of the containerd container factory successfully Jul 12 00:08:31.010371 kubelet[2885]: I0712 00:08:31.010289 2885 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 12 00:08:31.012595 kubelet[2885]: I0712 00:08:31.012531 2885 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 12 00:08:31.012595 kubelet[2885]: I0712 00:08:31.012579 2885 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 12 00:08:31.012595 kubelet[2885]: I0712 00:08:31.012648 2885 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 12 00:08:31.012595 kubelet[2885]: I0712 00:08:31.012665 2885 kubelet.go:2436] "Starting kubelet main sync loop" Jul 12 00:08:31.013032 kubelet[2885]: E0712 00:08:31.012755 2885 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:08:31.030654 kubelet[2885]: E0712 00:08:31.029746 2885 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 12 00:08:31.039452 kubelet[2885]: I0712 00:08:31.039410 2885 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:08:31.039452 kubelet[2885]: I0712 00:08:31.039445 2885 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:08:31.039727 kubelet[2885]: I0712 00:08:31.039478 2885 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:08:31.044918 kubelet[2885]: I0712 00:08:31.044861 2885 policy_none.go:49] "None policy: Start" Jul 12 00:08:31.044918 kubelet[2885]: I0712 00:08:31.044910 2885 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:08:31.045086 kubelet[2885]: I0712 00:08:31.044935 2885 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:08:31.055933 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 12 00:08:31.069126 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 12 00:08:31.075808 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 12 00:08:31.078784 kubelet[2885]: E0712 00:08:31.078730 2885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-146\" not found" Jul 12 00:08:31.082691 kubelet[2885]: E0712 00:08:31.082640 2885 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 12 00:08:31.083562 kubelet[2885]: I0712 00:08:31.082956 2885 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:08:31.083562 kubelet[2885]: I0712 00:08:31.082990 2885 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:08:31.083562 kubelet[2885]: I0712 00:08:31.083376 2885 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:08:31.086625 kubelet[2885]: E0712 00:08:31.086549 2885 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 12 00:08:31.086791 kubelet[2885]: E0712 00:08:31.086756 2885 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-146\" not found" Jul 12 00:08:31.135256 systemd[1]: Created slice kubepods-burstable-pod138bab15fd261a7b078137864f72bc81.slice - libcontainer container kubepods-burstable-pod138bab15fd261a7b078137864f72bc81.slice. 
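With CgroupDriver "systemd" (from the NodeConfig above), the kubelet delegates pod cgroups to slices: kubepods.slice, per-QoS child slices (kubepods-burstable.slice, kubepods-besteffort.slice), then one slice per pod with the UID's dashes mapped to underscores, since "-" is systemd's hierarchy separator. A naming sketch; the guaranteed-QoS case is an assumption based on the standard kubelet layout:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice builds the systemd slice name for a pod, matching the
    // kubepods-<qos>-pod<uid>.slice units created in the log above.
    func podSlice(qos, uid string) string {
        uid = strings.ReplaceAll(uid, "-", "_") // "-" is reserved by systemd
        if qos == "guaranteed" {
            return "kubepods-pod" + uid + ".slice" // assumed: no QoS infix
        }
        return "kubepods-" + qos + "-pod" + uid + ".slice"
    }

    func main() {
        fmt.Println(podSlice("burstable", "138bab15fd261a7b078137864f72bc81"))
        // kubepods-burstable-pod138bab15fd261a7b078137864f72bc81.slice
    }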
Jul 12 00:08:31.151162 kubelet[2885]: E0712 00:08:31.151078 2885 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-146\" not found" node="ip-172-31-28-146" Jul 12 00:08:31.157824 systemd[1]: Created slice kubepods-burstable-pod205441c9ccc863fe454e5a4122cbc10c.slice - libcontainer container kubepods-burstable-pod205441c9ccc863fe454e5a4122cbc10c.slice. Jul 12 00:08:31.162560 kubelet[2885]: E0712 00:08:31.162505 2885 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-146\" not found" node="ip-172-31-28-146" Jul 12 00:08:31.176942 systemd[1]: Created slice kubepods-burstable-pod44d36fbef3000211ad45dfb7dd191de9.slice - libcontainer container kubepods-burstable-pod44d36fbef3000211ad45dfb7dd191de9.slice. Jul 12 00:08:31.180019 kubelet[2885]: I0712 00:08:31.179939 2885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/205441c9ccc863fe454e5a4122cbc10c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-146\" (UID: \"205441c9ccc863fe454e5a4122cbc10c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-146" Jul 12 00:08:31.180132 kubelet[2885]: I0712 00:08:31.180026 2885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44d36fbef3000211ad45dfb7dd191de9-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-146\" (UID: \"44d36fbef3000211ad45dfb7dd191de9\") " pod="kube-system/kube-scheduler-ip-172-31-28-146" Jul 12 00:08:31.180132 kubelet[2885]: I0712 00:08:31.180096 2885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/138bab15fd261a7b078137864f72bc81-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-146\" (UID: \"138bab15fd261a7b078137864f72bc81\") " pod="kube-system/kube-apiserver-ip-172-31-28-146" Jul 12 00:08:31.180275 kubelet[2885]: I0712 00:08:31.180161 2885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/205441c9ccc863fe454e5a4122cbc10c-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-146\" (UID: \"205441c9ccc863fe454e5a4122cbc10c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-146" Jul 12 00:08:31.180275 kubelet[2885]: I0712 00:08:31.180201 2885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/205441c9ccc863fe454e5a4122cbc10c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-146\" (UID: \"205441c9ccc863fe454e5a4122cbc10c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-146" Jul 12 00:08:31.180275 kubelet[2885]: I0712 00:08:31.180265 2885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/205441c9ccc863fe454e5a4122cbc10c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-146\" (UID: \"205441c9ccc863fe454e5a4122cbc10c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-146" Jul 12 00:08:31.180433 kubelet[2885]: I0712 00:08:31.180335 2885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/138bab15fd261a7b078137864f72bc81-ca-certs\") pod \"kube-apiserver-ip-172-31-28-146\" (UID: \"138bab15fd261a7b078137864f72bc81\") " pod="kube-system/kube-apiserver-ip-172-31-28-146" Jul 12 00:08:31.180433 kubelet[2885]: I0712 00:08:31.180380 2885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/138bab15fd261a7b078137864f72bc81-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-146\" (UID: \"138bab15fd261a7b078137864f72bc81\") " pod="kube-system/kube-apiserver-ip-172-31-28-146" Jul 12 00:08:31.181140 kubelet[2885]: I0712 00:08:31.180443 2885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/205441c9ccc863fe454e5a4122cbc10c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-146\" (UID: \"205441c9ccc863fe454e5a4122cbc10c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-146" Jul 12 00:08:31.182260 kubelet[2885]: E0712 00:08:31.182208 2885 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-146\" not found" node="ip-172-31-28-146" Jul 12 00:08:31.184289 kubelet[2885]: E0712 00:08:31.184220 2885 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-146?timeout=10s\": dial tcp 172.31.28.146:6443: connect: connection refused" interval="400ms" Jul 12 00:08:31.186467 kubelet[2885]: I0712 00:08:31.186418 2885 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-146" Jul 12 00:08:31.187220 kubelet[2885]: E0712 00:08:31.187127 2885 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.146:6443/api/v1/nodes\": dial tcp 172.31.28.146:6443: connect: connection refused" node="ip-172-31-28-146" Jul 12 00:08:31.390539 kubelet[2885]: I0712 00:08:31.390385 2885 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-146" Jul 12 00:08:31.391314 kubelet[2885]: E0712 00:08:31.391253 2885 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.146:6443/api/v1/nodes\": dial tcp 172.31.28.146:6443: connect: connection refused" node="ip-172-31-28-146" Jul 12 00:08:31.453900 containerd[2033]: time="2025-07-12T00:08:31.453802045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-146,Uid:138bab15fd261a7b078137864f72bc81,Namespace:kube-system,Attempt:0,}" Jul 12 00:08:31.464059 containerd[2033]: time="2025-07-12T00:08:31.464000137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-146,Uid:205441c9ccc863fe454e5a4122cbc10c,Namespace:kube-system,Attempt:0,}" Jul 12 00:08:31.482618 kubelet[2885]: E0712 00:08:31.482445 2885 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.146:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.146:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-146.18515862b2e1f95f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-146,UID:ip-172-31-28-146,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-146,},FirstTimestamp:2025-07-12 00:08:30.955026783 +0000 UTC m=+1.658602078,LastTimestamp:2025-07-12 00:08:30.955026783 +0000 UTC m=+1.658602078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-146,}" Jul 12 00:08:31.489650 containerd[2033]: time="2025-07-12T00:08:31.489264925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-146,Uid:44d36fbef3000211ad45dfb7dd191de9,Namespace:kube-system,Attempt:0,}" Jul 12 00:08:31.585380 kubelet[2885]: E0712 00:08:31.585301 2885 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-146?timeout=10s\": dial tcp 172.31.28.146:6443: connect: connection refused" interval="800ms" Jul 12 00:08:31.793730 kubelet[2885]: I0712 00:08:31.793660 2885 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-146" Jul 12 00:08:31.794195 kubelet[2885]: E0712 00:08:31.794150 2885 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.146:6443/api/v1/nodes\": dial tcp 172.31.28.146:6443: connect: connection refused" node="ip-172-31-28-146" Jul 12 00:08:31.814854 kubelet[2885]: E0712 00:08:31.814783 2885 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-146&limit=500&resourceVersion=0\": dial tcp 172.31.28.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 12 00:08:31.887024 kubelet[2885]: E0712 00:08:31.886965 2885 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 12 00:08:31.927464 kubelet[2885]: E0712 00:08:31.927099 2885 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 12 00:08:31.942050 kubelet[2885]: E0712 00:08:31.941980 2885 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 12 00:08:32.014083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1017849647.mount: Deactivated successfully. 
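The "m=+1.658602078" suffix in the event timestamps above is Go's monotonic clock reading, which time.Time.String() appends while the reading is still attached; here it says the event fired about 1.66s after kubelet start. A two-line demonstration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        time.Sleep(50 * time.Millisecond)
        fmt.Println(time.Now())        // prints "... m=+0.05xxxxxxx" (approximate)
        fmt.Println(time.Since(start)) // computed from the monotonic reading
    }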
Jul 12 00:08:32.026270 containerd[2033]: time="2025-07-12T00:08:32.026171016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:08:32.031232 containerd[2033]: time="2025-07-12T00:08:32.031161672Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:08:32.035643 containerd[2033]: time="2025-07-12T00:08:32.033928464Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:08:32.041990 containerd[2033]: time="2025-07-12T00:08:32.041929428Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 12 00:08:32.045908 containerd[2033]: time="2025-07-12T00:08:32.045751848Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:08:32.049710 containerd[2033]: time="2025-07-12T00:08:32.049656720Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:08:32.052719 containerd[2033]: time="2025-07-12T00:08:32.052636296Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:08:32.052964 containerd[2033]: time="2025-07-12T00:08:32.052906032Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 588.793455ms" Jul 12 00:08:32.059844 containerd[2033]: time="2025-07-12T00:08:32.059784984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:08:32.066497 containerd[2033]: time="2025-07-12T00:08:32.066437892Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 612.501351ms" Jul 12 00:08:32.067966 containerd[2033]: time="2025-07-12T00:08:32.067910280Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 578.537259ms" Jul 12 00:08:32.254415 containerd[2033]: time="2025-07-12T00:08:32.254247445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:32.256711 containerd[2033]: time="2025-07-12T00:08:32.256459825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:32.256711 containerd[2033]: time="2025-07-12T00:08:32.256659541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:32.257120 containerd[2033]: time="2025-07-12T00:08:32.256959601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:32.263219 containerd[2033]: time="2025-07-12T00:08:32.262224553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:32.263219 containerd[2033]: time="2025-07-12T00:08:32.262318297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:32.263219 containerd[2033]: time="2025-07-12T00:08:32.262352209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:32.263219 containerd[2033]: time="2025-07-12T00:08:32.262492897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:32.266359 containerd[2033]: time="2025-07-12T00:08:32.265912441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:32.266359 containerd[2033]: time="2025-07-12T00:08:32.266008813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:32.266359 containerd[2033]: time="2025-07-12T00:08:32.266055637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:32.266359 containerd[2033]: time="2025-07-12T00:08:32.266208937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:32.301320 systemd[1]: Started cri-containerd-fdf3e337cfed487fcd134dac9d381341a738623c9601d46ce37a6757928cfb12.scope - libcontainer container fdf3e337cfed487fcd134dac9d381341a738623c9601d46ce37a6757928cfb12. Jul 12 00:08:32.327822 systemd[1]: Started cri-containerd-75b2f5e84b7a0d5c61bdb16725c931c1ef8f753510ae531f043e0d9f0ccd1ce2.scope - libcontainer container 75b2f5e84b7a0d5c61bdb16725c931c1ef8f753510ae531f043e0d9f0ccd1ce2. Jul 12 00:08:32.339567 systemd[1]: Started cri-containerd-cb90e3a71653a3b5a5f9fe701c5c348279f564cac903891a729517259949f9c7.scope - libcontainer container cb90e3a71653a3b5a5f9fe701c5c348279f564cac903891a729517259949f9c7. 
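Each sandbox started above runs in a transient scope unit named after its full container ID (cri-containerd-fdf3e337….scope and so on), which is containerd's convention when driving systemd-managed cgroups. A naming sketch under that assumption:

    package main

    import "fmt"

    // scopeUnit derives the transient systemd unit name from a container ID,
    // matching the cri-containerd-<id>.scope units in the log above.
    func scopeUnit(containerID string) string {
        return "cri-containerd-" + containerID + ".scope"
    }

    func main() {
        fmt.Println(scopeUnit("fdf3e337cfed487fcd134dac9d381341a738623c9601d46ce37a6757928cfb12"))
    }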
Jul 12 00:08:32.386543 kubelet[2885]: E0712 00:08:32.386368 2885 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-146?timeout=10s\": dial tcp 172.31.28.146:6443: connect: connection refused" interval="1.6s" Jul 12 00:08:32.435565 containerd[2033]: time="2025-07-12T00:08:32.435468350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-146,Uid:138bab15fd261a7b078137864f72bc81,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdf3e337cfed487fcd134dac9d381341a738623c9601d46ce37a6757928cfb12\"" Jul 12 00:08:32.449680 containerd[2033]: time="2025-07-12T00:08:32.449308466Z" level=info msg="CreateContainer within sandbox \"fdf3e337cfed487fcd134dac9d381341a738623c9601d46ce37a6757928cfb12\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:08:32.459471 containerd[2033]: time="2025-07-12T00:08:32.459027290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-146,Uid:205441c9ccc863fe454e5a4122cbc10c,Namespace:kube-system,Attempt:0,} returns sandbox id \"75b2f5e84b7a0d5c61bdb16725c931c1ef8f753510ae531f043e0d9f0ccd1ce2\"" Jul 12 00:08:32.470267 containerd[2033]: time="2025-07-12T00:08:32.469883966Z" level=info msg="CreateContainer within sandbox \"75b2f5e84b7a0d5c61bdb16725c931c1ef8f753510ae531f043e0d9f0ccd1ce2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:08:32.476670 containerd[2033]: time="2025-07-12T00:08:32.476553002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-146,Uid:44d36fbef3000211ad45dfb7dd191de9,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb90e3a71653a3b5a5f9fe701c5c348279f564cac903891a729517259949f9c7\"" Jul 12 00:08:32.487648 containerd[2033]: time="2025-07-12T00:08:32.487242962Z" level=info msg="CreateContainer within sandbox \"cb90e3a71653a3b5a5f9fe701c5c348279f564cac903891a729517259949f9c7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:08:32.508851 containerd[2033]: time="2025-07-12T00:08:32.508719830Z" level=info msg="CreateContainer within sandbox \"fdf3e337cfed487fcd134dac9d381341a738623c9601d46ce37a6757928cfb12\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8bf29e08c51ddcdfc97eaea9189c0c3e9f44eafa821df8334308b75d1b329008\"" Jul 12 00:08:32.509981 containerd[2033]: time="2025-07-12T00:08:32.509927270Z" level=info msg="StartContainer for \"8bf29e08c51ddcdfc97eaea9189c0c3e9f44eafa821df8334308b75d1b329008\"" Jul 12 00:08:32.516430 containerd[2033]: time="2025-07-12T00:08:32.516370202Z" level=info msg="CreateContainer within sandbox \"75b2f5e84b7a0d5c61bdb16725c931c1ef8f753510ae531f043e0d9f0ccd1ce2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a79e9c5137111e4e5c17d54457ee1cd7db65a29632078a2b658a74f5bcbfc4e4\"" Jul 12 00:08:32.518977 containerd[2033]: time="2025-07-12T00:08:32.517945130Z" level=info msg="StartContainer for \"a79e9c5137111e4e5c17d54457ee1cd7db65a29632078a2b658a74f5bcbfc4e4\"" Jul 12 00:08:32.535365 containerd[2033]: time="2025-07-12T00:08:32.535306502Z" level=info msg="CreateContainer within sandbox \"cb90e3a71653a3b5a5f9fe701c5c348279f564cac903891a729517259949f9c7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3f1bc54f17d900d747dec2fe5cd1ce307424ecb829afa22885fbb95a1f24c765\"" Jul 12 00:08:32.537632 
containerd[2033]: time="2025-07-12T00:08:32.537511322Z" level=info msg="StartContainer for \"3f1bc54f17d900d747dec2fe5cd1ce307424ecb829afa22885fbb95a1f24c765\"" Jul 12 00:08:32.580036 systemd[1]: Started cri-containerd-8bf29e08c51ddcdfc97eaea9189c0c3e9f44eafa821df8334308b75d1b329008.scope - libcontainer container 8bf29e08c51ddcdfc97eaea9189c0c3e9f44eafa821df8334308b75d1b329008. Jul 12 00:08:32.601826 kubelet[2885]: I0712 00:08:32.600209 2885 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-146" Jul 12 00:08:32.601826 kubelet[2885]: E0712 00:08:32.601128 2885 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.146:6443/api/v1/nodes\": dial tcp 172.31.28.146:6443: connect: connection refused" node="ip-172-31-28-146" Jul 12 00:08:32.601458 systemd[1]: Started cri-containerd-a79e9c5137111e4e5c17d54457ee1cd7db65a29632078a2b658a74f5bcbfc4e4.scope - libcontainer container a79e9c5137111e4e5c17d54457ee1cd7db65a29632078a2b658a74f5bcbfc4e4. Jul 12 00:08:32.627907 systemd[1]: Started cri-containerd-3f1bc54f17d900d747dec2fe5cd1ce307424ecb829afa22885fbb95a1f24c765.scope - libcontainer container 3f1bc54f17d900d747dec2fe5cd1ce307424ecb829afa22885fbb95a1f24c765. Jul 12 00:08:32.742009 containerd[2033]: time="2025-07-12T00:08:32.741364683Z" level=info msg="StartContainer for \"a79e9c5137111e4e5c17d54457ee1cd7db65a29632078a2b658a74f5bcbfc4e4\" returns successfully" Jul 12 00:08:32.749546 containerd[2033]: time="2025-07-12T00:08:32.749134695Z" level=info msg="StartContainer for \"8bf29e08c51ddcdfc97eaea9189c0c3e9f44eafa821df8334308b75d1b329008\" returns successfully" Jul 12 00:08:32.764214 containerd[2033]: time="2025-07-12T00:08:32.764112412Z" level=info msg="StartContainer for \"3f1bc54f17d900d747dec2fe5cd1ce307424ecb829afa22885fbb95a1f24c765\" returns successfully" Jul 12 00:08:32.928643 kubelet[2885]: E0712 00:08:32.927018 2885 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.146:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.146:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 12 00:08:33.050329 kubelet[2885]: E0712 00:08:33.050158 2885 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-146\" not found" node="ip-172-31-28-146" Jul 12 00:08:33.055750 kubelet[2885]: E0712 00:08:33.054956 2885 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-146\" not found" node="ip-172-31-28-146" Jul 12 00:08:33.060629 kubelet[2885]: E0712 00:08:33.059684 2885 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-146\" not found" node="ip-172-31-28-146" Jul 12 00:08:34.025632 update_engine[2009]: I20250712 00:08:34.022639 2009 update_attempter.cc:509] Updating boot flags... 
Jul 12 00:08:34.066216 kubelet[2885]: E0712 00:08:34.066175 2885 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-146\" not found" node="ip-172-31-28-146" Jul 12 00:08:34.069819 kubelet[2885]: E0712 00:08:34.067225 2885 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-146\" not found" node="ip-172-31-28-146" Jul 12 00:08:34.167625 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3178) Jul 12 00:08:34.205958 kubelet[2885]: I0712 00:08:34.205475 2885 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-146" Jul 12 00:08:34.551659 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3169) Jul 12 00:08:34.992831 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3169) Jul 12 00:08:37.765803 kubelet[2885]: E0712 00:08:37.765724 2885 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-146\" not found" node="ip-172-31-28-146" Jul 12 00:08:37.813391 kubelet[2885]: I0712 00:08:37.812699 2885 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-146" Jul 12 00:08:37.879300 kubelet[2885]: I0712 00:08:37.879242 2885 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-146" Jul 12 00:08:37.962246 kubelet[2885]: I0712 00:08:37.961987 2885 apiserver.go:52] "Watching apiserver" Jul 12 00:08:37.979560 kubelet[2885]: I0712 00:08:37.979498 2885 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:08:38.006626 kubelet[2885]: E0712 00:08:38.005579 2885 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-146\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-146" Jul 12 00:08:38.006626 kubelet[2885]: I0712 00:08:38.005647 2885 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-146" Jul 12 00:08:38.024573 kubelet[2885]: E0712 00:08:38.023958 2885 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-146\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-146" Jul 12 00:08:38.024573 kubelet[2885]: I0712 00:08:38.024011 2885 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-146" Jul 12 00:08:38.047045 kubelet[2885]: E0712 00:08:38.046972 2885 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-146\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-146" Jul 12 00:08:38.369587 kubelet[2885]: I0712 00:08:38.369430 2885 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-146" Jul 12 00:08:40.663393 systemd[1]: Reloading requested from client PID 3438 ('systemctl') (unit session-9.scope)... Jul 12 00:08:40.663419 systemd[1]: Reloading... Jul 12 00:08:40.843763 zram_generator::config[3484]: No configuration found. 
Jul 12 00:08:40.859237 kubelet[2885]: I0712 00:08:40.859045 2885 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-146" Jul 12 00:08:41.049853 kubelet[2885]: I0712 00:08:41.049757 2885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-146" podStartSLOduration=1.049737141 podStartE2EDuration="1.049737141s" podCreationTimestamp="2025-07-12 00:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:41.049397013 +0000 UTC m=+11.752972320" watchObservedRunningTime="2025-07-12 00:08:41.049737141 +0000 UTC m=+11.753312424" Jul 12 00:08:41.078339 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:08:41.291985 systemd[1]: Reloading finished in 627 ms. Jul 12 00:08:41.366343 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:08:41.386325 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:08:41.386827 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:08:41.386923 systemd[1]: kubelet.service: Consumed 2.456s CPU time, 130.5M memory peak, 0B memory swap peak. Jul 12 00:08:41.393080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:08:41.740909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:08:41.749322 (kubelet)[3541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:08:41.844176 kubelet[3541]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:08:41.844176 kubelet[3541]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 00:08:41.844176 kubelet[3541]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
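In the pod startup SLO line above, firstStartedPulling and lastFinishedPulling print as "0001-01-01 00:00:00 +0000 UTC", which is Go's zero time.Time: no image pull was needed for that pod (the images were already present), so podStartSLOduration equals the plain end-to-end startup duration. For reference:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        var never time.Time            // zero value, never set
        fmt.Println(never)             // 0001-01-01 00:00:00 +0000 UTC
        fmt.Println(never.IsZero())    // true: "no pull happened" marker
    }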
Jul 12 00:08:41.844759 kubelet[3541]: I0712 00:08:41.844339 3541 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:08:41.858816 kubelet[3541]: I0712 00:08:41.858728 3541 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 12 00:08:41.858816 kubelet[3541]: I0712 00:08:41.858771 3541 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:08:41.860680 kubelet[3541]: I0712 00:08:41.859811 3541 server.go:956] "Client rotation is on, will bootstrap in background" Jul 12 00:08:41.863165 kubelet[3541]: I0712 00:08:41.863131 3541 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 12 00:08:41.869172 kubelet[3541]: I0712 00:08:41.869122 3541 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:08:41.878835 kubelet[3541]: E0712 00:08:41.878329 3541 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:08:41.878835 kubelet[3541]: I0712 00:08:41.878384 3541 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:08:41.888373 kubelet[3541]: I0712 00:08:41.888106 3541 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 12 00:08:41.888619 kubelet[3541]: I0712 00:08:41.888546 3541 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:08:41.889137 kubelet[3541]: I0712 00:08:41.888677 3541 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-146","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:08:41.889297 kubelet[3541]: I0712 00:08:41.889153 3541 topology_manager.go:138] "Creating 
topology manager with none policy" Jul 12 00:08:41.889297 kubelet[3541]: I0712 00:08:41.889179 3541 container_manager_linux.go:303] "Creating device plugin manager" Jul 12 00:08:41.889297 kubelet[3541]: I0712 00:08:41.889258 3541 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:08:41.890574 kubelet[3541]: I0712 00:08:41.889497 3541 kubelet.go:480] "Attempting to sync node with API server" Jul 12 00:08:41.890574 kubelet[3541]: I0712 00:08:41.889543 3541 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:08:41.890574 kubelet[3541]: I0712 00:08:41.889666 3541 kubelet.go:386] "Adding apiserver pod source" Jul 12 00:08:41.890574 kubelet[3541]: I0712 00:08:41.889698 3541 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:08:41.896312 kubelet[3541]: I0712 00:08:41.896267 3541 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:08:41.899179 kubelet[3541]: I0712 00:08:41.897509 3541 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 12 00:08:41.907341 kubelet[3541]: I0712 00:08:41.907248 3541 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:08:41.908465 kubelet[3541]: I0712 00:08:41.908438 3541 server.go:1289] "Started kubelet" Jul 12 00:08:41.915080 kubelet[3541]: I0712 00:08:41.915004 3541 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:08:41.922256 kubelet[3541]: I0712 00:08:41.922191 3541 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:08:41.937556 kubelet[3541]: I0712 00:08:41.936997 3541 server.go:317] "Adding debug handlers to kubelet server" Jul 12 00:08:41.938594 kubelet[3541]: I0712 00:08:41.934012 3541 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:08:41.940086 kubelet[3541]: I0712 00:08:41.940059 3541 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:08:41.940654 kubelet[3541]: E0712 00:08:41.940582 3541 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-146\" not found" Jul 12 00:08:41.942637 kubelet[3541]: I0712 00:08:41.942087 3541 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:08:41.943186 kubelet[3541]: I0712 00:08:41.943073 3541 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:08:41.956718 kubelet[3541]: I0712 00:08:41.924575 3541 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:08:41.957160 kubelet[3541]: I0712 00:08:41.956980 3541 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:08:41.973394 kubelet[3541]: I0712 00:08:41.969297 3541 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:08:41.981117 kubelet[3541]: I0712 00:08:41.978159 3541 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 12 00:08:41.983716 kubelet[3541]: I0712 00:08:41.983663 3541 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 12 00:08:41.983716 kubelet[3541]: I0712 00:08:41.983708 3541 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 12 00:08:41.983920 kubelet[3541]: I0712 00:08:41.983743 3541 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 12 00:08:41.983920 kubelet[3541]: I0712 00:08:41.983765 3541 kubelet.go:2436] "Starting kubelet main sync loop" Jul 12 00:08:41.983920 kubelet[3541]: E0712 00:08:41.983833 3541 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:08:42.010363 kubelet[3541]: I0712 00:08:42.008889 3541 factory.go:223] Registration of the containerd container factory successfully Jul 12 00:08:42.010363 kubelet[3541]: I0712 00:08:42.008929 3541 factory.go:223] Registration of the systemd container factory successfully Jul 12 00:08:42.085143 kubelet[3541]: E0712 00:08:42.085074 3541 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:08:42.134241 kubelet[3541]: I0712 00:08:42.134164 3541 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:08:42.134241 kubelet[3541]: I0712 00:08:42.134197 3541 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:08:42.134241 kubelet[3541]: I0712 00:08:42.134234 3541 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:08:42.134542 kubelet[3541]: I0712 00:08:42.134511 3541 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:08:42.134688 kubelet[3541]: I0712 00:08:42.134543 3541 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:08:42.134688 kubelet[3541]: I0712 00:08:42.134636 3541 policy_none.go:49] "None policy: Start" Jul 12 00:08:42.134688 kubelet[3541]: I0712 00:08:42.134687 3541 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:08:42.134837 kubelet[3541]: I0712 00:08:42.134712 3541 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:08:42.134934 kubelet[3541]: I0712 00:08:42.134908 3541 state_mem.go:75] "Updated machine memory state" Jul 12 00:08:42.148173 kubelet[3541]: E0712 00:08:42.148072 3541 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 12 00:08:42.148388 kubelet[3541]: I0712 00:08:42.148353 3541 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:08:42.148529 kubelet[3541]: I0712 00:08:42.148388 3541 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:08:42.149441 kubelet[3541]: I0712 00:08:42.149297 3541 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:08:42.155175 kubelet[3541]: E0712 00:08:42.152768 3541 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 12 00:08:42.263765 kubelet[3541]: I0712 00:08:42.263585 3541 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-146" Jul 12 00:08:42.278705 kubelet[3541]: I0712 00:08:42.278153 3541 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-146" Jul 12 00:08:42.278705 kubelet[3541]: I0712 00:08:42.278266 3541 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-146" Jul 12 00:08:42.287741 kubelet[3541]: I0712 00:08:42.287679 3541 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-146" Jul 12 00:08:42.288300 kubelet[3541]: I0712 00:08:42.288272 3541 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-146" Jul 12 00:08:42.290061 kubelet[3541]: I0712 00:08:42.290005 3541 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-146" Jul 12 00:08:42.310862 kubelet[3541]: E0712 00:08:42.310783 3541 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-146\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-28-146" Jul 12 00:08:42.311107 kubelet[3541]: E0712 00:08:42.311031 3541 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-146\" already exists" pod="kube-system/kube-scheduler-ip-172-31-28-146" Jul 12 00:08:42.346508 kubelet[3541]: I0712 00:08:42.346029 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44d36fbef3000211ad45dfb7dd191de9-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-146\" (UID: \"44d36fbef3000211ad45dfb7dd191de9\") " pod="kube-system/kube-scheduler-ip-172-31-28-146" Jul 12 00:08:42.346508 kubelet[3541]: I0712 00:08:42.346099 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/138bab15fd261a7b078137864f72bc81-ca-certs\") pod \"kube-apiserver-ip-172-31-28-146\" (UID: \"138bab15fd261a7b078137864f72bc81\") " pod="kube-system/kube-apiserver-ip-172-31-28-146" Jul 12 00:08:42.346508 kubelet[3541]: I0712 00:08:42.346141 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/138bab15fd261a7b078137864f72bc81-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-146\" (UID: \"138bab15fd261a7b078137864f72bc81\") " pod="kube-system/kube-apiserver-ip-172-31-28-146" Jul 12 00:08:42.346508 kubelet[3541]: I0712 00:08:42.346177 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/205441c9ccc863fe454e5a4122cbc10c-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-146\" (UID: \"205441c9ccc863fe454e5a4122cbc10c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-146" Jul 12 00:08:42.346508 kubelet[3541]: I0712 00:08:42.346219 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/205441c9ccc863fe454e5a4122cbc10c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-146\" (UID: \"205441c9ccc863fe454e5a4122cbc10c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-146" Jul 12 00:08:42.346938 kubelet[3541]: I0712 00:08:42.346254 3541 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/138bab15fd261a7b078137864f72bc81-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-146\" (UID: \"138bab15fd261a7b078137864f72bc81\") " pod="kube-system/kube-apiserver-ip-172-31-28-146" Jul 12 00:08:42.346938 kubelet[3541]: I0712 00:08:42.346291 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/205441c9ccc863fe454e5a4122cbc10c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-146\" (UID: \"205441c9ccc863fe454e5a4122cbc10c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-146" Jul 12 00:08:42.346938 kubelet[3541]: I0712 00:08:42.346324 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/205441c9ccc863fe454e5a4122cbc10c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-146\" (UID: \"205441c9ccc863fe454e5a4122cbc10c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-146" Jul 12 00:08:42.346938 kubelet[3541]: I0712 00:08:42.346371 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/205441c9ccc863fe454e5a4122cbc10c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-146\" (UID: \"205441c9ccc863fe454e5a4122cbc10c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-146" Jul 12 00:08:42.893925 kubelet[3541]: I0712 00:08:42.893784 3541 apiserver.go:52] "Watching apiserver" Jul 12 00:08:42.944003 kubelet[3541]: I0712 00:08:42.943952 3541 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:08:43.086533 kubelet[3541]: I0712 00:08:43.085064 3541 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-146" Jul 12 00:08:43.098963 kubelet[3541]: E0712 00:08:43.098879 3541 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-146\" already exists" pod="kube-system/kube-scheduler-ip-172-31-28-146" Jul 12 00:08:43.175459 kubelet[3541]: I0712 00:08:43.174410 3541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-146" podStartSLOduration=1.174387407 podStartE2EDuration="1.174387407s" podCreationTimestamp="2025-07-12 00:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:43.151261031 +0000 UTC m=+1.393991552" watchObservedRunningTime="2025-07-12 00:08:43.174387407 +0000 UTC m=+1.417117916" Jul 12 00:08:48.048550 kubelet[3541]: I0712 00:08:48.048410 3541 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:08:48.049796 containerd[2033]: time="2025-07-12T00:08:48.049624431Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 12 00:08:48.050311 kubelet[3541]: I0712 00:08:48.049969 3541 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 12 00:08:49.072508 systemd[1]: Created slice kubepods-besteffort-pod5aed15d6_cd00_4f6a_a2f5_60ee1be31762.slice - libcontainer container kubepods-besteffort-pod5aed15d6_cd00_4f6a_a2f5_60ee1be31762.slice.
Jul 12 00:08:49.090295 kubelet[3541]: I0712 00:08:49.090229 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5aed15d6-cd00-4f6a-a2f5-60ee1be31762-lib-modules\") pod \"kube-proxy-kz9pv\" (UID: \"5aed15d6-cd00-4f6a-a2f5-60ee1be31762\") " pod="kube-system/kube-proxy-kz9pv"
Jul 12 00:08:49.090951 kubelet[3541]: I0712 00:08:49.090302 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-545qr\" (UniqueName: \"kubernetes.io/projected/5aed15d6-cd00-4f6a-a2f5-60ee1be31762-kube-api-access-545qr\") pod \"kube-proxy-kz9pv\" (UID: \"5aed15d6-cd00-4f6a-a2f5-60ee1be31762\") " pod="kube-system/kube-proxy-kz9pv"
Jul 12 00:08:49.090951 kubelet[3541]: I0712 00:08:49.090352 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5aed15d6-cd00-4f6a-a2f5-60ee1be31762-xtables-lock\") pod \"kube-proxy-kz9pv\" (UID: \"5aed15d6-cd00-4f6a-a2f5-60ee1be31762\") " pod="kube-system/kube-proxy-kz9pv"
Jul 12 00:08:49.090951 kubelet[3541]: I0712 00:08:49.090395 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5aed15d6-cd00-4f6a-a2f5-60ee1be31762-kube-proxy\") pod \"kube-proxy-kz9pv\" (UID: \"5aed15d6-cd00-4f6a-a2f5-60ee1be31762\") " pod="kube-system/kube-proxy-kz9pv"
Jul 12 00:08:49.291538 systemd[1]: Created slice kubepods-besteffort-pod0a29a11a_42b7_4b22_947e_78ebd969e53b.slice - libcontainer container kubepods-besteffort-pod0a29a11a_42b7_4b22_947e_78ebd969e53b.slice.
Jul 12 00:08:49.292504 kubelet[3541]: I0712 00:08:49.292349 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67fsb\" (UniqueName: \"kubernetes.io/projected/0a29a11a-42b7-4b22-947e-78ebd969e53b-kube-api-access-67fsb\") pod \"tigera-operator-747864d56d-gt6tb\" (UID: \"0a29a11a-42b7-4b22-947e-78ebd969e53b\") " pod="tigera-operator/tigera-operator-747864d56d-gt6tb"
Jul 12 00:08:49.292504 kubelet[3541]: I0712 00:08:49.292420 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0a29a11a-42b7-4b22-947e-78ebd969e53b-var-lib-calico\") pod \"tigera-operator-747864d56d-gt6tb\" (UID: \"0a29a11a-42b7-4b22-947e-78ebd969e53b\") " pod="tigera-operator/tigera-operator-747864d56d-gt6tb"
Jul 12 00:08:49.390446 containerd[2033]: time="2025-07-12T00:08:49.390053910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kz9pv,Uid:5aed15d6-cd00-4f6a-a2f5-60ee1be31762,Namespace:kube-system,Attempt:0,}"
Jul 12 00:08:49.450313 containerd[2033]: time="2025-07-12T00:08:49.449852754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:08:49.450313 containerd[2033]: time="2025-07-12T00:08:49.449954826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:08:49.450313 containerd[2033]: time="2025-07-12T00:08:49.450016266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:49.450313 containerd[2033]: time="2025-07-12T00:08:49.450196182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:49.487942 systemd[1]: Started cri-containerd-bd0dc61dc27fb3d6fa00bcc910f097b10f40c14f909af249b0ace750e4364219.scope - libcontainer container bd0dc61dc27fb3d6fa00bcc910f097b10f40c14f909af249b0ace750e4364219.
Jul 12 00:08:49.532464 containerd[2033]: time="2025-07-12T00:08:49.532415947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kz9pv,Uid:5aed15d6-cd00-4f6a-a2f5-60ee1be31762,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd0dc61dc27fb3d6fa00bcc910f097b10f40c14f909af249b0ace750e4364219\""
Jul 12 00:08:49.544259 containerd[2033]: time="2025-07-12T00:08:49.544176211Z" level=info msg="CreateContainer within sandbox \"bd0dc61dc27fb3d6fa00bcc910f097b10f40c14f909af249b0ace750e4364219\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 12 00:08:49.576484 containerd[2033]: time="2025-07-12T00:08:49.576399907Z" level=info msg="CreateContainer within sandbox \"bd0dc61dc27fb3d6fa00bcc910f097b10f40c14f909af249b0ace750e4364219\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b4d5f5a8206d1af3cc36f4ec2391a9f59de0b82e0479cfa270904b3ce9638b3d\""
Jul 12 00:08:49.578095 containerd[2033]: time="2025-07-12T00:08:49.577834987Z" level=info msg="StartContainer for \"b4d5f5a8206d1af3cc36f4ec2391a9f59de0b82e0479cfa270904b3ce9638b3d\""
Jul 12 00:08:49.597657 containerd[2033]: time="2025-07-12T00:08:49.597461959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-gt6tb,Uid:0a29a11a-42b7-4b22-947e-78ebd969e53b,Namespace:tigera-operator,Attempt:0,}"
Jul 12 00:08:49.635372 systemd[1]: Started cri-containerd-b4d5f5a8206d1af3cc36f4ec2391a9f59de0b82e0479cfa270904b3ce9638b3d.scope - libcontainer container b4d5f5a8206d1af3cc36f4ec2391a9f59de0b82e0479cfa270904b3ce9638b3d.
Jul 12 00:08:49.663331 containerd[2033]: time="2025-07-12T00:08:49.662768131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:08:49.663331 containerd[2033]: time="2025-07-12T00:08:49.663034147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:08:49.664422 containerd[2033]: time="2025-07-12T00:08:49.663078055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:49.664894 containerd[2033]: time="2025-07-12T00:08:49.664668163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:49.715881 containerd[2033]: time="2025-07-12T00:08:49.715703468Z" level=info msg="StartContainer for \"b4d5f5a8206d1af3cc36f4ec2391a9f59de0b82e0479cfa270904b3ce9638b3d\" returns successfully"
Jul 12 00:08:49.719892 systemd[1]: Started cri-containerd-0b3983fe1f2fe948c6e3f3130adfeeed879a8ead97ebb9cd07896d7e5f08ff6a.scope - libcontainer container 0b3983fe1f2fe948c6e3f3130adfeeed879a8ead97ebb9cd07896d7e5f08ff6a.
Jul 12 00:08:49.807153 containerd[2033]: time="2025-07-12T00:08:49.806588612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-gt6tb,Uid:0a29a11a-42b7-4b22-947e-78ebd969e53b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0b3983fe1f2fe948c6e3f3130adfeeed879a8ead97ebb9cd07896d7e5f08ff6a\""
Jul 12 00:08:49.810516 containerd[2033]: time="2025-07-12T00:08:49.810469568Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 12 00:08:50.632322 kubelet[3541]: I0712 00:08:50.631549 3541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kz9pv" podStartSLOduration=1.631526276 podStartE2EDuration="1.631526276s" podCreationTimestamp="2025-07-12 00:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:50.135303918 +0000 UTC m=+8.378034439" watchObservedRunningTime="2025-07-12 00:08:50.631526276 +0000 UTC m=+8.874256785"
Jul 12 00:08:51.025883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2645906801.mount: Deactivated successfully.
Jul 12 00:08:51.767749 containerd[2033]: time="2025-07-12T00:08:51.767691562Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:08:51.770439 containerd[2033]: time="2025-07-12T00:08:51.770374210Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Jul 12 00:08:51.773296 containerd[2033]: time="2025-07-12T00:08:51.773201158Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:08:51.779728 containerd[2033]: time="2025-07-12T00:08:51.779644006Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:08:51.781243 containerd[2033]: time="2025-07-12T00:08:51.781184086Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.970287078s"
Jul 12 00:08:51.781366 containerd[2033]: time="2025-07-12T00:08:51.781241374Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Jul 12 00:08:51.790251 containerd[2033]: time="2025-07-12T00:08:51.790192006Z" level=info msg="CreateContainer within sandbox \"0b3983fe1f2fe948c6e3f3130adfeeed879a8ead97ebb9cd07896d7e5f08ff6a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 12 00:08:51.816251 containerd[2033]: time="2025-07-12T00:08:51.816171982Z" level=info msg="CreateContainer within sandbox \"0b3983fe1f2fe948c6e3f3130adfeeed879a8ead97ebb9cd07896d7e5f08ff6a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b322c4d2a07ba30da3bf056e553cb36cfce7553929c5fbe0ce4d4e76a43a4d6e\""
Jul 12 00:08:51.817233 containerd[2033]: time="2025-07-12T00:08:51.817137550Z" level=info msg="StartContainer for \"b322c4d2a07ba30da3bf056e553cb36cfce7553929c5fbe0ce4d4e76a43a4d6e\""
Jul 12 00:08:51.882927 systemd[1]: Started cri-containerd-b322c4d2a07ba30da3bf056e553cb36cfce7553929c5fbe0ce4d4e76a43a4d6e.scope - libcontainer container b322c4d2a07ba30da3bf056e553cb36cfce7553929c5fbe0ce4d4e76a43a4d6e.
Jul 12 00:08:51.930493 containerd[2033]: time="2025-07-12T00:08:51.930378539Z" level=info msg="StartContainer for \"b322c4d2a07ba30da3bf056e553cb36cfce7553929c5fbe0ce4d4e76a43a4d6e\" returns successfully"
Jul 12 00:08:58.542313 sudo[2364]: pam_unix(sudo:session): session closed for user root
Jul 12 00:08:58.567366 sshd[2359]: pam_unix(sshd:session): session closed for user core
Jul 12 00:08:58.577111 systemd[1]: sshd@8-172.31.28.146:22-139.178.89.65:46610.service: Deactivated successfully.
Jul 12 00:08:58.583164 systemd[1]: session-9.scope: Deactivated successfully.
Jul 12 00:08:58.583519 systemd[1]: session-9.scope: Consumed 12.291s CPU time, 150.7M memory peak, 0B memory swap peak.
Jul 12 00:08:58.586235 systemd-logind[2008]: Session 9 logged out. Waiting for processes to exit.
Jul 12 00:08:58.590182 systemd-logind[2008]: Removed session 9.
Jul 12 00:09:11.570349 kubelet[3541]: I0712 00:09:11.570226 3541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-gt6tb" podStartSLOduration=20.59654381 podStartE2EDuration="22.570181732s" podCreationTimestamp="2025-07-12 00:08:49 +0000 UTC" firstStartedPulling="2025-07-12 00:08:49.809350808 +0000 UTC m=+8.052081329" lastFinishedPulling="2025-07-12 00:08:51.782988742 +0000 UTC m=+10.025719251" observedRunningTime="2025-07-12 00:08:52.139060376 +0000 UTC m=+10.381790909" watchObservedRunningTime="2025-07-12 00:09:11.570181732 +0000 UTC m=+29.812912277"
Jul 12 00:09:11.591553 systemd[1]: Created slice kubepods-besteffort-podb8b6887e_483b_4f56_81c8_e2e421d1587b.slice - libcontainer container kubepods-besteffort-podb8b6887e_483b_4f56_81c8_e2e421d1587b.slice.
Jul 12 00:09:11.607069 kubelet[3541]: I0712 00:09:11.606875 3541 status_manager.go:895] "Failed to get status for pod" podUID="b8b6887e-483b-4f56-81c8-e2e421d1587b" pod="calico-system/calico-typha-6478c959b6-jn9sb" err="pods \"calico-typha-6478c959b6-jn9sb\" is forbidden: User \"system:node:ip-172-31-28-146\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-28-146' and this object"
Jul 12 00:09:11.607069 kubelet[3541]: E0712 00:09:11.606897 3541 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-28-146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-28-146' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Jul 12 00:09:11.607069 kubelet[3541]: E0712 00:09:11.607001 3541 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ip-172-31-28-146\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-28-146' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"typha-certs\"" type="*v1.Secret"
Jul 12 00:09:11.607069 kubelet[3541]: E0712 00:09:11.607020 3541 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ip-172-31-28-146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-28-146' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"tigera-ca-bundle\"" type="*v1.ConfigMap"
Jul 12 00:09:11.640933 kubelet[3541]: I0712 00:09:11.640860 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbjpt\" (UniqueName: \"kubernetes.io/projected/b8b6887e-483b-4f56-81c8-e2e421d1587b-kube-api-access-mbjpt\") pod \"calico-typha-6478c959b6-jn9sb\" (UID: \"b8b6887e-483b-4f56-81c8-e2e421d1587b\") " pod="calico-system/calico-typha-6478c959b6-jn9sb"
Jul 12 00:09:11.641090 kubelet[3541]: I0712 00:09:11.640940 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8b6887e-483b-4f56-81c8-e2e421d1587b-tigera-ca-bundle\") pod \"calico-typha-6478c959b6-jn9sb\" (UID: \"b8b6887e-483b-4f56-81c8-e2e421d1587b\") " pod="calico-system/calico-typha-6478c959b6-jn9sb"
Jul 12 00:09:11.641090 kubelet[3541]: I0712 00:09:11.640995 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b8b6887e-483b-4f56-81c8-e2e421d1587b-typha-certs\") pod \"calico-typha-6478c959b6-jn9sb\" (UID: \"b8b6887e-483b-4f56-81c8-e2e421d1587b\") " pod="calico-system/calico-typha-6478c959b6-jn9sb"
Jul 12 00:09:11.910252 systemd[1]: Created slice kubepods-besteffort-pode7a84559_3afc_44b9_b9e2_057767597fb6.slice - libcontainer container kubepods-besteffort-pode7a84559_3afc_44b9_b9e2_057767597fb6.slice.
Jul 12 00:09:11.943264 kubelet[3541]: I0712 00:09:11.943195 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e7a84559-3afc-44b9-b9e2-057767597fb6-var-run-calico\") pod \"calico-node-wpqmc\" (UID: \"e7a84559-3afc-44b9-b9e2-057767597fb6\") " pod="calico-system/calico-node-wpqmc"
Jul 12 00:09:11.943471 kubelet[3541]: I0712 00:09:11.943271 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e7a84559-3afc-44b9-b9e2-057767597fb6-cni-log-dir\") pod \"calico-node-wpqmc\" (UID: \"e7a84559-3afc-44b9-b9e2-057767597fb6\") " pod="calico-system/calico-node-wpqmc"
Jul 12 00:09:11.943471 kubelet[3541]: I0712 00:09:11.943316 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e7a84559-3afc-44b9-b9e2-057767597fb6-cni-net-dir\") pod \"calico-node-wpqmc\" (UID: \"e7a84559-3afc-44b9-b9e2-057767597fb6\") " pod="calico-system/calico-node-wpqmc"
Jul 12 00:09:11.943471 kubelet[3541]: I0712 00:09:11.943361 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7a84559-3afc-44b9-b9e2-057767597fb6-tigera-ca-bundle\") pod \"calico-node-wpqmc\" (UID: \"e7a84559-3afc-44b9-b9e2-057767597fb6\") " pod="calico-system/calico-node-wpqmc"
Jul 12 00:09:11.943471 kubelet[3541]: I0712 00:09:11.943401 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e7a84559-3afc-44b9-b9e2-057767597fb6-cni-bin-dir\") pod \"calico-node-wpqmc\" (UID: \"e7a84559-3afc-44b9-b9e2-057767597fb6\") " pod="calico-system/calico-node-wpqmc"
Jul 12 00:09:11.943471 kubelet[3541]: I0712 00:09:11.943440 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e7a84559-3afc-44b9-b9e2-057767597fb6-flexvol-driver-host\") pod \"calico-node-wpqmc\" (UID: \"e7a84559-3afc-44b9-b9e2-057767597fb6\") " pod="calico-system/calico-node-wpqmc"
Jul 12 00:09:11.943783 kubelet[3541]: I0712 00:09:11.943474 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e7a84559-3afc-44b9-b9e2-057767597fb6-policysync\") pod \"calico-node-wpqmc\" (UID: \"e7a84559-3afc-44b9-b9e2-057767597fb6\") " pod="calico-system/calico-node-wpqmc"
Jul 12 00:09:11.943783 kubelet[3541]: I0712 00:09:11.943511 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgjvw\" (UniqueName: \"kubernetes.io/projected/e7a84559-3afc-44b9-b9e2-057767597fb6-kube-api-access-tgjvw\") pod \"calico-node-wpqmc\" (UID: \"e7a84559-3afc-44b9-b9e2-057767597fb6\") " pod="calico-system/calico-node-wpqmc"
Jul 12 00:09:11.943783 kubelet[3541]: I0712 00:09:11.943551 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e7a84559-3afc-44b9-b9e2-057767597fb6-var-lib-calico\") pod \"calico-node-wpqmc\" (UID: \"e7a84559-3afc-44b9-b9e2-057767597fb6\") " pod="calico-system/calico-node-wpqmc"
Jul 12 00:09:11.943783 kubelet[3541]: I0712 00:09:11.943588 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7a84559-3afc-44b9-b9e2-057767597fb6-xtables-lock\") pod \"calico-node-wpqmc\" (UID: \"e7a84559-3afc-44b9-b9e2-057767597fb6\") " pod="calico-system/calico-node-wpqmc"
Jul 12 00:09:11.943783 kubelet[3541]: I0712 00:09:11.943657 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7a84559-3afc-44b9-b9e2-057767597fb6-lib-modules\") pod \"calico-node-wpqmc\" (UID: \"e7a84559-3afc-44b9-b9e2-057767597fb6\") " pod="calico-system/calico-node-wpqmc"
Jul 12 00:09:11.944027 kubelet[3541]: I0712 00:09:11.943694 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e7a84559-3afc-44b9-b9e2-057767597fb6-node-certs\") pod \"calico-node-wpqmc\" (UID: \"e7a84559-3afc-44b9-b9e2-057767597fb6\") " pod="calico-system/calico-node-wpqmc"
Jul 12 00:09:12.049247 kubelet[3541]: E0712 00:09:12.048612 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.049795 kubelet[3541]: W0712 00:09:12.049471 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.049795 kubelet[3541]: E0712 00:09:12.049532 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.052793 kubelet[3541]: E0712 00:09:12.052751 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.053165 kubelet[3541]: W0712 00:09:12.052998 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.053657 kubelet[3541]: E0712 00:09:12.053040 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.054629 kubelet[3541]: E0712 00:09:12.053855 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.054882 kubelet[3541]: W0712 00:09:12.054824 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.055343 kubelet[3541]: E0712 00:09:12.054982 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.057909 kubelet[3541]: E0712 00:09:12.057868 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.058116 kubelet[3541]: W0712 00:09:12.058088 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.058239 kubelet[3541]: E0712 00:09:12.058216 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.071753 kubelet[3541]: E0712 00:09:12.068804 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.071904 kubelet[3541]: W0712 00:09:12.071742 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.071904 kubelet[3541]: E0712 00:09:12.071825 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.197377 kubelet[3541]: E0712 00:09:12.196676 3541 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tkbd2" podUID="2cfa5752-993e-4842-a5b4-cf0d08ec1a3c"
Jul 12 00:09:12.231791 kubelet[3541]: E0712 00:09:12.231545 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.231791 kubelet[3541]: W0712 00:09:12.231579 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.231791 kubelet[3541]: E0712 00:09:12.231644 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.232550 kubelet[3541]: E0712 00:09:12.232294 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.232550 kubelet[3541]: W0712 00:09:12.232319 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.232550 kubelet[3541]: E0712 00:09:12.232385 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.236681 kubelet[3541]: E0712 00:09:12.235346 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.236681 kubelet[3541]: W0712 00:09:12.235377 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.236681 kubelet[3541]: E0712 00:09:12.235406 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.237904 kubelet[3541]: E0712 00:09:12.237366 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.237904 kubelet[3541]: W0712 00:09:12.237400 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.237904 kubelet[3541]: E0712 00:09:12.237432 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.238376 kubelet[3541]: E0712 00:09:12.238241 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.238501 kubelet[3541]: W0712 00:09:12.238475 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.240712 kubelet[3541]: E0712 00:09:12.240633 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.241560 kubelet[3541]: E0712 00:09:12.241438 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.242062 kubelet[3541]: W0712 00:09:12.241821 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.242062 kubelet[3541]: E0712 00:09:12.241864 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.243257 kubelet[3541]: E0712 00:09:12.243217 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.243456 kubelet[3541]: W0712 00:09:12.243428 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.243594 kubelet[3541]: E0712 00:09:12.243568 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.244706 kubelet[3541]: E0712 00:09:12.244385 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.244706 kubelet[3541]: W0712 00:09:12.244416 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.244706 kubelet[3541]: E0712 00:09:12.244445 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.245803 kubelet[3541]: E0712 00:09:12.245762 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.246001 kubelet[3541]: W0712 00:09:12.245974 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.246653 kubelet[3541]: E0712 00:09:12.246571 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.247800 kubelet[3541]: E0712 00:09:12.247268 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.247800 kubelet[3541]: W0712 00:09:12.247299 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.247800 kubelet[3541]: E0712 00:09:12.247327 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.248957 kubelet[3541]: E0712 00:09:12.248306 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.249402 kubelet[3541]: W0712 00:09:12.249161 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.249402 kubelet[3541]: E0712 00:09:12.249208 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.249870 kubelet[3541]: E0712 00:09:12.249843 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.250149 kubelet[3541]: W0712 00:09:12.250120 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.251296 kubelet[3541]: E0712 00:09:12.250246 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.252214 kubelet[3541]: E0712 00:09:12.251936 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.252214 kubelet[3541]: W0712 00:09:12.251996 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.252214 kubelet[3541]: E0712 00:09:12.252029 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.252871 kubelet[3541]: E0712 00:09:12.252828 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.253329 kubelet[3541]: W0712 00:09:12.253057 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.253329 kubelet[3541]: E0712 00:09:12.253093 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.254049 kubelet[3541]: E0712 00:09:12.253902 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.254564 kubelet[3541]: W0712 00:09:12.254207 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.254564 kubelet[3541]: E0712 00:09:12.254248 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.255805 kubelet[3541]: E0712 00:09:12.255399 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.255805 kubelet[3541]: W0712 00:09:12.255430 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.255805 kubelet[3541]: E0712 00:09:12.255460 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.256621 kubelet[3541]: E0712 00:09:12.256484 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.256845 kubelet[3541]: W0712 00:09:12.256773 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.257098 kubelet[3541]: E0712 00:09:12.257069 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.257984 kubelet[3541]: E0712 00:09:12.257950 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.258637 kubelet[3541]: W0712 00:09:12.258147 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.258637 kubelet[3541]: E0712 00:09:12.258403 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.259300 kubelet[3541]: E0712 00:09:12.259267 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.259734 kubelet[3541]: W0712 00:09:12.259440 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.259734 kubelet[3541]: E0712 00:09:12.259493 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.260662 kubelet[3541]: E0712 00:09:12.260525 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.261120 kubelet[3541]: W0712 00:09:12.260939 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.262665 kubelet[3541]: E0712 00:09:12.260984 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.263779 kubelet[3541]: E0712 00:09:12.263342 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.263779 kubelet[3541]: W0712 00:09:12.263386 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.263779 kubelet[3541]: E0712 00:09:12.263428 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.263779 kubelet[3541]: I0712 00:09:12.263485 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2cfa5752-993e-4842-a5b4-cf0d08ec1a3c-varrun\") pod \"csi-node-driver-tkbd2\" (UID: \"2cfa5752-993e-4842-a5b4-cf0d08ec1a3c\") " pod="calico-system/csi-node-driver-tkbd2"
Jul 12 00:09:12.264260 kubelet[3541]: E0712 00:09:12.264229 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.264386 kubelet[3541]: W0712 00:09:12.264360 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.264624 kubelet[3541]: E0712 00:09:12.264514 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.265418 kubelet[3541]: I0712 00:09:12.264755 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb665\" (UniqueName: \"kubernetes.io/projected/2cfa5752-993e-4842-a5b4-cf0d08ec1a3c-kube-api-access-jb665\") pod \"csi-node-driver-tkbd2\" (UID: \"2cfa5752-993e-4842-a5b4-cf0d08ec1a3c\") " pod="calico-system/csi-node-driver-tkbd2"
Jul 12 00:09:12.265845 kubelet[3541]: E0712 00:09:12.265561 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.266253 kubelet[3541]: W0712 00:09:12.265958 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.266253 kubelet[3541]: E0712 00:09:12.265997 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.269803 kubelet[3541]: E0712 00:09:12.266952 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.269803 kubelet[3541]: W0712 00:09:12.266984 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.269803 kubelet[3541]: E0712 00:09:12.267014 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.269803 kubelet[3541]: E0712 00:09:12.269114 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.269803 kubelet[3541]: W0712 00:09:12.269146 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.269803 kubelet[3541]: E0712 00:09:12.269178 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.269803 kubelet[3541]: E0712 00:09:12.269554 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.269803 kubelet[3541]: W0712 00:09:12.269571 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.269803 kubelet[3541]: E0712 00:09:12.269619 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.270339 kubelet[3541]: E0712 00:09:12.269948 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.270339 kubelet[3541]: W0712 00:09:12.269967 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.270339 kubelet[3541]: E0712 00:09:12.269987 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.270339 kubelet[3541]: I0712 00:09:12.270026 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2cfa5752-993e-4842-a5b4-cf0d08ec1a3c-registration-dir\") pod \"csi-node-driver-tkbd2\" (UID: \"2cfa5752-993e-4842-a5b4-cf0d08ec1a3c\") " pod="calico-system/csi-node-driver-tkbd2"
Jul 12 00:09:12.274016 kubelet[3541]: E0712 00:09:12.273962 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.274016 kubelet[3541]: W0712 00:09:12.274003 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.274246 kubelet[3541]: E0712 00:09:12.274039 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.274246 kubelet[3541]: I0712 00:09:12.274083 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2cfa5752-993e-4842-a5b4-cf0d08ec1a3c-socket-dir\") pod \"csi-node-driver-tkbd2\" (UID: \"2cfa5752-993e-4842-a5b4-cf0d08ec1a3c\") " pod="calico-system/csi-node-driver-tkbd2"
Jul 12 00:09:12.278635 kubelet[3541]: E0712 00:09:12.277735 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.278635 kubelet[3541]: W0712 00:09:12.277782 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.278635 kubelet[3541]: E0712 00:09:12.277815 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.278635 kubelet[3541]: I0712 00:09:12.277857 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2cfa5752-993e-4842-a5b4-cf0d08ec1a3c-kubelet-dir\") pod \"csi-node-driver-tkbd2\" (UID: \"2cfa5752-993e-4842-a5b4-cf0d08ec1a3c\") " pod="calico-system/csi-node-driver-tkbd2"
Jul 12 00:09:12.280024 kubelet[3541]: E0712 00:09:12.279956 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.280024 kubelet[3541]: W0712 00:09:12.280003 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.280186 kubelet[3541]: E0712 00:09:12.280037 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.280520 kubelet[3541]: E0712 00:09:12.280474 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.280520 kubelet[3541]: W0712 00:09:12.280504 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.280699 kubelet[3541]: E0712 00:09:12.280527 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.283986 kubelet[3541]: E0712 00:09:12.283929 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.283986 kubelet[3541]: W0712 00:09:12.283972 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.284190 kubelet[3541]: E0712 00:09:12.284007 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.287863 kubelet[3541]: E0712 00:09:12.287303 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.287863 kubelet[3541]: W0712 00:09:12.287342 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.287863 kubelet[3541]: E0712 00:09:12.287375 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.291868 kubelet[3541]: E0712 00:09:12.291817 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.291868 kubelet[3541]: W0712 00:09:12.291853 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.291868 kubelet[3541]: E0712 00:09:12.291887 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.293106 kubelet[3541]: E0712 00:09:12.293058 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.293106 kubelet[3541]: W0712 00:09:12.293095 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.293106 kubelet[3541]: E0712 00:09:12.293126 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.380664 kubelet[3541]: E0712 00:09:12.380355 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.380664 kubelet[3541]: W0712 00:09:12.380389 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.380664 kubelet[3541]: E0712 00:09:12.380425 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.381315 kubelet[3541]: E0712 00:09:12.381285 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.381631 kubelet[3541]: W0712 00:09:12.381448 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.381631 kubelet[3541]: E0712 00:09:12.381483 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.382737 kubelet[3541]: E0712 00:09:12.382430 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.382737 kubelet[3541]: W0712 00:09:12.382462 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.382737 kubelet[3541]: E0712 00:09:12.382491 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.383283 kubelet[3541]: E0712 00:09:12.383254 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.383412 kubelet[3541]: W0712 00:09:12.383386 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.383763 kubelet[3541]: E0712 00:09:12.383525 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.384157 kubelet[3541]: E0712 00:09:12.384129 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.384481 kubelet[3541]: W0712 00:09:12.384271 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.384481 kubelet[3541]: E0712 00:09:12.384308 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.385266 kubelet[3541]: E0712 00:09:12.385230 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.385581 kubelet[3541]: W0712 00:09:12.385425 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.385581 kubelet[3541]: E0712 00:09:12.385464 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.386846 kubelet[3541]: E0712 00:09:12.386772 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.386846 kubelet[3541]: W0712 00:09:12.386813 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.386846 kubelet[3541]: E0712 00:09:12.386846 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:09:12.387779 kubelet[3541]: E0712 00:09:12.387732 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:09:12.387779 kubelet[3541]: W0712 00:09:12.387767 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:09:12.388459 kubelet[3541]: E0712 00:09:12.387799 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Jul 12 00:09:12.390116 kubelet[3541]: E0712 00:09:12.390068 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:12.390116 kubelet[3541]: W0712 00:09:12.390108 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:12.390302 kubelet[3541]: E0712 00:09:12.390142 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:12.391548 kubelet[3541]: E0712 00:09:12.391331 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:12.391548 kubelet[3541]: W0712 00:09:12.391531 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:12.391756 kubelet[3541]: E0712 00:09:12.391565 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:12.393499 kubelet[3541]: E0712 00:09:12.393310 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:12.393499 kubelet[3541]: W0712 00:09:12.393346 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:12.393499 kubelet[3541]: E0712 00:09:12.393408 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:12.394073 kubelet[3541]: E0712 00:09:12.394030 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:12.394073 kubelet[3541]: W0712 00:09:12.394067 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:12.394073 kubelet[3541]: E0712 00:09:12.394094 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:12.394914 kubelet[3541]: E0712 00:09:12.394804 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:12.394914 kubelet[3541]: W0712 00:09:12.394840 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:12.394914 kubelet[3541]: E0712 00:09:12.394869 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:12.395864 kubelet[3541]: E0712 00:09:12.395254 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:12.395864 kubelet[3541]: W0712 00:09:12.395272 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:12.395864 kubelet[3541]: E0712 00:09:12.395294 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:12.396844 kubelet[3541]: E0712 00:09:12.396535 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:12.396844 kubelet[3541]: W0712 00:09:12.396663 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:12.396844 kubelet[3541]: E0712 00:09:12.396699 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:12.397391 kubelet[3541]: E0712 00:09:12.397246 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:12.397391 kubelet[3541]: W0712 00:09:12.397269 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:12.397391 kubelet[3541]: E0712 00:09:12.397293 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:12.397949 kubelet[3541]: E0712 00:09:12.397646 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:12.397949 kubelet[3541]: W0712 00:09:12.397675 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:12.397949 kubelet[3541]: E0712 00:09:12.397718 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:12.398546 kubelet[3541]: E0712 00:09:12.398320 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:12.398546 kubelet[3541]: W0712 00:09:12.398354 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:12.398546 kubelet[3541]: E0712 00:09:12.398389 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:12.399295 kubelet[3541]: E0712 00:09:12.399242 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:12.399295 kubelet[3541]: W0712 00:09:12.399279 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:12.399778 kubelet[3541]: E0712 00:09:12.399309 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:12.400202 kubelet[3541]: E0712 00:09:12.399953 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:12.400202 kubelet[3541]: W0712 00:09:12.399985 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:12.400202 kubelet[3541]: E0712 00:09:12.400013 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:12.402121 kubelet[3541]: E0712 00:09:12.402007 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:12.402121 kubelet[3541]: W0712 00:09:12.402044 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:12.402121 kubelet[3541]: E0712 00:09:12.402077 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:12.403276 kubelet[3541]: E0712 00:09:12.402523 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:12.403276 kubelet[3541]: W0712 00:09:12.402545 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:12.403276 kubelet[3541]: E0712 00:09:12.402567 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:12.403276 kubelet[3541]: E0712 00:09:12.402949 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:12.403276 kubelet[3541]: W0712 00:09:12.402969 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:12.403276 kubelet[3541]: E0712 00:09:12.402991 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:09:12.403872 kubelet[3541]: E0712 00:09:12.403340 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:12.403872 kubelet[3541]: W0712 00:09:12.403359 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:12.403872 kubelet[3541]: E0712 00:09:12.403380 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:12.403872 kubelet[3541]: E0712 00:09:12.403764 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:12.403872 kubelet[3541]: W0712 00:09:12.403782 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:12.403872 kubelet[3541]: E0712 00:09:12.403805 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:12.676928 kubelet[3541]: E0712 00:09:12.676752 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:12.676928 kubelet[3541]: W0712 00:09:12.676792 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:12.676928 kubelet[3541]: E0712 00:09:12.676826 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:12.697019 kubelet[3541]: E0712 00:09:12.696980 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:12.697019 kubelet[3541]: W0712 00:09:12.697056 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:12.697019 kubelet[3541]: E0712 00:09:12.697090 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:09:12.705833 kubelet[3541]: E0712 00:09:12.705781 3541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:09:12.705833 kubelet[3541]: W0712 00:09:12.705822 3541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:09:12.706024 kubelet[3541]: E0712 00:09:12.705855 3541 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
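The triplet above is kubelet's FlexVolume probe path: for each vendor~driver directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it executes the driver binary with the argument init and expects a JSON status object on stdout. Here the uds binary does not exist, the output is empty, and unmarshalling the empty string yields "unexpected end of JSON input". A minimal sketch in Go of a conforming driver entry point (the capabilities shown are the conventional minimal reply, an assumption rather than something read from this log):

    // uds.go - minimal FlexVolume driver stub; would be installed as
    // /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the JSON envelope kubelet's driver-call.go unmarshals.
    type driverStatus struct {
        Status       string          `json:"status"` // "Success", "Failure", "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func reply(s driverStatus) {
        out, _ := json.Marshal(s)
        fmt.Println(string(out))
    }

    func main() {
        if len(os.Args) < 2 {
            reply(driverStatus{Status: "Failure", Message: "no command given"})
            os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
            // An empty stdout at this point is exactly what produces
            // "unexpected end of JSON input" in the log above.
            reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
        default:
            reply(driverStatus{Status: "Not supported"})
        }
    }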
Jul 12 00:09:12.745642 kubelet[3541]: E0712 00:09:12.745444 3541 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Jul 12 00:09:12.746856 kubelet[3541]: E0712 00:09:12.745832 3541 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8b6887e-483b-4f56-81c8-e2e421d1587b-typha-certs podName:b8b6887e-483b-4f56-81c8-e2e421d1587b nodeName:}" failed. No retries permitted until 2025-07-12 00:09:13.24555913 +0000 UTC m=+31.488289651 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/b8b6887e-483b-4f56-81c8-e2e421d1587b-typha-certs") pod "calico-typha-6478c959b6-jn9sb" (UID: "b8b6887e-483b-4f56-81c8-e2e421d1587b") : failed to sync secret cache: timed out waiting for the condition Jul 12 00:09:12.747827 kubelet[3541]: E0712 00:09:12.747387 3541 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 12 00:09:12.750379 kubelet[3541]: E0712 00:09:12.749242 3541 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b8b6887e-483b-4f56-81c8-e2e421d1587b-tigera-ca-bundle podName:b8b6887e-483b-4f56-81c8-e2e421d1587b nodeName:}" failed. No retries permitted until 2025-07-12 00:09:13.248579662 +0000 UTC m=+31.491310159 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/b8b6887e-483b-4f56-81c8-e2e421d1587b-tigera-ca-bundle") pod "calico-typha-6478c959b6-jn9sb" (UID: "b8b6887e-483b-4f56-81c8-e2e421d1587b") : failed to sync configmap cache: timed out waiting for the condition
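Both MountVolume.SetUp failures are rescheduled rather than retried immediately: the nestedpendingoperations entries set the next attempt 500ms out (durationBeforeRetry 500ms), and on repeated failure kubelet doubles that delay up to a fixed cap. A sketch of that pacing (the 2m2s cap below is kubelet's documented maximum for volume-operation backoff, stated as an assumption, not read from this log):

    // backoff.go - illustrative model of the retry pacing in the
    // nestedpendingoperations lines above; not kubelet's actual code.
    package main

    import (
        "fmt"
        "time"
    )

    const (
        initialDelay = 500 * time.Millisecond     // first delay seen in the log
        maxDelay     = 2*time.Minute + 2*time.Second // assumed kubelet cap
    )

    func main() {
        delay := initialDelay
        next := time.Now()
        for i := 0; i < 5; i++ {
            next = next.Add(delay)
            fmt.Printf("attempt %d: no retries permitted until %s (delay %s)\n",
                i+1, next.Format(time.RFC3339Nano), delay)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay
            }
        }
    }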
Jul 12 00:09:12.818717 containerd[2033]: time="2025-07-12T00:09:12.817781106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wpqmc,Uid:e7a84559-3afc-44b9-b9e2-057767597fb6,Namespace:calico-system,Attempt:0,}" Jul 12 00:09:12.895919 containerd[2033]: time="2025-07-12T00:09:12.894065959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:12.895919 containerd[2033]: time="2025-07-12T00:09:12.894170791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:12.895919 containerd[2033]: time="2025-07-12T00:09:12.894207919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:12.895919 containerd[2033]: time="2025-07-12T00:09:12.895664995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:12.942283 systemd[1]: Started cri-containerd-5fc183db5467a49a0b27197dc88a10284cbcfdbb785229d698eb2db970fa2590.scope - libcontainer container 5fc183db5467a49a0b27197dc88a10284cbcfdbb785229d698eb2db970fa2590.
Jul 12 00:09:13.033434 containerd[2033]: time="2025-07-12T00:09:13.033323872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wpqmc,Uid:e7a84559-3afc-44b9-b9e2-057767597fb6,Namespace:calico-system,Attempt:0,} returns sandbox id \"5fc183db5467a49a0b27197dc88a10284cbcfdbb785229d698eb2db970fa2590\"" Jul 12 00:09:13.039694 containerd[2033]: time="2025-07-12T00:09:13.038416576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 12 00:09:13.409932 containerd[2033]: time="2025-07-12T00:09:13.409322321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6478c959b6-jn9sb,Uid:b8b6887e-483b-4f56-81c8-e2e421d1587b,Namespace:calico-system,Attempt:0,}" Jul 12 00:09:13.496510 containerd[2033]: time="2025-07-12T00:09:13.496242918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:13.496510 containerd[2033]: time="2025-07-12T00:09:13.496356330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:13.497435 containerd[2033]: time="2025-07-12T00:09:13.496451334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:13.498307 containerd[2033]: time="2025-07-12T00:09:13.498120090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:13.560427 systemd[1]: Started cri-containerd-7a7dbf82d196a6e7c409633348bed1c771d3c53fd408c2fa441145aa85af9cb3.scope - libcontainer container 7a7dbf82d196a6e7c409633348bed1c771d3c53fd408c2fa441145aa85af9cb3. Jul 12 00:09:13.663436 containerd[2033]: time="2025-07-12T00:09:13.663301735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6478c959b6-jn9sb,Uid:b8b6887e-483b-4f56-81c8-e2e421d1587b,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a7dbf82d196a6e7c409633348bed1c771d3c53fd408c2fa441145aa85af9cb3\"" Jul 12 00:09:13.985618 kubelet[3541]: E0712 00:09:13.984750 3541 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tkbd2" podUID="2cfa5752-993e-4842-a5b4-cf0d08ec1a3c" Jul 12 00:09:14.286556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3194399368.mount: Deactivated successfully. Jul 12 00:09:14.420736 containerd[2033]: time="2025-07-12T00:09:14.420672354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:14.422426 containerd[2033]: time="2025-07-12T00:09:14.422368878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5636360" Jul 12 00:09:14.425287 containerd[2033]: time="2025-07-12T00:09:14.425196690Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:14.433000 containerd[2033]: time="2025-07-12T00:09:14.432893503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:14.434987 containerd[2033]: time="2025-07-12T00:09:14.434641939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.396144435s" Jul 12 00:09:14.434987 containerd[2033]: time="2025-07-12T00:09:14.434711035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 12 00:09:14.438377 containerd[2033]: time="2025-07-12T00:09:14.437506411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 12 00:09:14.444937 containerd[2033]: time="2025-07-12T00:09:14.444720655Z" level=info msg="CreateContainer within sandbox 
\"5fc183db5467a49a0b27197dc88a10284cbcfdbb785229d698eb2db970fa2590\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 12 00:09:14.482172 containerd[2033]: time="2025-07-12T00:09:14.482103223Z" level=info msg="CreateContainer within sandbox \"5fc183db5467a49a0b27197dc88a10284cbcfdbb785229d698eb2db970fa2590\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"77e14d1425acbd33af807b4f88984b9d3210098445bb79a4a0398ea9f45c6f5b\"" Jul 12 00:09:14.483321 containerd[2033]: time="2025-07-12T00:09:14.483250927Z" level=info msg="StartContainer for \"77e14d1425acbd33af807b4f88984b9d3210098445bb79a4a0398ea9f45c6f5b\"" Jul 12 00:09:14.549898 systemd[1]: Started cri-containerd-77e14d1425acbd33af807b4f88984b9d3210098445bb79a4a0398ea9f45c6f5b.scope - libcontainer container 77e14d1425acbd33af807b4f88984b9d3210098445bb79a4a0398ea9f45c6f5b. Jul 12 00:09:14.612846 containerd[2033]: time="2025-07-12T00:09:14.612741283Z" level=info msg="StartContainer for \"77e14d1425acbd33af807b4f88984b9d3210098445bb79a4a0398ea9f45c6f5b\" returns successfully" Jul 12 00:09:14.642017 systemd[1]: cri-containerd-77e14d1425acbd33af807b4f88984b9d3210098445bb79a4a0398ea9f45c6f5b.scope: Deactivated successfully. Jul 12 00:09:14.823156 containerd[2033]: time="2025-07-12T00:09:14.822739916Z" level=info msg="shim disconnected" id=77e14d1425acbd33af807b4f88984b9d3210098445bb79a4a0398ea9f45c6f5b namespace=k8s.io Jul 12 00:09:14.823156 containerd[2033]: time="2025-07-12T00:09:14.822812312Z" level=warning msg="cleaning up after shim disconnected" id=77e14d1425acbd33af807b4f88984b9d3210098445bb79a4a0398ea9f45c6f5b namespace=k8s.io Jul 12 00:09:14.823156 containerd[2033]: time="2025-07-12T00:09:14.822832436Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:09:15.241420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77e14d1425acbd33af807b4f88984b9d3210098445bb79a4a0398ea9f45c6f5b-rootfs.mount: Deactivated successfully. 
Jul 12 00:09:15.991394 kubelet[3541]: E0712 00:09:15.991277 3541 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tkbd2" podUID="2cfa5752-993e-4842-a5b4-cf0d08ec1a3c" Jul 12 00:09:16.352168 containerd[2033]: time="2025-07-12T00:09:16.352091528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:16.354314 containerd[2033]: time="2025-07-12T00:09:16.354251912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=31717828" Jul 12 00:09:16.355558 containerd[2033]: time="2025-07-12T00:09:16.355469984Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:16.361256 containerd[2033]: time="2025-07-12T00:09:16.361198076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:16.363102 containerd[2033]: time="2025-07-12T00:09:16.362746628Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.925169849s" Jul 12 00:09:16.363102 containerd[2033]: time="2025-07-12T00:09:16.362798984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 12 00:09:16.365471 containerd[2033]: time="2025-07-12T00:09:16.364752008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 12 00:09:16.394827 containerd[2033]: time="2025-07-12T00:09:16.394760828Z" level=info msg="CreateContainer within sandbox \"7a7dbf82d196a6e7c409633348bed1c771d3c53fd408c2fa441145aa85af9cb3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 12 00:09:16.410525 containerd[2033]: time="2025-07-12T00:09:16.410404448Z" level=info msg="CreateContainer within sandbox \"7a7dbf82d196a6e7c409633348bed1c771d3c53fd408c2fa441145aa85af9cb3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1ba8473e79373e3cffcd067f9dd5568b6223259f826705177218b9390ef2c869\"" Jul 12 00:09:16.415658 containerd[2033]: time="2025-07-12T00:09:16.412386032Z" level=info msg="StartContainer for \"1ba8473e79373e3cffcd067f9dd5568b6223259f826705177218b9390ef2c869\"" Jul 12 00:09:16.488948 systemd[1]: Started cri-containerd-1ba8473e79373e3cffcd067f9dd5568b6223259f826705177218b9390ef2c869.scope - libcontainer container 1ba8473e79373e3cffcd067f9dd5568b6223259f826705177218b9390ef2c869. 
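Once typha's container starts, the pod_startup_latency_tracker entry shortly below breaks the startup down: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The m=+... monotonic offsets in the entry reproduce this exactly, as the sketch below checks:

    // slo.go - reproduces the podStartSLOduration arithmetic from the
    // pod_startup_latency_tracker entry for calico-typha-6478c959b6-jn9sb.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Monotonic m=+... offsets copied from the log entry, in seconds.
        firstStartedPulling := 31.908920092
        lastFinishedPulling := 34.607224037
        observedRunningTime := 35.512440142
        podCreation := observedRunningTime - 6.272831973 // from podStartE2EDuration

        e2e := observedRunningTime - podCreation
        slo := e2e - (lastFinishedPulling - firstStartedPulling)
        // approx. 6.272831973s and 3.574528028s, up to float rounding
        fmt.Println("podStartE2EDuration:", time.Duration(e2e*float64(time.Second)))
        fmt.Println("podStartSLOduration:", time.Duration(slo*float64(time.Second)))
    }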
Jul 12 00:09:16.554747 containerd[2033]: time="2025-07-12T00:09:16.554308653Z" level=info msg="StartContainer for \"1ba8473e79373e3cffcd067f9dd5568b6223259f826705177218b9390ef2c869\" returns successfully" Jul 12 00:09:17.273042 kubelet[3541]: I0712 00:09:17.272874 3541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6478c959b6-jn9sb" podStartSLOduration=3.574528028 podStartE2EDuration="6.272831973s" podCreationTimestamp="2025-07-12 00:09:11 +0000 UTC" firstStartedPulling="2025-07-12 00:09:13.666189595 +0000 UTC m=+31.908920092" lastFinishedPulling="2025-07-12 00:09:16.364493516 +0000 UTC m=+34.607224037" observedRunningTime="2025-07-12 00:09:17.269709621 +0000 UTC m=+35.512440142" watchObservedRunningTime="2025-07-12 00:09:17.272831973 +0000 UTC m=+35.515562482" Jul 12 00:09:17.986213 kubelet[3541]: E0712 00:09:17.985947 3541 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tkbd2" podUID="2cfa5752-993e-4842-a5b4-cf0d08ec1a3c" Jul 12 00:09:19.328767 containerd[2033]: time="2025-07-12T00:09:19.328701467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:19.330268 containerd[2033]: time="2025-07-12T00:09:19.330214355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 12 00:09:19.331357 containerd[2033]: time="2025-07-12T00:09:19.331248251Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:19.336630 containerd[2033]: time="2025-07-12T00:09:19.336111287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:19.343128 containerd[2033]: time="2025-07-12T00:09:19.343036763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.978225199s" Jul 12 00:09:19.343128 containerd[2033]: time="2025-07-12T00:09:19.343124471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 12 00:09:19.351796 containerd[2033]: time="2025-07-12T00:09:19.351738623Z" level=info msg="CreateContainer within sandbox \"5fc183db5467a49a0b27197dc88a10284cbcfdbb785229d698eb2db970fa2590\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 12 00:09:19.375803 containerd[2033]: time="2025-07-12T00:09:19.375717311Z" level=info msg="CreateContainer within sandbox \"5fc183db5467a49a0b27197dc88a10284cbcfdbb785229d698eb2db970fa2590\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"93c2232807d7f57c1b121ee92f470ee024e3371838ccfb28a6ca7edfcc563915\"" Jul 12 00:09:19.378194 containerd[2033]: time="2025-07-12T00:09:19.376982063Z" level=info msg="StartContainer for 
\"93c2232807d7f57c1b121ee92f470ee024e3371838ccfb28a6ca7edfcc563915\"" Jul 12 00:09:19.441993 systemd[1]: Started cri-containerd-93c2232807d7f57c1b121ee92f470ee024e3371838ccfb28a6ca7edfcc563915.scope - libcontainer container 93c2232807d7f57c1b121ee92f470ee024e3371838ccfb28a6ca7edfcc563915. Jul 12 00:09:19.500659 containerd[2033]: time="2025-07-12T00:09:19.500549520Z" level=info msg="StartContainer for \"93c2232807d7f57c1b121ee92f470ee024e3371838ccfb28a6ca7edfcc563915\" returns successfully" Jul 12 00:09:19.986430 kubelet[3541]: E0712 00:09:19.985102 3541 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tkbd2" podUID="2cfa5752-993e-4842-a5b4-cf0d08ec1a3c" Jul 12 00:09:20.743311 containerd[2033]: time="2025-07-12T00:09:20.742959878Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:09:20.746725 systemd[1]: cri-containerd-93c2232807d7f57c1b121ee92f470ee024e3371838ccfb28a6ca7edfcc563915.scope: Deactivated successfully. Jul 12 00:09:20.785973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93c2232807d7f57c1b121ee92f470ee024e3371838ccfb28a6ca7edfcc563915-rootfs.mount: Deactivated successfully. Jul 12 00:09:20.837354 kubelet[3541]: I0712 00:09:20.836989 3541 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 12 00:09:20.978408 systemd[1]: Created slice kubepods-burstable-pod69e41934_5662_47d2_a6ac_a7fd1f61f19b.slice - libcontainer container kubepods-burstable-pod69e41934_5662_47d2_a6ac_a7fd1f61f19b.slice. 
Jul 12 00:09:21.086687 kubelet[3541]: I0712 00:09:21.086121 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69e41934-5662-47d2-a6ac-a7fd1f61f19b-config-volume\") pod \"coredns-674b8bbfcf-68zpm\" (UID: \"69e41934-5662-47d2-a6ac-a7fd1f61f19b\") " pod="kube-system/coredns-674b8bbfcf-68zpm" Jul 12 00:09:21.086687 kubelet[3541]: I0712 00:09:21.086203 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6p5s\" (UniqueName: \"kubernetes.io/projected/69e41934-5662-47d2-a6ac-a7fd1f61f19b-kube-api-access-q6p5s\") pod \"coredns-674b8bbfcf-68zpm\" (UID: \"69e41934-5662-47d2-a6ac-a7fd1f61f19b\") " pod="kube-system/coredns-674b8bbfcf-68zpm" Jul 12 00:09:21.086687 kubelet[3541]: I0712 00:09:21.086251 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fef89b6f-afd5-48ce-ab61-615671060a43-tigera-ca-bundle\") pod \"calico-kube-controllers-6799c8fbbc-xvgkw\" (UID: \"fef89b6f-afd5-48ce-ab61-615671060a43\") " pod="calico-system/calico-kube-controllers-6799c8fbbc-xvgkw" Jul 12 00:09:21.086687 kubelet[3541]: I0712 00:09:21.086308 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc2w6\" (UniqueName: \"kubernetes.io/projected/fef89b6f-afd5-48ce-ab61-615671060a43-kube-api-access-fc2w6\") pod \"calico-kube-controllers-6799c8fbbc-xvgkw\" (UID: \"fef89b6f-afd5-48ce-ab61-615671060a43\") " pod="calico-system/calico-kube-controllers-6799c8fbbc-xvgkw" Jul 12 00:09:21.098746 systemd[1]: Created slice kubepods-besteffort-podfef89b6f_afd5_48ce_ab61_615671060a43.slice - libcontainer container kubepods-besteffort-podfef89b6f_afd5_48ce_ab61_615671060a43.slice. Jul 12 00:09:21.145091 systemd[1]: Created slice kubepods-besteffort-podeb58e40d_98fa_4b77_aa58_30c336d0d01d.slice - libcontainer container kubepods-besteffort-podeb58e40d_98fa_4b77_aa58_30c336d0d01d.slice. 
Jul 12 00:09:21.188115 kubelet[3541]: I0712 00:09:21.186843 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eb58e40d-98fa-4b77-aa58-30c336d0d01d-calico-apiserver-certs\") pod \"calico-apiserver-7f77c98ccf-gv7dm\" (UID: \"eb58e40d-98fa-4b77-aa58-30c336d0d01d\") " pod="calico-apiserver/calico-apiserver-7f77c98ccf-gv7dm" Jul 12 00:09:21.188115 kubelet[3541]: I0712 00:09:21.186953 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj8tx\" (UniqueName: \"kubernetes.io/projected/eb58e40d-98fa-4b77-aa58-30c336d0d01d-kube-api-access-kj8tx\") pod \"calico-apiserver-7f77c98ccf-gv7dm\" (UID: \"eb58e40d-98fa-4b77-aa58-30c336d0d01d\") " pod="calico-apiserver/calico-apiserver-7f77c98ccf-gv7dm" Jul 12 00:09:21.188115 kubelet[3541]: I0712 00:09:21.186995 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e699abac-0590-4217-8cf9-599543324b2d-config-volume\") pod \"coredns-674b8bbfcf-jpq74\" (UID: \"e699abac-0590-4217-8cf9-599543324b2d\") " pod="kube-system/coredns-674b8bbfcf-jpq74" Jul 12 00:09:21.188115 kubelet[3541]: I0712 00:09:21.187032 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94hgb\" (UniqueName: \"kubernetes.io/projected/e699abac-0590-4217-8cf9-599543324b2d-kube-api-access-94hgb\") pod \"coredns-674b8bbfcf-jpq74\" (UID: \"e699abac-0590-4217-8cf9-599543324b2d\") " pod="kube-system/coredns-674b8bbfcf-jpq74" Jul 12 00:09:21.210131 systemd[1]: Created slice kubepods-besteffort-podfdc693eb_7dfe_45fd_8cc7_68be5365972b.slice - libcontainer container kubepods-besteffort-podfdc693eb_7dfe_45fd_8cc7_68be5365972b.slice. Jul 12 00:09:21.280392 systemd[1]: Created slice kubepods-burstable-pode699abac_0590_4217_8cf9_599543324b2d.slice - libcontainer container kubepods-burstable-pode699abac_0590_4217_8cf9_599543324b2d.slice. Jul 12 00:09:21.294098 systemd[1]: Created slice kubepods-besteffort-pod0873ce5b_023d_4eed_b805_7f5e198e33be.slice - libcontainer container kubepods-besteffort-pod0873ce5b_023d_4eed_b805_7f5e198e33be.slice. 
Jul 12 00:09:21.303054 kubelet[3541]: I0712 00:09:21.287765 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0873ce5b-023d-4eed-b805-7f5e198e33be-whisker-backend-key-pair\") pod \"whisker-757b9bc55c-pwn8m\" (UID: \"0873ce5b-023d-4eed-b805-7f5e198e33be\") " pod="calico-system/whisker-757b9bc55c-pwn8m" Jul 12 00:09:21.303054 kubelet[3541]: I0712 00:09:21.287835 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj6xh\" (UniqueName: \"kubernetes.io/projected/0873ce5b-023d-4eed-b805-7f5e198e33be-kube-api-access-rj6xh\") pod \"whisker-757b9bc55c-pwn8m\" (UID: \"0873ce5b-023d-4eed-b805-7f5e198e33be\") " pod="calico-system/whisker-757b9bc55c-pwn8m" Jul 12 00:09:21.303054 kubelet[3541]: I0712 00:09:21.287873 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fdc693eb-7dfe-45fd-8cc7-68be5365972b-calico-apiserver-certs\") pod \"calico-apiserver-7f77c98ccf-twq6n\" (UID: \"fdc693eb-7dfe-45fd-8cc7-68be5365972b\") " pod="calico-apiserver/calico-apiserver-7f77c98ccf-twq6n" Jul 12 00:09:21.303054 kubelet[3541]: I0712 00:09:21.287981 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0873ce5b-023d-4eed-b805-7f5e198e33be-whisker-ca-bundle\") pod \"whisker-757b9bc55c-pwn8m\" (UID: \"0873ce5b-023d-4eed-b805-7f5e198e33be\") " pod="calico-system/whisker-757b9bc55c-pwn8m" Jul 12 00:09:21.303054 kubelet[3541]: I0712 00:09:21.288024 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-246ww\" (UniqueName: \"kubernetes.io/projected/fdc693eb-7dfe-45fd-8cc7-68be5365972b-kube-api-access-246ww\") pod \"calico-apiserver-7f77c98ccf-twq6n\" (UID: \"fdc693eb-7dfe-45fd-8cc7-68be5365972b\") " pod="calico-apiserver/calico-apiserver-7f77c98ccf-twq6n" Jul 12 00:09:21.319735 containerd[2033]: time="2025-07-12T00:09:21.317950969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-68zpm,Uid:69e41934-5662-47d2-a6ac-a7fd1f61f19b,Namespace:kube-system,Attempt:0,}" Jul 12 00:09:21.374344 systemd[1]: Created slice kubepods-besteffort-pod7b617ba3_8b27_4a18_bcbf_668944552e8e.slice - libcontainer container kubepods-besteffort-pod7b617ba3_8b27_4a18_bcbf_668944552e8e.slice. 
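The Created slice lines show kubelet's systemd cgroup driver at work: each pod becomes kubepods-<qosClass>-pod<uid>.slice, with the dashes of the pod UID transliterated to underscores (besteffort and burstable pods get a QoS sub-slice; that guaranteed pods would sit directly under kubepods.slice is background knowledge, not visible in this log). A sketch of the name derivation, checked against two pods from this boot:

    // slice_name.go - derive the systemd slice names seen in the
    // "Created slice" lines above.
    package main

    import (
        "fmt"
        "strings"
    )

    func podSlice(qosClass, uid string) string {
        prefix := "kubepods"
        if qosClass != "" { // guaranteed pods carry no QoS infix
            prefix += "-" + qosClass
        }
        return fmt.Sprintf("%s-pod%s.slice", prefix, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        // kubepods-besteffort-pod0873ce5b_023d_4eed_b805_7f5e198e33be.slice
        fmt.Println(podSlice("besteffort", "0873ce5b-023d-4eed-b805-7f5e198e33be"))
        // kubepods-burstable-pod69e41934_5662_47d2_a6ac_a7fd1f61f19b.slice
        fmt.Println(podSlice("burstable", "69e41934-5662-47d2-a6ac-a7fd1f61f19b"))
    }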
Jul 12 00:09:21.390462 kubelet[3541]: I0712 00:09:21.388275 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b617ba3-8b27-4a18-bcbf-668944552e8e-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-p6rbw\" (UID: \"7b617ba3-8b27-4a18-bcbf-668944552e8e\") " pod="calico-system/goldmane-768f4c5c69-p6rbw" Jul 12 00:09:21.391416 kubelet[3541]: I0712 00:09:21.391210 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b617ba3-8b27-4a18-bcbf-668944552e8e-config\") pod \"goldmane-768f4c5c69-p6rbw\" (UID: \"7b617ba3-8b27-4a18-bcbf-668944552e8e\") " pod="calico-system/goldmane-768f4c5c69-p6rbw" Jul 12 00:09:21.391416 kubelet[3541]: I0712 00:09:21.391301 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jxdq\" (UniqueName: \"kubernetes.io/projected/7b617ba3-8b27-4a18-bcbf-668944552e8e-kube-api-access-5jxdq\") pod \"goldmane-768f4c5c69-p6rbw\" (UID: \"7b617ba3-8b27-4a18-bcbf-668944552e8e\") " pod="calico-system/goldmane-768f4c5c69-p6rbw" Jul 12 00:09:21.391925 kubelet[3541]: I0712 00:09:21.391690 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7b617ba3-8b27-4a18-bcbf-668944552e8e-goldmane-key-pair\") pod \"goldmane-768f4c5c69-p6rbw\" (UID: \"7b617ba3-8b27-4a18-bcbf-668944552e8e\") " pod="calico-system/goldmane-768f4c5c69-p6rbw" Jul 12 00:09:21.420695 containerd[2033]: time="2025-07-12T00:09:21.420194365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6799c8fbbc-xvgkw,Uid:fef89b6f-afd5-48ce-ab61-615671060a43,Namespace:calico-system,Attempt:0,}" Jul 12 00:09:21.461215 containerd[2033]: time="2025-07-12T00:09:21.461136457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f77c98ccf-gv7dm,Uid:eb58e40d-98fa-4b77-aa58-30c336d0d01d,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:09:21.523650 containerd[2033]: time="2025-07-12T00:09:21.523499990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f77c98ccf-twq6n,Uid:fdc693eb-7dfe-45fd-8cc7-68be5365972b,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:09:21.604437 containerd[2033]: time="2025-07-12T00:09:21.604297730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jpq74,Uid:e699abac-0590-4217-8cf9-599543324b2d,Namespace:kube-system,Attempt:0,}" Jul 12 00:09:21.608869 containerd[2033]: time="2025-07-12T00:09:21.608721986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-757b9bc55c-pwn8m,Uid:0873ce5b-023d-4eed-b805-7f5e198e33be,Namespace:calico-system,Attempt:0,}" Jul 12 00:09:21.680504 containerd[2033]: time="2025-07-12T00:09:21.679646811Z" level=info msg="shim disconnected" id=93c2232807d7f57c1b121ee92f470ee024e3371838ccfb28a6ca7edfcc563915 namespace=k8s.io Jul 12 00:09:21.680504 containerd[2033]: time="2025-07-12T00:09:21.679750611Z" level=warning msg="cleaning up after shim disconnected" id=93c2232807d7f57c1b121ee92f470ee024e3371838ccfb28a6ca7edfcc563915 namespace=k8s.io Jul 12 00:09:21.680504 containerd[2033]: time="2025-07-12T00:09:21.679773279Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:09:21.684352 containerd[2033]: time="2025-07-12T00:09:21.683861547Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-768f4c5c69-p6rbw,Uid:7b617ba3-8b27-4a18-bcbf-668944552e8e,Namespace:calico-system,Attempt:0,}" Jul 12 00:09:22.016819 systemd[1]: Created slice kubepods-besteffort-pod2cfa5752_993e_4842_a5b4_cf0d08ec1a3c.slice - libcontainer container kubepods-besteffort-pod2cfa5752_993e_4842_a5b4_cf0d08ec1a3c.slice. Jul 12 00:09:22.024212 containerd[2033]: time="2025-07-12T00:09:22.023722980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkbd2,Uid:2cfa5752-993e-4842-a5b4-cf0d08ec1a3c,Namespace:calico-system,Attempt:0,}" Jul 12 00:09:22.239984 containerd[2033]: time="2025-07-12T00:09:22.239319421Z" level=error msg="Failed to destroy network for sandbox \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.241068 containerd[2033]: time="2025-07-12T00:09:22.241007161Z" level=error msg="encountered an error cleaning up failed sandbox \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.243779 containerd[2033]: time="2025-07-12T00:09:22.243709621Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6799c8fbbc-xvgkw,Uid:fef89b6f-afd5-48ce-ab61-615671060a43,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.244694 kubelet[3541]: E0712 00:09:22.244358 3541 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.244694 kubelet[3541]: E0712 00:09:22.244457 3541 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6799c8fbbc-xvgkw" Jul 12 00:09:22.244694 kubelet[3541]: E0712 00:09:22.244493 3541 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6799c8fbbc-xvgkw" Jul 12 00:09:22.245704 kubelet[3541]: E0712 00:09:22.244571 3541 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6799c8fbbc-xvgkw_calico-system(fef89b6f-afd5-48ce-ab61-615671060a43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6799c8fbbc-xvgkw_calico-system(fef89b6f-afd5-48ce-ab61-615671060a43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6799c8fbbc-xvgkw" podUID="fef89b6f-afd5-48ce-ab61-615671060a43" Jul 12 00:09:22.279363 containerd[2033]: time="2025-07-12T00:09:22.278364469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 12 00:09:22.285690 containerd[2033]: time="2025-07-12T00:09:22.284117882Z" level=error msg="Failed to destroy network for sandbox \"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.285859 kubelet[3541]: I0712 00:09:22.284883 3541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Jul 12 00:09:22.290862 containerd[2033]: time="2025-07-12T00:09:22.287819954Z" level=info msg="StopPodSandbox for \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\"" Jul 12 00:09:22.292315 containerd[2033]: time="2025-07-12T00:09:22.291899606Z" level=info msg="Ensure that sandbox 6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad in task-service has been cleanup successfully" Jul 12 00:09:22.298517 containerd[2033]: time="2025-07-12T00:09:22.298418174Z" level=error msg="encountered an error cleaning up failed sandbox \"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.298679 containerd[2033]: time="2025-07-12T00:09:22.298552970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-68zpm,Uid:69e41934-5662-47d2-a6ac-a7fd1f61f19b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.301431 kubelet[3541]: E0712 00:09:22.301379 3541 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.302590 kubelet[3541]: E0712 00:09:22.302051 3541 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-68zpm" Jul 12 00:09:22.302590 kubelet[3541]: E0712 00:09:22.302104 3541 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-68zpm" Jul 12 00:09:22.302590 kubelet[3541]: E0712 00:09:22.302199 3541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-68zpm_kube-system(69e41934-5662-47d2-a6ac-a7fd1f61f19b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-68zpm_kube-system(69e41934-5662-47d2-a6ac-a7fd1f61f19b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-68zpm" podUID="69e41934-5662-47d2-a6ac-a7fd1f61f19b" Jul 12 00:09:22.369482 containerd[2033]: time="2025-07-12T00:09:22.369166094Z" level=error msg="Failed to destroy network for sandbox \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.370627 containerd[2033]: time="2025-07-12T00:09:22.370181354Z" level=error msg="encountered an error cleaning up failed sandbox \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.370627 containerd[2033]: time="2025-07-12T00:09:22.370278710Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jpq74,Uid:e699abac-0590-4217-8cf9-599543324b2d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.371684 kubelet[3541]: E0712 00:09:22.371077 3541 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.371684 kubelet[3541]: E0712 00:09:22.371156 3541 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jpq74" Jul 12 00:09:22.371684 kubelet[3541]: E0712 00:09:22.371192 3541 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jpq74" Jul 12 00:09:22.371989 kubelet[3541]: E0712 00:09:22.371263 3541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jpq74_kube-system(e699abac-0590-4217-8cf9-599543324b2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jpq74_kube-system(e699abac-0590-4217-8cf9-599543324b2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jpq74" podUID="e699abac-0590-4217-8cf9-599543324b2d" Jul 12 00:09:22.421860 containerd[2033]: time="2025-07-12T00:09:22.421771394Z" level=error msg="Failed to destroy network for sandbox \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.428457 containerd[2033]: time="2025-07-12T00:09:22.428223806Z" level=error msg="encountered an error cleaning up failed sandbox \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.428457 containerd[2033]: time="2025-07-12T00:09:22.428335370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f77c98ccf-twq6n,Uid:fdc693eb-7dfe-45fd-8cc7-68be5365972b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.429901 kubelet[3541]: E0712 00:09:22.428659 3541 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.429901 kubelet[3541]: E0712 00:09:22.428735 3541 kuberuntime_sandbox.go:70] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f77c98ccf-twq6n" Jul 12 00:09:22.429901 kubelet[3541]: E0712 00:09:22.428785 3541 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f77c98ccf-twq6n" Jul 12 00:09:22.430089 kubelet[3541]: E0712 00:09:22.428864 3541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f77c98ccf-twq6n_calico-apiserver(fdc693eb-7dfe-45fd-8cc7-68be5365972b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f77c98ccf-twq6n_calico-apiserver(fdc693eb-7dfe-45fd-8cc7-68be5365972b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f77c98ccf-twq6n" podUID="fdc693eb-7dfe-45fd-8cc7-68be5365972b" Jul 12 00:09:22.480670 containerd[2033]: time="2025-07-12T00:09:22.479778770Z" level=error msg="Failed to destroy network for sandbox \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.480670 containerd[2033]: time="2025-07-12T00:09:22.480449762Z" level=error msg="encountered an error cleaning up failed sandbox \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.482390 containerd[2033]: time="2025-07-12T00:09:22.480589310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-757b9bc55c-pwn8m,Uid:0873ce5b-023d-4eed-b805-7f5e198e33be,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.485057 kubelet[3541]: E0712 00:09:22.484999 3541 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jul 12 00:09:22.485437 kubelet[3541]: E0712 00:09:22.485398 3541 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-757b9bc55c-pwn8m" Jul 12 00:09:22.486135 kubelet[3541]: E0712 00:09:22.485547 3541 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-757b9bc55c-pwn8m" Jul 12 00:09:22.486135 kubelet[3541]: E0712 00:09:22.485673 3541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-757b9bc55c-pwn8m_calico-system(0873ce5b-023d-4eed-b805-7f5e198e33be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-757b9bc55c-pwn8m_calico-system(0873ce5b-023d-4eed-b805-7f5e198e33be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-757b9bc55c-pwn8m" podUID="0873ce5b-023d-4eed-b805-7f5e198e33be" Jul 12 00:09:22.500205 containerd[2033]: time="2025-07-12T00:09:22.499260495Z" level=error msg="Failed to destroy network for sandbox \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.500595 containerd[2033]: time="2025-07-12T00:09:22.500524827Z" level=error msg="Failed to destroy network for sandbox \"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.503184 containerd[2033]: time="2025-07-12T00:09:22.503119011Z" level=error msg="encountered an error cleaning up failed sandbox \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.503449 containerd[2033]: time="2025-07-12T00:09:22.503405919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkbd2,Uid:2cfa5752-993e-4842-a5b4-cf0d08ec1a3c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jul 12 00:09:22.503793 containerd[2033]: time="2025-07-12T00:09:22.503151183Z" level=error msg="encountered an error cleaning up failed sandbox \"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.503887 containerd[2033]: time="2025-07-12T00:09:22.503784435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f77c98ccf-gv7dm,Uid:eb58e40d-98fa-4b77-aa58-30c336d0d01d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.504403 kubelet[3541]: E0712 00:09:22.504130 3541 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.504403 kubelet[3541]: E0712 00:09:22.504222 3541 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tkbd2" Jul 12 00:09:22.504403 kubelet[3541]: E0712 00:09:22.504255 3541 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tkbd2" Jul 12 00:09:22.504698 kubelet[3541]: E0712 00:09:22.504341 3541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tkbd2_calico-system(2cfa5752-993e-4842-a5b4-cf0d08ec1a3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tkbd2_calico-system(2cfa5752-993e-4842-a5b4-cf0d08ec1a3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tkbd2" podUID="2cfa5752-993e-4842-a5b4-cf0d08ec1a3c" Jul 12 00:09:22.505286 kubelet[3541]: E0712 00:09:22.505051 3541 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.505286 kubelet[3541]: E0712 00:09:22.505120 3541 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f77c98ccf-gv7dm" Jul 12 00:09:22.505286 kubelet[3541]: E0712 00:09:22.505152 3541 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f77c98ccf-gv7dm" Jul 12 00:09:22.505504 kubelet[3541]: E0712 00:09:22.505219 3541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f77c98ccf-gv7dm_calico-apiserver(eb58e40d-98fa-4b77-aa58-30c336d0d01d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f77c98ccf-gv7dm_calico-apiserver(eb58e40d-98fa-4b77-aa58-30c336d0d01d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f77c98ccf-gv7dm" podUID="eb58e40d-98fa-4b77-aa58-30c336d0d01d" Jul 12 00:09:22.520362 containerd[2033]: time="2025-07-12T00:09:22.520293003Z" level=error msg="Failed to destroy network for sandbox \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.521282 containerd[2033]: time="2025-07-12T00:09:22.521108919Z" level=error msg="encountered an error cleaning up failed sandbox \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.521282 containerd[2033]: time="2025-07-12T00:09:22.521209467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-p6rbw,Uid:7b617ba3-8b27-4a18-bcbf-668944552e8e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.521865 kubelet[3541]: E0712 00:09:22.521809 3541 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.522067 kubelet[3541]: E0712 00:09:22.521893 3541 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-p6rbw" Jul 12 00:09:22.522067 kubelet[3541]: E0712 00:09:22.521937 3541 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-p6rbw" Jul 12 00:09:22.522067 kubelet[3541]: E0712 00:09:22.522021 3541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-p6rbw_calico-system(7b617ba3-8b27-4a18-bcbf-668944552e8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-p6rbw_calico-system(7b617ba3-8b27-4a18-bcbf-668944552e8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-p6rbw" podUID="7b617ba3-8b27-4a18-bcbf-668944552e8e" Jul 12 00:09:22.530853 containerd[2033]: time="2025-07-12T00:09:22.529401783Z" level=error msg="StopPodSandbox for \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\" failed" error="failed to destroy network for sandbox \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:22.531007 kubelet[3541]: E0712 00:09:22.529795 3541 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Jul 12 00:09:22.531007 kubelet[3541]: E0712 00:09:22.529874 3541 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad"} Jul 12 00:09:22.531007 kubelet[3541]: E0712 00:09:22.529957 3541 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fef89b6f-afd5-48ce-ab61-615671060a43\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:09:22.531007 kubelet[3541]: E0712 00:09:22.529998 3541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fef89b6f-afd5-48ce-ab61-615671060a43\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6799c8fbbc-xvgkw" podUID="fef89b6f-afd5-48ce-ab61-615671060a43" Jul 12 00:09:22.787180 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72-shm.mount: Deactivated successfully. Jul 12 00:09:22.787353 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0-shm.mount: Deactivated successfully. Jul 12 00:09:22.787488 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb-shm.mount: Deactivated successfully. Jul 12 00:09:22.787640 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac-shm.mount: Deactivated successfully. Jul 12 00:09:22.787773 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792-shm.mount: Deactivated successfully. Jul 12 00:09:22.787912 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad-shm.mount: Deactivated successfully. Jul 12 00:09:22.788071 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8-shm.mount: Deactivated successfully. 
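Annotation. Every sandbox failure in this burst has the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, the file that calico/node writes once it is running with /var/lib/calico bind-mounted, so both CNI ADD and DELETE fail until the node agent is up. That is consistent with the timeline: the calico/node image pull that starts at 00:09:22.278 is still in flight while these sandboxes fail. Below is a minimal standalone sketch of that gate, assuming only that the plugin refuses to proceed when the file is missing; the real check lives inside Calico's CNI plugin and is not reproduced here.

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is written by calico/node at startup; the CNI plugin reads it
// to learn which Calico node object represents this host.
const nodenameFile = "/var/lib/calico/nodename"

// detectNodename mirrors the behaviour visible in the log: if the file is
// missing, fail with a hint rather than guessing a node name.
func detectNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := detectNodename()
	if err != nil {
		fmt.Println("CNI ADD/DELETE would fail here:", err)
		return
	}
	fmt.Println("node:", name)
}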
Jul 12 00:09:23.289243 kubelet[3541]: I0712 00:09:23.289204 3541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Jul 12 00:09:23.292504 containerd[2033]: time="2025-07-12T00:09:23.292335495Z" level=info msg="StopPodSandbox for \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\"" Jul 12 00:09:23.294524 containerd[2033]: time="2025-07-12T00:09:23.293729391Z" level=info msg="Ensure that sandbox 3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72 in task-service has been cleanup successfully" Jul 12 00:09:23.294981 kubelet[3541]: I0712 00:09:23.294924 3541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Jul 12 00:09:23.297623 containerd[2033]: time="2025-07-12T00:09:23.295953063Z" level=info msg="StopPodSandbox for \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\"" Jul 12 00:09:23.297623 containerd[2033]: time="2025-07-12T00:09:23.296269119Z" level=info msg="Ensure that sandbox b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb in task-service has been cleanup successfully" Jul 12 00:09:23.306789 kubelet[3541]: I0712 00:09:23.306752 3541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Jul 12 00:09:23.310522 containerd[2033]: time="2025-07-12T00:09:23.310446387Z" level=info msg="StopPodSandbox for \"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\"" Jul 12 00:09:23.311855 containerd[2033]: time="2025-07-12T00:09:23.311781855Z" level=info msg="Ensure that sandbox 01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792 in task-service has been cleanup successfully" Jul 12 00:09:23.317472 kubelet[3541]: I0712 00:09:23.317408 3541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Jul 12 00:09:23.320243 containerd[2033]: time="2025-07-12T00:09:23.320134431Z" level=info msg="StopPodSandbox for \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\"" Jul 12 00:09:23.322826 containerd[2033]: time="2025-07-12T00:09:23.322674759Z" level=info msg="Ensure that sandbox 44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f in task-service has been cleanup successfully" Jul 12 00:09:23.332358 kubelet[3541]: I0712 00:09:23.332282 3541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Jul 12 00:09:23.334030 containerd[2033]: time="2025-07-12T00:09:23.333771423Z" level=info msg="StopPodSandbox for \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\"" Jul 12 00:09:23.334231 containerd[2033]: time="2025-07-12T00:09:23.334067751Z" level=info msg="Ensure that sandbox 02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0 in task-service has been cleanup successfully" Jul 12 00:09:23.348890 kubelet[3541]: I0712 00:09:23.345037 3541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Jul 12 00:09:23.351454 containerd[2033]: time="2025-07-12T00:09:23.350964435Z" level=info msg="StopPodSandbox for \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\"" Jul 12 00:09:23.353413 
containerd[2033]: time="2025-07-12T00:09:23.353338443Z" level=info msg="Ensure that sandbox 90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac in task-service has been cleanup successfully" Jul 12 00:09:23.358480 kubelet[3541]: I0712 00:09:23.358414 3541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Jul 12 00:09:23.361258 containerd[2033]: time="2025-07-12T00:09:23.361190007Z" level=info msg="StopPodSandbox for \"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\"" Jul 12 00:09:23.362345 containerd[2033]: time="2025-07-12T00:09:23.361954335Z" level=info msg="Ensure that sandbox 843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8 in task-service has been cleanup successfully" Jul 12 00:09:23.477204 containerd[2033]: time="2025-07-12T00:09:23.477114675Z" level=error msg="StopPodSandbox for \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\" failed" error="failed to destroy network for sandbox \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:23.477868 kubelet[3541]: E0712 00:09:23.477586 3541 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Jul 12 00:09:23.477868 kubelet[3541]: E0712 00:09:23.477711 3541 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb"} Jul 12 00:09:23.477868 kubelet[3541]: E0712 00:09:23.477766 3541 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e699abac-0590-4217-8cf9-599543324b2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:09:23.477868 kubelet[3541]: E0712 00:09:23.477813 3541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e699abac-0590-4217-8cf9-599543324b2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jpq74" podUID="e699abac-0590-4217-8cf9-599543324b2d" Jul 12 00:09:23.489810 containerd[2033]: time="2025-07-12T00:09:23.488969463Z" level=error msg="StopPodSandbox for \"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\" failed" error="failed to destroy network for sandbox 
\"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:23.490066 kubelet[3541]: E0712 00:09:23.489340 3541 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Jul 12 00:09:23.490066 kubelet[3541]: E0712 00:09:23.489404 3541 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792"} Jul 12 00:09:23.490066 kubelet[3541]: E0712 00:09:23.489457 3541 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eb58e40d-98fa-4b77-aa58-30c336d0d01d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:09:23.490066 kubelet[3541]: E0712 00:09:23.489514 3541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eb58e40d-98fa-4b77-aa58-30c336d0d01d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f77c98ccf-gv7dm" podUID="eb58e40d-98fa-4b77-aa58-30c336d0d01d" Jul 12 00:09:23.519249 containerd[2033]: time="2025-07-12T00:09:23.518536744Z" level=error msg="StopPodSandbox for \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\" failed" error="failed to destroy network for sandbox \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:23.519439 kubelet[3541]: E0712 00:09:23.518946 3541 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Jul 12 00:09:23.519439 kubelet[3541]: E0712 00:09:23.519040 3541 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f"} Jul 12 00:09:23.519439 kubelet[3541]: E0712 00:09:23.519120 3541 
kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2cfa5752-993e-4842-a5b4-cf0d08ec1a3c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:09:23.519439 kubelet[3541]: E0712 00:09:23.519169 3541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2cfa5752-993e-4842-a5b4-cf0d08ec1a3c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tkbd2" podUID="2cfa5752-993e-4842-a5b4-cf0d08ec1a3c" Jul 12 00:09:23.546710 containerd[2033]: time="2025-07-12T00:09:23.546324832Z" level=error msg="StopPodSandbox for \"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\" failed" error="failed to destroy network for sandbox \"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:23.548216 kubelet[3541]: E0712 00:09:23.547700 3541 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Jul 12 00:09:23.548216 kubelet[3541]: E0712 00:09:23.547800 3541 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8"} Jul 12 00:09:23.548216 kubelet[3541]: E0712 00:09:23.548053 3541 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"69e41934-5662-47d2-a6ac-a7fd1f61f19b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:09:23.548216 kubelet[3541]: E0712 00:09:23.548121 3541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"69e41934-5662-47d2-a6ac-a7fd1f61f19b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-68zpm" 
podUID="69e41934-5662-47d2-a6ac-a7fd1f61f19b" Jul 12 00:09:23.556921 containerd[2033]: time="2025-07-12T00:09:23.556851844Z" level=error msg="StopPodSandbox for \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\" failed" error="failed to destroy network for sandbox \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:23.557696 kubelet[3541]: E0712 00:09:23.557409 3541 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Jul 12 00:09:23.557696 kubelet[3541]: E0712 00:09:23.557478 3541 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0"} Jul 12 00:09:23.557696 kubelet[3541]: E0712 00:09:23.557536 3541 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0873ce5b-023d-4eed-b805-7f5e198e33be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:09:23.557696 kubelet[3541]: E0712 00:09:23.557576 3541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0873ce5b-023d-4eed-b805-7f5e198e33be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-757b9bc55c-pwn8m" podUID="0873ce5b-023d-4eed-b805-7f5e198e33be" Jul 12 00:09:23.559878 containerd[2033]: time="2025-07-12T00:09:23.559509136Z" level=error msg="StopPodSandbox for \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\" failed" error="failed to destroy network for sandbox \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:23.560477 kubelet[3541]: E0712 00:09:23.560243 3541 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Jul 12 00:09:23.561787 kubelet[3541]: E0712 
00:09:23.561671 3541 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72"} Jul 12 00:09:23.561787 kubelet[3541]: E0712 00:09:23.561772 3541 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b617ba3-8b27-4a18-bcbf-668944552e8e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:09:23.562154 kubelet[3541]: E0712 00:09:23.561826 3541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b617ba3-8b27-4a18-bcbf-668944552e8e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-p6rbw" podUID="7b617ba3-8b27-4a18-bcbf-668944552e8e" Jul 12 00:09:23.563465 containerd[2033]: time="2025-07-12T00:09:23.563375500Z" level=error msg="StopPodSandbox for \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\" failed" error="failed to destroy network for sandbox \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:09:23.563821 kubelet[3541]: E0712 00:09:23.563756 3541 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Jul 12 00:09:23.563900 kubelet[3541]: E0712 00:09:23.563842 3541 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac"} Jul 12 00:09:23.563955 kubelet[3541]: E0712 00:09:23.563896 3541 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fdc693eb-7dfe-45fd-8cc7-68be5365972b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:09:23.564104 kubelet[3541]: E0712 00:09:23.563935 3541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fdc693eb-7dfe-45fd-8cc7-68be5365972b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f77c98ccf-twq6n" podUID="fdc693eb-7dfe-45fd-8cc7-68be5365972b" Jul 12 00:09:28.752371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4029548315.mount: Deactivated successfully. Jul 12 00:09:28.819495 containerd[2033]: time="2025-07-12T00:09:28.819206662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:28.820751 containerd[2033]: time="2025-07-12T00:09:28.820678666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 12 00:09:28.821692 containerd[2033]: time="2025-07-12T00:09:28.821578402Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:28.826081 containerd[2033]: time="2025-07-12T00:09:28.826011742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:28.827542 containerd[2033]: time="2025-07-12T00:09:28.827483542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 6.545258469s" Jul 12 00:09:28.827720 containerd[2033]: time="2025-07-12T00:09:28.827549782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 12 00:09:28.874378 containerd[2033]: time="2025-07-12T00:09:28.874304014Z" level=info msg="CreateContainer within sandbox \"5fc183db5467a49a0b27197dc88a10284cbcfdbb785229d698eb2db970fa2590\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 12 00:09:28.899392 containerd[2033]: time="2025-07-12T00:09:28.899079958Z" level=info msg="CreateContainer within sandbox \"5fc183db5467a49a0b27197dc88a10284cbcfdbb785229d698eb2db970fa2590\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0d522bfa0ddb8614f154d272d5e03fcac239571dbf96a9f06583d6a4b7854687\"" Jul 12 00:09:28.902477 containerd[2033]: time="2025-07-12T00:09:28.902378470Z" level=info msg="StartContainer for \"0d522bfa0ddb8614f154d272d5e03fcac239571dbf96a9f06583d6a4b7854687\"" Jul 12 00:09:28.952906 systemd[1]: Started cri-containerd-0d522bfa0ddb8614f154d272d5e03fcac239571dbf96a9f06583d6a4b7854687.scope - libcontainer container 0d522bfa0ddb8614f154d272d5e03fcac239571dbf96a9f06583d6a4b7854687. Jul 12 00:09:29.023826 containerd[2033]: time="2025-07-12T00:09:29.023719087Z" level=info msg="StartContainer for \"0d522bfa0ddb8614f154d272d5e03fcac239571dbf96a9f06583d6a4b7854687\" returns successfully" Jul 12 00:09:29.283726 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 12 00:09:29.283968 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 12 00:09:29.465658 kubelet[3541]: I0712 00:09:29.465537 3541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wpqmc" podStartSLOduration=2.672304447 podStartE2EDuration="18.465396861s" podCreationTimestamp="2025-07-12 00:09:11 +0000 UTC" firstStartedPulling="2025-07-12 00:09:13.037631344 +0000 UTC m=+31.280361841" lastFinishedPulling="2025-07-12 00:09:28.830723746 +0000 UTC m=+47.073454255" observedRunningTime="2025-07-12 00:09:29.460123065 +0000 UTC m=+47.702853586" watchObservedRunningTime="2025-07-12 00:09:29.465396861 +0000 UTC m=+47.708127406" Jul 12 00:09:29.526880 containerd[2033]: time="2025-07-12T00:09:29.526816329Z" level=info msg="StopPodSandbox for \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\"" Jul 12 00:09:29.913838 containerd[2033]: 2025-07-12 00:09:29.768 [INFO][4755] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Jul 12 00:09:29.913838 containerd[2033]: 2025-07-12 00:09:29.768 [INFO][4755] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" iface="eth0" netns="/var/run/netns/cni-f93320f6-813c-6a05-5e4e-4124c062c337" Jul 12 00:09:29.913838 containerd[2033]: 2025-07-12 00:09:29.769 [INFO][4755] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" iface="eth0" netns="/var/run/netns/cni-f93320f6-813c-6a05-5e4e-4124c062c337" Jul 12 00:09:29.913838 containerd[2033]: 2025-07-12 00:09:29.771 [INFO][4755] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" iface="eth0" netns="/var/run/netns/cni-f93320f6-813c-6a05-5e4e-4124c062c337" Jul 12 00:09:29.913838 containerd[2033]: 2025-07-12 00:09:29.771 [INFO][4755] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Jul 12 00:09:29.913838 containerd[2033]: 2025-07-12 00:09:29.771 [INFO][4755] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Jul 12 00:09:29.913838 containerd[2033]: 2025-07-12 00:09:29.881 [INFO][4773] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" HandleID="k8s-pod-network.02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Workload="ip--172--31--28--146-k8s-whisker--757b9bc55c--pwn8m-eth0" Jul 12 00:09:29.913838 containerd[2033]: 2025-07-12 00:09:29.882 [INFO][4773] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:29.913838 containerd[2033]: 2025-07-12 00:09:29.882 [INFO][4773] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:29.913838 containerd[2033]: 2025-07-12 00:09:29.901 [WARNING][4773] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" HandleID="k8s-pod-network.02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Workload="ip--172--31--28--146-k8s-whisker--757b9bc55c--pwn8m-eth0" Jul 12 00:09:29.913838 containerd[2033]: 2025-07-12 00:09:29.901 [INFO][4773] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" HandleID="k8s-pod-network.02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Workload="ip--172--31--28--146-k8s-whisker--757b9bc55c--pwn8m-eth0" Jul 12 00:09:29.913838 containerd[2033]: 2025-07-12 00:09:29.904 [INFO][4773] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:29.913838 containerd[2033]: 2025-07-12 00:09:29.909 [INFO][4755] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Jul 12 00:09:29.919967 containerd[2033]: time="2025-07-12T00:09:29.914023643Z" level=info msg="TearDown network for sandbox \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\" successfully" Jul 12 00:09:29.919967 containerd[2033]: time="2025-07-12T00:09:29.914062187Z" level=info msg="StopPodSandbox for \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\" returns successfully" Jul 12 00:09:29.930192 systemd[1]: run-netns-cni\x2df93320f6\x2d813c\x2d6a05\x2d5e4e\x2d4124c062c337.mount: Deactivated successfully. Jul 12 00:09:30.083842 kubelet[3541]: I0712 00:09:30.083779 3541 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0873ce5b-023d-4eed-b805-7f5e198e33be-whisker-backend-key-pair\") pod \"0873ce5b-023d-4eed-b805-7f5e198e33be\" (UID: \"0873ce5b-023d-4eed-b805-7f5e198e33be\") " Jul 12 00:09:30.084009 kubelet[3541]: I0712 00:09:30.083880 3541 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj6xh\" (UniqueName: \"kubernetes.io/projected/0873ce5b-023d-4eed-b805-7f5e198e33be-kube-api-access-rj6xh\") pod \"0873ce5b-023d-4eed-b805-7f5e198e33be\" (UID: \"0873ce5b-023d-4eed-b805-7f5e198e33be\") " Jul 12 00:09:30.084009 kubelet[3541]: I0712 00:09:30.083922 3541 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0873ce5b-023d-4eed-b805-7f5e198e33be-whisker-ca-bundle\") pod \"0873ce5b-023d-4eed-b805-7f5e198e33be\" (UID: \"0873ce5b-023d-4eed-b805-7f5e198e33be\") " Jul 12 00:09:30.089892 kubelet[3541]: I0712 00:09:30.084577 3541 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0873ce5b-023d-4eed-b805-7f5e198e33be-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0873ce5b-023d-4eed-b805-7f5e198e33be" (UID: "0873ce5b-023d-4eed-b805-7f5e198e33be"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:09:30.092970 kubelet[3541]: I0712 00:09:30.092901 3541 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0873ce5b-023d-4eed-b805-7f5e198e33be-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0873ce5b-023d-4eed-b805-7f5e198e33be" (UID: "0873ce5b-023d-4eed-b805-7f5e198e33be"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:09:30.097645 kubelet[3541]: I0712 00:09:30.097522 3541 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0873ce5b-023d-4eed-b805-7f5e198e33be-kube-api-access-rj6xh" (OuterVolumeSpecName: "kube-api-access-rj6xh") pod "0873ce5b-023d-4eed-b805-7f5e198e33be" (UID: "0873ce5b-023d-4eed-b805-7f5e198e33be"). InnerVolumeSpecName "kube-api-access-rj6xh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:09:30.099459 systemd[1]: var-lib-kubelet-pods-0873ce5b\x2d023d\x2d4eed\x2db805\x2d7f5e198e33be-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 12 00:09:30.106337 systemd[1]: var-lib-kubelet-pods-0873ce5b\x2d023d\x2d4eed\x2db805\x2d7f5e198e33be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drj6xh.mount: Deactivated successfully. Jul 12 00:09:30.185649 kubelet[3541]: I0712 00:09:30.185456 3541 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rj6xh\" (UniqueName: \"kubernetes.io/projected/0873ce5b-023d-4eed-b805-7f5e198e33be-kube-api-access-rj6xh\") on node \"ip-172-31-28-146\" DevicePath \"\"" Jul 12 00:09:30.185649 kubelet[3541]: I0712 00:09:30.185518 3541 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0873ce5b-023d-4eed-b805-7f5e198e33be-whisker-ca-bundle\") on node \"ip-172-31-28-146\" DevicePath \"\"" Jul 12 00:09:30.185649 kubelet[3541]: I0712 00:09:30.185543 3541 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0873ce5b-023d-4eed-b805-7f5e198e33be-whisker-backend-key-pair\") on node \"ip-172-31-28-146\" DevicePath \"\"" Jul 12 00:09:30.413190 systemd[1]: Removed slice kubepods-besteffort-pod0873ce5b_023d_4eed_b805_7f5e198e33be.slice - libcontainer container kubepods-besteffort-pod0873ce5b_023d_4eed_b805_7f5e198e33be.slice. Jul 12 00:09:30.566768 systemd[1]: Created slice kubepods-besteffort-pod3e111748_5a60_4173_992f_529d83967b0d.slice - libcontainer container kubepods-besteffort-pod3e111748_5a60_4173_992f_529d83967b0d.slice. 
Jul 12 00:09:30.689752 kubelet[3541]: I0712 00:09:30.689679 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dhtt\" (UniqueName: \"kubernetes.io/projected/3e111748-5a60-4173-992f-529d83967b0d-kube-api-access-9dhtt\") pod \"whisker-64d48ddf6c-tjv4h\" (UID: \"3e111748-5a60-4173-992f-529d83967b0d\") " pod="calico-system/whisker-64d48ddf6c-tjv4h" Jul 12 00:09:30.690309 kubelet[3541]: I0712 00:09:30.689778 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3e111748-5a60-4173-992f-529d83967b0d-whisker-backend-key-pair\") pod \"whisker-64d48ddf6c-tjv4h\" (UID: \"3e111748-5a60-4173-992f-529d83967b0d\") " pod="calico-system/whisker-64d48ddf6c-tjv4h" Jul 12 00:09:30.690309 kubelet[3541]: I0712 00:09:30.689820 3541 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e111748-5a60-4173-992f-529d83967b0d-whisker-ca-bundle\") pod \"whisker-64d48ddf6c-tjv4h\" (UID: \"3e111748-5a60-4173-992f-529d83967b0d\") " pod="calico-system/whisker-64d48ddf6c-tjv4h" Jul 12 00:09:30.752477 systemd[1]: run-containerd-runc-k8s.io-0d522bfa0ddb8614f154d272d5e03fcac239571dbf96a9f06583d6a4b7854687-runc.x3YM76.mount: Deactivated successfully. Jul 12 00:09:30.874014 containerd[2033]: time="2025-07-12T00:09:30.873823428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64d48ddf6c-tjv4h,Uid:3e111748-5a60-4173-992f-529d83967b0d,Namespace:calico-system,Attempt:0,}" Jul 12 00:09:31.080168 (udev-worker)[4725]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:09:31.081230 systemd-networkd[1944]: calie2038c619c6: Link UP Jul 12 00:09:31.083171 systemd-networkd[1944]: calie2038c619c6: Gained carrier Jul 12 00:09:31.110670 containerd[2033]: 2025-07-12 00:09:30.932 [INFO][4820] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 00:09:31.110670 containerd[2033]: 2025-07-12 00:09:30.953 [INFO][4820] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--146-k8s-whisker--64d48ddf6c--tjv4h-eth0 whisker-64d48ddf6c- calico-system 3e111748-5a60-4173-992f-529d83967b0d 917 0 2025-07-12 00:09:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:64d48ddf6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-28-146 whisker-64d48ddf6c-tjv4h eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie2038c619c6 [] [] }} ContainerID="537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941" Namespace="calico-system" Pod="whisker-64d48ddf6c-tjv4h" WorkloadEndpoint="ip--172--31--28--146-k8s-whisker--64d48ddf6c--tjv4h-" Jul 12 00:09:31.110670 containerd[2033]: 2025-07-12 00:09:30.953 [INFO][4820] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941" Namespace="calico-system" Pod="whisker-64d48ddf6c-tjv4h" WorkloadEndpoint="ip--172--31--28--146-k8s-whisker--64d48ddf6c--tjv4h-eth0" Jul 12 00:09:31.110670 containerd[2033]: 2025-07-12 00:09:31.002 [INFO][4831] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941" 
HandleID="k8s-pod-network.537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941" Workload="ip--172--31--28--146-k8s-whisker--64d48ddf6c--tjv4h-eth0" Jul 12 00:09:31.110670 containerd[2033]: 2025-07-12 00:09:31.003 [INFO][4831] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941" HandleID="k8s-pod-network.537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941" Workload="ip--172--31--28--146-k8s-whisker--64d48ddf6c--tjv4h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3640), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-146", "pod":"whisker-64d48ddf6c-tjv4h", "timestamp":"2025-07-12 00:09:31.002854473 +0000 UTC"}, Hostname:"ip-172-31-28-146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:31.110670 containerd[2033]: 2025-07-12 00:09:31.003 [INFO][4831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:31.110670 containerd[2033]: 2025-07-12 00:09:31.003 [INFO][4831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:31.110670 containerd[2033]: 2025-07-12 00:09:31.003 [INFO][4831] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-146' Jul 12 00:09:31.110670 containerd[2033]: 2025-07-12 00:09:31.018 [INFO][4831] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941" host="ip-172-31-28-146" Jul 12 00:09:31.110670 containerd[2033]: 2025-07-12 00:09:31.027 [INFO][4831] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-146" Jul 12 00:09:31.110670 containerd[2033]: 2025-07-12 00:09:31.035 [INFO][4831] ipam/ipam.go 511: Trying affinity for 192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:31.110670 containerd[2033]: 2025-07-12 00:09:31.039 [INFO][4831] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:31.110670 containerd[2033]: 2025-07-12 00:09:31.043 [INFO][4831] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:31.110670 containerd[2033]: 2025-07-12 00:09:31.043 [INFO][4831] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.110.0/26 handle="k8s-pod-network.537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941" host="ip-172-31-28-146" Jul 12 00:09:31.110670 containerd[2033]: 2025-07-12 00:09:31.046 [INFO][4831] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941 Jul 12 00:09:31.110670 containerd[2033]: 2025-07-12 00:09:31.055 [INFO][4831] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.110.0/26 handle="k8s-pod-network.537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941" host="ip-172-31-28-146" Jul 12 00:09:31.110670 containerd[2033]: 2025-07-12 00:09:31.065 [INFO][4831] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.110.1/26] block=192.168.110.0/26 handle="k8s-pod-network.537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941" host="ip-172-31-28-146" Jul 12 00:09:31.110670 containerd[2033]: 2025-07-12 00:09:31.065 [INFO][4831] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.1/26] 
handle="k8s-pod-network.537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941" host="ip-172-31-28-146" Jul 12 00:09:31.110670 containerd[2033]: 2025-07-12 00:09:31.065 [INFO][4831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:31.110670 containerd[2033]: 2025-07-12 00:09:31.065 [INFO][4831] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.1/26] IPv6=[] ContainerID="537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941" HandleID="k8s-pod-network.537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941" Workload="ip--172--31--28--146-k8s-whisker--64d48ddf6c--tjv4h-eth0" Jul 12 00:09:31.113116 containerd[2033]: 2025-07-12 00:09:31.069 [INFO][4820] cni-plugin/k8s.go 418: Populated endpoint ContainerID="537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941" Namespace="calico-system" Pod="whisker-64d48ddf6c-tjv4h" WorkloadEndpoint="ip--172--31--28--146-k8s-whisker--64d48ddf6c--tjv4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-whisker--64d48ddf6c--tjv4h-eth0", GenerateName:"whisker-64d48ddf6c-", Namespace:"calico-system", SelfLink:"", UID:"3e111748-5a60-4173-992f-529d83967b0d", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64d48ddf6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"", Pod:"whisker-64d48ddf6c-tjv4h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.110.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie2038c619c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:31.113116 containerd[2033]: 2025-07-12 00:09:31.069 [INFO][4820] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.1/32] ContainerID="537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941" Namespace="calico-system" Pod="whisker-64d48ddf6c-tjv4h" WorkloadEndpoint="ip--172--31--28--146-k8s-whisker--64d48ddf6c--tjv4h-eth0" Jul 12 00:09:31.113116 containerd[2033]: 2025-07-12 00:09:31.069 [INFO][4820] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2038c619c6 ContainerID="537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941" Namespace="calico-system" Pod="whisker-64d48ddf6c-tjv4h" WorkloadEndpoint="ip--172--31--28--146-k8s-whisker--64d48ddf6c--tjv4h-eth0" Jul 12 00:09:31.113116 containerd[2033]: 2025-07-12 00:09:31.084 [INFO][4820] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941" Namespace="calico-system" Pod="whisker-64d48ddf6c-tjv4h" WorkloadEndpoint="ip--172--31--28--146-k8s-whisker--64d48ddf6c--tjv4h-eth0" Jul 12 00:09:31.113116 containerd[2033]: 2025-07-12 00:09:31.084 [INFO][4820] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941" Namespace="calico-system" Pod="whisker-64d48ddf6c-tjv4h" WorkloadEndpoint="ip--172--31--28--146-k8s-whisker--64d48ddf6c--tjv4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-whisker--64d48ddf6c--tjv4h-eth0", GenerateName:"whisker-64d48ddf6c-", Namespace:"calico-system", SelfLink:"", UID:"3e111748-5a60-4173-992f-529d83967b0d", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64d48ddf6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941", Pod:"whisker-64d48ddf6c-tjv4h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.110.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie2038c619c6", MAC:"36:77:78:b3:99:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:31.113116 containerd[2033]: 2025-07-12 00:09:31.107 [INFO][4820] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941" Namespace="calico-system" Pod="whisker-64d48ddf6c-tjv4h" WorkloadEndpoint="ip--172--31--28--146-k8s-whisker--64d48ddf6c--tjv4h-eth0" Jul 12 00:09:31.139778 containerd[2033]: time="2025-07-12T00:09:31.139494309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:31.141588 containerd[2033]: time="2025-07-12T00:09:31.141298641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:31.141588 containerd[2033]: time="2025-07-12T00:09:31.141392661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:31.141588 containerd[2033]: time="2025-07-12T00:09:31.141579957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:31.172959 systemd[1]: Started cri-containerd-537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941.scope - libcontainer container 537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941. 
Jul 12 00:09:31.235004 containerd[2033]: time="2025-07-12T00:09:31.234950614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64d48ddf6c-tjv4h,Uid:3e111748-5a60-4173-992f-529d83967b0d,Namespace:calico-system,Attempt:0,} returns sandbox id \"537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941\"" Jul 12 00:09:31.238121 containerd[2033]: time="2025-07-12T00:09:31.237794362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 12 00:09:31.990641 kubelet[3541]: I0712 00:09:31.990482 3541 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0873ce5b-023d-4eed-b805-7f5e198e33be" path="/var/lib/kubelet/pods/0873ce5b-023d-4eed-b805-7f5e198e33be/volumes" Jul 12 00:09:32.015778 kernel: bpftool[5004]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 12 00:09:32.726701 systemd-networkd[1944]: vxlan.calico: Link UP Jul 12 00:09:32.726722 systemd-networkd[1944]: vxlan.calico: Gained carrier Jul 12 00:09:32.786442 (udev-worker)[4724]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:09:32.988133 systemd-networkd[1944]: calie2038c619c6: Gained IPv6LL Jul 12 00:09:33.315402 containerd[2033]: time="2025-07-12T00:09:33.315338724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:33.317100 containerd[2033]: time="2025-07-12T00:09:33.316997004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 12 00:09:33.319144 containerd[2033]: time="2025-07-12T00:09:33.319062936Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:33.323722 containerd[2033]: time="2025-07-12T00:09:33.323621700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:33.325404 containerd[2033]: time="2025-07-12T00:09:33.325226880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 2.087371954s" Jul 12 00:09:33.325404 containerd[2033]: time="2025-07-12T00:09:33.325280064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 12 00:09:33.334479 containerd[2033]: time="2025-07-12T00:09:33.334427172Z" level=info msg="CreateContainer within sandbox \"537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 12 00:09:33.366331 containerd[2033]: time="2025-07-12T00:09:33.366246649Z" level=info msg="CreateContainer within sandbox \"537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"cbf35dc5f0769e345fa39d5ed7411f68f7159a7e97e2c48993e4ddca19e76a67\"" Jul 12 00:09:33.368223 containerd[2033]: time="2025-07-12T00:09:33.367189393Z" level=info msg="StartContainer for 
\"cbf35dc5f0769e345fa39d5ed7411f68f7159a7e97e2c48993e4ddca19e76a67\"" Jul 12 00:09:33.438311 systemd[1]: run-containerd-runc-k8s.io-cbf35dc5f0769e345fa39d5ed7411f68f7159a7e97e2c48993e4ddca19e76a67-runc.ioJY4P.mount: Deactivated successfully. Jul 12 00:09:33.456061 systemd[1]: Started cri-containerd-cbf35dc5f0769e345fa39d5ed7411f68f7159a7e97e2c48993e4ddca19e76a67.scope - libcontainer container cbf35dc5f0769e345fa39d5ed7411f68f7159a7e97e2c48993e4ddca19e76a67. Jul 12 00:09:33.560032 containerd[2033]: time="2025-07-12T00:09:33.559948502Z" level=info msg="StartContainer for \"cbf35dc5f0769e345fa39d5ed7411f68f7159a7e97e2c48993e4ddca19e76a67\" returns successfully" Jul 12 00:09:33.563199 containerd[2033]: time="2025-07-12T00:09:33.562645454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 12 00:09:33.986732 containerd[2033]: time="2025-07-12T00:09:33.986655112Z" level=info msg="StopPodSandbox for \"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\"" Jul 12 00:09:33.990526 containerd[2033]: time="2025-07-12T00:09:33.990304648Z" level=info msg="StopPodSandbox for \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\"" Jul 12 00:09:34.190459 containerd[2033]: 2025-07-12 00:09:34.112 [INFO][5136] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Jul 12 00:09:34.190459 containerd[2033]: 2025-07-12 00:09:34.115 [INFO][5136] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" iface="eth0" netns="/var/run/netns/cni-7d3b9f76-e30a-deac-9ec5-b74c82ac7db6" Jul 12 00:09:34.190459 containerd[2033]: 2025-07-12 00:09:34.115 [INFO][5136] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" iface="eth0" netns="/var/run/netns/cni-7d3b9f76-e30a-deac-9ec5-b74c82ac7db6" Jul 12 00:09:34.190459 containerd[2033]: 2025-07-12 00:09:34.117 [INFO][5136] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" iface="eth0" netns="/var/run/netns/cni-7d3b9f76-e30a-deac-9ec5-b74c82ac7db6" Jul 12 00:09:34.190459 containerd[2033]: 2025-07-12 00:09:34.117 [INFO][5136] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Jul 12 00:09:34.190459 containerd[2033]: 2025-07-12 00:09:34.117 [INFO][5136] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Jul 12 00:09:34.190459 containerd[2033]: 2025-07-12 00:09:34.165 [INFO][5156] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" HandleID="k8s-pod-network.b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0" Jul 12 00:09:34.190459 containerd[2033]: 2025-07-12 00:09:34.165 [INFO][5156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:34.190459 containerd[2033]: 2025-07-12 00:09:34.165 [INFO][5156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:34.190459 containerd[2033]: 2025-07-12 00:09:34.182 [WARNING][5156] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" HandleID="k8s-pod-network.b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0" Jul 12 00:09:34.190459 containerd[2033]: 2025-07-12 00:09:34.182 [INFO][5156] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" HandleID="k8s-pod-network.b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0" Jul 12 00:09:34.190459 containerd[2033]: 2025-07-12 00:09:34.185 [INFO][5156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:34.190459 containerd[2033]: 2025-07-12 00:09:34.188 [INFO][5136] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Jul 12 00:09:34.193197 containerd[2033]: time="2025-07-12T00:09:34.190943377Z" level=info msg="TearDown network for sandbox \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\" successfully" Jul 12 00:09:34.193197 containerd[2033]: time="2025-07-12T00:09:34.190985233Z" level=info msg="StopPodSandbox for \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\" returns successfully" Jul 12 00:09:34.206177 containerd[2033]: time="2025-07-12T00:09:34.206121961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jpq74,Uid:e699abac-0590-4217-8cf9-599543324b2d,Namespace:kube-system,Attempt:1,}" Jul 12 00:09:34.216736 containerd[2033]: 2025-07-12 00:09:34.107 [INFO][5141] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Jul 12 00:09:34.216736 containerd[2033]: 2025-07-12 00:09:34.111 [INFO][5141] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" iface="eth0" netns="/var/run/netns/cni-fb61808d-9b21-faab-72ae-83ddac692c97" Jul 12 00:09:34.216736 containerd[2033]: 2025-07-12 00:09:34.112 [INFO][5141] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" iface="eth0" netns="/var/run/netns/cni-fb61808d-9b21-faab-72ae-83ddac692c97" Jul 12 00:09:34.216736 containerd[2033]: 2025-07-12 00:09:34.113 [INFO][5141] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" iface="eth0" netns="/var/run/netns/cni-fb61808d-9b21-faab-72ae-83ddac692c97" Jul 12 00:09:34.216736 containerd[2033]: 2025-07-12 00:09:34.113 [INFO][5141] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Jul 12 00:09:34.216736 containerd[2033]: 2025-07-12 00:09:34.113 [INFO][5141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Jul 12 00:09:34.216736 containerd[2033]: 2025-07-12 00:09:34.179 [INFO][5154] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" HandleID="k8s-pod-network.843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0" Jul 12 00:09:34.216736 containerd[2033]: 2025-07-12 00:09:34.180 [INFO][5154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:34.216736 containerd[2033]: 2025-07-12 00:09:34.185 [INFO][5154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:34.216736 containerd[2033]: 2025-07-12 00:09:34.205 [WARNING][5154] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" HandleID="k8s-pod-network.843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0" Jul 12 00:09:34.216736 containerd[2033]: 2025-07-12 00:09:34.205 [INFO][5154] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" HandleID="k8s-pod-network.843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0" Jul 12 00:09:34.216736 containerd[2033]: 2025-07-12 00:09:34.209 [INFO][5154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:34.216736 containerd[2033]: 2025-07-12 00:09:34.212 [INFO][5141] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Jul 12 00:09:34.219332 containerd[2033]: time="2025-07-12T00:09:34.216944833Z" level=info msg="TearDown network for sandbox \"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\" successfully" Jul 12 00:09:34.219332 containerd[2033]: time="2025-07-12T00:09:34.216983317Z" level=info msg="StopPodSandbox for \"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\" returns successfully" Jul 12 00:09:34.219332 containerd[2033]: time="2025-07-12T00:09:34.218494441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-68zpm,Uid:69e41934-5662-47d2-a6ac-a7fd1f61f19b,Namespace:kube-system,Attempt:1,}" Jul 12 00:09:34.364075 systemd[1]: run-netns-cni\x2d7d3b9f76\x2de30a\x2ddeac\x2d9ec5\x2db74c82ac7db6.mount: Deactivated successfully. Jul 12 00:09:34.364595 systemd[1]: run-netns-cni\x2dfb61808d\x2d9b21\x2dfaab\x2d72ae\x2d83ddac692c97.mount: Deactivated successfully. 
Jul 12 00:09:34.460505 systemd-networkd[1944]: vxlan.calico: Gained IPv6LL Jul 12 00:09:34.526559 systemd-networkd[1944]: cali02aab0f1e45: Link UP Jul 12 00:09:34.528194 systemd-networkd[1944]: cali02aab0f1e45: Gained carrier Jul 12 00:09:34.570555 containerd[2033]: 2025-07-12 00:09:34.338 [INFO][5168] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0 coredns-674b8bbfcf- kube-system e699abac-0590-4217-8cf9-599543324b2d 939 0 2025-07-12 00:08:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-146 coredns-674b8bbfcf-jpq74 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali02aab0f1e45 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpq74" WorkloadEndpoint="ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-" Jul 12 00:09:34.570555 containerd[2033]: 2025-07-12 00:09:34.338 [INFO][5168] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpq74" WorkloadEndpoint="ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0" Jul 12 00:09:34.570555 containerd[2033]: 2025-07-12 00:09:34.429 [INFO][5192] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f" HandleID="k8s-pod-network.2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0" Jul 12 00:09:34.570555 containerd[2033]: 2025-07-12 00:09:34.429 [INFO][5192] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f" HandleID="k8s-pod-network.2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb270), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-146", "pod":"coredns-674b8bbfcf-jpq74", "timestamp":"2025-07-12 00:09:34.429286742 +0000 UTC"}, Hostname:"ip-172-31-28-146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:34.570555 containerd[2033]: 2025-07-12 00:09:34.429 [INFO][5192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:34.570555 containerd[2033]: 2025-07-12 00:09:34.430 [INFO][5192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:34.570555 containerd[2033]: 2025-07-12 00:09:34.430 [INFO][5192] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-146' Jul 12 00:09:34.570555 containerd[2033]: 2025-07-12 00:09:34.447 [INFO][5192] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f" host="ip-172-31-28-146" Jul 12 00:09:34.570555 containerd[2033]: 2025-07-12 00:09:34.461 [INFO][5192] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-146" Jul 12 00:09:34.570555 containerd[2033]: 2025-07-12 00:09:34.471 [INFO][5192] ipam/ipam.go 511: Trying affinity for 192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:34.570555 containerd[2033]: 2025-07-12 00:09:34.474 [INFO][5192] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:34.570555 containerd[2033]: 2025-07-12 00:09:34.478 [INFO][5192] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:34.570555 containerd[2033]: 2025-07-12 00:09:34.478 [INFO][5192] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.110.0/26 handle="k8s-pod-network.2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f" host="ip-172-31-28-146" Jul 12 00:09:34.570555 containerd[2033]: 2025-07-12 00:09:34.481 [INFO][5192] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f Jul 12 00:09:34.570555 containerd[2033]: 2025-07-12 00:09:34.492 [INFO][5192] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.110.0/26 handle="k8s-pod-network.2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f" host="ip-172-31-28-146" Jul 12 00:09:34.570555 containerd[2033]: 2025-07-12 00:09:34.507 [INFO][5192] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.110.2/26] block=192.168.110.0/26 handle="k8s-pod-network.2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f" host="ip-172-31-28-146" Jul 12 00:09:34.570555 containerd[2033]: 2025-07-12 00:09:34.507 [INFO][5192] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.2/26] handle="k8s-pod-network.2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f" host="ip-172-31-28-146" Jul 12 00:09:34.570555 containerd[2033]: 2025-07-12 00:09:34.507 [INFO][5192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:09:34.570555 containerd[2033]: 2025-07-12 00:09:34.507 [INFO][5192] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.2/26] IPv6=[] ContainerID="2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f" HandleID="k8s-pod-network.2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0" Jul 12 00:09:34.573841 containerd[2033]: 2025-07-12 00:09:34.513 [INFO][5168] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpq74" WorkloadEndpoint="ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e699abac-0590-4217-8cf9-599543324b2d", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"", Pod:"coredns-674b8bbfcf-jpq74", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02aab0f1e45", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:34.573841 containerd[2033]: 2025-07-12 00:09:34.515 [INFO][5168] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.2/32] ContainerID="2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpq74" WorkloadEndpoint="ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0" Jul 12 00:09:34.573841 containerd[2033]: 2025-07-12 00:09:34.515 [INFO][5168] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02aab0f1e45 ContainerID="2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpq74" WorkloadEndpoint="ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0" Jul 12 00:09:34.573841 containerd[2033]: 2025-07-12 00:09:34.529 [INFO][5168] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpq74" 
WorkloadEndpoint="ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0" Jul 12 00:09:34.573841 containerd[2033]: 2025-07-12 00:09:34.530 [INFO][5168] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpq74" WorkloadEndpoint="ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e699abac-0590-4217-8cf9-599543324b2d", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f", Pod:"coredns-674b8bbfcf-jpq74", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02aab0f1e45", MAC:"9a:94:a0:33:30:c4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:34.573841 containerd[2033]: 2025-07-12 00:09:34.562 [INFO][5168] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpq74" WorkloadEndpoint="ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0" Jul 12 00:09:34.665931 systemd-networkd[1944]: caliad8fd4857e8: Link UP Jul 12 00:09:34.672057 systemd-networkd[1944]: caliad8fd4857e8: Gained carrier Jul 12 00:09:34.679209 containerd[2033]: time="2025-07-12T00:09:34.677915823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:34.679209 containerd[2033]: time="2025-07-12T00:09:34.678003939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:34.679209 containerd[2033]: time="2025-07-12T00:09:34.678029043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:34.679209 containerd[2033]: time="2025-07-12T00:09:34.678192615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:34.748706 containerd[2033]: 2025-07-12 00:09:34.368 [INFO][5178] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0 coredns-674b8bbfcf- kube-system 69e41934-5662-47d2-a6ac-a7fd1f61f19b 938 0 2025-07-12 00:08:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-146 coredns-674b8bbfcf-68zpm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliad8fd4857e8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54" Namespace="kube-system" Pod="coredns-674b8bbfcf-68zpm" WorkloadEndpoint="ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-" Jul 12 00:09:34.748706 containerd[2033]: 2025-07-12 00:09:34.369 [INFO][5178] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54" Namespace="kube-system" Pod="coredns-674b8bbfcf-68zpm" WorkloadEndpoint="ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0" Jul 12 00:09:34.748706 containerd[2033]: 2025-07-12 00:09:34.435 [INFO][5197] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54" HandleID="k8s-pod-network.55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0" Jul 12 00:09:34.748706 containerd[2033]: 2025-07-12 00:09:34.435 [INFO][5197] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54" HandleID="k8s-pod-network.55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b640), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-146", "pod":"coredns-674b8bbfcf-68zpm", "timestamp":"2025-07-12 00:09:34.434631554 +0000 UTC"}, Hostname:"ip-172-31-28-146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:34.748706 containerd[2033]: 2025-07-12 00:09:34.436 [INFO][5197] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:34.748706 containerd[2033]: 2025-07-12 00:09:34.510 [INFO][5197] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:34.748706 containerd[2033]: 2025-07-12 00:09:34.510 [INFO][5197] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-146' Jul 12 00:09:34.748706 containerd[2033]: 2025-07-12 00:09:34.564 [INFO][5197] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54" host="ip-172-31-28-146" Jul 12 00:09:34.748706 containerd[2033]: 2025-07-12 00:09:34.587 [INFO][5197] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-146" Jul 12 00:09:34.748706 containerd[2033]: 2025-07-12 00:09:34.597 [INFO][5197] ipam/ipam.go 511: Trying affinity for 192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:34.748706 containerd[2033]: 2025-07-12 00:09:34.601 [INFO][5197] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:34.748706 containerd[2033]: 2025-07-12 00:09:34.606 [INFO][5197] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:34.748706 containerd[2033]: 2025-07-12 00:09:34.606 [INFO][5197] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.110.0/26 handle="k8s-pod-network.55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54" host="ip-172-31-28-146" Jul 12 00:09:34.748706 containerd[2033]: 2025-07-12 00:09:34.611 [INFO][5197] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54 Jul 12 00:09:34.748706 containerd[2033]: 2025-07-12 00:09:34.623 [INFO][5197] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.110.0/26 handle="k8s-pod-network.55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54" host="ip-172-31-28-146" Jul 12 00:09:34.748706 containerd[2033]: 2025-07-12 00:09:34.639 [INFO][5197] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.110.3/26] block=192.168.110.0/26 handle="k8s-pod-network.55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54" host="ip-172-31-28-146" Jul 12 00:09:34.748706 containerd[2033]: 2025-07-12 00:09:34.640 [INFO][5197] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.3/26] handle="k8s-pod-network.55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54" host="ip-172-31-28-146" Jul 12 00:09:34.748706 containerd[2033]: 2025-07-12 00:09:34.640 [INFO][5197] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:09:34.748706 containerd[2033]: 2025-07-12 00:09:34.640 [INFO][5197] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.3/26] IPv6=[] ContainerID="55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54" HandleID="k8s-pod-network.55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0" Jul 12 00:09:34.753867 containerd[2033]: 2025-07-12 00:09:34.655 [INFO][5178] cni-plugin/k8s.go 418: Populated endpoint ContainerID="55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54" Namespace="kube-system" Pod="coredns-674b8bbfcf-68zpm" WorkloadEndpoint="ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"69e41934-5662-47d2-a6ac-a7fd1f61f19b", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"", Pod:"coredns-674b8bbfcf-68zpm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad8fd4857e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:34.753867 containerd[2033]: 2025-07-12 00:09:34.656 [INFO][5178] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.3/32] ContainerID="55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54" Namespace="kube-system" Pod="coredns-674b8bbfcf-68zpm" WorkloadEndpoint="ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0" Jul 12 00:09:34.753867 containerd[2033]: 2025-07-12 00:09:34.658 [INFO][5178] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad8fd4857e8 ContainerID="55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54" Namespace="kube-system" Pod="coredns-674b8bbfcf-68zpm" WorkloadEndpoint="ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0" Jul 12 00:09:34.753867 containerd[2033]: 2025-07-12 00:09:34.679 [INFO][5178] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54" Namespace="kube-system" Pod="coredns-674b8bbfcf-68zpm" 
WorkloadEndpoint="ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0" Jul 12 00:09:34.753867 containerd[2033]: 2025-07-12 00:09:34.687 [INFO][5178] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54" Namespace="kube-system" Pod="coredns-674b8bbfcf-68zpm" WorkloadEndpoint="ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"69e41934-5662-47d2-a6ac-a7fd1f61f19b", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54", Pod:"coredns-674b8bbfcf-68zpm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad8fd4857e8", MAC:"96:bd:9e:ba:cc:ed", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:34.753867 containerd[2033]: 2025-07-12 00:09:34.722 [INFO][5178] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54" Namespace="kube-system" Pod="coredns-674b8bbfcf-68zpm" WorkloadEndpoint="ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0" Jul 12 00:09:34.763474 systemd[1]: Started cri-containerd-2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f.scope - libcontainer container 2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f. Jul 12 00:09:34.865724 containerd[2033]: time="2025-07-12T00:09:34.863481844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:34.865724 containerd[2033]: time="2025-07-12T00:09:34.863694136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:34.865724 containerd[2033]: time="2025-07-12T00:09:34.863814016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:34.865724 containerd[2033]: time="2025-07-12T00:09:34.863982556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:34.949159 systemd[1]: Started cri-containerd-55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54.scope - libcontainer container 55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54. Jul 12 00:09:34.963236 containerd[2033]: time="2025-07-12T00:09:34.963168856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jpq74,Uid:e699abac-0590-4217-8cf9-599543324b2d,Namespace:kube-system,Attempt:1,} returns sandbox id \"2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f\"" Jul 12 00:09:35.015710 containerd[2033]: time="2025-07-12T00:09:35.014905993Z" level=info msg="StopPodSandbox for \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\"" Jul 12 00:09:35.025243 containerd[2033]: time="2025-07-12T00:09:35.024653005Z" level=info msg="CreateContainer within sandbox \"2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:09:35.094378 containerd[2033]: time="2025-07-12T00:09:35.093989497Z" level=info msg="CreateContainer within sandbox \"2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0fe7443c7bb7f236fd09a74c96ea21606dc6dbedd6958b1cf4dbfbb3cb811178\"" Jul 12 00:09:35.096734 containerd[2033]: time="2025-07-12T00:09:35.095907373Z" level=info msg="StartContainer for \"0fe7443c7bb7f236fd09a74c96ea21606dc6dbedd6958b1cf4dbfbb3cb811178\"" Jul 12 00:09:35.108799 containerd[2033]: time="2025-07-12T00:09:35.108728821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-68zpm,Uid:69e41934-5662-47d2-a6ac-a7fd1f61f19b,Namespace:kube-system,Attempt:1,} returns sandbox id \"55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54\"" Jul 12 00:09:35.124728 containerd[2033]: time="2025-07-12T00:09:35.124666045Z" level=info msg="CreateContainer within sandbox \"55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:09:35.201954 systemd[1]: Started cri-containerd-0fe7443c7bb7f236fd09a74c96ea21606dc6dbedd6958b1cf4dbfbb3cb811178.scope - libcontainer container 0fe7443c7bb7f236fd09a74c96ea21606dc6dbedd6958b1cf4dbfbb3cb811178. Jul 12 00:09:35.215260 containerd[2033]: time="2025-07-12T00:09:35.215041118Z" level=info msg="CreateContainer within sandbox \"55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc411f21a4f82b8b2f2f9644ba2a7c2bd21507d1260f40a1a56834321a2e1890\"" Jul 12 00:09:35.218763 containerd[2033]: time="2025-07-12T00:09:35.217834802Z" level=info msg="StartContainer for \"bc411f21a4f82b8b2f2f9644ba2a7c2bd21507d1260f40a1a56834321a2e1890\"" Jul 12 00:09:35.355560 systemd[1]: Started cri-containerd-bc411f21a4f82b8b2f2f9644ba2a7c2bd21507d1260f40a1a56834321a2e1890.scope - libcontainer container bc411f21a4f82b8b2f2f9644ba2a7c2bd21507d1260f40a1a56834321a2e1890. 
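
These entries trace the CRI flow end to end: RunPodSandbox returns the sandbox ID, CreateContainer builds the coredns container inside that sandbox, StartContainer launches it, and systemd wraps each container in a transient cri-containerd-<id>.scope unit. Assuming crictl is configured against the containerd socket, the same objects can be listed directly:

    crictl pods --name coredns-674b8bbfcf-68zpm
    crictl ps --name coredns
    systemctl list-units 'cri-containerd-*.scope'
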
Jul 12 00:09:35.361852 containerd[2033]: time="2025-07-12T00:09:35.354205358Z" level=info msg="StartContainer for \"0fe7443c7bb7f236fd09a74c96ea21606dc6dbedd6958b1cf4dbfbb3cb811178\" returns successfully" Jul 12 00:09:35.514765 containerd[2033]: 2025-07-12 00:09:35.269 [INFO][5310] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Jul 12 00:09:35.514765 containerd[2033]: 2025-07-12 00:09:35.270 [INFO][5310] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" iface="eth0" netns="/var/run/netns/cni-c85a8387-490a-4de9-8c6a-2921acad778f" Jul 12 00:09:35.514765 containerd[2033]: 2025-07-12 00:09:35.270 [INFO][5310] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" iface="eth0" netns="/var/run/netns/cni-c85a8387-490a-4de9-8c6a-2921acad778f" Jul 12 00:09:35.514765 containerd[2033]: 2025-07-12 00:09:35.270 [INFO][5310] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" iface="eth0" netns="/var/run/netns/cni-c85a8387-490a-4de9-8c6a-2921acad778f" Jul 12 00:09:35.514765 containerd[2033]: 2025-07-12 00:09:35.270 [INFO][5310] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Jul 12 00:09:35.514765 containerd[2033]: 2025-07-12 00:09:35.270 [INFO][5310] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Jul 12 00:09:35.514765 containerd[2033]: 2025-07-12 00:09:35.410 [INFO][5354] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" HandleID="k8s-pod-network.3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Workload="ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0" Jul 12 00:09:35.514765 containerd[2033]: 2025-07-12 00:09:35.410 [INFO][5354] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:35.514765 containerd[2033]: 2025-07-12 00:09:35.410 [INFO][5354] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:35.514765 containerd[2033]: 2025-07-12 00:09:35.462 [WARNING][5354] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" HandleID="k8s-pod-network.3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Workload="ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0" Jul 12 00:09:35.514765 containerd[2033]: 2025-07-12 00:09:35.464 [INFO][5354] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" HandleID="k8s-pod-network.3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Workload="ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0" Jul 12 00:09:35.514765 containerd[2033]: 2025-07-12 00:09:35.478 [INFO][5354] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:35.514765 containerd[2033]: 2025-07-12 00:09:35.495 [INFO][5310] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Jul 12 00:09:35.514765 containerd[2033]: time="2025-07-12T00:09:35.510882735Z" level=info msg="TearDown network for sandbox \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\" successfully" Jul 12 00:09:35.514765 containerd[2033]: time="2025-07-12T00:09:35.510934875Z" level=info msg="StopPodSandbox for \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\" returns successfully" Jul 12 00:09:35.517826 systemd[1]: run-netns-cni\x2dc85a8387\x2d490a\x2d4de9\x2d8c6a\x2d2921acad778f.mount: Deactivated successfully. Jul 12 00:09:35.522444 containerd[2033]: time="2025-07-12T00:09:35.521956587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-p6rbw,Uid:7b617ba3-8b27-4a18-bcbf-668944552e8e,Namespace:calico-system,Attempt:1,}" Jul 12 00:09:35.558762 containerd[2033]: time="2025-07-12T00:09:35.558436047Z" level=info msg="StartContainer for \"bc411f21a4f82b8b2f2f9644ba2a7c2bd21507d1260f40a1a56834321a2e1890\" returns successfully" Jul 12 00:09:35.643010 kubelet[3541]: I0712 00:09:35.642106 3541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jpq74" podStartSLOduration=46.642084952 podStartE2EDuration="46.642084952s" podCreationTimestamp="2025-07-12 00:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:09:35.641443396 +0000 UTC m=+53.884174013" watchObservedRunningTime="2025-07-12 00:09:35.642084952 +0000 UTC m=+53.884815461" Jul 12 00:09:35.740023 systemd-networkd[1944]: cali02aab0f1e45: Gained IPv6LL Jul 12 00:09:35.988438 containerd[2033]: time="2025-07-12T00:09:35.986311254Z" level=info msg="StopPodSandbox for \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\"" Jul 12 00:09:35.993158 containerd[2033]: time="2025-07-12T00:09:35.992055990Z" level=info msg="StopPodSandbox for \"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\"" Jul 12 00:09:36.000063 containerd[2033]: time="2025-07-12T00:09:35.999917070Z" level=info msg="StopPodSandbox for \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\"" Jul 12 00:09:36.123850 systemd-networkd[1944]: caliad8fd4857e8: Gained IPv6LL Jul 12 00:09:36.212872 systemd-networkd[1944]: calie4186c3056f: Link UP Jul 12 00:09:36.221010 systemd-networkd[1944]: calie4186c3056f: Gained carrier Jul 12 00:09:36.413883 containerd[2033]: 2025-07-12 00:09:35.770 [INFO][5401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0 goldmane-768f4c5c69- calico-system 7b617ba3-8b27-4a18-bcbf-668944552e8e 953 0 2025-07-12 00:09:12 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-28-146 goldmane-768f4c5c69-p6rbw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie4186c3056f [] [] }} ContainerID="07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a" Namespace="calico-system" Pod="goldmane-768f4c5c69-p6rbw" WorkloadEndpoint="ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-" Jul 12 00:09:36.413883 containerd[2033]: 2025-07-12 00:09:35.772 [INFO][5401] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a" Namespace="calico-system" Pod="goldmane-768f4c5c69-p6rbw" WorkloadEndpoint="ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0" Jul 12 00:09:36.413883 containerd[2033]: 2025-07-12 00:09:35.921 [INFO][5415] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a" HandleID="k8s-pod-network.07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a" Workload="ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0" Jul 12 00:09:36.413883 containerd[2033]: 2025-07-12 00:09:35.922 [INFO][5415] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a" HandleID="k8s-pod-network.07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a" Workload="ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d31b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-146", "pod":"goldmane-768f4c5c69-p6rbw", "timestamp":"2025-07-12 00:09:35.921544157 +0000 UTC"}, Hostname:"ip-172-31-28-146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:36.413883 containerd[2033]: 2025-07-12 00:09:35.922 [INFO][5415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:36.413883 containerd[2033]: 2025-07-12 00:09:35.922 [INFO][5415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:36.413883 containerd[2033]: 2025-07-12 00:09:35.922 [INFO][5415] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-146' Jul 12 00:09:36.413883 containerd[2033]: 2025-07-12 00:09:35.965 [INFO][5415] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a" host="ip-172-31-28-146" Jul 12 00:09:36.413883 containerd[2033]: 2025-07-12 00:09:35.995 [INFO][5415] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-146" Jul 12 00:09:36.413883 containerd[2033]: 2025-07-12 00:09:36.045 [INFO][5415] ipam/ipam.go 511: Trying affinity for 192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:36.413883 containerd[2033]: 2025-07-12 00:09:36.061 [INFO][5415] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:36.413883 containerd[2033]: 2025-07-12 00:09:36.082 [INFO][5415] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:36.413883 containerd[2033]: 2025-07-12 00:09:36.084 [INFO][5415] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.110.0/26 handle="k8s-pod-network.07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a" host="ip-172-31-28-146" Jul 12 00:09:36.413883 containerd[2033]: 2025-07-12 00:09:36.094 [INFO][5415] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a Jul 12 00:09:36.413883 containerd[2033]: 2025-07-12 00:09:36.125 [INFO][5415] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.110.0/26 handle="k8s-pod-network.07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a" host="ip-172-31-28-146" Jul 12 
00:09:36.413883 containerd[2033]: 2025-07-12 00:09:36.171 [INFO][5415] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.110.4/26] block=192.168.110.0/26 handle="k8s-pod-network.07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a" host="ip-172-31-28-146" Jul 12 00:09:36.413883 containerd[2033]: 2025-07-12 00:09:36.172 [INFO][5415] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.4/26] handle="k8s-pod-network.07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a" host="ip-172-31-28-146" Jul 12 00:09:36.413883 containerd[2033]: 2025-07-12 00:09:36.172 [INFO][5415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:36.413883 containerd[2033]: 2025-07-12 00:09:36.172 [INFO][5415] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.4/26] IPv6=[] ContainerID="07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a" HandleID="k8s-pod-network.07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a" Workload="ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0" Jul 12 00:09:36.416574 containerd[2033]: 2025-07-12 00:09:36.194 [INFO][5401] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a" Namespace="calico-system" Pod="goldmane-768f4c5c69-p6rbw" WorkloadEndpoint="ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"7b617ba3-8b27-4a18-bcbf-668944552e8e", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"", Pod:"goldmane-768f4c5c69-p6rbw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.110.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie4186c3056f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:36.416574 containerd[2033]: 2025-07-12 00:09:36.195 [INFO][5401] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.4/32] ContainerID="07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a" Namespace="calico-system" Pod="goldmane-768f4c5c69-p6rbw" WorkloadEndpoint="ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0" Jul 12 00:09:36.416574 containerd[2033]: 2025-07-12 00:09:36.195 [INFO][5401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4186c3056f ContainerID="07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a" Namespace="calico-system" Pod="goldmane-768f4c5c69-p6rbw" WorkloadEndpoint="ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0" 
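
Once the CNI plugin names the host-side veth (calie4186c3056f here), systemd-networkd detects the new link and drives it up, which is why the "Link UP", "Gained carrier", and later "Gained IPv6LL" lines from systemd-networkd[1944] interleave with the containerd output. The cali* interfaces can be inspected like any other link, for example:

    networkctl list 'cali*'
    ip -6 addr show dev calie4186c3056f scope link
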
Jul 12 00:09:36.416574 containerd[2033]: 2025-07-12 00:09:36.238 [INFO][5401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a" Namespace="calico-system" Pod="goldmane-768f4c5c69-p6rbw" WorkloadEndpoint="ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0" Jul 12 00:09:36.416574 containerd[2033]: 2025-07-12 00:09:36.262 [INFO][5401] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a" Namespace="calico-system" Pod="goldmane-768f4c5c69-p6rbw" WorkloadEndpoint="ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"7b617ba3-8b27-4a18-bcbf-668944552e8e", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a", Pod:"goldmane-768f4c5c69-p6rbw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.110.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie4186c3056f", MAC:"26:e2:37:a6:f0:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:36.416574 containerd[2033]: 2025-07-12 00:09:36.396 [INFO][5401] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a" Namespace="calico-system" Pod="goldmane-768f4c5c69-p6rbw" WorkloadEndpoint="ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0" Jul 12 00:09:36.584746 containerd[2033]: time="2025-07-12T00:09:36.582804209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:36.584746 containerd[2033]: time="2025-07-12T00:09:36.582900005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:36.584746 containerd[2033]: time="2025-07-12T00:09:36.582925409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:36.584746 containerd[2033]: time="2025-07-12T00:09:36.583087829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:36.759740 kubelet[3541]: I0712 00:09:36.758003 3541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-68zpm" podStartSLOduration=47.757977665 podStartE2EDuration="47.757977665s" podCreationTimestamp="2025-07-12 00:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:09:36.664478777 +0000 UTC m=+54.907209286" watchObservedRunningTime="2025-07-12 00:09:36.757977665 +0000 UTC m=+55.000708174" Jul 12 00:09:36.759032 systemd[1]: Started cri-containerd-07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a.scope - libcontainer container 07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a. Jul 12 00:09:36.990717 containerd[2033]: 2025-07-12 00:09:36.488 [INFO][5446] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Jul 12 00:09:36.990717 containerd[2033]: 2025-07-12 00:09:36.491 [INFO][5446] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" iface="eth0" netns="/var/run/netns/cni-98e4aa12-39fa-cbd4-5f54-b14b5ab4e94d" Jul 12 00:09:36.990717 containerd[2033]: 2025-07-12 00:09:36.492 [INFO][5446] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" iface="eth0" netns="/var/run/netns/cni-98e4aa12-39fa-cbd4-5f54-b14b5ab4e94d" Jul 12 00:09:36.990717 containerd[2033]: 2025-07-12 00:09:36.494 [INFO][5446] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" iface="eth0" netns="/var/run/netns/cni-98e4aa12-39fa-cbd4-5f54-b14b5ab4e94d" Jul 12 00:09:36.990717 containerd[2033]: 2025-07-12 00:09:36.494 [INFO][5446] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Jul 12 00:09:36.990717 containerd[2033]: 2025-07-12 00:09:36.494 [INFO][5446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Jul 12 00:09:36.990717 containerd[2033]: 2025-07-12 00:09:36.859 [INFO][5490] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" HandleID="k8s-pod-network.01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0" Jul 12 00:09:36.990717 containerd[2033]: 2025-07-12 00:09:36.861 [INFO][5490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:36.990717 containerd[2033]: 2025-07-12 00:09:36.864 [INFO][5490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:36.990717 containerd[2033]: 2025-07-12 00:09:36.951 [WARNING][5490] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" HandleID="k8s-pod-network.01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0" Jul 12 00:09:36.990717 containerd[2033]: 2025-07-12 00:09:36.951 [INFO][5490] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" HandleID="k8s-pod-network.01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0" Jul 12 00:09:36.990717 containerd[2033]: 2025-07-12 00:09:36.962 [INFO][5490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:36.990717 containerd[2033]: 2025-07-12 00:09:36.974 [INFO][5446] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Jul 12 00:09:36.990717 containerd[2033]: time="2025-07-12T00:09:36.987916855Z" level=info msg="StopPodSandbox for \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\"" Jul 12 00:09:36.997033 systemd[1]: run-netns-cni\x2d98e4aa12\x2d39fa\x2dcbd4\x2d5f54\x2db14b5ab4e94d.mount: Deactivated successfully. Jul 12 00:09:37.075120 containerd[2033]: 2025-07-12 00:09:36.522 [INFO][5461] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Jul 12 00:09:37.075120 containerd[2033]: 2025-07-12 00:09:36.523 [INFO][5461] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" iface="eth0" netns="/var/run/netns/cni-d341d174-e1b7-5812-16d6-87b3b69fcae5" Jul 12 00:09:37.075120 containerd[2033]: 2025-07-12 00:09:36.526 [INFO][5461] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" iface="eth0" netns="/var/run/netns/cni-d341d174-e1b7-5812-16d6-87b3b69fcae5" Jul 12 00:09:37.075120 containerd[2033]: 2025-07-12 00:09:36.527 [INFO][5461] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" iface="eth0" netns="/var/run/netns/cni-d341d174-e1b7-5812-16d6-87b3b69fcae5" Jul 12 00:09:37.075120 containerd[2033]: 2025-07-12 00:09:36.527 [INFO][5461] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Jul 12 00:09:37.075120 containerd[2033]: 2025-07-12 00:09:36.527 [INFO][5461] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Jul 12 00:09:37.075120 containerd[2033]: 2025-07-12 00:09:36.858 [INFO][5500] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" HandleID="k8s-pod-network.44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Workload="ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0" Jul 12 00:09:37.075120 containerd[2033]: 2025-07-12 00:09:36.864 [INFO][5500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:37.075120 containerd[2033]: 2025-07-12 00:09:36.962 [INFO][5500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:37.075120 containerd[2033]: 2025-07-12 00:09:37.030 [WARNING][5500] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" HandleID="k8s-pod-network.44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Workload="ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0" Jul 12 00:09:37.075120 containerd[2033]: 2025-07-12 00:09:37.031 [INFO][5500] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" HandleID="k8s-pod-network.44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Workload="ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0" Jul 12 00:09:37.075120 containerd[2033]: 2025-07-12 00:09:37.049 [INFO][5500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:37.075120 containerd[2033]: 2025-07-12 00:09:37.058 [INFO][5461] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Jul 12 00:09:37.089720 containerd[2033]: time="2025-07-12T00:09:37.088703943Z" level=info msg="TearDown network for sandbox \"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\" successfully" Jul 12 00:09:37.089720 containerd[2033]: time="2025-07-12T00:09:37.088757415Z" level=info msg="StopPodSandbox for \"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\" returns successfully" Jul 12 00:09:37.092813 systemd[1]: run-netns-cni\x2dd341d174\x2de1b7\x2d5812\x2d16d6\x2d87b3b69fcae5.mount: Deactivated successfully. Jul 12 00:09:37.093530 containerd[2033]: time="2025-07-12T00:09:37.093463083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f77c98ccf-gv7dm,Uid:eb58e40d-98fa-4b77-aa58-30c336d0d01d,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:09:37.098880 containerd[2033]: time="2025-07-12T00:09:37.098677815Z" level=info msg="TearDown network for sandbox \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\" successfully" Jul 12 00:09:37.098880 containerd[2033]: time="2025-07-12T00:09:37.098748591Z" level=info msg="StopPodSandbox for \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\" returns successfully" Jul 12 00:09:37.099768 containerd[2033]: time="2025-07-12T00:09:37.099685695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkbd2,Uid:2cfa5752-993e-4842-a5b4-cf0d08ec1a3c,Namespace:calico-system,Attempt:1,}" Jul 12 00:09:37.180975 containerd[2033]: 2025-07-12 00:09:36.617 [INFO][5457] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Jul 12 00:09:37.180975 containerd[2033]: 2025-07-12 00:09:36.618 [INFO][5457] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" iface="eth0" netns="/var/run/netns/cni-0322bafa-a88d-4b86-f9c7-5bdd2a96a8e1" Jul 12 00:09:37.180975 containerd[2033]: 2025-07-12 00:09:36.619 [INFO][5457] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" iface="eth0" netns="/var/run/netns/cni-0322bafa-a88d-4b86-f9c7-5bdd2a96a8e1" Jul 12 00:09:37.180975 containerd[2033]: 2025-07-12 00:09:36.620 [INFO][5457] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" iface="eth0" netns="/var/run/netns/cni-0322bafa-a88d-4b86-f9c7-5bdd2a96a8e1" Jul 12 00:09:37.180975 containerd[2033]: 2025-07-12 00:09:36.620 [INFO][5457] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Jul 12 00:09:37.180975 containerd[2033]: 2025-07-12 00:09:36.621 [INFO][5457] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Jul 12 00:09:37.180975 containerd[2033]: 2025-07-12 00:09:36.910 [INFO][5513] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" HandleID="k8s-pod-network.90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0" Jul 12 00:09:37.180975 containerd[2033]: 2025-07-12 00:09:36.910 [INFO][5513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:37.180975 containerd[2033]: 2025-07-12 00:09:37.054 [INFO][5513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:37.180975 containerd[2033]: 2025-07-12 00:09:37.114 [WARNING][5513] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" HandleID="k8s-pod-network.90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0" Jul 12 00:09:37.180975 containerd[2033]: 2025-07-12 00:09:37.114 [INFO][5513] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" HandleID="k8s-pod-network.90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0" Jul 12 00:09:37.180975 containerd[2033]: 2025-07-12 00:09:37.129 [INFO][5513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:37.180975 containerd[2033]: 2025-07-12 00:09:37.162 [INFO][5457] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Jul 12 00:09:37.184660 containerd[2033]: time="2025-07-12T00:09:37.182796652Z" level=info msg="TearDown network for sandbox \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\" successfully" Jul 12 00:09:37.184660 containerd[2033]: time="2025-07-12T00:09:37.182864764Z" level=info msg="StopPodSandbox for \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\" returns successfully" Jul 12 00:09:37.184660 containerd[2033]: time="2025-07-12T00:09:37.184413376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f77c98ccf-twq6n,Uid:fdc693eb-7dfe-45fd-8cc7-68be5365972b,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:09:37.467884 systemd-networkd[1944]: calie4186c3056f: Gained IPv6LL Jul 12 00:09:37.628281 systemd[1]: run-netns-cni\x2d0322bafa\x2da88d\x2d4b86\x2df9c7\x2d5bdd2a96a8e1.mount: Deactivated successfully. 
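
Each sandbox's network namespace is a bind mount under /var/run/netns named cni-<uuid>, which is why every teardown above ends with systemd reporting the matching run-netns-….mount unit deactivated. Namespaces that are still live, and their nsfs mounts, can be checked with:

    ip netns list
    findmnt -t nsfs
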
Jul 12 00:09:37.819439 containerd[2033]: time="2025-07-12T00:09:37.819345007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-p6rbw,Uid:7b617ba3-8b27-4a18-bcbf-668944552e8e,Namespace:calico-system,Attempt:1,} returns sandbox id \"07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a\"" Jul 12 00:09:37.904923 containerd[2033]: 2025-07-12 00:09:37.361 [INFO][5540] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Jul 12 00:09:37.904923 containerd[2033]: 2025-07-12 00:09:37.364 [INFO][5540] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" iface="eth0" netns="/var/run/netns/cni-48bb0e64-e26d-5624-2e79-1f409228cd3f" Jul 12 00:09:37.904923 containerd[2033]: 2025-07-12 00:09:37.366 [INFO][5540] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" iface="eth0" netns="/var/run/netns/cni-48bb0e64-e26d-5624-2e79-1f409228cd3f" Jul 12 00:09:37.904923 containerd[2033]: 2025-07-12 00:09:37.373 [INFO][5540] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" iface="eth0" netns="/var/run/netns/cni-48bb0e64-e26d-5624-2e79-1f409228cd3f" Jul 12 00:09:37.904923 containerd[2033]: 2025-07-12 00:09:37.373 [INFO][5540] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Jul 12 00:09:37.904923 containerd[2033]: 2025-07-12 00:09:37.375 [INFO][5540] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Jul 12 00:09:37.904923 containerd[2033]: 2025-07-12 00:09:37.777 [INFO][5589] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" HandleID="k8s-pod-network.6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Workload="ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0" Jul 12 00:09:37.904923 containerd[2033]: 2025-07-12 00:09:37.784 [INFO][5589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:37.904923 containerd[2033]: 2025-07-12 00:09:37.785 [INFO][5589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:37.904923 containerd[2033]: 2025-07-12 00:09:37.869 [WARNING][5589] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" HandleID="k8s-pod-network.6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Workload="ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0" Jul 12 00:09:37.904923 containerd[2033]: 2025-07-12 00:09:37.870 [INFO][5589] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" HandleID="k8s-pod-network.6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Workload="ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0" Jul 12 00:09:37.904923 containerd[2033]: 2025-07-12 00:09:37.878 [INFO][5589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
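
The WARNING "Asked to release address but it doesn't exist. Ignoring" recurs in each of these teardowns and appears benign here: these are Attempt:1 sandbox re-creations, so the handle-based release finds nothing and the plugin falls back to release-by-workloadID, as the surrounding entries show. A rough way to count occurrences from the journal:

    journalctl -u containerd --since today | grep -c 'Asked to release address'
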
Jul 12 00:09:37.904923 containerd[2033]: 2025-07-12 00:09:37.894 [INFO][5540] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Jul 12 00:09:37.911079 containerd[2033]: time="2025-07-12T00:09:37.910775335Z" level=info msg="TearDown network for sandbox \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\" successfully" Jul 12 00:09:37.911079 containerd[2033]: time="2025-07-12T00:09:37.910847131Z" level=info msg="StopPodSandbox for \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\" returns successfully" Jul 12 00:09:37.917265 containerd[2033]: time="2025-07-12T00:09:37.917143351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6799c8fbbc-xvgkw,Uid:fef89b6f-afd5-48ce-ab61-615671060a43,Namespace:calico-system,Attempt:1,}" Jul 12 00:09:37.924081 systemd[1]: run-netns-cni\x2d48bb0e64\x2de26d\x2d5624\x2d2e79\x2d1f409228cd3f.mount: Deactivated successfully. Jul 12 00:09:38.153691 systemd-networkd[1944]: cali7cc4fb8f98a: Link UP Jul 12 00:09:38.155922 systemd-networkd[1944]: cali7cc4fb8f98a: Gained carrier Jul 12 00:09:38.233112 containerd[2033]: 2025-07-12 00:09:37.605 [INFO][5575] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0 calico-apiserver-7f77c98ccf- calico-apiserver fdc693eb-7dfe-45fd-8cc7-68be5365972b 970 0 2025-07-12 00:09:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f77c98ccf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-146 calico-apiserver-7f77c98ccf-twq6n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7cc4fb8f98a [] [] }} ContainerID="fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508" Namespace="calico-apiserver" Pod="calico-apiserver-7f77c98ccf-twq6n" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-" Jul 12 00:09:38.233112 containerd[2033]: 2025-07-12 00:09:37.606 [INFO][5575] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508" Namespace="calico-apiserver" Pod="calico-apiserver-7f77c98ccf-twq6n" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0" Jul 12 00:09:38.233112 containerd[2033]: 2025-07-12 00:09:37.852 [INFO][5611] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508" HandleID="k8s-pod-network.fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0" Jul 12 00:09:38.233112 containerd[2033]: 2025-07-12 00:09:37.853 [INFO][5611] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508" HandleID="k8s-pod-network.fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000374720), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-146", "pod":"calico-apiserver-7f77c98ccf-twq6n", 
"timestamp":"2025-07-12 00:09:37.852661231 +0000 UTC"}, Hostname:"ip-172-31-28-146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:38.233112 containerd[2033]: 2025-07-12 00:09:37.858 [INFO][5611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:38.233112 containerd[2033]: 2025-07-12 00:09:37.879 [INFO][5611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:38.233112 containerd[2033]: 2025-07-12 00:09:37.879 [INFO][5611] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-146' Jul 12 00:09:38.233112 containerd[2033]: 2025-07-12 00:09:37.933 [INFO][5611] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508" host="ip-172-31-28-146" Jul 12 00:09:38.233112 containerd[2033]: 2025-07-12 00:09:37.962 [INFO][5611] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-146" Jul 12 00:09:38.233112 containerd[2033]: 2025-07-12 00:09:38.024 [INFO][5611] ipam/ipam.go 511: Trying affinity for 192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:38.233112 containerd[2033]: 2025-07-12 00:09:38.040 [INFO][5611] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:38.233112 containerd[2033]: 2025-07-12 00:09:38.049 [INFO][5611] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:38.233112 containerd[2033]: 2025-07-12 00:09:38.051 [INFO][5611] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.110.0/26 handle="k8s-pod-network.fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508" host="ip-172-31-28-146" Jul 12 00:09:38.233112 containerd[2033]: 2025-07-12 00:09:38.067 [INFO][5611] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508 Jul 12 00:09:38.233112 containerd[2033]: 2025-07-12 00:09:38.088 [INFO][5611] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.110.0/26 handle="k8s-pod-network.fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508" host="ip-172-31-28-146" Jul 12 00:09:38.233112 containerd[2033]: 2025-07-12 00:09:38.109 [INFO][5611] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.110.5/26] block=192.168.110.0/26 handle="k8s-pod-network.fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508" host="ip-172-31-28-146" Jul 12 00:09:38.233112 containerd[2033]: 2025-07-12 00:09:38.109 [INFO][5611] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.5/26] handle="k8s-pod-network.fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508" host="ip-172-31-28-146" Jul 12 00:09:38.233112 containerd[2033]: 2025-07-12 00:09:38.110 [INFO][5611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:09:38.233112 containerd[2033]: 2025-07-12 00:09:38.110 [INFO][5611] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.5/26] IPv6=[] ContainerID="fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508" HandleID="k8s-pod-network.fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0" Jul 12 00:09:38.236186 containerd[2033]: 2025-07-12 00:09:38.127 [INFO][5575] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508" Namespace="calico-apiserver" Pod="calico-apiserver-7f77c98ccf-twq6n" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0", GenerateName:"calico-apiserver-7f77c98ccf-", Namespace:"calico-apiserver", SelfLink:"", UID:"fdc693eb-7dfe-45fd-8cc7-68be5365972b", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f77c98ccf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"", Pod:"calico-apiserver-7f77c98ccf-twq6n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cc4fb8f98a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:38.236186 containerd[2033]: 2025-07-12 00:09:38.127 [INFO][5575] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.5/32] ContainerID="fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508" Namespace="calico-apiserver" Pod="calico-apiserver-7f77c98ccf-twq6n" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0" Jul 12 00:09:38.236186 containerd[2033]: 2025-07-12 00:09:38.127 [INFO][5575] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cc4fb8f98a ContainerID="fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508" Namespace="calico-apiserver" Pod="calico-apiserver-7f77c98ccf-twq6n" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0" Jul 12 00:09:38.236186 containerd[2033]: 2025-07-12 00:09:38.160 [INFO][5575] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508" Namespace="calico-apiserver" Pod="calico-apiserver-7f77c98ccf-twq6n" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0" Jul 12 00:09:38.236186 containerd[2033]: 2025-07-12 00:09:38.171 [INFO][5575] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508" Namespace="calico-apiserver" Pod="calico-apiserver-7f77c98ccf-twq6n" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0", GenerateName:"calico-apiserver-7f77c98ccf-", Namespace:"calico-apiserver", SelfLink:"", UID:"fdc693eb-7dfe-45fd-8cc7-68be5365972b", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f77c98ccf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508", Pod:"calico-apiserver-7f77c98ccf-twq6n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cc4fb8f98a", MAC:"b2:a1:27:a2:ec:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:38.236186 containerd[2033]: 2025-07-12 00:09:38.212 [INFO][5575] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508" Namespace="calico-apiserver" Pod="calico-apiserver-7f77c98ccf-twq6n" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0" Jul 12 00:09:38.332821 systemd-networkd[1944]: calia2c55c28ebf: Link UP Jul 12 00:09:38.335373 systemd-networkd[1944]: calia2c55c28ebf: Gained carrier Jul 12 00:09:38.403914 containerd[2033]: time="2025-07-12T00:09:38.401714082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:38.403914 containerd[2033]: time="2025-07-12T00:09:38.401830902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:38.403914 containerd[2033]: time="2025-07-12T00:09:38.401870046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:38.403914 containerd[2033]: time="2025-07-12T00:09:38.402026814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:38.407749 containerd[2033]: 2025-07-12 00:09:37.534 [INFO][5552] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0 csi-node-driver- calico-system 2cfa5752-993e-4842-a5b4-cf0d08ec1a3c 968 0 2025-07-12 00:09:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-28-146 csi-node-driver-tkbd2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia2c55c28ebf [] [] }} ContainerID="04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a" Namespace="calico-system" Pod="csi-node-driver-tkbd2" WorkloadEndpoint="ip--172--31--28--146-k8s-csi--node--driver--tkbd2-" Jul 12 00:09:38.407749 containerd[2033]: 2025-07-12 00:09:37.534 [INFO][5552] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a" Namespace="calico-system" Pod="csi-node-driver-tkbd2" WorkloadEndpoint="ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0" Jul 12 00:09:38.407749 containerd[2033]: 2025-07-12 00:09:37.918 [INFO][5606] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a" HandleID="k8s-pod-network.04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a" Workload="ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0" Jul 12 00:09:38.407749 containerd[2033]: 2025-07-12 00:09:37.918 [INFO][5606] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a" HandleID="k8s-pod-network.04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a" Workload="ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c6e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-146", "pod":"csi-node-driver-tkbd2", "timestamp":"2025-07-12 00:09:37.918145975 +0000 UTC"}, Hostname:"ip-172-31-28-146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:38.407749 containerd[2033]: 2025-07-12 00:09:37.918 [INFO][5606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:38.407749 containerd[2033]: 2025-07-12 00:09:38.111 [INFO][5606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:38.407749 containerd[2033]: 2025-07-12 00:09:38.111 [INFO][5606] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-146' Jul 12 00:09:38.407749 containerd[2033]: 2025-07-12 00:09:38.140 [INFO][5606] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a" host="ip-172-31-28-146" Jul 12 00:09:38.407749 containerd[2033]: 2025-07-12 00:09:38.166 [INFO][5606] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-146" Jul 12 00:09:38.407749 containerd[2033]: 2025-07-12 00:09:38.191 [INFO][5606] ipam/ipam.go 511: Trying affinity for 192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:38.407749 containerd[2033]: 2025-07-12 00:09:38.204 [INFO][5606] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:38.407749 containerd[2033]: 2025-07-12 00:09:38.224 [INFO][5606] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:38.407749 containerd[2033]: 2025-07-12 00:09:38.226 [INFO][5606] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.110.0/26 handle="k8s-pod-network.04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a" host="ip-172-31-28-146" Jul 12 00:09:38.407749 containerd[2033]: 2025-07-12 00:09:38.235 [INFO][5606] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a Jul 12 00:09:38.407749 containerd[2033]: 2025-07-12 00:09:38.249 [INFO][5606] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.110.0/26 handle="k8s-pod-network.04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a" host="ip-172-31-28-146" Jul 12 00:09:38.407749 containerd[2033]: 2025-07-12 00:09:38.281 [INFO][5606] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.110.6/26] block=192.168.110.0/26 handle="k8s-pod-network.04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a" host="ip-172-31-28-146" Jul 12 00:09:38.407749 containerd[2033]: 2025-07-12 00:09:38.282 [INFO][5606] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.6/26] handle="k8s-pod-network.04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a" host="ip-172-31-28-146" Jul 12 00:09:38.407749 containerd[2033]: 2025-07-12 00:09:38.283 [INFO][5606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:09:38.407749 containerd[2033]: 2025-07-12 00:09:38.283 [INFO][5606] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.6/26] IPv6=[] ContainerID="04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a" HandleID="k8s-pod-network.04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a" Workload="ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0" Jul 12 00:09:38.412323 containerd[2033]: 2025-07-12 00:09:38.308 [INFO][5552] cni-plugin/k8s.go 418: Populated endpoint ContainerID="04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a" Namespace="calico-system" Pod="csi-node-driver-tkbd2" WorkloadEndpoint="ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2cfa5752-993e-4842-a5b4-cf0d08ec1a3c", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"", Pod:"csi-node-driver-tkbd2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia2c55c28ebf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:38.412323 containerd[2033]: 2025-07-12 00:09:38.311 [INFO][5552] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.6/32] ContainerID="04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a" Namespace="calico-system" Pod="csi-node-driver-tkbd2" WorkloadEndpoint="ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0" Jul 12 00:09:38.412323 containerd[2033]: 2025-07-12 00:09:38.311 [INFO][5552] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2c55c28ebf ContainerID="04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a" Namespace="calico-system" Pod="csi-node-driver-tkbd2" WorkloadEndpoint="ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0" Jul 12 00:09:38.412323 containerd[2033]: 2025-07-12 00:09:38.349 [INFO][5552] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a" Namespace="calico-system" Pod="csi-node-driver-tkbd2" WorkloadEndpoint="ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0" Jul 12 00:09:38.412323 containerd[2033]: 2025-07-12 00:09:38.353 [INFO][5552] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a" 
Namespace="calico-system" Pod="csi-node-driver-tkbd2" WorkloadEndpoint="ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2cfa5752-993e-4842-a5b4-cf0d08ec1a3c", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a", Pod:"csi-node-driver-tkbd2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia2c55c28ebf", MAC:"6e:38:8e:c7:fe:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:38.412323 containerd[2033]: 2025-07-12 00:09:38.391 [INFO][5552] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a" Namespace="calico-system" Pod="csi-node-driver-tkbd2" WorkloadEndpoint="ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0" Jul 12 00:09:38.504648 systemd[1]: Started cri-containerd-fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508.scope - libcontainer container fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508. Jul 12 00:09:38.565846 systemd-networkd[1944]: calia861eb9417f: Link UP Jul 12 00:09:38.576400 systemd-networkd[1944]: calia861eb9417f: Gained carrier Jul 12 00:09:38.586774 containerd[2033]: time="2025-07-12T00:09:38.581667870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:38.586774 containerd[2033]: time="2025-07-12T00:09:38.582031074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:38.586774 containerd[2033]: time="2025-07-12T00:09:38.582147930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:38.586774 containerd[2033]: time="2025-07-12T00:09:38.585246702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:38.684245 containerd[2033]: 2025-07-12 00:09:37.511 [INFO][5547] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0 calico-apiserver-7f77c98ccf- calico-apiserver eb58e40d-98fa-4b77-aa58-30c336d0d01d 967 0 2025-07-12 00:09:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f77c98ccf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-146 calico-apiserver-7f77c98ccf-gv7dm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia861eb9417f [] [] }} ContainerID="ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d" Namespace="calico-apiserver" Pod="calico-apiserver-7f77c98ccf-gv7dm" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-" Jul 12 00:09:38.684245 containerd[2033]: 2025-07-12 00:09:37.513 [INFO][5547] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d" Namespace="calico-apiserver" Pod="calico-apiserver-7f77c98ccf-gv7dm" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0" Jul 12 00:09:38.684245 containerd[2033]: 2025-07-12 00:09:37.920 [INFO][5599] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d" HandleID="k8s-pod-network.ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0" Jul 12 00:09:38.684245 containerd[2033]: 2025-07-12 00:09:37.920 [INFO][5599] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d" HandleID="k8s-pod-network.ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121bd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-146", "pod":"calico-apiserver-7f77c98ccf-gv7dm", "timestamp":"2025-07-12 00:09:37.920451115 +0000 UTC"}, Hostname:"ip-172-31-28-146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:38.684245 containerd[2033]: 2025-07-12 00:09:37.920 [INFO][5599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:38.684245 containerd[2033]: 2025-07-12 00:09:38.284 [INFO][5599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:38.684245 containerd[2033]: 2025-07-12 00:09:38.284 [INFO][5599] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-146' Jul 12 00:09:38.684245 containerd[2033]: 2025-07-12 00:09:38.351 [INFO][5599] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d" host="ip-172-31-28-146" Jul 12 00:09:38.684245 containerd[2033]: 2025-07-12 00:09:38.378 [INFO][5599] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-146" Jul 12 00:09:38.684245 containerd[2033]: 2025-07-12 00:09:38.406 [INFO][5599] ipam/ipam.go 511: Trying affinity for 192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:38.684245 containerd[2033]: 2025-07-12 00:09:38.418 [INFO][5599] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:38.684245 containerd[2033]: 2025-07-12 00:09:38.444 [INFO][5599] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:38.684245 containerd[2033]: 2025-07-12 00:09:38.446 [INFO][5599] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.110.0/26 handle="k8s-pod-network.ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d" host="ip-172-31-28-146" Jul 12 00:09:38.684245 containerd[2033]: 2025-07-12 00:09:38.457 [INFO][5599] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d Jul 12 00:09:38.684245 containerd[2033]: 2025-07-12 00:09:38.478 [INFO][5599] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.110.0/26 handle="k8s-pod-network.ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d" host="ip-172-31-28-146" Jul 12 00:09:38.684245 containerd[2033]: 2025-07-12 00:09:38.516 [INFO][5599] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.110.7/26] block=192.168.110.0/26 handle="k8s-pod-network.ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d" host="ip-172-31-28-146" Jul 12 00:09:38.684245 containerd[2033]: 2025-07-12 00:09:38.516 [INFO][5599] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.7/26] handle="k8s-pod-network.ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d" host="ip-172-31-28-146" Jul 12 00:09:38.684245 containerd[2033]: 2025-07-12 00:09:38.516 [INFO][5599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
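The interface names appearing throughout (cali7cc4fb8f98a, calia2c55c28ebf, calia861eb9417f, cali09abcdfb455, as in the "Setting the host side veth name" entries above and below) follow Calico's deterministic scheme: "cali" plus 11 hex characters derived by hashing the endpoint identity, so repeated CNI invocations for the same pod resolve to the same host-side veth. A sketch of the pattern — only the name format is taken from the log; the exact hash input below is an assumption:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName derives a stable host-side interface name. The "cali" prefix
// and 11-hex-char suffix match the names in this log; the choice of
// hash input here is illustrative, not Calico's actual derivation.
func vethName(endpointID string) string {
	sum := sha1.Sum([]byte(endpointID))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethName("calico-system/csi-node-driver-tkbd2"))
}
```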
Jul 12 00:09:38.684245 containerd[2033]: 2025-07-12 00:09:38.517 [INFO][5599] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.7/26] IPv6=[] ContainerID="ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d" HandleID="k8s-pod-network.ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0" Jul 12 00:09:38.690709 containerd[2033]: 2025-07-12 00:09:38.526 [INFO][5547] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d" Namespace="calico-apiserver" Pod="calico-apiserver-7f77c98ccf-gv7dm" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0", GenerateName:"calico-apiserver-7f77c98ccf-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb58e40d-98fa-4b77-aa58-30c336d0d01d", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f77c98ccf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"", Pod:"calico-apiserver-7f77c98ccf-gv7dm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia861eb9417f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:38.690709 containerd[2033]: 2025-07-12 00:09:38.529 [INFO][5547] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.7/32] ContainerID="ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d" Namespace="calico-apiserver" Pod="calico-apiserver-7f77c98ccf-gv7dm" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0" Jul 12 00:09:38.690709 containerd[2033]: 2025-07-12 00:09:38.530 [INFO][5547] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia861eb9417f ContainerID="ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d" Namespace="calico-apiserver" Pod="calico-apiserver-7f77c98ccf-gv7dm" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0" Jul 12 00:09:38.690709 containerd[2033]: 2025-07-12 00:09:38.588 [INFO][5547] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d" Namespace="calico-apiserver" Pod="calico-apiserver-7f77c98ccf-gv7dm" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0" Jul 12 00:09:38.690709 containerd[2033]: 2025-07-12 00:09:38.590 [INFO][5547] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d" Namespace="calico-apiserver" Pod="calico-apiserver-7f77c98ccf-gv7dm" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0", GenerateName:"calico-apiserver-7f77c98ccf-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb58e40d-98fa-4b77-aa58-30c336d0d01d", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f77c98ccf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d", Pod:"calico-apiserver-7f77c98ccf-gv7dm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia861eb9417f", MAC:"86:65:4b:fb:e1:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:38.690709 containerd[2033]: 2025-07-12 00:09:38.665 [INFO][5547] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d" Namespace="calico-apiserver" Pod="calico-apiserver-7f77c98ccf-gv7dm" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0" Jul 12 00:09:38.776946 systemd[1]: Started cri-containerd-04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a.scope - libcontainer container 04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a. Jul 12 00:09:38.856263 systemd-networkd[1944]: cali09abcdfb455: Link UP Jul 12 00:09:38.867736 systemd-networkd[1944]: cali09abcdfb455: Gained carrier Jul 12 00:09:38.915920 containerd[2033]: time="2025-07-12T00:09:38.906566720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:38.915920 containerd[2033]: time="2025-07-12T00:09:38.915570416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:38.915920 containerd[2033]: time="2025-07-12T00:09:38.915625568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:38.915920 containerd[2033]: time="2025-07-12T00:09:38.915814808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:38.962155 containerd[2033]: time="2025-07-12T00:09:38.961460816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f77c98ccf-twq6n,Uid:fdc693eb-7dfe-45fd-8cc7-68be5365972b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508\"" Jul 12 00:09:38.988102 containerd[2033]: 2025-07-12 00:09:38.087 [INFO][5633] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0 calico-kube-controllers-6799c8fbbc- calico-system fef89b6f-afd5-48ce-ab61-615671060a43 986 0 2025-07-12 00:09:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6799c8fbbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-28-146 calico-kube-controllers-6799c8fbbc-xvgkw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali09abcdfb455 [] [] }} ContainerID="7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962" Namespace="calico-system" Pod="calico-kube-controllers-6799c8fbbc-xvgkw" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-" Jul 12 00:09:38.988102 containerd[2033]: 2025-07-12 00:09:38.089 [INFO][5633] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962" Namespace="calico-system" Pod="calico-kube-controllers-6799c8fbbc-xvgkw" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0" Jul 12 00:09:38.988102 containerd[2033]: 2025-07-12 00:09:38.264 [INFO][5648] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962" HandleID="k8s-pod-network.7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962" Workload="ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0" Jul 12 00:09:38.988102 containerd[2033]: 2025-07-12 00:09:38.266 [INFO][5648] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962" HandleID="k8s-pod-network.7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962" Workload="ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000181850), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-146", "pod":"calico-kube-controllers-6799c8fbbc-xvgkw", "timestamp":"2025-07-12 00:09:38.264875549 +0000 UTC"}, Hostname:"ip-172-31-28-146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:09:38.988102 containerd[2033]: 2025-07-12 00:09:38.266 [INFO][5648] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:38.988102 containerd[2033]: 2025-07-12 00:09:38.516 [INFO][5648] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:38.988102 containerd[2033]: 2025-07-12 00:09:38.518 [INFO][5648] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-146' Jul 12 00:09:38.988102 containerd[2033]: 2025-07-12 00:09:38.583 [INFO][5648] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962" host="ip-172-31-28-146" Jul 12 00:09:38.988102 containerd[2033]: 2025-07-12 00:09:38.636 [INFO][5648] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-146" Jul 12 00:09:38.988102 containerd[2033]: 2025-07-12 00:09:38.685 [INFO][5648] ipam/ipam.go 511: Trying affinity for 192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:38.988102 containerd[2033]: 2025-07-12 00:09:38.704 [INFO][5648] ipam/ipam.go 158: Attempting to load block cidr=192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:38.988102 containerd[2033]: 2025-07-12 00:09:38.716 [INFO][5648] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.110.0/26 host="ip-172-31-28-146" Jul 12 00:09:38.988102 containerd[2033]: 2025-07-12 00:09:38.716 [INFO][5648] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.110.0/26 handle="k8s-pod-network.7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962" host="ip-172-31-28-146" Jul 12 00:09:38.988102 containerd[2033]: 2025-07-12 00:09:38.739 [INFO][5648] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962 Jul 12 00:09:38.988102 containerd[2033]: 2025-07-12 00:09:38.763 [INFO][5648] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.110.0/26 handle="k8s-pod-network.7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962" host="ip-172-31-28-146" Jul 12 00:09:38.988102 containerd[2033]: 2025-07-12 00:09:38.794 [INFO][5648] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.110.8/26] block=192.168.110.0/26 handle="k8s-pod-network.7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962" host="ip-172-31-28-146" Jul 12 00:09:38.988102 containerd[2033]: 2025-07-12 00:09:38.794 [INFO][5648] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.110.8/26] handle="k8s-pod-network.7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962" host="ip-172-31-28-146" Jul 12 00:09:38.988102 containerd[2033]: 2025-07-12 00:09:38.795 [INFO][5648] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
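The three CNI ADDs ran concurrently ([5606] requested IPAM at 37.918, [5599] at 37.920, [5648] at 38.264), but the host-wide lock serialized them: each acquires it the instant the previous holder releases it (38.283→38.284, 38.516→38.516), which is why the addresses come out consecutively as .6, .7, .8. A minimal sketch of that contention — illustrative only, and note the pod-to-address mapping depends on which goroutine wins the lock first, just as it depends on request ordering above:

```go
package main

import (
	"fmt"
	"sync"
)

// Three concurrent "CNI ADDs" contend for one lock; consecutive
// addresses are handed out in lock-acquisition order.
func main() {
	var (
		mu   sync.Mutex
		next = 6 // .1-.5 were already taken on this node
		wg   sync.WaitGroup
	)
	pods := []string{
		"csi-node-driver-tkbd2",                    // [5606]
		"calico-apiserver-7f77c98ccf-gv7dm",        // [5599]
		"calico-kube-controllers-6799c8fbbc-xvgkw", // [5648]
	}
	for _, pod := range pods {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			mu.Lock() // "About to acquire host-wide IPAM lock."
			ip := fmt.Sprintf("192.168.110.%d/26", next)
			next++
			mu.Unlock() // "Released host-wide IPAM lock."
			fmt.Println(pod, "->", ip)
		}(pod)
	}
	wg.Wait()
}
```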
Jul 12 00:09:38.988102 containerd[2033]: 2025-07-12 00:09:38.795 [INFO][5648] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.8/26] IPv6=[] ContainerID="7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962" HandleID="k8s-pod-network.7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962" Workload="ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0" Jul 12 00:09:38.991297 containerd[2033]: 2025-07-12 00:09:38.816 [INFO][5633] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962" Namespace="calico-system" Pod="calico-kube-controllers-6799c8fbbc-xvgkw" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0", GenerateName:"calico-kube-controllers-6799c8fbbc-", Namespace:"calico-system", SelfLink:"", UID:"fef89b6f-afd5-48ce-ab61-615671060a43", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6799c8fbbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"", Pod:"calico-kube-controllers-6799c8fbbc-xvgkw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali09abcdfb455", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:38.991297 containerd[2033]: 2025-07-12 00:09:38.821 [INFO][5633] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.110.8/32] ContainerID="7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962" Namespace="calico-system" Pod="calico-kube-controllers-6799c8fbbc-xvgkw" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0" Jul 12 00:09:38.991297 containerd[2033]: 2025-07-12 00:09:38.823 [INFO][5633] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09abcdfb455 ContainerID="7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962" Namespace="calico-system" Pod="calico-kube-controllers-6799c8fbbc-xvgkw" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0" Jul 12 00:09:38.991297 containerd[2033]: 2025-07-12 00:09:38.901 [INFO][5633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962" Namespace="calico-system" Pod="calico-kube-controllers-6799c8fbbc-xvgkw" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0" Jul 12 00:09:38.991297 containerd[2033]: 
2025-07-12 00:09:38.905 [INFO][5633] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962" Namespace="calico-system" Pod="calico-kube-controllers-6799c8fbbc-xvgkw" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0", GenerateName:"calico-kube-controllers-6799c8fbbc-", Namespace:"calico-system", SelfLink:"", UID:"fef89b6f-afd5-48ce-ab61-615671060a43", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6799c8fbbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962", Pod:"calico-kube-controllers-6799c8fbbc-xvgkw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali09abcdfb455", MAC:"1a:3a:d5:77:5b:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:38.991297 containerd[2033]: 2025-07-12 00:09:38.973 [INFO][5633] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962" Namespace="calico-system" Pod="calico-kube-controllers-6799c8fbbc-xvgkw" WorkloadEndpoint="ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0" Jul 12 00:09:39.025115 containerd[2033]: time="2025-07-12T00:09:39.025037249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tkbd2,Uid:2cfa5752-993e-4842-a5b4-cf0d08ec1a3c,Namespace:calico-system,Attempt:1,} returns sandbox id \"04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a\"" Jul 12 00:09:39.074950 systemd[1]: Started cri-containerd-ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d.scope - libcontainer container ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d. Jul 12 00:09:39.130063 containerd[2033]: time="2025-07-12T00:09:39.129319025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:39.132031 containerd[2033]: time="2025-07-12T00:09:39.129428801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:39.132031 containerd[2033]: time="2025-07-12T00:09:39.131031533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:39.132031 containerd[2033]: time="2025-07-12T00:09:39.131210645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:39.191658 containerd[2033]: time="2025-07-12T00:09:39.190985633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:39.191476 systemd[1]: Started cri-containerd-7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962.scope - libcontainer container 7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962. Jul 12 00:09:39.195829 containerd[2033]: time="2025-07-12T00:09:39.195337314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 12 00:09:39.197858 containerd[2033]: time="2025-07-12T00:09:39.197687142Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:39.213976 containerd[2033]: time="2025-07-12T00:09:39.213831786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:39.220929 containerd[2033]: time="2025-07-12T00:09:39.218669454Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 5.655459448s" Jul 12 00:09:39.220929 containerd[2033]: time="2025-07-12T00:09:39.218869566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 12 00:09:39.227927 containerd[2033]: time="2025-07-12T00:09:39.226580202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 12 00:09:39.235797 containerd[2033]: time="2025-07-12T00:09:39.235737078Z" level=info msg="CreateContainer within sandbox \"537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 12 00:09:39.261669 containerd[2033]: time="2025-07-12T00:09:39.261184050Z" level=info msg="CreateContainer within sandbox \"537b813038dc6abacd1c96307923a4d1d5ffcddfd2f71b041f019589620e1941\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ba6312e2fad339edbc0f1f3261181cf9befc1a52086a9fdcced395bab4560c88\"" Jul 12 00:09:39.263700 containerd[2033]: time="2025-07-12T00:09:39.263576790Z" level=info msg="StartContainer for \"ba6312e2fad339edbc0f1f3261181cf9befc1a52086a9fdcced395bab4560c88\"" Jul 12 00:09:39.360107 systemd[1]: Started cri-containerd-ba6312e2fad339edbc0f1f3261181cf9befc1a52086a9fdcced395bab4560c88.scope - libcontainer container ba6312e2fad339edbc0f1f3261181cf9befc1a52086a9fdcced395bab4560c88. 
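The "Pulled image" entry above reports both the transfer size and the wall-clock duration, so the effective pull throughput can be read straight off the log. The arithmetic, using only the logged figures:

```go
package main

import "fmt"

// Figures copied from the whisker-backend:v3.30.2 "Pulled image" entry:
// size "30814411" bytes, completed "in 5.655459448s".
func main() {
	const bytes = 30814411
	const seconds = 5.655459448
	mib := float64(bytes) / (1 << 20)
	fmt.Printf("%.1f MiB in %.2fs ≈ %.2f MiB/s\n", mib, seconds, mib/seconds)
}
```

Roughly 29.4 MiB in 5.66 s, or about 5.2 MiB/s for that layer pull.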
Jul 12 00:09:39.424368 containerd[2033]: time="2025-07-12T00:09:39.424150579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f77c98ccf-gv7dm,Uid:eb58e40d-98fa-4b77-aa58-30c336d0d01d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d\"" Jul 12 00:09:39.533801 containerd[2033]: time="2025-07-12T00:09:39.533687299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6799c8fbbc-xvgkw,Uid:fef89b6f-afd5-48ce-ab61-615671060a43,Namespace:calico-system,Attempt:1,} returns sandbox id \"7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962\"" Jul 12 00:09:39.580389 containerd[2033]: time="2025-07-12T00:09:39.580212403Z" level=info msg="StartContainer for \"ba6312e2fad339edbc0f1f3261181cf9befc1a52086a9fdcced395bab4560c88\" returns successfully" Jul 12 00:09:39.614280 systemd[1]: run-containerd-runc-k8s.io-ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d-runc.Z2SX7g.mount: Deactivated successfully. Jul 12 00:09:39.614505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2914712906.mount: Deactivated successfully. Jul 12 00:09:39.701569 kubelet[3541]: I0712 00:09:39.700799 3541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-64d48ddf6c-tjv4h" podStartSLOduration=1.713605584 podStartE2EDuration="9.700776068s" podCreationTimestamp="2025-07-12 00:09:30 +0000 UTC" firstStartedPulling="2025-07-12 00:09:31.237253414 +0000 UTC m=+49.479983923" lastFinishedPulling="2025-07-12 00:09:39.22442391 +0000 UTC m=+57.467154407" observedRunningTime="2025-07-12 00:09:39.700520156 +0000 UTC m=+57.943250701" watchObservedRunningTime="2025-07-12 00:09:39.700776068 +0000 UTC m=+57.943506589" Jul 12 00:09:40.027988 systemd-networkd[1944]: cali7cc4fb8f98a: Gained IPv6LL Jul 12 00:09:40.156080 systemd-networkd[1944]: calia861eb9417f: Gained IPv6LL Jul 12 00:09:40.220584 systemd-networkd[1944]: calia2c55c28ebf: Gained IPv6LL Jul 12 00:09:40.861317 systemd-networkd[1944]: cali09abcdfb455: Gained IPv6LL Jul 12 00:09:41.408999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1019304879.mount: Deactivated successfully. Jul 12 00:09:41.982265 containerd[2033]: time="2025-07-12T00:09:41.982158767Z" level=info msg="StopPodSandbox for \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\"" Jul 12 00:09:42.216720 containerd[2033]: 2025-07-12 00:09:42.108 [WARNING][5918] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e699abac-0590-4217-8cf9-599543324b2d", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f", Pod:"coredns-674b8bbfcf-jpq74", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02aab0f1e45", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:42.216720 containerd[2033]: 2025-07-12 00:09:42.110 [INFO][5918] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Jul 12 00:09:42.216720 containerd[2033]: 2025-07-12 00:09:42.110 [INFO][5918] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" iface="eth0" netns="" Jul 12 00:09:42.216720 containerd[2033]: 2025-07-12 00:09:42.110 [INFO][5918] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Jul 12 00:09:42.216720 containerd[2033]: 2025-07-12 00:09:42.110 [INFO][5918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Jul 12 00:09:42.216720 containerd[2033]: 2025-07-12 00:09:42.178 [INFO][5928] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" HandleID="k8s-pod-network.b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0" Jul 12 00:09:42.216720 containerd[2033]: 2025-07-12 00:09:42.179 [INFO][5928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:42.216720 containerd[2033]: 2025-07-12 00:09:42.179 [INFO][5928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:42.216720 containerd[2033]: 2025-07-12 00:09:42.200 [WARNING][5928] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" HandleID="k8s-pod-network.b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0" Jul 12 00:09:42.216720 containerd[2033]: 2025-07-12 00:09:42.201 [INFO][5928] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" HandleID="k8s-pod-network.b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0" Jul 12 00:09:42.216720 containerd[2033]: 2025-07-12 00:09:42.204 [INFO][5928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:42.216720 containerd[2033]: 2025-07-12 00:09:42.208 [INFO][5918] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Jul 12 00:09:42.216720 containerd[2033]: time="2025-07-12T00:09:42.216062997Z" level=info msg="TearDown network for sandbox \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\" successfully" Jul 12 00:09:42.216720 containerd[2033]: time="2025-07-12T00:09:42.216212397Z" level=info msg="StopPodSandbox for \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\" returns successfully" Jul 12 00:09:42.219347 containerd[2033]: time="2025-07-12T00:09:42.218439093Z" level=info msg="RemovePodSandbox for \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\"" Jul 12 00:09:42.219347 containerd[2033]: time="2025-07-12T00:09:42.218502861Z" level=info msg="Forcibly stopping sandbox \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\"" Jul 12 00:09:42.243835 containerd[2033]: time="2025-07-12T00:09:42.241856805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:42.249522 containerd[2033]: time="2025-07-12T00:09:42.249278541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 12 00:09:42.254912 containerd[2033]: time="2025-07-12T00:09:42.254347581Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:42.303781 containerd[2033]: time="2025-07-12T00:09:42.303074745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:42.308461 containerd[2033]: time="2025-07-12T00:09:42.308305593Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.079807563s" Jul 12 00:09:42.309524 containerd[2033]: time="2025-07-12T00:09:42.309490293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference 
\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 12 00:09:42.317116 containerd[2033]: time="2025-07-12T00:09:42.317070105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:09:42.326652 containerd[2033]: time="2025-07-12T00:09:42.325304757Z" level=info msg="CreateContainer within sandbox \"07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 12 00:09:42.398452 containerd[2033]: time="2025-07-12T00:09:42.398361033Z" level=info msg="CreateContainer within sandbox \"07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2eec446c6e4b75755f4a355ec8b1bce3b5f3df167473d5731d125a4d81c2d857\"" Jul 12 00:09:42.399910 containerd[2033]: time="2025-07-12T00:09:42.399851097Z" level=info msg="StartContainer for \"2eec446c6e4b75755f4a355ec8b1bce3b5f3df167473d5731d125a4d81c2d857\"" Jul 12 00:09:42.579953 systemd[1]: Started cri-containerd-2eec446c6e4b75755f4a355ec8b1bce3b5f3df167473d5731d125a4d81c2d857.scope - libcontainer container 2eec446c6e4b75755f4a355ec8b1bce3b5f3df167473d5731d125a4d81c2d857. Jul 12 00:09:42.601408 containerd[2033]: 2025-07-12 00:09:42.412 [WARNING][5946] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e699abac-0590-4217-8cf9-599543324b2d", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f", Pod:"coredns-674b8bbfcf-jpq74", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02aab0f1e45", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:42.601408 containerd[2033]: 2025-07-12 00:09:42.412 [INFO][5946] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Jul 12 00:09:42.601408 
containerd[2033]: 2025-07-12 00:09:42.412 [INFO][5946] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" iface="eth0" netns="" Jul 12 00:09:42.601408 containerd[2033]: 2025-07-12 00:09:42.412 [INFO][5946] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Jul 12 00:09:42.601408 containerd[2033]: 2025-07-12 00:09:42.412 [INFO][5946] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Jul 12 00:09:42.601408 containerd[2033]: 2025-07-12 00:09:42.512 [INFO][5955] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" HandleID="k8s-pod-network.b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0" Jul 12 00:09:42.601408 containerd[2033]: 2025-07-12 00:09:42.512 [INFO][5955] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:42.601408 containerd[2033]: 2025-07-12 00:09:42.513 [INFO][5955] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:42.601408 containerd[2033]: 2025-07-12 00:09:42.566 [WARNING][5955] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" HandleID="k8s-pod-network.b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0" Jul 12 00:09:42.601408 containerd[2033]: 2025-07-12 00:09:42.566 [INFO][5955] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" HandleID="k8s-pod-network.b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0" Jul 12 00:09:42.601408 containerd[2033]: 2025-07-12 00:09:42.592 [INFO][5955] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:42.601408 containerd[2033]: 2025-07-12 00:09:42.598 [INFO][5946] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb" Jul 12 00:09:42.602549 containerd[2033]: time="2025-07-12T00:09:42.601547494Z" level=info msg="TearDown network for sandbox \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\" successfully" Jul 12 00:09:42.614834 containerd[2033]: time="2025-07-12T00:09:42.614439814Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:09:42.614834 containerd[2033]: time="2025-07-12T00:09:42.614684770Z" level=info msg="RemovePodSandbox \"b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb\" returns successfully" Jul 12 00:09:42.616632 containerd[2033]: time="2025-07-12T00:09:42.615622894Z" level=info msg="StopPodSandbox for \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\"" Jul 12 00:09:42.743113 systemd[1]: Started sshd@9-172.31.28.146:22-139.178.89.65:55728.service - OpenSSH per-connection server daemon (139.178.89.65:55728). 
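Both teardown passes for the old coredns sandbox b937ffc2… hit the same two guards logged above: the WorkloadEndpoint is left alone because its recorded ContainerID (2a0d2243…, the live replacement sandbox) no longer matches the sandbox being deleted, and the IPAM release finds no handle and is ignored, so the replacement pod's address survives. A sketch of that ownership check — hypothetical types; the real logic is in Calico's cni-plugin (k8s.go, as the entries cite):

```go
package main

import "fmt"

// On CNI DEL, only delete the WorkloadEndpoint if it still records the
// container being torn down; a stale sandbox must not remove the live
// replacement's endpoint.
type workloadEndpoint struct {
	name        string
	containerID string
}

func cniDel(wep workloadEndpoint, cniContainerID string) {
	if wep.containerID != cniContainerID {
		fmt.Println("CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.")
		return
	}
	fmt.Println("deleting WEP", wep.name)
}

func main() {
	// Values from the coredns teardown above: the WEP now belongs to
	// sandbox 2a0d2243..., while DEL targets the old b937ffc2... one.
	wep := workloadEndpoint{
		name:        "ip--172--31--28--146-k8s-coredns--674b8bbfcf--jpq74-eth0",
		containerID: "2a0d22438097192fd09288d437c5225e6a1c48b8ca9f087838ba75eeb0e9753f",
	}
	cniDel(wep, "b937ffc238fb8e6a7a5e0c1b6e42dc6187032facaae5cf85f1f4132caa28bbeb")
}
```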
Jul 12 00:09:42.919967 containerd[2033]: 2025-07-12 00:09:42.800 [WARNING][5985] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" WorkloadEndpoint="ip--172--31--28--146-k8s-whisker--757b9bc55c--pwn8m-eth0" Jul 12 00:09:42.919967 containerd[2033]: 2025-07-12 00:09:42.801 [INFO][5985] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Jul 12 00:09:42.919967 containerd[2033]: 2025-07-12 00:09:42.801 [INFO][5985] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" iface="eth0" netns="" Jul 12 00:09:42.919967 containerd[2033]: 2025-07-12 00:09:42.802 [INFO][5985] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Jul 12 00:09:42.919967 containerd[2033]: 2025-07-12 00:09:42.802 [INFO][5985] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Jul 12 00:09:42.919967 containerd[2033]: 2025-07-12 00:09:42.872 [INFO][6003] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" HandleID="k8s-pod-network.02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Workload="ip--172--31--28--146-k8s-whisker--757b9bc55c--pwn8m-eth0" Jul 12 00:09:42.919967 containerd[2033]: 2025-07-12 00:09:42.872 [INFO][6003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:42.919967 containerd[2033]: 2025-07-12 00:09:42.872 [INFO][6003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:42.919967 containerd[2033]: 2025-07-12 00:09:42.897 [WARNING][6003] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" HandleID="k8s-pod-network.02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Workload="ip--172--31--28--146-k8s-whisker--757b9bc55c--pwn8m-eth0" Jul 12 00:09:42.919967 containerd[2033]: 2025-07-12 00:09:42.897 [INFO][6003] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" HandleID="k8s-pod-network.02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Workload="ip--172--31--28--146-k8s-whisker--757b9bc55c--pwn8m-eth0" Jul 12 00:09:42.919967 containerd[2033]: 2025-07-12 00:09:42.900 [INFO][6003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:42.919967 containerd[2033]: 2025-07-12 00:09:42.913 [INFO][5985] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Jul 12 00:09:42.920776 containerd[2033]: time="2025-07-12T00:09:42.920037876Z" level=info msg="TearDown network for sandbox \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\" successfully" Jul 12 00:09:42.920776 containerd[2033]: time="2025-07-12T00:09:42.920642292Z" level=info msg="StopPodSandbox for \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\" returns successfully" Jul 12 00:09:42.922523 containerd[2033]: time="2025-07-12T00:09:42.922459680Z" level=info msg="RemovePodSandbox for \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\"" Jul 12 00:09:42.922523 containerd[2033]: time="2025-07-12T00:09:42.922522668Z" level=info msg="Forcibly stopping sandbox \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\"" Jul 12 00:09:42.950555 sshd[5994]: Accepted publickey for core from 139.178.89.65 port 55728 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:42.956954 sshd[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:42.978106 systemd-logind[2008]: New session 10 of user core. Jul 12 00:09:42.985394 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 12 00:09:43.039837 containerd[2033]: time="2025-07-12T00:09:43.037917633Z" level=info msg="StartContainer for \"2eec446c6e4b75755f4a355ec8b1bce3b5f3df167473d5731d125a4d81c2d857\" returns successfully" Jul 12 00:09:43.421837 containerd[2033]: 2025-07-12 00:09:43.266 [WARNING][6020] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" WorkloadEndpoint="ip--172--31--28--146-k8s-whisker--757b9bc55c--pwn8m-eth0" Jul 12 00:09:43.421837 containerd[2033]: 2025-07-12 00:09:43.268 [INFO][6020] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Jul 12 00:09:43.421837 containerd[2033]: 2025-07-12 00:09:43.268 [INFO][6020] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" iface="eth0" netns="" Jul 12 00:09:43.421837 containerd[2033]: 2025-07-12 00:09:43.269 [INFO][6020] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Jul 12 00:09:43.421837 containerd[2033]: 2025-07-12 00:09:43.269 [INFO][6020] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Jul 12 00:09:43.421837 containerd[2033]: 2025-07-12 00:09:43.344 [INFO][6048] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" HandleID="k8s-pod-network.02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Workload="ip--172--31--28--146-k8s-whisker--757b9bc55c--pwn8m-eth0" Jul 12 00:09:43.421837 containerd[2033]: 2025-07-12 00:09:43.345 [INFO][6048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:43.421837 containerd[2033]: 2025-07-12 00:09:43.345 [INFO][6048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:43.421837 containerd[2033]: 2025-07-12 00:09:43.404 [WARNING][6048] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" HandleID="k8s-pod-network.02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Workload="ip--172--31--28--146-k8s-whisker--757b9bc55c--pwn8m-eth0" Jul 12 00:09:43.421837 containerd[2033]: 2025-07-12 00:09:43.404 [INFO][6048] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" HandleID="k8s-pod-network.02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Workload="ip--172--31--28--146-k8s-whisker--757b9bc55c--pwn8m-eth0" Jul 12 00:09:43.421837 containerd[2033]: 2025-07-12 00:09:43.409 [INFO][6048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:43.421837 containerd[2033]: 2025-07-12 00:09:43.416 [INFO][6020] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0" Jul 12 00:09:43.422939 containerd[2033]: time="2025-07-12T00:09:43.421950238Z" level=info msg="TearDown network for sandbox \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\" successfully" Jul 12 00:09:43.434070 containerd[2033]: time="2025-07-12T00:09:43.433967819Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:09:43.434534 containerd[2033]: time="2025-07-12T00:09:43.434078351Z" level=info msg="RemovePodSandbox \"02f6f10ebc9a47efddd6cc197c043c724de02e1a3f72c6bc6a2e53edab030ea0\" returns successfully" Jul 12 00:09:43.435820 containerd[2033]: time="2025-07-12T00:09:43.435754175Z" level=info msg="StopPodSandbox for \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\"" Jul 12 00:09:43.474925 sshd[5994]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:43.489526 systemd[1]: sshd@9-172.31.28.146:22-139.178.89.65:55728.service: Deactivated successfully. Jul 12 00:09:43.495278 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 00:09:43.503908 systemd-logind[2008]: Session 10 logged out. Waiting for processes to exit. Jul 12 00:09:43.510987 systemd-logind[2008]: Removed session 10. Jul 12 00:09:43.696887 containerd[2033]: 2025-07-12 00:09:43.570 [WARNING][6065] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0", GenerateName:"calico-kube-controllers-6799c8fbbc-", Namespace:"calico-system", SelfLink:"", UID:"fef89b6f-afd5-48ce-ab61-615671060a43", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6799c8fbbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962", Pod:"calico-kube-controllers-6799c8fbbc-xvgkw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali09abcdfb455", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:43.696887 containerd[2033]: 2025-07-12 00:09:43.571 [INFO][6065] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Jul 12 00:09:43.696887 containerd[2033]: 2025-07-12 00:09:43.571 [INFO][6065] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" iface="eth0" netns="" Jul 12 00:09:43.696887 containerd[2033]: 2025-07-12 00:09:43.571 [INFO][6065] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Jul 12 00:09:43.696887 containerd[2033]: 2025-07-12 00:09:43.571 [INFO][6065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Jul 12 00:09:43.696887 containerd[2033]: 2025-07-12 00:09:43.639 [INFO][6079] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" HandleID="k8s-pod-network.6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Workload="ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0" Jul 12 00:09:43.696887 containerd[2033]: 2025-07-12 00:09:43.640 [INFO][6079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:43.696887 containerd[2033]: 2025-07-12 00:09:43.640 [INFO][6079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:43.696887 containerd[2033]: 2025-07-12 00:09:43.668 [WARNING][6079] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" HandleID="k8s-pod-network.6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Workload="ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0" Jul 12 00:09:43.696887 containerd[2033]: 2025-07-12 00:09:43.668 [INFO][6079] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" HandleID="k8s-pod-network.6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Workload="ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0" Jul 12 00:09:43.696887 containerd[2033]: 2025-07-12 00:09:43.678 [INFO][6079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:43.696887 containerd[2033]: 2025-07-12 00:09:43.691 [INFO][6065] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Jul 12 00:09:43.696887 containerd[2033]: time="2025-07-12T00:09:43.696845568Z" level=info msg="TearDown network for sandbox \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\" successfully" Jul 12 00:09:43.698117 containerd[2033]: time="2025-07-12T00:09:43.696901608Z" level=info msg="StopPodSandbox for \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\" returns successfully" Jul 12 00:09:43.700457 containerd[2033]: time="2025-07-12T00:09:43.700385580Z" level=info msg="RemovePodSandbox for \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\"" Jul 12 00:09:43.700457 containerd[2033]: time="2025-07-12T00:09:43.700453080Z" level=info msg="Forcibly stopping sandbox \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\"" Jul 12 00:09:43.712959 ntpd[2001]: Listen normally on 8 vxlan.calico 192.168.110.0:123 Jul 12 00:09:43.713748 ntpd[2001]: Listen normally on 9 calie2038c619c6 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 12 00:09:43.715330 ntpd[2001]: 12 Jul 00:09:43 ntpd[2001]: Listen normally on 8 vxlan.calico 192.168.110.0:123 Jul 12 00:09:43.715330 ntpd[2001]: 12 Jul 00:09:43 ntpd[2001]: Listen normally on 9 calie2038c619c6 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 12 00:09:43.715330 ntpd[2001]: 12 Jul 00:09:43 ntpd[2001]: Listen normally on 10 vxlan.calico [fe80::64f5:8ff:fe90:6663%5]:123 Jul 12 00:09:43.715330 ntpd[2001]: 12 Jul 00:09:43 ntpd[2001]: Listen normally on 11 cali02aab0f1e45 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 12 00:09:43.715330 ntpd[2001]: 12 Jul 00:09:43 ntpd[2001]: Listen normally on 12 caliad8fd4857e8 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 12 00:09:43.715330 ntpd[2001]: 12 Jul 00:09:43 ntpd[2001]: Listen normally on 13 calie4186c3056f [fe80::ecee:eeff:feee:eeee%10]:123 Jul 12 00:09:43.715330 ntpd[2001]: 12 Jul 00:09:43 ntpd[2001]: Listen normally on 14 cali7cc4fb8f98a [fe80::ecee:eeff:feee:eeee%11]:123 Jul 12 00:09:43.715330 ntpd[2001]: 12 Jul 00:09:43 ntpd[2001]: Listen normally on 15 calia2c55c28ebf [fe80::ecee:eeff:feee:eeee%12]:123 Jul 12 00:09:43.715330 ntpd[2001]: 12 Jul 00:09:43 ntpd[2001]: Listen normally on 16 calia861eb9417f [fe80::ecee:eeff:feee:eeee%13]:123 Jul 12 00:09:43.715330 ntpd[2001]: 12 Jul 00:09:43 ntpd[2001]: Listen normally on 17 cali09abcdfb455 [fe80::ecee:eeff:feee:eeee%14]:123 Jul 12 00:09:43.713833 ntpd[2001]: Listen normally on 10 vxlan.calico [fe80::64f5:8ff:fe90:6663%5]:123 Jul 12 00:09:43.713904 ntpd[2001]: Listen normally on 11 cali02aab0f1e45 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 12 00:09:43.713979 
ntpd[2001]: Listen normally on 12 caliad8fd4857e8 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 12 00:09:43.714047 ntpd[2001]: Listen normally on 13 calie4186c3056f [fe80::ecee:eeff:feee:eeee%10]:123 Jul 12 00:09:43.714114 ntpd[2001]: Listen normally on 14 cali7cc4fb8f98a [fe80::ecee:eeff:feee:eeee%11]:123 Jul 12 00:09:43.714179 ntpd[2001]: Listen normally on 15 calia2c55c28ebf [fe80::ecee:eeff:feee:eeee%12]:123 Jul 12 00:09:43.714282 ntpd[2001]: Listen normally on 16 calia861eb9417f [fe80::ecee:eeff:feee:eeee%13]:123 Jul 12 00:09:43.714383 ntpd[2001]: Listen normally on 17 cali09abcdfb455 [fe80::ecee:eeff:feee:eeee%14]:123 Jul 12 00:09:43.805436 kubelet[3541]: I0712 00:09:43.805001 3541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-p6rbw" podStartSLOduration=27.31881453 podStartE2EDuration="31.804976116s" podCreationTimestamp="2025-07-12 00:09:12 +0000 UTC" firstStartedPulling="2025-07-12 00:09:37.826590739 +0000 UTC m=+56.069321236" lastFinishedPulling="2025-07-12 00:09:42.312752313 +0000 UTC m=+60.555482822" observedRunningTime="2025-07-12 00:09:43.801873168 +0000 UTC m=+62.044603701" watchObservedRunningTime="2025-07-12 00:09:43.804976116 +0000 UTC m=+62.047706661" Jul 12 00:09:43.901663 systemd[1]: run-containerd-runc-k8s.io-2eec446c6e4b75755f4a355ec8b1bce3b5f3df167473d5731d125a4d81c2d857-runc.8ZUssx.mount: Deactivated successfully. Jul 12 00:09:44.060861 containerd[2033]: 2025-07-12 00:09:43.860 [WARNING][6095] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0", GenerateName:"calico-kube-controllers-6799c8fbbc-", Namespace:"calico-system", SelfLink:"", UID:"fef89b6f-afd5-48ce-ab61-615671060a43", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6799c8fbbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962", Pod:"calico-kube-controllers-6799c8fbbc-xvgkw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali09abcdfb455", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:44.060861 containerd[2033]: 2025-07-12 00:09:43.861 [INFO][6095] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Jul 12 00:09:44.060861 containerd[2033]: 2025-07-12 00:09:43.861 [INFO][6095] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" iface="eth0" netns="" Jul 12 00:09:44.060861 containerd[2033]: 2025-07-12 00:09:43.861 [INFO][6095] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Jul 12 00:09:44.060861 containerd[2033]: 2025-07-12 00:09:43.861 [INFO][6095] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Jul 12 00:09:44.060861 containerd[2033]: 2025-07-12 00:09:44.023 [INFO][6117] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" HandleID="k8s-pod-network.6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Workload="ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0" Jul 12 00:09:44.060861 containerd[2033]: 2025-07-12 00:09:44.026 [INFO][6117] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:44.060861 containerd[2033]: 2025-07-12 00:09:44.026 [INFO][6117] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:44.060861 containerd[2033]: 2025-07-12 00:09:44.045 [WARNING][6117] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" HandleID="k8s-pod-network.6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Workload="ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0" Jul 12 00:09:44.060861 containerd[2033]: 2025-07-12 00:09:44.045 [INFO][6117] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" HandleID="k8s-pod-network.6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Workload="ip--172--31--28--146-k8s-calico--kube--controllers--6799c8fbbc--xvgkw-eth0" Jul 12 00:09:44.060861 containerd[2033]: 2025-07-12 00:09:44.049 [INFO][6117] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:44.060861 containerd[2033]: 2025-07-12 00:09:44.053 [INFO][6095] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad" Jul 12 00:09:44.060861 containerd[2033]: time="2025-07-12T00:09:44.060397954Z" level=info msg="TearDown network for sandbox \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\" successfully" Jul 12 00:09:44.070928 containerd[2033]: time="2025-07-12T00:09:44.070799218Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:09:44.071086 containerd[2033]: time="2025-07-12T00:09:44.070956142Z" level=info msg="RemovePodSandbox \"6c54dab7e33dc5f92859eae02b05b5df807cb5d335100c71999c433ee8bf86ad\" returns successfully" Jul 12 00:09:44.071867 containerd[2033]: time="2025-07-12T00:09:44.071818318Z" level=info msg="StopPodSandbox for \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\"" Jul 12 00:09:44.242758 containerd[2033]: 2025-07-12 00:09:44.155 [WARNING][6142] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0", GenerateName:"calico-apiserver-7f77c98ccf-", Namespace:"calico-apiserver", SelfLink:"", UID:"fdc693eb-7dfe-45fd-8cc7-68be5365972b", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f77c98ccf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508", Pod:"calico-apiserver-7f77c98ccf-twq6n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cc4fb8f98a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:44.242758 containerd[2033]: 2025-07-12 00:09:44.155 [INFO][6142] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Jul 12 00:09:44.242758 containerd[2033]: 2025-07-12 00:09:44.156 [INFO][6142] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" iface="eth0" netns="" Jul 12 00:09:44.242758 containerd[2033]: 2025-07-12 00:09:44.156 [INFO][6142] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Jul 12 00:09:44.242758 containerd[2033]: 2025-07-12 00:09:44.156 [INFO][6142] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Jul 12 00:09:44.242758 containerd[2033]: 2025-07-12 00:09:44.214 [INFO][6152] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" HandleID="k8s-pod-network.90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0" Jul 12 00:09:44.242758 containerd[2033]: 2025-07-12 00:09:44.214 [INFO][6152] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:44.242758 containerd[2033]: 2025-07-12 00:09:44.215 [INFO][6152] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:44.242758 containerd[2033]: 2025-07-12 00:09:44.233 [WARNING][6152] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" HandleID="k8s-pod-network.90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0" Jul 12 00:09:44.242758 containerd[2033]: 2025-07-12 00:09:44.233 [INFO][6152] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" HandleID="k8s-pod-network.90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0" Jul 12 00:09:44.242758 containerd[2033]: 2025-07-12 00:09:44.236 [INFO][6152] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:44.242758 containerd[2033]: 2025-07-12 00:09:44.239 [INFO][6142] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Jul 12 00:09:44.243861 containerd[2033]: time="2025-07-12T00:09:44.242815007Z" level=info msg="TearDown network for sandbox \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\" successfully" Jul 12 00:09:44.243861 containerd[2033]: time="2025-07-12T00:09:44.242853191Z" level=info msg="StopPodSandbox for \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\" returns successfully" Jul 12 00:09:44.244121 containerd[2033]: time="2025-07-12T00:09:44.244059167Z" level=info msg="RemovePodSandbox for \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\"" Jul 12 00:09:44.244192 containerd[2033]: time="2025-07-12T00:09:44.244117487Z" level=info msg="Forcibly stopping sandbox \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\"" Jul 12 00:09:44.408515 containerd[2033]: 2025-07-12 00:09:44.328 [WARNING][6167] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0", GenerateName:"calico-apiserver-7f77c98ccf-", Namespace:"calico-apiserver", SelfLink:"", UID:"fdc693eb-7dfe-45fd-8cc7-68be5365972b", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f77c98ccf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508", Pod:"calico-apiserver-7f77c98ccf-twq6n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cc4fb8f98a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:44.408515 containerd[2033]: 2025-07-12 00:09:44.329 [INFO][6167] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Jul 12 00:09:44.408515 containerd[2033]: 2025-07-12 00:09:44.329 [INFO][6167] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" iface="eth0" netns="" Jul 12 00:09:44.408515 containerd[2033]: 2025-07-12 00:09:44.329 [INFO][6167] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Jul 12 00:09:44.408515 containerd[2033]: 2025-07-12 00:09:44.329 [INFO][6167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Jul 12 00:09:44.408515 containerd[2033]: 2025-07-12 00:09:44.383 [INFO][6174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" HandleID="k8s-pod-network.90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0" Jul 12 00:09:44.408515 containerd[2033]: 2025-07-12 00:09:44.383 [INFO][6174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:44.408515 containerd[2033]: 2025-07-12 00:09:44.383 [INFO][6174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:44.408515 containerd[2033]: 2025-07-12 00:09:44.398 [WARNING][6174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" HandleID="k8s-pod-network.90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0" Jul 12 00:09:44.408515 containerd[2033]: 2025-07-12 00:09:44.399 [INFO][6174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" HandleID="k8s-pod-network.90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--twq6n-eth0" Jul 12 00:09:44.408515 containerd[2033]: 2025-07-12 00:09:44.401 [INFO][6174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:44.408515 containerd[2033]: 2025-07-12 00:09:44.403 [INFO][6167] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac" Jul 12 00:09:44.408515 containerd[2033]: time="2025-07-12T00:09:44.407937851Z" level=info msg="TearDown network for sandbox \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\" successfully" Jul 12 00:09:44.420387 containerd[2033]: time="2025-07-12T00:09:44.419852579Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:09:44.420387 containerd[2033]: time="2025-07-12T00:09:44.420019955Z" level=info msg="RemovePodSandbox \"90124293a88d93211003de0030ab932775ede4b9f63bc75654d44e5e2b2cdfac\" returns successfully" Jul 12 00:09:44.422031 containerd[2033]: time="2025-07-12T00:09:44.421560491Z" level=info msg="StopPodSandbox for \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\"" Jul 12 00:09:44.568897 containerd[2033]: 2025-07-12 00:09:44.499 [WARNING][6188] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2cfa5752-993e-4842-a5b4-cf0d08ec1a3c", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a", Pod:"csi-node-driver-tkbd2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia2c55c28ebf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:44.568897 containerd[2033]: 2025-07-12 00:09:44.500 [INFO][6188] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Jul 12 00:09:44.568897 containerd[2033]: 2025-07-12 00:09:44.500 [INFO][6188] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" iface="eth0" netns="" Jul 12 00:09:44.568897 containerd[2033]: 2025-07-12 00:09:44.500 [INFO][6188] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Jul 12 00:09:44.568897 containerd[2033]: 2025-07-12 00:09:44.500 [INFO][6188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Jul 12 00:09:44.568897 containerd[2033]: 2025-07-12 00:09:44.540 [INFO][6195] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" HandleID="k8s-pod-network.44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Workload="ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0" Jul 12 00:09:44.568897 containerd[2033]: 2025-07-12 00:09:44.541 [INFO][6195] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:44.568897 containerd[2033]: 2025-07-12 00:09:44.541 [INFO][6195] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:44.568897 containerd[2033]: 2025-07-12 00:09:44.557 [WARNING][6195] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" HandleID="k8s-pod-network.44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Workload="ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0" Jul 12 00:09:44.568897 containerd[2033]: 2025-07-12 00:09:44.557 [INFO][6195] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" HandleID="k8s-pod-network.44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Workload="ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0" Jul 12 00:09:44.568897 containerd[2033]: 2025-07-12 00:09:44.560 [INFO][6195] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:44.568897 containerd[2033]: 2025-07-12 00:09:44.563 [INFO][6188] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Jul 12 00:09:44.571193 containerd[2033]: time="2025-07-12T00:09:44.569848308Z" level=info msg="TearDown network for sandbox \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\" successfully" Jul 12 00:09:44.571193 containerd[2033]: time="2025-07-12T00:09:44.569975244Z" level=info msg="StopPodSandbox for \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\" returns successfully" Jul 12 00:09:44.571193 containerd[2033]: time="2025-07-12T00:09:44.570803064Z" level=info msg="RemovePodSandbox for \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\"" Jul 12 00:09:44.571193 containerd[2033]: time="2025-07-12T00:09:44.570877008Z" level=info msg="Forcibly stopping sandbox \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\"" Jul 12 00:09:44.716965 containerd[2033]: 2025-07-12 00:09:44.640 [WARNING][6211] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2cfa5752-993e-4842-a5b4-cf0d08ec1a3c", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a", Pod:"csi-node-driver-tkbd2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia2c55c28ebf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:44.716965 containerd[2033]: 2025-07-12 00:09:44.640 [INFO][6211] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Jul 12 00:09:44.716965 containerd[2033]: 2025-07-12 00:09:44.640 [INFO][6211] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" iface="eth0" netns="" Jul 12 00:09:44.716965 containerd[2033]: 2025-07-12 00:09:44.640 [INFO][6211] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Jul 12 00:09:44.716965 containerd[2033]: 2025-07-12 00:09:44.640 [INFO][6211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Jul 12 00:09:44.716965 containerd[2033]: 2025-07-12 00:09:44.687 [INFO][6219] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" HandleID="k8s-pod-network.44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Workload="ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0" Jul 12 00:09:44.716965 containerd[2033]: 2025-07-12 00:09:44.687 [INFO][6219] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:44.716965 containerd[2033]: 2025-07-12 00:09:44.687 [INFO][6219] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:44.716965 containerd[2033]: 2025-07-12 00:09:44.708 [WARNING][6219] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" HandleID="k8s-pod-network.44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Workload="ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0" Jul 12 00:09:44.716965 containerd[2033]: 2025-07-12 00:09:44.708 [INFO][6219] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" HandleID="k8s-pod-network.44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Workload="ip--172--31--28--146-k8s-csi--node--driver--tkbd2-eth0" Jul 12 00:09:44.716965 containerd[2033]: 2025-07-12 00:09:44.711 [INFO][6219] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:44.716965 containerd[2033]: 2025-07-12 00:09:44.713 [INFO][6211] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f" Jul 12 00:09:44.716965 containerd[2033]: time="2025-07-12T00:09:44.716926729Z" level=info msg="TearDown network for sandbox \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\" successfully" Jul 12 00:09:44.730529 containerd[2033]: time="2025-07-12T00:09:44.730457257Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:09:44.730726 containerd[2033]: time="2025-07-12T00:09:44.730575817Z" level=info msg="RemovePodSandbox \"44f46ebddf6256e5e56b1cc9d7fb9529af9d892a6c9670d712df557f75ffcd2f\" returns successfully" Jul 12 00:09:44.731935 containerd[2033]: time="2025-07-12T00:09:44.731687965Z" level=info msg="StopPodSandbox for \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\"" Jul 12 00:09:44.924885 containerd[2033]: 2025-07-12 00:09:44.834 [WARNING][6233] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"7b617ba3-8b27-4a18-bcbf-668944552e8e", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a", Pod:"goldmane-768f4c5c69-p6rbw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.110.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie4186c3056f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:44.924885 containerd[2033]: 2025-07-12 00:09:44.835 [INFO][6233] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Jul 12 00:09:44.924885 containerd[2033]: 2025-07-12 00:09:44.835 [INFO][6233] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" iface="eth0" netns="" Jul 12 00:09:44.924885 containerd[2033]: 2025-07-12 00:09:44.835 [INFO][6233] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Jul 12 00:09:44.924885 containerd[2033]: 2025-07-12 00:09:44.835 [INFO][6233] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Jul 12 00:09:44.924885 containerd[2033]: 2025-07-12 00:09:44.900 [INFO][6257] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" HandleID="k8s-pod-network.3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Workload="ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0" Jul 12 00:09:44.924885 containerd[2033]: 2025-07-12 00:09:44.900 [INFO][6257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:44.924885 containerd[2033]: 2025-07-12 00:09:44.900 [INFO][6257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:44.924885 containerd[2033]: 2025-07-12 00:09:44.914 [WARNING][6257] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" HandleID="k8s-pod-network.3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Workload="ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0" Jul 12 00:09:44.924885 containerd[2033]: 2025-07-12 00:09:44.914 [INFO][6257] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" HandleID="k8s-pod-network.3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Workload="ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0" Jul 12 00:09:44.924885 containerd[2033]: 2025-07-12 00:09:44.917 [INFO][6257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:44.924885 containerd[2033]: 2025-07-12 00:09:44.920 [INFO][6233] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Jul 12 00:09:44.926257 containerd[2033]: time="2025-07-12T00:09:44.924922454Z" level=info msg="TearDown network for sandbox \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\" successfully" Jul 12 00:09:44.926257 containerd[2033]: time="2025-07-12T00:09:44.924960194Z" level=info msg="StopPodSandbox for \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\" returns successfully" Jul 12 00:09:44.928458 containerd[2033]: time="2025-07-12T00:09:44.926977742Z" level=info msg="RemovePodSandbox for \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\"" Jul 12 00:09:44.928458 containerd[2033]: time="2025-07-12T00:09:44.927033434Z" level=info msg="Forcibly stopping sandbox \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\"" Jul 12 00:09:45.099260 containerd[2033]: 2025-07-12 00:09:45.018 [WARNING][6277] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"7b617ba3-8b27-4a18-bcbf-668944552e8e", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"07db2aece57e747908b91645bbdb9eb6c36fd073ded594c3fa14a7e6074a843a", Pod:"goldmane-768f4c5c69-p6rbw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.110.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie4186c3056f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:45.099260 containerd[2033]: 2025-07-12 00:09:45.020 [INFO][6277] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Jul 12 00:09:45.099260 containerd[2033]: 2025-07-12 00:09:45.020 [INFO][6277] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" iface="eth0" netns="" Jul 12 00:09:45.099260 containerd[2033]: 2025-07-12 00:09:45.020 [INFO][6277] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Jul 12 00:09:45.099260 containerd[2033]: 2025-07-12 00:09:45.020 [INFO][6277] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Jul 12 00:09:45.099260 containerd[2033]: 2025-07-12 00:09:45.063 [INFO][6285] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" HandleID="k8s-pod-network.3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Workload="ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0" Jul 12 00:09:45.099260 containerd[2033]: 2025-07-12 00:09:45.063 [INFO][6285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:45.099260 containerd[2033]: 2025-07-12 00:09:45.063 [INFO][6285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:45.099260 containerd[2033]: 2025-07-12 00:09:45.089 [WARNING][6285] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" HandleID="k8s-pod-network.3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Workload="ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0" Jul 12 00:09:45.099260 containerd[2033]: 2025-07-12 00:09:45.089 [INFO][6285] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" HandleID="k8s-pod-network.3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Workload="ip--172--31--28--146-k8s-goldmane--768f4c5c69--p6rbw-eth0" Jul 12 00:09:45.099260 containerd[2033]: 2025-07-12 00:09:45.092 [INFO][6285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:45.099260 containerd[2033]: 2025-07-12 00:09:45.096 [INFO][6277] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72" Jul 12 00:09:45.102745 containerd[2033]: time="2025-07-12T00:09:45.099313883Z" level=info msg="TearDown network for sandbox \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\" successfully" Jul 12 00:09:45.115358 containerd[2033]: time="2025-07-12T00:09:45.115282955Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:09:45.115530 containerd[2033]: time="2025-07-12T00:09:45.115403927Z" level=info msg="RemovePodSandbox \"3968562002eff5907f2abb3382765115bff189a6bf81a18967d00a0b22069b72\" returns successfully" Jul 12 00:09:45.118155 containerd[2033]: time="2025-07-12T00:09:45.116775587Z" level=info msg="StopPodSandbox for \"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\"" Jul 12 00:09:45.256186 containerd[2033]: 2025-07-12 00:09:45.194 [WARNING][6299] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0", GenerateName:"calico-apiserver-7f77c98ccf-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb58e40d-98fa-4b77-aa58-30c336d0d01d", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f77c98ccf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d", Pod:"calico-apiserver-7f77c98ccf-gv7dm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia861eb9417f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:45.256186 containerd[2033]: 2025-07-12 00:09:45.195 [INFO][6299] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Jul 12 00:09:45.256186 containerd[2033]: 2025-07-12 00:09:45.195 [INFO][6299] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" iface="eth0" netns="" Jul 12 00:09:45.256186 containerd[2033]: 2025-07-12 00:09:45.195 [INFO][6299] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Jul 12 00:09:45.256186 containerd[2033]: 2025-07-12 00:09:45.195 [INFO][6299] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Jul 12 00:09:45.256186 containerd[2033]: 2025-07-12 00:09:45.232 [INFO][6307] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" HandleID="k8s-pod-network.01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0" Jul 12 00:09:45.256186 containerd[2033]: 2025-07-12 00:09:45.232 [INFO][6307] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:45.256186 containerd[2033]: 2025-07-12 00:09:45.232 [INFO][6307] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:45.256186 containerd[2033]: 2025-07-12 00:09:45.248 [WARNING][6307] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" HandleID="k8s-pod-network.01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0" Jul 12 00:09:45.256186 containerd[2033]: 2025-07-12 00:09:45.248 [INFO][6307] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" HandleID="k8s-pod-network.01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0" Jul 12 00:09:45.256186 containerd[2033]: 2025-07-12 00:09:45.251 [INFO][6307] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:45.256186 containerd[2033]: 2025-07-12 00:09:45.253 [INFO][6299] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Jul 12 00:09:45.257471 containerd[2033]: time="2025-07-12T00:09:45.256161168Z" level=info msg="TearDown network for sandbox \"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\" successfully" Jul 12 00:09:45.257471 containerd[2033]: time="2025-07-12T00:09:45.256705296Z" level=info msg="StopPodSandbox for \"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\" returns successfully" Jul 12 00:09:45.258267 containerd[2033]: time="2025-07-12T00:09:45.258200652Z" level=info msg="RemovePodSandbox for \"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\"" Jul 12 00:09:45.258375 containerd[2033]: time="2025-07-12T00:09:45.258286128Z" level=info msg="Forcibly stopping sandbox \"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\"" Jul 12 00:09:45.385232 containerd[2033]: 2025-07-12 00:09:45.324 [WARNING][6321] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0", GenerateName:"calico-apiserver-7f77c98ccf-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb58e40d-98fa-4b77-aa58-30c336d0d01d", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 9, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f77c98ccf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d", Pod:"calico-apiserver-7f77c98ccf-gv7dm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia861eb9417f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:45.385232 containerd[2033]: 2025-07-12 00:09:45.324 [INFO][6321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Jul 12 00:09:45.385232 containerd[2033]: 2025-07-12 00:09:45.324 [INFO][6321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" iface="eth0" netns="" Jul 12 00:09:45.385232 containerd[2033]: 2025-07-12 00:09:45.324 [INFO][6321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Jul 12 00:09:45.385232 containerd[2033]: 2025-07-12 00:09:45.324 [INFO][6321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Jul 12 00:09:45.385232 containerd[2033]: 2025-07-12 00:09:45.361 [INFO][6328] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" HandleID="k8s-pod-network.01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0" Jul 12 00:09:45.385232 containerd[2033]: 2025-07-12 00:09:45.361 [INFO][6328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:45.385232 containerd[2033]: 2025-07-12 00:09:45.361 [INFO][6328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:45.385232 containerd[2033]: 2025-07-12 00:09:45.376 [WARNING][6328] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" HandleID="k8s-pod-network.01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0" Jul 12 00:09:45.385232 containerd[2033]: 2025-07-12 00:09:45.376 [INFO][6328] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" HandleID="k8s-pod-network.01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Workload="ip--172--31--28--146-k8s-calico--apiserver--7f77c98ccf--gv7dm-eth0" Jul 12 00:09:45.385232 containerd[2033]: 2025-07-12 00:09:45.379 [INFO][6328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:45.385232 containerd[2033]: 2025-07-12 00:09:45.382 [INFO][6321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792" Jul 12 00:09:45.385232 containerd[2033]: time="2025-07-12T00:09:45.385184388Z" level=info msg="TearDown network for sandbox \"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\" successfully" Jul 12 00:09:45.393408 containerd[2033]: time="2025-07-12T00:09:45.393345240Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:09:45.393574 containerd[2033]: time="2025-07-12T00:09:45.393463368Z" level=info msg="RemovePodSandbox \"01131cdd76c87818bb6a8eaa67a66455d3bf05638f35cd6dda0530684f91d792\" returns successfully" Jul 12 00:09:45.394556 containerd[2033]: time="2025-07-12T00:09:45.394397100Z" level=info msg="StopPodSandbox for \"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\"" Jul 12 00:09:45.541949 containerd[2033]: 2025-07-12 00:09:45.469 [WARNING][6342] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"69e41934-5662-47d2-a6ac-a7fd1f61f19b", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54", Pod:"coredns-674b8bbfcf-68zpm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad8fd4857e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:45.541949 containerd[2033]: 2025-07-12 00:09:45.470 [INFO][6342] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Jul 12 00:09:45.541949 containerd[2033]: 2025-07-12 00:09:45.470 [INFO][6342] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" iface="eth0" netns="" Jul 12 00:09:45.541949 containerd[2033]: 2025-07-12 00:09:45.470 [INFO][6342] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Jul 12 00:09:45.541949 containerd[2033]: 2025-07-12 00:09:45.470 [INFO][6342] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Jul 12 00:09:45.541949 containerd[2033]: 2025-07-12 00:09:45.509 [INFO][6349] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" HandleID="k8s-pod-network.843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0" Jul 12 00:09:45.541949 containerd[2033]: 2025-07-12 00:09:45.509 [INFO][6349] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:45.541949 containerd[2033]: 2025-07-12 00:09:45.509 [INFO][6349] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:45.541949 containerd[2033]: 2025-07-12 00:09:45.524 [WARNING][6349] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" HandleID="k8s-pod-network.843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0" Jul 12 00:09:45.541949 containerd[2033]: 2025-07-12 00:09:45.525 [INFO][6349] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" HandleID="k8s-pod-network.843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0" Jul 12 00:09:45.541949 containerd[2033]: 2025-07-12 00:09:45.530 [INFO][6349] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:45.541949 containerd[2033]: 2025-07-12 00:09:45.538 [INFO][6342] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Jul 12 00:09:45.544906 containerd[2033]: time="2025-07-12T00:09:45.542002405Z" level=info msg="TearDown network for sandbox \"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\" successfully" Jul 12 00:09:45.544906 containerd[2033]: time="2025-07-12T00:09:45.542039305Z" level=info msg="StopPodSandbox for \"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\" returns successfully" Jul 12 00:09:45.544906 containerd[2033]: time="2025-07-12T00:09:45.543851245Z" level=info msg="RemovePodSandbox for \"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\"" Jul 12 00:09:45.544906 containerd[2033]: time="2025-07-12T00:09:45.543929113Z" level=info msg="Forcibly stopping sandbox \"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\"" Jul 12 00:09:45.677618 containerd[2033]: 2025-07-12 00:09:45.611 [WARNING][6363] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"69e41934-5662-47d2-a6ac-a7fd1f61f19b", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-146", ContainerID:"55ddb99d86f67731a448a462b92cb27c429072dcc3a00d5b446e285612a04b54", Pod:"coredns-674b8bbfcf-68zpm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad8fd4857e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:45.677618 containerd[2033]: 2025-07-12 00:09:45.611 [INFO][6363] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Jul 12 00:09:45.677618 containerd[2033]: 2025-07-12 00:09:45.611 [INFO][6363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" iface="eth0" netns="" Jul 12 00:09:45.677618 containerd[2033]: 2025-07-12 00:09:45.611 [INFO][6363] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Jul 12 00:09:45.677618 containerd[2033]: 2025-07-12 00:09:45.611 [INFO][6363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Jul 12 00:09:45.677618 containerd[2033]: 2025-07-12 00:09:45.650 [INFO][6370] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" HandleID="k8s-pod-network.843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0" Jul 12 00:09:45.677618 containerd[2033]: 2025-07-12 00:09:45.650 [INFO][6370] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:45.677618 containerd[2033]: 2025-07-12 00:09:45.650 [INFO][6370] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:45.677618 containerd[2033]: 2025-07-12 00:09:45.666 [WARNING][6370] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" HandleID="k8s-pod-network.843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0" Jul 12 00:09:45.677618 containerd[2033]: 2025-07-12 00:09:45.666 [INFO][6370] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" HandleID="k8s-pod-network.843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Workload="ip--172--31--28--146-k8s-coredns--674b8bbfcf--68zpm-eth0" Jul 12 00:09:45.677618 containerd[2033]: 2025-07-12 00:09:45.669 [INFO][6370] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:45.677618 containerd[2033]: 2025-07-12 00:09:45.672 [INFO][6363] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8" Jul 12 00:09:45.677618 containerd[2033]: time="2025-07-12T00:09:45.675540830Z" level=info msg="TearDown network for sandbox \"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\" successfully" Jul 12 00:09:45.684259 containerd[2033]: time="2025-07-12T00:09:45.684128786Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:09:45.684652 containerd[2033]: time="2025-07-12T00:09:45.684290078Z" level=info msg="RemovePodSandbox \"843c48152bb7d4782077308c7988c844f1f7a1407a80700ebf50549f245bd7d8\" returns successfully" Jul 12 00:09:47.141895 containerd[2033]: time="2025-07-12T00:09:47.141838405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:47.144277 containerd[2033]: time="2025-07-12T00:09:47.144165373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 12 00:09:47.147757 containerd[2033]: time="2025-07-12T00:09:47.146537521Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:47.159545 containerd[2033]: time="2025-07-12T00:09:47.159484621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:47.162899 containerd[2033]: time="2025-07-12T00:09:47.162820105Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 4.845452256s" Jul 12 00:09:47.162899 containerd[2033]: time="2025-07-12T00:09:47.162890953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 
00:09:47.166034 containerd[2033]: time="2025-07-12T00:09:47.165745645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 12 00:09:47.172355 containerd[2033]: time="2025-07-12T00:09:47.172074649Z" level=info msg="CreateContainer within sandbox \"fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:09:47.198975 containerd[2033]: time="2025-07-12T00:09:47.198587509Z" level=info msg="CreateContainer within sandbox \"fdfec874e3b4eb8365327d5a62e267fbe4912746cc1bedc712286dad33413508\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"026feed9d88cd63082b18a342db7060b6411bcd4210d4368fdfc78241d880d17\"" Jul 12 00:09:47.201270 containerd[2033]: time="2025-07-12T00:09:47.201214549Z" level=info msg="StartContainer for \"026feed9d88cd63082b18a342db7060b6411bcd4210d4368fdfc78241d880d17\"" Jul 12 00:09:47.275135 systemd[1]: run-containerd-runc-k8s.io-026feed9d88cd63082b18a342db7060b6411bcd4210d4368fdfc78241d880d17-runc.VmqpWV.mount: Deactivated successfully. Jul 12 00:09:47.287111 systemd[1]: Started cri-containerd-026feed9d88cd63082b18a342db7060b6411bcd4210d4368fdfc78241d880d17.scope - libcontainer container 026feed9d88cd63082b18a342db7060b6411bcd4210d4368fdfc78241d880d17. Jul 12 00:09:47.358055 containerd[2033]: time="2025-07-12T00:09:47.357890906Z" level=info msg="StartContainer for \"026feed9d88cd63082b18a342db7060b6411bcd4210d4368fdfc78241d880d17\" returns successfully" Jul 12 00:09:48.520154 systemd[1]: Started sshd@10-172.31.28.146:22-139.178.89.65:55738.service - OpenSSH per-connection server daemon (139.178.89.65:55738). Jul 12 00:09:48.595953 containerd[2033]: time="2025-07-12T00:09:48.592674352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:48.602864 containerd[2033]: time="2025-07-12T00:09:48.602770276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 12 00:09:48.606729 containerd[2033]: time="2025-07-12T00:09:48.605813092Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:48.622990 containerd[2033]: time="2025-07-12T00:09:48.622932052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:48.629331 containerd[2033]: time="2025-07-12T00:09:48.629052604Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.463239819s" Jul 12 00:09:48.630964 containerd[2033]: time="2025-07-12T00:09:48.629904352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 12 00:09:48.637248 containerd[2033]: time="2025-07-12T00:09:48.636210028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:09:48.654353 containerd[2033]: time="2025-07-12T00:09:48.653553784Z" level=info 
msg="CreateContainer within sandbox \"04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 12 00:09:48.708016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount141904452.mount: Deactivated successfully. Jul 12 00:09:48.724111 containerd[2033]: time="2025-07-12T00:09:48.724033133Z" level=info msg="CreateContainer within sandbox \"04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e5e348b1767efd75f774372e0483f35ee02826eb563fd3b6d6b59ca3fe485f80\"" Jul 12 00:09:48.726713 containerd[2033]: time="2025-07-12T00:09:48.726640433Z" level=info msg="StartContainer for \"e5e348b1767efd75f774372e0483f35ee02826eb563fd3b6d6b59ca3fe485f80\"" Jul 12 00:09:48.752077 sshd[6430]: Accepted publickey for core from 139.178.89.65 port 55738 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:48.756072 sshd[6430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:48.821024 systemd[1]: Started cri-containerd-e5e348b1767efd75f774372e0483f35ee02826eb563fd3b6d6b59ca3fe485f80.scope - libcontainer container e5e348b1767efd75f774372e0483f35ee02826eb563fd3b6d6b59ca3fe485f80. Jul 12 00:09:48.832080 systemd-logind[2008]: New session 11 of user core. Jul 12 00:09:48.840920 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 12 00:09:48.929800 containerd[2033]: time="2025-07-12T00:09:48.929727258Z" level=info msg="StartContainer for \"e5e348b1767efd75f774372e0483f35ee02826eb563fd3b6d6b59ca3fe485f80\" returns successfully" Jul 12 00:09:49.005758 containerd[2033]: time="2025-07-12T00:09:49.005696978Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:49.013483 containerd[2033]: time="2025-07-12T00:09:49.013402766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 12 00:09:49.022297 containerd[2033]: time="2025-07-12T00:09:49.021804446Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 384.291374ms" Jul 12 00:09:49.022674 containerd[2033]: time="2025-07-12T00:09:49.022583930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:09:49.030767 containerd[2033]: time="2025-07-12T00:09:49.030714158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 12 00:09:49.042409 containerd[2033]: time="2025-07-12T00:09:49.041162786Z" level=info msg="CreateContainer within sandbox \"ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:09:49.095248 containerd[2033]: time="2025-07-12T00:09:49.093958023Z" level=info msg="CreateContainer within sandbox \"ef664f08abd9bbb80b20831be4282fe9ea22508adb3569f6ff356aa7ebed2c3d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7674e698d3014190c05ddc0b55bca601fb42bad65b85e4c4a2e416d0bcf3f4f1\"" Jul 
12 00:09:49.099633 containerd[2033]: time="2025-07-12T00:09:49.096854367Z" level=info msg="StartContainer for \"7674e698d3014190c05ddc0b55bca601fb42bad65b85e4c4a2e416d0bcf3f4f1\"" Jul 12 00:09:49.176985 systemd[1]: Started cri-containerd-7674e698d3014190c05ddc0b55bca601fb42bad65b85e4c4a2e416d0bcf3f4f1.scope - libcontainer container 7674e698d3014190c05ddc0b55bca601fb42bad65b85e4c4a2e416d0bcf3f4f1. Jul 12 00:09:49.242535 sshd[6430]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:49.252302 systemd[1]: sshd@10-172.31.28.146:22-139.178.89.65:55738.service: Deactivated successfully. Jul 12 00:09:49.263514 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 00:09:49.268017 systemd-logind[2008]: Session 11 logged out. Waiting for processes to exit. Jul 12 00:09:49.271512 systemd-logind[2008]: Removed session 11. Jul 12 00:09:49.306389 containerd[2033]: time="2025-07-12T00:09:49.306207424Z" level=info msg="StartContainer for \"7674e698d3014190c05ddc0b55bca601fb42bad65b85e4c4a2e416d0bcf3f4f1\" returns successfully" Jul 12 00:09:49.691277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2176295022.mount: Deactivated successfully. Jul 12 00:09:49.855958 kubelet[3541]: I0712 00:09:49.855855 3541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f77c98ccf-twq6n" podStartSLOduration=41.663433065 podStartE2EDuration="49.855831342s" podCreationTimestamp="2025-07-12 00:09:00 +0000 UTC" firstStartedPulling="2025-07-12 00:09:38.971830568 +0000 UTC m=+57.214561077" lastFinishedPulling="2025-07-12 00:09:47.164228845 +0000 UTC m=+65.406959354" observedRunningTime="2025-07-12 00:09:47.824269984 +0000 UTC m=+66.067000493" watchObservedRunningTime="2025-07-12 00:09:49.855831342 +0000 UTC m=+68.098561851" Jul 12 00:09:49.856739 kubelet[3541]: I0712 00:09:49.856262 3541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f77c98ccf-gv7dm" podStartSLOduration=40.259009951 podStartE2EDuration="49.856248078s" podCreationTimestamp="2025-07-12 00:09:00 +0000 UTC" firstStartedPulling="2025-07-12 00:09:39.430074967 +0000 UTC m=+57.672805476" lastFinishedPulling="2025-07-12 00:09:49.027313082 +0000 UTC m=+67.270043603" observedRunningTime="2025-07-12 00:09:49.85374861 +0000 UTC m=+68.096479155" watchObservedRunningTime="2025-07-12 00:09:49.856248078 +0000 UTC m=+68.098978587" Jul 12 00:09:51.838733 kubelet[3541]: I0712 00:09:51.838664 3541 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:09:52.222498 containerd[2033]: time="2025-07-12T00:09:52.222344742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:52.223942 containerd[2033]: time="2025-07-12T00:09:52.223874502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 12 00:09:52.227915 containerd[2033]: time="2025-07-12T00:09:52.226855710Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:52.235737 containerd[2033]: time="2025-07-12T00:09:52.235667190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
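
The two pulls of ghcr.io/flatcar/calico/apiserver:v3.30.2 above make a useful contrast: the first read 44,517,149 bytes in 4.845452256s, while the second finished in 384.291374ms after reading only 77 bytes, which is consistent with containerd finding every layer blob already in its content store and doing no more than a registry round-trip to re-verify the tag. A small Go snippet reproducing the arithmetic from those two records:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Figures copied from the two PullImage records above.
        firstBytes := 44517149.0
        firstDur := 4845452256 * time.Nanosecond
        secondBytes := 77.0
        secondDur := 384291374 * time.Nanosecond

        fmt.Printf("first pull:  %.2f MiB/s over the wire\n",
            firstBytes/firstDur.Seconds()/(1<<20))
        fmt.Printf("second pull: %.0f bytes in %v (layers already local)\n",
            secondBytes, secondDur.Round(time.Millisecond))
    }
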
Jul 12 00:09:52.237943 containerd[2033]: time="2025-07-12T00:09:52.237885426Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 3.204119716s" Jul 12 00:09:52.238178 containerd[2033]: time="2025-07-12T00:09:52.238147230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 12 00:09:52.241921 containerd[2033]: time="2025-07-12T00:09:52.241855074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 12 00:09:52.293054 containerd[2033]: time="2025-07-12T00:09:52.292999999Z" level=info msg="CreateContainer within sandbox \"7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 12 00:09:52.326643 containerd[2033]: time="2025-07-12T00:09:52.326475715Z" level=info msg="CreateContainer within sandbox \"7c65ff313ddc4376d1b7a059662d9835f45188de148dfc2fcd384d63050e7962\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"cc2e5878bf78ab27b0308aa6420737922c820a302c50f79e3734815b15a98351\"" Jul 12 00:09:52.329480 containerd[2033]: time="2025-07-12T00:09:52.329423011Z" level=info msg="StartContainer for \"cc2e5878bf78ab27b0308aa6420737922c820a302c50f79e3734815b15a98351\"" Jul 12 00:09:52.427964 systemd[1]: Started cri-containerd-cc2e5878bf78ab27b0308aa6420737922c820a302c50f79e3734815b15a98351.scope - libcontainer container cc2e5878bf78ab27b0308aa6420737922c820a302c50f79e3734815b15a98351. 
Jul 12 00:09:52.588063 containerd[2033]: time="2025-07-12T00:09:52.587976692Z" level=info msg="StartContainer for \"cc2e5878bf78ab27b0308aa6420737922c820a302c50f79e3734815b15a98351\" returns successfully" Jul 12 00:09:52.898733 kubelet[3541]: I0712 00:09:52.897830 3541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6799c8fbbc-xvgkw" podStartSLOduration=28.194060491 podStartE2EDuration="40.897806722s" podCreationTimestamp="2025-07-12 00:09:12 +0000 UTC" firstStartedPulling="2025-07-12 00:09:39.537895603 +0000 UTC m=+57.780626100" lastFinishedPulling="2025-07-12 00:09:52.241641834 +0000 UTC m=+70.484372331" observedRunningTime="2025-07-12 00:09:52.897488374 +0000 UTC m=+71.140218907" watchObservedRunningTime="2025-07-12 00:09:52.897806722 +0000 UTC m=+71.140537243" Jul 12 00:09:54.179906 containerd[2033]: time="2025-07-12T00:09:54.179837732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:54.181550 containerd[2033]: time="2025-07-12T00:09:54.181340288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 12 00:09:54.183676 containerd[2033]: time="2025-07-12T00:09:54.182503460Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:54.186486 containerd[2033]: time="2025-07-12T00:09:54.186378872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:54.189243 containerd[2033]: time="2025-07-12T00:09:54.187979816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.946056726s" Jul 12 00:09:54.189243 containerd[2033]: time="2025-07-12T00:09:54.188040968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 12 00:09:54.193580 containerd[2033]: time="2025-07-12T00:09:54.193359068Z" level=info msg="CreateContainer within sandbox \"04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 12 00:09:54.215934 containerd[2033]: time="2025-07-12T00:09:54.215395196Z" level=info msg="CreateContainer within sandbox \"04e2c9d7f68a7751215809b17b82cbeefdb5cfb2b30a87f3486a0f67ebf6b57a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"33c33f4f88be9a96aba892b567f05b13aba1b96c971d6066fbc464b5d694fd17\"" Jul 12 00:09:54.216679 containerd[2033]: time="2025-07-12T00:09:54.216631952Z" level=info msg="StartContainer for \"33c33f4f88be9a96aba892b567f05b13aba1b96c971d6066fbc464b5d694fd17\"" Jul 12 00:09:54.291972 systemd[1]: Started cri-containerd-33c33f4f88be9a96aba892b567f05b13aba1b96c971d6066fbc464b5d694fd17.scope - libcontainer container 
33c33f4f88be9a96aba892b567f05b13aba1b96c971d6066fbc464b5d694fd17. Jul 12 00:09:54.297964 systemd[1]: Started sshd@11-172.31.28.146:22-139.178.89.65:49046.service - OpenSSH per-connection server daemon (139.178.89.65:49046). Jul 12 00:09:54.367762 containerd[2033]: time="2025-07-12T00:09:54.367690557Z" level=info msg="StartContainer for \"33c33f4f88be9a96aba892b567f05b13aba1b96c971d6066fbc464b5d694fd17\" returns successfully" Jul 12 00:09:54.503547 sshd[6651]: Accepted publickey for core from 139.178.89.65 port 49046 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:54.505995 sshd[6651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:54.514572 systemd-logind[2008]: New session 12 of user core. Jul 12 00:09:54.519917 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 12 00:09:54.803014 sshd[6651]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:54.810216 systemd[1]: sshd@11-172.31.28.146:22-139.178.89.65:49046.service: Deactivated successfully. Jul 12 00:09:54.815513 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 00:09:54.817001 systemd-logind[2008]: Session 12 logged out. Waiting for processes to exit. Jul 12 00:09:54.818843 systemd-logind[2008]: Removed session 12. Jul 12 00:09:54.843683 systemd[1]: Started sshd@12-172.31.28.146:22-139.178.89.65:49050.service - OpenSSH per-connection server daemon (139.178.89.65:49050). Jul 12 00:09:54.895541 kubelet[3541]: I0712 00:09:54.895419 3541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tkbd2" podStartSLOduration=27.764489456 podStartE2EDuration="42.895395839s" podCreationTimestamp="2025-07-12 00:09:12 +0000 UTC" firstStartedPulling="2025-07-12 00:09:39.058351709 +0000 UTC m=+57.301082218" lastFinishedPulling="2025-07-12 00:09:54.189258092 +0000 UTC m=+72.431988601" observedRunningTime="2025-07-12 00:09:54.894557147 +0000 UTC m=+73.137287680" watchObservedRunningTime="2025-07-12 00:09:54.895395839 +0000 UTC m=+73.138126348" Jul 12 00:09:55.035916 sshd[6685]: Accepted publickey for core from 139.178.89.65 port 49050 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:55.038754 sshd[6685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:55.047923 systemd-logind[2008]: New session 13 of user core. Jul 12 00:09:55.052939 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 12 00:09:55.195733 kubelet[3541]: I0712 00:09:55.195583 3541 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 12 00:09:55.195733 kubelet[3541]: I0712 00:09:55.195697 3541 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 12 00:09:55.426062 sshd[6685]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:55.476528 systemd[1]: sshd@12-172.31.28.146:22-139.178.89.65:49050.service: Deactivated successfully. Jul 12 00:09:55.483635 systemd[1]: session-13.scope: Deactivated successfully. Jul 12 00:09:55.491000 systemd-logind[2008]: Session 13 logged out. Waiting for processes to exit. Jul 12 00:09:55.500281 systemd[1]: Started sshd@13-172.31.28.146:22-139.178.89.65:49054.service - OpenSSH per-connection server daemon (139.178.89.65:49054). Jul 12 00:09:55.504660 systemd-logind[2008]: Removed session 13. 
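
The pod_startup_latency_tracker entries report two durations whose relationship can be checked directly from the embedded timestamps: podStartSLOduration appears to be podStartE2EDuration minus the image-pull window (firstStartedPulling to lastFinishedPulling), i.e. pull time is excluded from the startup SLO. A quick Go check against the csi-node-driver-tkbd2 record above, using its watchObservedRunningTime stamp:

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func ts(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Timestamps copied from the csi-node-driver-tkbd2 record above.
        created := ts("2025-07-12 00:09:12 +0000 UTC")
        firstPull := ts("2025-07-12 00:09:39.058351709 +0000 UTC")
        lastPull := ts("2025-07-12 00:09:54.189258092 +0000 UTC")
        watched := ts("2025-07-12 00:09:54.895395839 +0000 UTC")

        e2e := watched.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // pull window excluded
        fmt.Println("E2E:", e2e)             // 42.895395839s
        fmt.Println("SLO:", slo)             // 27.764489456s
    }

The printed values, 42.895395839s and 27.764489456s, match the logged figures exactly; the earlier calico-kube-controllers and calico-apiserver records satisfy the same identity.
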
Jul 12 00:09:55.696219 sshd[6697]: Accepted publickey for core from 139.178.89.65 port 49054 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:55.699104 sshd[6697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:55.709961 systemd-logind[2008]: New session 14 of user core. Jul 12 00:09:55.713933 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 12 00:09:55.977581 sshd[6697]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:55.985227 systemd[1]: sshd@13-172.31.28.146:22-139.178.89.65:49054.service: Deactivated successfully. Jul 12 00:09:55.992184 systemd[1]: session-14.scope: Deactivated successfully. Jul 12 00:09:55.993855 systemd-logind[2008]: Session 14 logged out. Waiting for processes to exit. Jul 12 00:09:55.996648 systemd-logind[2008]: Removed session 14. Jul 12 00:10:01.020146 systemd[1]: Started sshd@14-172.31.28.146:22-139.178.89.65:44614.service - OpenSSH per-connection server daemon (139.178.89.65:44614). Jul 12 00:10:01.215663 sshd[6736]: Accepted publickey for core from 139.178.89.65 port 44614 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:10:01.218398 sshd[6736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:01.226522 systemd-logind[2008]: New session 15 of user core. Jul 12 00:10:01.234950 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 12 00:10:01.484200 sshd[6736]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:01.491816 systemd[1]: sshd@14-172.31.28.146:22-139.178.89.65:44614.service: Deactivated successfully. Jul 12 00:10:01.498675 systemd[1]: session-15.scope: Deactivated successfully. Jul 12 00:10:01.500207 systemd-logind[2008]: Session 15 logged out. Waiting for processes to exit. Jul 12 00:10:01.502531 systemd-logind[2008]: Removed session 15. Jul 12 00:10:06.024392 update_engine[2009]: I20250712 00:10:06.024302 2009 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 12 00:10:06.024392 update_engine[2009]: I20250712 00:10:06.024386 2009 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 12 00:10:06.025045 update_engine[2009]: I20250712 00:10:06.024904 2009 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 12 00:10:06.026400 update_engine[2009]: I20250712 00:10:06.026340 2009 omaha_request_params.cc:62] Current group set to lts Jul 12 00:10:06.026644 update_engine[2009]: I20250712 00:10:06.026510 2009 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 12 00:10:06.026644 update_engine[2009]: I20250712 00:10:06.026541 2009 update_attempter.cc:643] Scheduling an action processor start. 
Jul 12 00:10:06.026644 update_engine[2009]: I20250712 00:10:06.026579 2009 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 12 00:10:06.026813 update_engine[2009]: I20250712 00:10:06.026674 2009 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 12 00:10:06.026813 update_engine[2009]: I20250712 00:10:06.026779 2009 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 12 00:10:06.026813 update_engine[2009]: I20250712 00:10:06.026799 2009 omaha_request_action.cc:272] Request: Jul 12 00:10:06.026813 update_engine[2009]: Jul 12 00:10:06.026813 update_engine[2009]: Jul 12 00:10:06.026813 update_engine[2009]: Jul 12 00:10:06.026813 update_engine[2009]: Jul 12 00:10:06.026813 update_engine[2009]: Jul 12 00:10:06.026813 update_engine[2009]: Jul 12 00:10:06.026813 update_engine[2009]: Jul 12 00:10:06.026813 update_engine[2009]: Jul 12 00:10:06.027296 update_engine[2009]: I20250712 00:10:06.026816 2009 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 12 00:10:06.027921 locksmithd[2048]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 12 00:10:06.034360 update_engine[2009]: I20250712 00:10:06.034283 2009 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 12 00:10:06.034947 update_engine[2009]: I20250712 00:10:06.034893 2009 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 12 00:10:06.080634 update_engine[2009]: E20250712 00:10:06.080542 2009 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 12 00:10:06.080774 update_engine[2009]: I20250712 00:10:06.080708 2009 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 12 00:10:06.521864 systemd[1]: Started sshd@15-172.31.28.146:22-139.178.89.65:44616.service - OpenSSH per-connection server daemon (139.178.89.65:44616). Jul 12 00:10:06.705334 sshd[6752]: Accepted publickey for core from 139.178.89.65 port 44616 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:10:06.708302 sshd[6752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:06.716422 systemd-logind[2008]: New session 16 of user core. Jul 12 00:10:06.725892 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 12 00:10:06.988811 sshd[6752]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:06.994542 systemd-logind[2008]: Session 16 logged out. Waiting for processes to exit. Jul 12 00:10:06.995981 systemd[1]: sshd@15-172.31.28.146:22-139.178.89.65:44616.service: Deactivated successfully. Jul 12 00:10:07.001106 systemd[1]: session-16.scope: Deactivated successfully. Jul 12 00:10:07.005452 systemd-logind[2008]: Removed session 16. Jul 12 00:10:12.030131 systemd[1]: Started sshd@16-172.31.28.146:22-139.178.89.65:49598.service - OpenSSH per-connection server daemon (139.178.89.65:49598). Jul 12 00:10:12.209647 sshd[6765]: Accepted publickey for core from 139.178.89.65 port 49598 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:10:12.212433 sshd[6765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:12.220656 systemd-logind[2008]: New session 17 of user core. Jul 12 00:10:12.228931 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 12 00:10:12.495537 sshd[6765]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:12.505857 systemd-logind[2008]: Session 17 logged out. 
Waiting for processes to exit. Jul 12 00:10:12.507128 systemd[1]: sshd@16-172.31.28.146:22-139.178.89.65:49598.service: Deactivated successfully. Jul 12 00:10:12.514176 systemd[1]: session-17.scope: Deactivated successfully. Jul 12 00:10:12.517968 systemd-logind[2008]: Removed session 17. Jul 12 00:10:16.024353 update_engine[2009]: I20250712 00:10:16.023653 2009 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 12 00:10:16.024353 update_engine[2009]: I20250712 00:10:16.024003 2009 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 12 00:10:16.024353 update_engine[2009]: I20250712 00:10:16.024289 2009 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 12 00:10:16.026010 update_engine[2009]: E20250712 00:10:16.025946 2009 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 12 00:10:16.028284 update_engine[2009]: I20250712 00:10:16.026245 2009 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 12 00:10:17.537393 systemd[1]: Started sshd@17-172.31.28.146:22-139.178.89.65:49608.service - OpenSSH per-connection server daemon (139.178.89.65:49608). Jul 12 00:10:17.741301 sshd[6807]: Accepted publickey for core from 139.178.89.65 port 49608 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:10:17.747019 sshd[6807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:17.760688 systemd-logind[2008]: New session 18 of user core. Jul 12 00:10:17.769002 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 12 00:10:18.110244 sshd[6807]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:18.118393 systemd[1]: sshd@17-172.31.28.146:22-139.178.89.65:49608.service: Deactivated successfully. Jul 12 00:10:18.123273 systemd[1]: session-18.scope: Deactivated successfully. Jul 12 00:10:18.134730 systemd-logind[2008]: Session 18 logged out. Waiting for processes to exit. Jul 12 00:10:18.162166 systemd[1]: Started sshd@18-172.31.28.146:22-139.178.89.65:49622.service - OpenSSH per-connection server daemon (139.178.89.65:49622). Jul 12 00:10:18.165870 systemd-logind[2008]: Removed session 18. Jul 12 00:10:18.378781 sshd[6820]: Accepted publickey for core from 139.178.89.65 port 49622 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:10:18.383486 sshd[6820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:18.396776 systemd-logind[2008]: New session 19 of user core. Jul 12 00:10:18.403924 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 12 00:10:19.106934 sshd[6820]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:19.115803 systemd[1]: sshd@18-172.31.28.146:22-139.178.89.65:49622.service: Deactivated successfully. Jul 12 00:10:19.122284 systemd[1]: session-19.scope: Deactivated successfully. Jul 12 00:10:19.130390 systemd-logind[2008]: Session 19 logged out. Waiting for processes to exit. Jul 12 00:10:19.158117 systemd[1]: Started sshd@19-172.31.28.146:22-139.178.89.65:49628.service - OpenSSH per-connection server daemon (139.178.89.65:49628). Jul 12 00:10:19.162087 systemd-logind[2008]: Removed session 19. 
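
From session 11 onward the sshd/systemd entries repeat a fixed lifecycle: Accepted publickey, pam_unix session opened, systemd-logind "New session N", a session-N.scope starts, then the mirror image on logout ending with "Removed session N". When auditing a capture like this one, pairing those markers mechanically is easier than reading them by eye. A rough Go sketch that pairs New/Removed lines and prints session lifetimes, assuming one journal entry per line (the wrapping in this capture runs entries together) and a year supplied from context:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    // Matches entries such as:
    //   Jul 12 00:09:55.709961 systemd-logind[2008]: New session 14 of user core.
    //   Jul 12 00:09:55.996648 systemd-logind[2008]: Removed session 14.
    var sessRe = regexp.MustCompile(
        `(\w{3} \d+ \d+:\d+:\d+\.\d+) systemd-logind\[\d+\]: (New|Removed) session (\d+)`)

    func main() {
        opened := map[string]time.Time{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // journal lines can be long
        for sc.Scan() {
            m := sessRe.FindStringSubmatch(sc.Text())
            if m == nil {
                continue
            }
            // Syslog-style stamps omit the year; 2025 comes from context.
            t, err := time.Parse("2006 Jan 2 15:04:05.999999", "2025 "+m[1])
            if err != nil {
                continue
            }
            switch m[2] {
            case "New":
                opened[m[3]] = t
            case "Removed":
                if start, ok := opened[m[3]]; ok {
                    fmt.Printf("session %s lasted %v\n", m[3], t.Sub(start))
                    delete(opened, m[3])
                }
            }
        }
    }
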
Jul 12 00:10:19.367034 sshd[6831]: Accepted publickey for core from 139.178.89.65 port 49628 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:10:19.373167 sshd[6831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:19.385191 systemd-logind[2008]: New session 20 of user core. Jul 12 00:10:19.489089 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 12 00:10:21.139536 sshd[6831]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:21.150864 systemd[1]: sshd@19-172.31.28.146:22-139.178.89.65:49628.service: Deactivated successfully. Jul 12 00:10:21.160757 systemd[1]: session-20.scope: Deactivated successfully. Jul 12 00:10:21.182933 systemd-logind[2008]: Session 20 logged out. Waiting for processes to exit. Jul 12 00:10:21.189381 systemd[1]: Started sshd@20-172.31.28.146:22-139.178.89.65:39454.service - OpenSSH per-connection server daemon (139.178.89.65:39454). Jul 12 00:10:21.193438 systemd-logind[2008]: Removed session 20. Jul 12 00:10:21.395763 sshd[6850]: Accepted publickey for core from 139.178.89.65 port 39454 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:10:21.397877 sshd[6850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:21.410155 systemd-logind[2008]: New session 21 of user core. Jul 12 00:10:21.419989 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 12 00:10:22.107834 sshd[6850]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:22.119049 systemd[1]: sshd@20-172.31.28.146:22-139.178.89.65:39454.service: Deactivated successfully. Jul 12 00:10:22.124589 systemd[1]: session-21.scope: Deactivated successfully. Jul 12 00:10:22.129715 systemd-logind[2008]: Session 21 logged out. Waiting for processes to exit. Jul 12 00:10:22.149150 systemd[1]: Started sshd@21-172.31.28.146:22-139.178.89.65:39464.service - OpenSSH per-connection server daemon (139.178.89.65:39464). Jul 12 00:10:22.153125 systemd-logind[2008]: Removed session 21. Jul 12 00:10:22.346635 sshd[6865]: Accepted publickey for core from 139.178.89.65 port 39464 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:10:22.348963 sshd[6865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:22.359142 systemd-logind[2008]: New session 22 of user core. Jul 12 00:10:22.368189 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 12 00:10:22.714292 sshd[6865]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:22.721586 systemd[1]: sshd@21-172.31.28.146:22-139.178.89.65:39464.service: Deactivated successfully. Jul 12 00:10:22.727441 systemd[1]: session-22.scope: Deactivated successfully. Jul 12 00:10:22.731967 systemd-logind[2008]: Session 22 logged out. Waiting for processes to exit. Jul 12 00:10:22.735296 systemd-logind[2008]: Removed session 22. Jul 12 00:10:23.923415 systemd[1]: run-containerd-runc-k8s.io-cc2e5878bf78ab27b0308aa6420737922c820a302c50f79e3734815b15a98351-runc.yNhUGN.mount: Deactivated successfully. Jul 12 00:10:26.024280 update_engine[2009]: I20250712 00:10:26.024180 2009 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 12 00:10:26.025676 update_engine[2009]: I20250712 00:10:26.025619 2009 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 12 00:10:26.026952 update_engine[2009]: I20250712 00:10:26.025961 2009 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 12 00:10:26.027269 update_engine[2009]: E20250712 00:10:26.027204 2009 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 12 00:10:26.027377 update_engine[2009]: I20250712 00:10:26.027309 2009 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 12 00:10:27.759840 systemd[1]: Started sshd@22-172.31.28.146:22-139.178.89.65:39466.service - OpenSSH per-connection server daemon (139.178.89.65:39466). Jul 12 00:10:27.947958 sshd[6915]: Accepted publickey for core from 139.178.89.65 port 39466 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:10:27.953000 sshd[6915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:27.963734 systemd-logind[2008]: New session 23 of user core. Jul 12 00:10:27.972995 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 12 00:10:28.248954 sshd[6915]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:28.258221 systemd[1]: sshd@22-172.31.28.146:22-139.178.89.65:39466.service: Deactivated successfully. Jul 12 00:10:28.266545 systemd[1]: session-23.scope: Deactivated successfully. Jul 12 00:10:28.270879 systemd-logind[2008]: Session 23 logged out. Waiting for processes to exit. Jul 12 00:10:28.275817 systemd-logind[2008]: Removed session 23. Jul 12 00:10:33.289381 systemd[1]: Started sshd@23-172.31.28.146:22-139.178.89.65:42566.service - OpenSSH per-connection server daemon (139.178.89.65:42566). Jul 12 00:10:33.478070 sshd[6952]: Accepted publickey for core from 139.178.89.65 port 42566 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:10:33.481483 sshd[6952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:33.493109 systemd-logind[2008]: New session 24 of user core. Jul 12 00:10:33.500256 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 12 00:10:33.851421 sshd[6952]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:33.865573 systemd[1]: sshd@23-172.31.28.146:22-139.178.89.65:42566.service: Deactivated successfully. Jul 12 00:10:33.871431 systemd[1]: session-24.scope: Deactivated successfully. Jul 12 00:10:33.875354 systemd-logind[2008]: Session 24 logged out. Waiting for processes to exit. Jul 12 00:10:33.879990 systemd-logind[2008]: Removed session 24. Jul 12 00:10:34.180932 systemd[1]: run-containerd-runc-k8s.io-cc2e5878bf78ab27b0308aa6420737922c820a302c50f79e3734815b15a98351-runc.SK1FnT.mount: Deactivated successfully. Jul 12 00:10:36.023362 update_engine[2009]: I20250712 00:10:36.022658 2009 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 12 00:10:36.023362 update_engine[2009]: I20250712 00:10:36.023012 2009 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 12 00:10:36.023362 update_engine[2009]: I20250712 00:10:36.023294 2009 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 12 00:10:36.025857 update_engine[2009]: E20250712 00:10:36.024263 2009 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 12 00:10:36.025857 update_engine[2009]: I20250712 00:10:36.024362 2009 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 12 00:10:36.025857 update_engine[2009]: I20250712 00:10:36.024384 2009 omaha_request_action.cc:617] Omaha request response: Jul 12 00:10:36.025857 update_engine[2009]: E20250712 00:10:36.024509 2009 omaha_request_action.cc:636] Omaha request network transfer failed. 
Jul 12 00:10:36.025857 update_engine[2009]: I20250712 00:10:36.024549 2009 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 12 00:10:36.025857 update_engine[2009]: I20250712 00:10:36.024565 2009 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 12 00:10:36.025857 update_engine[2009]: I20250712 00:10:36.024581 2009 update_attempter.cc:306] Processing Done. Jul 12 00:10:36.025857 update_engine[2009]: E20250712 00:10:36.024632 2009 update_attempter.cc:619] Update failed. Jul 12 00:10:36.025857 update_engine[2009]: I20250712 00:10:36.024655 2009 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 12 00:10:36.025857 update_engine[2009]: I20250712 00:10:36.024672 2009 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 12 00:10:36.025857 update_engine[2009]: I20250712 00:10:36.024688 2009 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jul 12 00:10:36.025857 update_engine[2009]: I20250712 00:10:36.024812 2009 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 12 00:10:36.025857 update_engine[2009]: I20250712 00:10:36.024852 2009 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 12 00:10:36.025857 update_engine[2009]: I20250712 00:10:36.024870 2009 omaha_request_action.cc:272] Request: Jul 12 00:10:36.025857 update_engine[2009]: Jul 12 00:10:36.025857 update_engine[2009]: Jul 12 00:10:36.026789 update_engine[2009]: Jul 12 00:10:36.026789 update_engine[2009]: Jul 12 00:10:36.026789 update_engine[2009]: Jul 12 00:10:36.026789 update_engine[2009]: Jul 12 00:10:36.026789 update_engine[2009]: I20250712 00:10:36.024888 2009 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 12 00:10:36.026789 update_engine[2009]: I20250712 00:10:36.025158 2009 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 12 00:10:36.026789 update_engine[2009]: I20250712 00:10:36.025446 2009 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 12 00:10:36.026789 update_engine[2009]: E20250712 00:10:36.026084 2009 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 12 00:10:36.026789 update_engine[2009]: I20250712 00:10:36.026224 2009 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 12 00:10:36.026789 update_engine[2009]: I20250712 00:10:36.026247 2009 omaha_request_action.cc:617] Omaha request response: Jul 12 00:10:36.026789 update_engine[2009]: I20250712 00:10:36.026265 2009 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 12 00:10:36.026789 update_engine[2009]: I20250712 00:10:36.026281 2009 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 12 00:10:36.026789 update_engine[2009]: I20250712 00:10:36.026297 2009 update_attempter.cc:306] Processing Done. Jul 12 00:10:36.026789 update_engine[2009]: I20250712 00:10:36.026314 2009 update_attempter.cc:310] Error event sent. 
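
The failing update checks above are expected on this host rather than a network fault: the engine logs "Posting an Omaha request to disabled", i.e. the configured update server URL is the literal string "disabled", so curl's DNS lookup fails with "Could not resolve host: disabled". Pointing SERVER at a non-resolvable placeholder (typically SERVER=disabled in /etc/flatcar/update.conf) is a common way to switch Flatcar's automatic updates off; the blank update_engine continuation lines around "Request:" are where the request XML was logged, and that markup did not survive this capture. The cadence is visible in the timestamps: three retries roughly ten seconds apart, then the whole check is abandoned and rescheduled (the "Next update check in 45m53s" line just below). A sketch of that loop, with counts and delays read off the log rather than from update_engine's source:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // checkOnce stands in for one Omaha POST. On this host it always fails:
    // the configured server URL is the placeholder "disabled", which never
    // resolves.
    func checkOnce() error { return errors.New("Could not resolve host: disabled") }

    func main() {
        // Cadence read off the log: three retries ~10s apart, then give up
        // and reschedule the whole check (~46 minutes here).
        const maxRetries = 3
        for attempt := 0; ; attempt++ {
            err := checkOnce()
            if err == nil {
                fmt.Println("update check succeeded")
                return
            }
            if attempt == maxRetries {
                fmt.Println("Omaha request network transfer failed:", err)
                fmt.Println("next update check in 45m53s")
                return
            }
            fmt.Printf("No HTTP response, retry %d\n", attempt+1)
            time.Sleep(10 * time.Second)
        }
    }
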
Jul 12 00:10:36.026789 update_engine[2009]: I20250712 00:10:36.026337  2009 update_check_scheduler.cc:74] Next update check in 45m53s
Jul 12 00:10:36.027466 locksmithd[2048]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 12 00:10:36.027466 locksmithd[2048]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jul 12 00:10:38.895068 systemd[1]: Started sshd@24-172.31.28.146:22-139.178.89.65:42582.service - OpenSSH per-connection server daemon (139.178.89.65:42582).
Jul 12 00:10:39.100398 sshd[6985]: Accepted publickey for core from 139.178.89.65 port 42582 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:10:39.106417 sshd[6985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:10:39.120717 systemd-logind[2008]: New session 25 of user core.
Jul 12 00:10:39.126951 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 12 00:10:39.426125 sshd[6985]: pam_unix(sshd:session): session closed for user core
Jul 12 00:10:39.437383 systemd[1]: sshd@24-172.31.28.146:22-139.178.89.65:42582.service: Deactivated successfully.
Jul 12 00:10:39.443839 systemd[1]: session-25.scope: Deactivated successfully.
Jul 12 00:10:39.450397 systemd-logind[2008]: Session 25 logged out. Waiting for processes to exit.
Jul 12 00:10:39.453728 systemd-logind[2008]: Removed session 25.
Jul 12 00:10:44.470250 systemd[1]: Started sshd@25-172.31.28.146:22-139.178.89.65:34418.service - OpenSSH per-connection server daemon (139.178.89.65:34418).
Jul 12 00:10:44.652824 sshd[7000]: Accepted publickey for core from 139.178.89.65 port 34418 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:10:44.656752 sshd[7000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:10:44.665669 systemd-logind[2008]: New session 26 of user core.
Jul 12 00:10:44.676885 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 12 00:10:45.037972 sshd[7000]: pam_unix(sshd:session): session closed for user core
Jul 12 00:10:45.046334 systemd[1]: session-26.scope: Deactivated successfully.
Jul 12 00:10:45.046715 systemd-logind[2008]: Session 26 logged out. Waiting for processes to exit.
Jul 12 00:10:45.051404 systemd[1]: sshd@25-172.31.28.146:22-139.178.89.65:34418.service: Deactivated successfully.
Jul 12 00:10:45.059956 systemd-logind[2008]: Removed session 26.
Jul 12 00:10:50.082511 systemd[1]: Started sshd@26-172.31.28.146:22-139.178.89.65:42544.service - OpenSSH per-connection server daemon (139.178.89.65:42544).
Jul 12 00:10:50.294423 sshd[7037]: Accepted publickey for core from 139.178.89.65 port 42544 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:10:50.299150 sshd[7037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:10:50.311248 systemd-logind[2008]: New session 27 of user core.
Jul 12 00:10:50.320012 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 12 00:10:50.637020 sshd[7037]: pam_unix(sshd:session): session closed for user core
Jul 12 00:10:50.649463 systemd[1]: sshd@26-172.31.28.146:22-139.178.89.65:42544.service: Deactivated successfully.
Jul 12 00:10:50.656486 systemd[1]: session-27.scope: Deactivated successfully.
Jul 12 00:10:50.660359 systemd-logind[2008]: Session 27 logged out. Waiting for processes to exit.
Jul 12 00:10:50.663953 systemd-logind[2008]: Removed session 27.
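The update_engine burst above is an update check failing by design: the Omaha request is posted to the literal host "disabled", curl cannot resolve it, the failure is converted to error code 2000 and then payload error 37 (kActionCodeOmahaErrorInHTTPResponse), and the scheduler simply queues the next attempt (the 45m53s line above). A minimal offline summarizer for such a capture, as a sketch in Python; the file name journal.log and both regexes are assumptions inferred from the line shapes in this log, not an update_engine interface:

    import re
    import sys
    from collections import Counter

    # Assumed patterns, modeled on the update_engine lines seen in this journal;
    # they are not part of any update_engine API.
    ERR_RE = re.compile(r"libcurl_http_fetcher\.cc:266\] Unable to get http response code: (.+)$")
    NEXT_RE = re.compile(r"Next update check in (\S+)")

    def summarize(path):
        """Count fetch failures and record the last scheduled check interval."""
        errors = Counter()
        next_check = None
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                if "update_engine" not in line:
                    continue
                m = ERR_RE.search(line)
                if m:
                    errors[m.group(1).strip()] += 1
                m = NEXT_RE.search(line)
                if m:
                    next_check = m.group(1)
        return errors, next_check

    if __name__ == "__main__":
        errors, next_check = summarize(sys.argv[1] if len(sys.argv) > 1 else "journal.log")
        for reason, count in errors.most_common():
            print(f"{count}x fetch failure: {reason}")
        if next_check:
            print(f"next scheduled check: {next_check}")

Run against this capture it would report the repeated "Could not resolve host: disabled" failures and the 45m53s next-check interval, which is enough to confirm the loop is benign rather than a network fault.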
Jul 12 00:10:55.686124 systemd[1]: Started sshd@27-172.31.28.146:22-139.178.89.65:42552.service - OpenSSH per-connection server daemon (139.178.89.65:42552).
Jul 12 00:10:55.892672 sshd[7078]: Accepted publickey for core from 139.178.89.65 port 42552 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:10:55.896827 sshd[7078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:10:55.910127 systemd-logind[2008]: New session 28 of user core.
Jul 12 00:10:55.919464 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 12 00:10:56.207082 sshd[7078]: pam_unix(sshd:session): session closed for user core
Jul 12 00:10:56.215444 systemd[1]: sshd@27-172.31.28.146:22-139.178.89.65:42552.service: Deactivated successfully.
Jul 12 00:10:56.220774 systemd[1]: session-28.scope: Deactivated successfully.
Jul 12 00:10:56.222931 systemd-logind[2008]: Session 28 logged out. Waiting for processes to exit.
Jul 12 00:10:56.226585 systemd-logind[2008]: Removed session 28.
Jul 12 00:11:09.972170 systemd[1]: cri-containerd-a79e9c5137111e4e5c17d54457ee1cd7db65a29632078a2b658a74f5bcbfc4e4.scope: Deactivated successfully.
Jul 12 00:11:09.973785 systemd[1]: cri-containerd-a79e9c5137111e4e5c17d54457ee1cd7db65a29632078a2b658a74f5bcbfc4e4.scope: Consumed 7.059s CPU time, 17.6M memory peak, 0B memory swap peak.
Jul 12 00:11:10.034201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a79e9c5137111e4e5c17d54457ee1cd7db65a29632078a2b658a74f5bcbfc4e4-rootfs.mount: Deactivated successfully.
Jul 12 00:11:10.049591 containerd[2033]: time="2025-07-12T00:11:10.030787209Z" level=info msg="shim disconnected" id=a79e9c5137111e4e5c17d54457ee1cd7db65a29632078a2b658a74f5bcbfc4e4 namespace=k8s.io
Jul 12 00:11:10.049591 containerd[2033]: time="2025-07-12T00:11:10.049417845Z" level=warning msg="cleaning up after shim disconnected" id=a79e9c5137111e4e5c17d54457ee1cd7db65a29632078a2b658a74f5bcbfc4e4 namespace=k8s.io
Jul 12 00:11:10.049591 containerd[2033]: time="2025-07-12T00:11:10.049446873Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:11:10.171494 kubelet[3541]: I0712 00:11:10.171450    3541 scope.go:117] "RemoveContainer" containerID="a79e9c5137111e4e5c17d54457ee1cd7db65a29632078a2b658a74f5bcbfc4e4"
Jul 12 00:11:10.177263 containerd[2033]: time="2025-07-12T00:11:10.176943657Z" level=info msg="CreateContainer within sandbox \"75b2f5e84b7a0d5c61bdb16725c931c1ef8f753510ae531f043e0d9f0ccd1ce2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 12 00:11:10.211361 containerd[2033]: time="2025-07-12T00:11:10.211305310Z" level=info msg="CreateContainer within sandbox \"75b2f5e84b7a0d5c61bdb16725c931c1ef8f753510ae531f043e0d9f0ccd1ce2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d223aae157144c1b306243bd74b35a893bb200c2f64f31bb7e70ff8c201c8b0a\""
Jul 12 00:11:10.212383 containerd[2033]: time="2025-07-12T00:11:10.212214106Z" level=info msg="StartContainer for \"d223aae157144c1b306243bd74b35a893bb200c2f64f31bb7e70ff8c201c8b0a\""
Jul 12 00:11:10.270030 systemd[1]: run-containerd-runc-k8s.io-d223aae157144c1b306243bd74b35a893bb200c2f64f31bb7e70ff8c201c8b0a-runc.4jwvYJ.mount: Deactivated successfully.
Jul 12 00:11:10.278942 systemd[1]: Started cri-containerd-d223aae157144c1b306243bd74b35a893bb200c2f64f31bb7e70ff8c201c8b0a.scope - libcontainer container d223aae157144c1b306243bd74b35a893bb200c2f64f31bb7e70ff8c201c8b0a.
Jul 12 00:11:10.361814 containerd[2033]: time="2025-07-12T00:11:10.361571626Z" level=info msg="StartContainer for \"d223aae157144c1b306243bd74b35a893bb200c2f64f31bb7e70ff8c201c8b0a\" returns successfully"
Jul 12 00:11:11.145250 systemd[1]: cri-containerd-b322c4d2a07ba30da3bf056e553cb36cfce7553929c5fbe0ce4d4e76a43a4d6e.scope: Deactivated successfully.
Jul 12 00:11:11.147884 systemd[1]: cri-containerd-b322c4d2a07ba30da3bf056e553cb36cfce7553929c5fbe0ce4d4e76a43a4d6e.scope: Consumed 26.485s CPU time.
Jul 12 00:11:11.201130 containerd[2033]: time="2025-07-12T00:11:11.200772022Z" level=info msg="shim disconnected" id=b322c4d2a07ba30da3bf056e553cb36cfce7553929c5fbe0ce4d4e76a43a4d6e namespace=k8s.io
Jul 12 00:11:11.201130 containerd[2033]: time="2025-07-12T00:11:11.200850910Z" level=warning msg="cleaning up after shim disconnected" id=b322c4d2a07ba30da3bf056e553cb36cfce7553929c5fbe0ce4d4e76a43a4d6e namespace=k8s.io
Jul 12 00:11:11.201130 containerd[2033]: time="2025-07-12T00:11:11.200871394Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:11:11.213292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b322c4d2a07ba30da3bf056e553cb36cfce7553929c5fbe0ce4d4e76a43a4d6e-rootfs.mount: Deactivated successfully.
Jul 12 00:11:12.191830 kubelet[3541]: I0712 00:11:12.191067    3541 scope.go:117] "RemoveContainer" containerID="b322c4d2a07ba30da3bf056e553cb36cfce7553929c5fbe0ce4d4e76a43a4d6e"
Jul 12 00:11:12.198088 containerd[2033]: time="2025-07-12T00:11:12.197574551Z" level=info msg="CreateContainer within sandbox \"0b3983fe1f2fe948c6e3f3130adfeeed879a8ead97ebb9cd07896d7e5f08ff6a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 12 00:11:12.229288 containerd[2033]: time="2025-07-12T00:11:12.228970008Z" level=info msg="CreateContainer within sandbox \"0b3983fe1f2fe948c6e3f3130adfeeed879a8ead97ebb9cd07896d7e5f08ff6a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a5bae84c88317460f7640fa5bd8af00377773cea501a261dc9d0e052dd08639a\""
Jul 12 00:11:12.231674 containerd[2033]: time="2025-07-12T00:11:12.230683716Z" level=info msg="StartContainer for \"a5bae84c88317460f7640fa5bd8af00377773cea501a261dc9d0e052dd08639a\""
Jul 12 00:11:12.299365 systemd[1]: run-containerd-runc-k8s.io-a5bae84c88317460f7640fa5bd8af00377773cea501a261dc9d0e052dd08639a-runc.Cf6hXp.mount: Deactivated successfully.
Jul 12 00:11:12.312902 systemd[1]: Started cri-containerd-a5bae84c88317460f7640fa5bd8af00377773cea501a261dc9d0e052dd08639a.scope - libcontainer container a5bae84c88317460f7640fa5bd8af00377773cea501a261dc9d0e052dd08639a.
Jul 12 00:11:12.371174 containerd[2033]: time="2025-07-12T00:11:12.371076636Z" level=info msg="StartContainer for \"a5bae84c88317460f7640fa5bd8af00377773cea501a261dc9d0e052dd08639a\" returns successfully"
Jul 12 00:11:14.462875 kubelet[3541]: E0712 00:11:14.462433    3541 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-146?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 12 00:11:14.821788 systemd[1]: run-containerd-runc-k8s.io-2eec446c6e4b75755f4a355ec8b1bce3b5f3df167473d5731d125a4d81c2d857-runc.b8J2Cf.mount: Deactivated successfully.
Jul 12 00:11:16.124851 systemd[1]: cri-containerd-3f1bc54f17d900d747dec2fe5cd1ce307424ecb829afa22885fbb95a1f24c765.scope: Deactivated successfully.
Jul 12 00:11:16.125331 systemd[1]: cri-containerd-3f1bc54f17d900d747dec2fe5cd1ce307424ecb829afa22885fbb95a1f24c765.scope: Consumed 5.375s CPU time, 16.2M memory peak, 0B memory swap peak.
Jul 12 00:11:16.164155 containerd[2033]: time="2025-07-12T00:11:16.162532731Z" level=info msg="shim disconnected" id=3f1bc54f17d900d747dec2fe5cd1ce307424ecb829afa22885fbb95a1f24c765 namespace=k8s.io
Jul 12 00:11:16.164155 containerd[2033]: time="2025-07-12T00:11:16.162704955Z" level=warning msg="cleaning up after shim disconnected" id=3f1bc54f17d900d747dec2fe5cd1ce307424ecb829afa22885fbb95a1f24c765 namespace=k8s.io
Jul 12 00:11:16.164155 containerd[2033]: time="2025-07-12T00:11:16.162727455Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:11:16.175876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f1bc54f17d900d747dec2fe5cd1ce307424ecb829afa22885fbb95a1f24c765-rootfs.mount: Deactivated successfully.
Jul 12 00:11:17.211512 kubelet[3541]: I0712 00:11:17.211468    3541 scope.go:117] "RemoveContainer" containerID="3f1bc54f17d900d747dec2fe5cd1ce307424ecb829afa22885fbb95a1f24c765"
Jul 12 00:11:17.215327 containerd[2033]: time="2025-07-12T00:11:17.215257504Z" level=info msg="CreateContainer within sandbox \"cb90e3a71653a3b5a5f9fe701c5c348279f564cac903891a729517259949f9c7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 12 00:11:17.250086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2249290597.mount: Deactivated successfully.
Jul 12 00:11:17.253007 containerd[2033]: time="2025-07-12T00:11:17.252657053Z" level=info msg="CreateContainer within sandbox \"cb90e3a71653a3b5a5f9fe701c5c348279f564cac903891a729517259949f9c7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8f0f26a57bb58042fe241a25098f996c93222069611225b93f8618a240ead194\""
Jul 12 00:11:17.253731 containerd[2033]: time="2025-07-12T00:11:17.253684937Z" level=info msg="StartContainer for \"8f0f26a57bb58042fe241a25098f996c93222069611225b93f8618a240ead194\""
Jul 12 00:11:17.318911 systemd[1]: Started cri-containerd-8f0f26a57bb58042fe241a25098f996c93222069611225b93f8618a240ead194.scope - libcontainer container 8f0f26a57bb58042fe241a25098f996c93222069611225b93f8618a240ead194.
Jul 12 00:11:17.383773 containerd[2033]: time="2025-07-12T00:11:17.383709497Z" level=info msg="StartContainer for \"8f0f26a57bb58042fe241a25098f996c93222069611225b93f8618a240ead194\" returns successfully"
Jul 12 00:11:23.868773 systemd[1]: cri-containerd-a5bae84c88317460f7640fa5bd8af00377773cea501a261dc9d0e052dd08639a.scope: Deactivated successfully.
Jul 12 00:11:23.927013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5bae84c88317460f7640fa5bd8af00377773cea501a261dc9d0e052dd08639a-rootfs.mount: Deactivated successfully.
Jul 12 00:11:23.938744 containerd[2033]: time="2025-07-12T00:11:23.938354162Z" level=info msg="shim disconnected" id=a5bae84c88317460f7640fa5bd8af00377773cea501a261dc9d0e052dd08639a namespace=k8s.io
Jul 12 00:11:23.938744 containerd[2033]: time="2025-07-12T00:11:23.938430002Z" level=warning msg="cleaning up after shim disconnected" id=a5bae84c88317460f7640fa5bd8af00377773cea501a261dc9d0e052dd08639a namespace=k8s.io
Jul 12 00:11:23.938744 containerd[2033]: time="2025-07-12T00:11:23.938450438Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:11:24.238907 kubelet[3541]: I0712 00:11:24.238315    3541 scope.go:117] "RemoveContainer" containerID="b322c4d2a07ba30da3bf056e553cb36cfce7553929c5fbe0ce4d4e76a43a4d6e"
Jul 12 00:11:24.239518 kubelet[3541]: I0712 00:11:24.238928    3541 scope.go:117] "RemoveContainer" containerID="a5bae84c88317460f7640fa5bd8af00377773cea501a261dc9d0e052dd08639a"
Jul 12 00:11:24.239518 kubelet[3541]: E0712 00:11:24.239167    3541 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-gt6tb_tigera-operator(0a29a11a-42b7-4b22-947e-78ebd969e53b)\"" pod="tigera-operator/tigera-operator-747864d56d-gt6tb" podUID="0a29a11a-42b7-4b22-947e-78ebd969e53b"
Jul 12 00:11:24.241784 containerd[2033]: time="2025-07-12T00:11:24.241685063Z" level=info msg="RemoveContainer for \"b322c4d2a07ba30da3bf056e553cb36cfce7553929c5fbe0ce4d4e76a43a4d6e\""
Jul 12 00:11:24.248953 containerd[2033]: time="2025-07-12T00:11:24.248866319Z" level=info msg="RemoveContainer for \"b322c4d2a07ba30da3bf056e553cb36cfce7553929c5fbe0ce4d4e76a43a4d6e\" returns successfully"
Jul 12 00:11:24.463578 kubelet[3541]: E0712 00:11:24.463504    3541 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-146?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
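The containerd/kubelet sequence above traces one restart cycle per container: systemd deactivates a cri-containerd-<id>.scope, the shim disconnects and is cleaned up, kubelet logs "RemoveContainer" for the dead ID, and containerd creates Attempt:1 in the same sandbox; when the replacement tigera-operator container (a5bae84c8831...) also exits, the pod lands in CrashLoopBackOff with a 10s back-off. A hedged Python sketch that pairs each deactivated scope with the CPU time systemd reported for it; the file name and pattern are assumptions about this journal's formatting, not a containerd or systemd API:

    import re
    import sys

    # Assumed pattern, modeled on the systemd lines above
    # ("cri-containerd-<64-hex-id>.scope: Deactivated successfully" /
    #  "...: Consumed N.NNNs CPU time ..."); not any official interface.
    SCOPE_RE = re.compile(
        r"cri-containerd-([0-9a-f]{64})\.scope: "
        r"(Deactivated successfully|Consumed ([\d.]+)s CPU time)"
    )

    def crashed_scopes(path):
        """Map container ID -> CPU seconds consumed (None if not reported)."""
        cpu = {}
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = SCOPE_RE.search(line)
                if not m:
                    continue
                cid = m.group(1)
                if m.group(3):                 # "Consumed ...s CPU time" line
                    cpu[cid] = float(m.group(3))
                else:                          # "Deactivated successfully" line
                    cpu.setdefault(cid, None)
        return cpu

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "journal.log"
        for cid, secs in crashed_scopes(path).items():
            label = "?" if secs is None else f"{secs:.3f}s"
            print(f"{cid[:12]}  cpu={label}")

On this capture it would list a79e9c513711 (7.059s, kube-controller-manager), b322c4d2a07b (26.485s, tigera-operator), 3f1bc54f17d9 (5.375s, kube-scheduler) and a5bae84c8831 with no recorded CPU figure, a quick way to see which control-plane containers are churning alongside the failing kubelet lease renewals.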