Jul 12 00:06:47.253662 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jul 12 00:06:47.253726 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jul 11 22:42:11 -00 2025 Jul 12 00:06:47.253753 kernel: KASLR disabled due to lack of seed Jul 12 00:06:47.253770 kernel: efi: EFI v2.7 by EDK II Jul 12 00:06:47.253786 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18 Jul 12 00:06:47.253801 kernel: ACPI: Early table checksum verification disabled Jul 12 00:06:47.253818 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jul 12 00:06:47.253833 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jul 12 00:06:47.253849 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 12 00:06:47.253864 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jul 12 00:06:47.253885 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 12 00:06:47.253900 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jul 12 00:06:47.253916 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jul 12 00:06:47.253932 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jul 12 00:06:47.253950 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 12 00:06:47.253971 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jul 12 00:06:47.253989 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jul 12 00:06:47.254006 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jul 12 00:06:47.254022 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jul 12 00:06:47.254038 kernel: printk: bootconsole [uart0] enabled Jul 12 00:06:47.254054 kernel: NUMA: Failed to initialise from firmware Jul 12 00:06:47.254071 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jul 12 00:06:47.254088 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jul 12 00:06:47.254104 kernel: Zone ranges: Jul 12 00:06:47.254121 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jul 12 00:06:47.254137 kernel: DMA32 empty Jul 12 00:06:47.254158 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jul 12 00:06:47.254175 kernel: Movable zone start for each node Jul 12 00:06:47.254191 kernel: Early memory node ranges Jul 12 00:06:47.254207 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jul 12 00:06:47.254223 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jul 12 00:06:47.254239 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jul 12 00:06:47.254256 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jul 12 00:06:47.254272 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jul 12 00:06:47.254288 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jul 12 00:06:47.254304 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jul 12 00:06:47.254320 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jul 12 00:06:47.254336 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 12 00:06:47.254357 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jul 12 00:06:47.254374 kernel: psci: probing for conduit method from ACPI. Jul 12 00:06:47.254397 kernel: psci: PSCIv1.0 detected in firmware. Jul 12 00:06:47.254415 kernel: psci: Using standard PSCI v0.2 function IDs Jul 12 00:06:47.254432 kernel: psci: Trusted OS migration not required Jul 12 00:06:47.254454 kernel: psci: SMC Calling Convention v1.1 Jul 12 00:06:47.254471 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jul 12 00:06:47.254489 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 12 00:06:47.254506 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 12 00:06:47.254524 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 12 00:06:47.254541 kernel: Detected PIPT I-cache on CPU0 Jul 12 00:06:47.254558 kernel: CPU features: detected: GIC system register CPU interface Jul 12 00:06:47.254575 kernel: CPU features: detected: Spectre-v2 Jul 12 00:06:47.254592 kernel: CPU features: detected: Spectre-v3a Jul 12 00:06:47.254610 kernel: CPU features: detected: Spectre-BHB Jul 12 00:06:47.254627 kernel: CPU features: detected: ARM erratum 1742098 Jul 12 00:06:47.254648 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jul 12 00:06:47.254666 kernel: alternatives: applying boot alternatives Jul 12 00:06:47.254686 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c Jul 12 00:06:47.254745 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 12 00:06:47.254764 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 12 00:06:47.254782 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 12 00:06:47.254799 kernel: Fallback order for Node 0: 0 Jul 12 00:06:47.254817 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jul 12 00:06:47.254834 kernel: Policy zone: Normal Jul 12 00:06:47.254865 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 12 00:06:47.254888 kernel: software IO TLB: area num 2. Jul 12 00:06:47.254913 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jul 12 00:06:47.254931 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved) Jul 12 00:06:47.254948 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 12 00:06:47.254965 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 12 00:06:47.254983 kernel: rcu: RCU event tracing is enabled. Jul 12 00:06:47.255000 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 12 00:06:47.255018 kernel: Trampoline variant of Tasks RCU enabled. Jul 12 00:06:47.255035 kernel: Tracing variant of Tasks RCU enabled. Jul 12 00:06:47.255052 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:06:47.255069 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 12 00:06:47.255086 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 12 00:06:47.255108 kernel: GICv3: 96 SPIs implemented Jul 12 00:06:47.255125 kernel: GICv3: 0 Extended SPIs implemented Jul 12 00:06:47.255142 kernel: Root IRQ handler: gic_handle_irq Jul 12 00:06:47.255159 kernel: GICv3: GICv3 features: 16 PPIs Jul 12 00:06:47.255176 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jul 12 00:06:47.255193 kernel: ITS [mem 0x10080000-0x1009ffff] Jul 12 00:06:47.255210 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Jul 12 00:06:47.255227 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Jul 12 00:06:47.255244 kernel: GICv3: using LPI property table @0x00000004000d0000 Jul 12 00:06:47.255261 kernel: ITS: Using hypervisor restricted LPI range [128] Jul 12 00:06:47.255279 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Jul 12 00:06:47.255295 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 12 00:06:47.255317 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jul 12 00:06:47.255335 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jul 12 00:06:47.255352 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jul 12 00:06:47.255369 kernel: Console: colour dummy device 80x25 Jul 12 00:06:47.255387 kernel: printk: console [tty1] enabled Jul 12 00:06:47.255404 kernel: ACPI: Core revision 20230628 Jul 12 00:06:47.255422 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jul 12 00:06:47.255440 kernel: pid_max: default: 32768 minimum: 301 Jul 12 00:06:47.255457 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 12 00:06:47.255479 kernel: landlock: Up and running. Jul 12 00:06:47.255497 kernel: SELinux: Initializing. Jul 12 00:06:47.255514 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 00:06:47.255531 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 00:06:47.255549 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 12 00:06:47.255567 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 12 00:06:47.255585 kernel: rcu: Hierarchical SRCU implementation. Jul 12 00:06:47.255603 kernel: rcu: Max phase no-delay instances is 400. Jul 12 00:06:47.255620 kernel: Platform MSI: ITS@0x10080000 domain created Jul 12 00:06:47.255641 kernel: PCI/MSI: ITS@0x10080000 domain created Jul 12 00:06:47.255659 kernel: Remapping and enabling EFI services. Jul 12 00:06:47.255676 kernel: smp: Bringing up secondary CPUs ... Jul 12 00:06:47.255723 kernel: Detected PIPT I-cache on CPU1 Jul 12 00:06:47.255745 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jul 12 00:06:47.255763 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Jul 12 00:06:47.255780 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jul 12 00:06:47.255798 kernel: smp: Brought up 1 node, 2 CPUs Jul 12 00:06:47.255815 kernel: SMP: Total of 2 processors activated. 
Jul 12 00:06:47.255832 kernel: CPU features: detected: 32-bit EL0 Support Jul 12 00:06:47.255856 kernel: CPU features: detected: 32-bit EL1 Support Jul 12 00:06:47.255874 kernel: CPU features: detected: CRC32 instructions Jul 12 00:06:47.255902 kernel: CPU: All CPU(s) started at EL1 Jul 12 00:06:47.255924 kernel: alternatives: applying system-wide alternatives Jul 12 00:06:47.255942 kernel: devtmpfs: initialized Jul 12 00:06:47.255961 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 12 00:06:47.255979 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 12 00:06:47.255997 kernel: pinctrl core: initialized pinctrl subsystem Jul 12 00:06:47.256015 kernel: SMBIOS 3.0.0 present. Jul 12 00:06:47.256038 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jul 12 00:06:47.256056 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 12 00:06:47.256074 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 12 00:06:47.256093 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 12 00:06:47.256111 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 12 00:06:47.256130 kernel: audit: initializing netlink subsys (disabled) Jul 12 00:06:47.256148 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1 Jul 12 00:06:47.256170 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 12 00:06:47.256189 kernel: cpuidle: using governor menu Jul 12 00:06:47.256207 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 12 00:06:47.256225 kernel: ASID allocator initialised with 65536 entries Jul 12 00:06:47.256243 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 12 00:06:47.256261 kernel: Serial: AMBA PL011 UART driver Jul 12 00:06:47.256280 kernel: Modules: 17488 pages in range for non-PLT usage Jul 12 00:06:47.256298 kernel: Modules: 509008 pages in range for PLT usage Jul 12 00:06:47.256316 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 12 00:06:47.256339 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 12 00:06:47.256357 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 12 00:06:47.256376 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 12 00:06:47.256394 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 12 00:06:47.256412 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 12 00:06:47.256430 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 12 00:06:47.256448 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 12 00:06:47.256466 kernel: ACPI: Added _OSI(Module Device) Jul 12 00:06:47.256484 kernel: ACPI: Added _OSI(Processor Device) Jul 12 00:06:47.256507 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 12 00:06:47.256525 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 12 00:06:47.256543 kernel: ACPI: Interpreter enabled Jul 12 00:06:47.256561 kernel: ACPI: Using GIC for interrupt routing Jul 12 00:06:47.256579 kernel: ACPI: MCFG table detected, 1 entries Jul 12 00:06:47.256597 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jul 12 00:06:47.257172 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 12 00:06:47.257397 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 12 00:06:47.257604 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 12 00:06:47.260400 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jul 12 00:06:47.260636 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jul 12 00:06:47.260664 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jul 12 00:06:47.260684 kernel: acpiphp: Slot [1] registered Jul 12 00:06:47.260725 kernel: acpiphp: Slot [2] registered Jul 12 00:06:47.260745 kernel: acpiphp: Slot [3] registered Jul 12 00:06:47.260764 kernel: acpiphp: Slot [4] registered Jul 12 00:06:47.260793 kernel: acpiphp: Slot [5] registered Jul 12 00:06:47.260812 kernel: acpiphp: Slot [6] registered Jul 12 00:06:47.260830 kernel: acpiphp: Slot [7] registered Jul 12 00:06:47.260849 kernel: acpiphp: Slot [8] registered Jul 12 00:06:47.260867 kernel: acpiphp: Slot [9] registered Jul 12 00:06:47.260886 kernel: acpiphp: Slot [10] registered Jul 12 00:06:47.260904 kernel: acpiphp: Slot [11] registered Jul 12 00:06:47.260922 kernel: acpiphp: Slot [12] registered Jul 12 00:06:47.260940 kernel: acpiphp: Slot [13] registered Jul 12 00:06:47.260959 kernel: acpiphp: Slot [14] registered Jul 12 00:06:47.260982 kernel: acpiphp: Slot [15] registered Jul 12 00:06:47.261001 kernel: acpiphp: Slot [16] registered Jul 12 00:06:47.261019 kernel: acpiphp: Slot [17] registered Jul 12 00:06:47.261037 kernel: acpiphp: Slot [18] registered Jul 12 00:06:47.261056 kernel: acpiphp: Slot [19] registered Jul 12 00:06:47.261074 kernel: acpiphp: Slot [20] registered Jul 12 00:06:47.261092 kernel: acpiphp: Slot [21] registered Jul 12 00:06:47.261110 kernel: acpiphp: Slot [22] registered Jul 12 00:06:47.261129 kernel: acpiphp: Slot [23] registered Jul 12 00:06:47.261152 kernel: acpiphp: Slot [24] registered Jul 12 00:06:47.261170 kernel: acpiphp: Slot [25] registered Jul 12 00:06:47.261189 kernel: acpiphp: Slot [26] registered Jul 12 00:06:47.261207 kernel: acpiphp: Slot [27] registered Jul 12 00:06:47.261225 kernel: acpiphp: Slot [28] registered Jul 12 00:06:47.261244 kernel: acpiphp: Slot [29] registered Jul 12 00:06:47.261262 kernel: acpiphp: Slot [30] registered Jul 12 00:06:47.261280 kernel: acpiphp: Slot [31] registered Jul 12 00:06:47.261298 kernel: PCI host bridge to bus 0000:00 Jul 12 00:06:47.261508 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jul 12 00:06:47.262279 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 12 00:06:47.265875 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jul 12 00:06:47.266096 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jul 12 00:06:47.266350 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jul 12 00:06:47.266576 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jul 12 00:06:47.267177 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jul 12 00:06:47.267418 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jul 12 00:06:47.267626 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jul 12 00:06:47.267875 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 12 00:06:47.268117 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jul 12 00:06:47.268330 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jul 12 00:06:47.268545 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jul 12 00:06:47.270948 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Jul 12 00:06:47.271261 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 12 00:06:47.271484 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jul 12 00:06:47.271730 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jul 12 00:06:47.271962 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jul 12 00:06:47.272179 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jul 12 00:06:47.272407 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jul 12 00:06:47.272613 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jul 12 00:06:47.276063 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 12 00:06:47.276271 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jul 12 00:06:47.276298 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 12 00:06:47.276317 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 12 00:06:47.276337 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 12 00:06:47.276356 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 12 00:06:47.276374 kernel: iommu: Default domain type: Translated Jul 12 00:06:47.276420 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 12 00:06:47.276439 kernel: efivars: Registered efivars operations Jul 12 00:06:47.276457 kernel: vgaarb: loaded Jul 12 00:06:47.276475 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 12 00:06:47.276494 kernel: VFS: Disk quotas dquot_6.6.0 Jul 12 00:06:47.276512 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 12 00:06:47.276745 kernel: pnp: PnP ACPI init Jul 12 00:06:47.276776 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jul 12 00:06:47.276802 kernel: pnp: PnP ACPI: found 1 devices Jul 12 00:06:47.276821 kernel: NET: Registered PF_INET protocol family Jul 12 00:06:47.276840 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 12 00:06:47.276859 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 12 00:06:47.276878 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 12 00:06:47.276897 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 12 00:06:47.276915 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 12 00:06:47.276933 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 12 00:06:47.276951 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:06:47.276975 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:06:47.276993 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 12 00:06:47.277011 kernel: PCI: CLS 0 bytes, default 64 Jul 12 00:06:47.277029 kernel: kvm [1]: HYP mode not available Jul 12 00:06:47.277047 kernel: Initialise system trusted keyrings Jul 12 00:06:47.277066 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 12 00:06:47.277084 kernel: Key type asymmetric registered Jul 12 00:06:47.277102 kernel: Asymmetric key parser 'x509' registered Jul 12 00:06:47.277120 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 12 00:06:47.277143 kernel: io scheduler mq-deadline registered Jul 12 00:06:47.277161 kernel: io scheduler kyber registered
Jul 12 00:06:47.277161 kernel: io scheduler bfq registered Jul 12 00:06:47.277372 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jul 12 00:06:47.277400 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 12 00:06:47.277419 kernel: ACPI: button: Power Button [PWRB] Jul 12 00:06:47.277438 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jul 12 00:06:47.277457 kernel: ACPI: button: Sleep Button [SLPB] Jul 12 00:06:47.277476 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 12 00:06:47.277501 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 12 00:06:47.280209 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jul 12 00:06:47.280253 kernel: printk: console [ttyS0] disabled Jul 12 00:06:47.280273 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jul 12 00:06:47.280292 kernel: printk: console [ttyS0] enabled Jul 12 00:06:47.280311 kernel: printk: bootconsole [uart0] disabled Jul 12 00:06:47.280329 kernel: thunder_xcv, ver 1.0 Jul 12 00:06:47.280347 kernel: thunder_bgx, ver 1.0 Jul 12 00:06:47.280365 kernel: nicpf, ver 1.0 Jul 12 00:06:47.280399 kernel: nicvf, ver 1.0 Jul 12 00:06:47.280770 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 12 00:06:47.280978 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:06:46 UTC (1752278806) Jul 12 00:06:47.281005 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 12 00:06:47.281025 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jul 12 00:06:47.281043 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 12 00:06:47.281061 kernel: watchdog: Hard watchdog permanently disabled Jul 12 00:06:47.281080 kernel: NET: Registered PF_INET6 protocol family Jul 12 00:06:47.281105 kernel: Segment Routing with IPv6 Jul 12 00:06:47.281124 kernel: In-situ OAM (IOAM) with IPv6 Jul 12 00:06:47.281142 kernel: NET: Registered PF_PACKET protocol family Jul 12 00:06:47.281160 kernel: Key type dns_resolver registered Jul 12 00:06:47.281178 kernel: registered taskstats version 1 Jul 12 00:06:47.281196 kernel: Loading compiled-in X.509 certificates Jul 12 00:06:47.281215 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: ed6b382df707adbd5942eaa048a1031fe26cbf15' Jul 12 00:06:47.281233 kernel: Key type .fscrypt registered Jul 12 00:06:47.281251 kernel: Key type fscrypt-provisioning registered Jul 12 00:06:47.281273 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:06:47.281292 kernel: ima: Allocated hash algorithm: sha1 Jul 12 00:06:47.281310 kernel: ima: No architecture policies found Jul 12 00:06:47.281328 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 12 00:06:47.281346 kernel: clk: Disabling unused clocks Jul 12 00:06:47.281364 kernel: Freeing unused kernel memory: 39424K Jul 12 00:06:47.281383 kernel: Run /init as init process Jul 12 00:06:47.281401 kernel: with arguments: Jul 12 00:06:47.281419 kernel: /init Jul 12 00:06:47.281437 kernel: with environment: Jul 12 00:06:47.281459 kernel: HOME=/ Jul 12 00:06:47.281478 kernel: TERM=linux Jul 12 00:06:47.281495 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 12 00:06:47.281518 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 12 00:06:47.281541 systemd[1]: Detected virtualization amazon. Jul 12 00:06:47.281561 systemd[1]: Detected architecture arm64. Jul 12 00:06:47.281581 systemd[1]: Running in initrd. Jul 12 00:06:47.281605 systemd[1]: No hostname configured, using default hostname. Jul 12 00:06:47.281625 systemd[1]: Hostname set to . Jul 12 00:06:47.281646 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:06:47.281666 systemd[1]: Queued start job for default target initrd.target. Jul 12 00:06:47.281688 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:06:47.281786 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:06:47.281810 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 12 00:06:47.281832 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 00:06:47.281861 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 12 00:06:47.281883 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 12 00:06:47.281906 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 12 00:06:47.281927 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 12 00:06:47.281947 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:06:47.281968 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:06:47.281988 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:06:47.282013 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:06:47.282033 systemd[1]: Reached target swap.target - Swaps. Jul 12 00:06:47.282053 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:06:47.282073 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 00:06:47.282093 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 00:06:47.282114 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 12 00:06:47.282134 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 12 00:06:47.282154 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jul 12 00:06:47.282174 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:06:47.282199 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:06:47.282219 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:06:47.282239 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 12 00:06:47.282259 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:06:47.282279 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 12 00:06:47.282299 systemd[1]: Starting systemd-fsck-usr.service... Jul 12 00:06:47.282319 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 00:06:47.282339 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 00:06:47.282364 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:06:47.282385 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 12 00:06:47.282405 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:06:47.282425 systemd[1]: Finished systemd-fsck-usr.service. Jul 12 00:06:47.282446 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 12 00:06:47.282517 systemd-journald[250]: Collecting audit messages is disabled. Jul 12 00:06:47.282563 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:06:47.282584 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 12 00:06:47.282610 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 00:06:47.282630 kernel: Bridge firewalling registered Jul 12 00:06:47.282650 systemd-journald[250]: Journal started Jul 12 00:06:47.282687 systemd-journald[250]: Runtime Journal (/run/log/journal/ec20f6b342176723b6bc2c5ed83b6b33) is 8.0M, max 75.3M, 67.3M free. Jul 12 00:06:47.231838 systemd-modules-load[251]: Inserted module 'overlay' Jul 12 00:06:47.281773 systemd-modules-load[251]: Inserted module 'br_netfilter' Jul 12 00:06:47.293031 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 00:06:47.301217 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 00:06:47.301615 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 00:06:47.309989 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:06:47.324288 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 00:06:47.334009 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:06:47.359848 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:06:47.376629 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:06:47.384301 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:06:47.395159 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 12 00:06:47.405468 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:06:47.418743 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 12 00:06:47.445736 dracut-cmdline[285]: dracut-dracut-053 Jul 12 00:06:47.455251 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c Jul 12 00:06:47.502604 systemd-resolved[288]: Positive Trust Anchors: Jul 12 00:06:47.502632 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:06:47.507021 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:06:47.612378 kernel: SCSI subsystem initialized Jul 12 00:06:47.619809 kernel: Loading iSCSI transport class v2.0-870. Jul 12 00:06:47.632812 kernel: iscsi: registered transport (tcp) Jul 12 00:06:47.655830 kernel: iscsi: registered transport (qla4xxx) Jul 12 00:06:47.655907 kernel: QLogic iSCSI HBA Driver Jul 12 00:06:47.754835 kernel: random: crng init done Jul 12 00:06:47.755227 systemd-resolved[288]: Defaulting to hostname 'linux'. Jul 12 00:06:47.761389 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:06:47.766978 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:06:47.786970 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 12 00:06:47.800976 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 12 00:06:47.847429 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 12 00:06:47.847567 kernel: device-mapper: uevent: version 1.0.3 Jul 12 00:06:47.847596 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 12 00:06:47.916749 kernel: raid6: neonx8 gen() 6739 MB/s Jul 12 00:06:47.933740 kernel: raid6: neonx4 gen() 6553 MB/s Jul 12 00:06:47.950737 kernel: raid6: neonx2 gen() 5450 MB/s Jul 12 00:06:47.967741 kernel: raid6: neonx1 gen() 3955 MB/s Jul 12 00:06:47.984729 kernel: raid6: int64x8 gen() 3821 MB/s Jul 12 00:06:48.001729 kernel: raid6: int64x4 gen() 3707 MB/s Jul 12 00:06:48.018726 kernel: raid6: int64x2 gen() 3604 MB/s Jul 12 00:06:48.036724 kernel: raid6: int64x1 gen() 2765 MB/s Jul 12 00:06:48.036765 kernel: raid6: using algorithm neonx8 gen() 6739 MB/s
Jul 12 00:06:48.055742 kernel: raid6: .... xor() 4859 MB/s, rmw enabled Jul 12 00:06:48.055817 kernel: raid6: using neon recovery algorithm Jul 12 00:06:48.065062 kernel: xor: measuring software checksum speed Jul 12 00:06:48.065133 kernel: 8regs : 10971 MB/sec Jul 12 00:06:48.066294 kernel: 32regs : 11947 MB/sec Jul 12 00:06:48.067597 kernel: arm64_neon : 9574 MB/sec Jul 12 00:06:48.067629 kernel: xor: using function: 32regs (11947 MB/sec) Jul 12 00:06:48.153797 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 12 00:06:48.172741 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 12 00:06:48.189966 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:06:48.225334 systemd-udevd[470]: Using default interface naming scheme 'v255'. Jul 12 00:06:48.233223 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:06:48.249047 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 12 00:06:48.279938 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation Jul 12 00:06:48.344006 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 12 00:06:48.355116 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 00:06:48.477865 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:06:48.494534 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 12 00:06:48.542540 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 12 00:06:48.546547 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 00:06:48.546686 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:06:48.547429 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 00:06:48.560892 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 12 00:06:48.601132 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 12 00:06:48.666435 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 12 00:06:48.666500 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jul 12 00:06:48.675378 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 12 00:06:48.675770 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 12 00:06:48.690095 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:80:f3:39:74:3d Jul 12 00:06:48.694447 (udev-worker)[522]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:06:48.718187 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:06:48.719239 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:06:48.727320 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 00:06:48.731267 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:06:48.732973 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:06:48.736994 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:06:48.752736 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 12 00:06:48.752831 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 12 00:06:48.754207 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:06:48.765727 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 12 00:06:48.773726 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 12 00:06:48.773795 kernel: GPT:9289727 != 16777215 Jul 12 00:06:48.773821 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 12 00:06:48.773856 kernel: GPT:9289727 != 16777215 Jul 12 00:06:48.773881 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 12 00:06:48.773905 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 12 00:06:48.787003 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:06:48.802214 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 00:06:48.847181 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:06:48.891861 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (519) Jul 12 00:06:48.915757 kernel: BTRFS: device fsid 394cecf3-1fd4-438a-991e-dc2b4121da0c devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (526) Jul 12 00:06:48.975727 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jul 12 00:06:49.013175 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jul 12 00:06:49.029641 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jul 12 00:06:49.032314 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jul 12 00:06:49.047436 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 12 00:06:49.063074 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 12 00:06:49.079127 disk-uuid[661]: Primary Header is updated. Jul 12 00:06:49.079127 disk-uuid[661]: Secondary Entries is updated. Jul 12 00:06:49.079127 disk-uuid[661]: Secondary Header is updated. Jul 12 00:06:49.088878 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 12 00:06:50.110721 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 12 00:06:50.113921 disk-uuid[662]: The operation has completed successfully. Jul 12 00:06:50.289395 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 00:06:50.291781 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 12 00:06:50.338032 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 12 00:06:50.357862 sh[1008]: Success Jul 12 00:06:50.376723 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 12 00:06:50.473797 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 12 00:06:50.492923 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 12 00:06:50.502801 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 12 00:06:50.545340 kernel: BTRFS info (device dm-0): first mount of filesystem 394cecf3-1fd4-438a-991e-dc2b4121da0c Jul 12 00:06:50.545401 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:06:50.547374 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 12 00:06:50.548802 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 12 00:06:50.549987 kernel: BTRFS info (device dm-0): using free space tree Jul 12 00:06:50.657726 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 12 00:06:50.683412 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 12 00:06:50.688894 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 12 00:06:50.700119 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 12 00:06:50.705968 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 12 00:06:50.743063 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:06:50.743144 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:06:50.744567 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 12 00:06:50.759734 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 12 00:06:50.779337 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 12 00:06:50.783727 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:06:50.791261 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 12 00:06:50.813191 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 12 00:06:50.880617 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 00:06:50.898966 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:06:50.945892 systemd-networkd[1201]: lo: Link UP Jul 12 00:06:50.945912 systemd-networkd[1201]: lo: Gained carrier Jul 12 00:06:50.949477 systemd-networkd[1201]: Enumeration completed Jul 12 00:06:50.950444 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:06:50.950451 systemd-networkd[1201]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:06:50.951578 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 00:06:50.957252 systemd-networkd[1201]: eth0: Link UP Jul 12 00:06:50.957261 systemd-networkd[1201]: eth0: Gained carrier Jul 12 00:06:50.957279 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:06:50.960152 systemd[1]: Reached target network.target - Network. Jul 12 00:06:50.991821 systemd-networkd[1201]: eth0: DHCPv4 address 172.31.18.25/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 12 00:06:51.245555 ignition[1141]: Ignition 2.19.0 Jul 12 00:06:51.245582 ignition[1141]: Stage: fetch-offline Jul 12 00:06:51.253967 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jul 12 00:06:51.247250 ignition[1141]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:06:51.247277 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:06:51.248001 ignition[1141]: Ignition finished successfully Jul 12 00:06:51.271604 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 12 00:06:51.306267 ignition[1211]: Ignition 2.19.0 Jul 12 00:06:51.306289 ignition[1211]: Stage: fetch Jul 12 00:06:51.307609 ignition[1211]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:06:51.307638 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:06:51.308206 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:06:51.332474 ignition[1211]: PUT result: OK Jul 12 00:06:51.336602 ignition[1211]: parsed url from cmdline: "" Jul 12 00:06:51.336638 ignition[1211]: no config URL provided Jul 12 00:06:51.336665 ignition[1211]: reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:06:51.336738 ignition[1211]: no config at "/usr/lib/ignition/user.ign" Jul 12 00:06:51.336793 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:06:51.339455 ignition[1211]: PUT result: OK Jul 12 00:06:51.339566 ignition[1211]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 12 00:06:51.346146 ignition[1211]: GET result: OK Jul 12 00:06:51.346307 ignition[1211]: parsing config with SHA512: 72c59bc115d47e1ca9b70453677a944f2fa18d50bb414322dc66cd95215255ab55199ccfc503fc097b08e1b71ac3eb75faab24c227c3f2c03d2b1d9ad1fd36ae Jul 12 00:06:51.361330 unknown[1211]: fetched base config from "system" Jul 12 00:06:51.361680 unknown[1211]: fetched base config from "system" Jul 12 00:06:51.363319 ignition[1211]: fetch: fetch complete Jul 12 00:06:51.361726 unknown[1211]: fetched user config from "aws" Jul 12 00:06:51.363335 ignition[1211]: fetch: fetch passed Jul 12 00:06:51.363478 ignition[1211]: Ignition finished successfully Jul 12 00:06:51.378762 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 12 00:06:51.391197 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 12 00:06:51.420623 ignition[1217]: Ignition 2.19.0 Jul 12 00:06:51.421196 ignition[1217]: Stage: kargs Jul 12 00:06:51.421986 ignition[1217]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:06:51.422015 ignition[1217]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:06:51.422174 ignition[1217]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:06:51.431505 ignition[1217]: PUT result: OK Jul 12 00:06:51.436954 ignition[1217]: kargs: kargs passed Jul 12 00:06:51.437111 ignition[1217]: Ignition finished successfully Jul 12 00:06:51.441907 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 12 00:06:51.455214 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 12 00:06:51.481544 ignition[1223]: Ignition 2.19.0 Jul 12 00:06:51.482101 ignition[1223]: Stage: disks Jul 12 00:06:51.483137 ignition[1223]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:06:51.483163 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:06:51.483324 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:06:51.492449 ignition[1223]: PUT result: OK Jul 12 00:06:51.497567 ignition[1223]: disks: disks passed Jul 12 00:06:51.497787 ignition[1223]: Ignition finished successfully Jul 12 00:06:51.501199 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jul 12 00:06:51.507612 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 12 00:06:51.510282 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 12 00:06:51.513015 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 00:06:51.515771 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:06:51.527644 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:06:51.539001 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 12 00:06:51.590117 systemd-fsck[1231]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 12 00:06:51.595656 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 12 00:06:51.610907 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 12 00:06:51.705737 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 44c8362f-9431-4909-bc9a-f90e514bd0e9 r/w with ordered data mode. Quota mode: none. Jul 12 00:06:51.707401 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 12 00:06:51.709387 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 12 00:06:51.728972 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 12 00:06:51.731880 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 12 00:06:51.741187 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 12 00:06:51.741297 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 12 00:06:51.741347 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 00:06:51.762336 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 12 00:06:51.770081 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 12 00:06:51.780730 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1250) Jul 12 00:06:51.784991 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:06:51.785050 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:06:51.785078 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 12 00:06:51.801740 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 12 00:06:51.804072 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 12 00:06:52.201064 initrd-setup-root[1274]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 00:06:52.220611 initrd-setup-root[1281]: cut: /sysroot/etc/group: No such file or directory Jul 12 00:06:52.230123 initrd-setup-root[1288]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 00:06:52.239386 initrd-setup-root[1295]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 00:06:52.513050 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 12 00:06:52.520897 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 12 00:06:52.525393 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 12 00:06:52.553901 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 12 00:06:52.558751 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:06:52.588194 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 12 00:06:52.612284 ignition[1364]: INFO : Ignition 2.19.0 Jul 12 00:06:52.612284 ignition[1364]: INFO : Stage: mount Jul 12 00:06:52.618001 ignition[1364]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:06:52.618001 ignition[1364]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:06:52.618001 ignition[1364]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:06:52.618001 ignition[1364]: INFO : PUT result: OK Jul 12 00:06:52.630650 ignition[1364]: INFO : mount: mount passed Jul 12 00:06:52.632487 ignition[1364]: INFO : Ignition finished successfully Jul 12 00:06:52.637836 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 12 00:06:52.650967 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 12 00:06:52.716081 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 12 00:06:52.748738 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1374) Jul 12 00:06:52.752925 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:06:52.753001 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:06:52.753028 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 12 00:06:52.759725 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 12 00:06:52.763498 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 12 00:06:52.800761 ignition[1390]: INFO : Ignition 2.19.0 Jul 12 00:06:52.800761 ignition[1390]: INFO : Stage: files Jul 12 00:06:52.804421 ignition[1390]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:06:52.804421 ignition[1390]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:06:52.804421 ignition[1390]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:06:52.812519 ignition[1390]: INFO : PUT result: OK Jul 12 00:06:52.819773 ignition[1390]: DEBUG : files: compiled without relabeling support, skipping Jul 12 00:06:52.828275 ignition[1390]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 12 00:06:52.828275 ignition[1390]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 12 00:06:52.853460 ignition[1390]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 12 00:06:52.856890 ignition[1390]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 12 00:06:52.859971 ignition[1390]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 12 00:06:52.859118 unknown[1390]: wrote ssh authorized keys file for user: core Jul 12 00:06:52.867749 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 12 00:06:52.867749 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jul 12 00:06:52.910940 systemd-networkd[1201]: eth0: Gained IPv6LL Jul 12 00:06:52.966008 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 12 00:06:53.140584 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 12 00:06:53.140584 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 12 
00:06:53.149997 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 12 00:06:53.149997 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:06:53.149997 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:06:53.149997 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:06:53.149997 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:06:53.149997 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:06:53.149997 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:06:53.149997 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:06:53.149997 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:06:53.149997 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 12 00:06:53.149997 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 12 00:06:53.149997 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 12 00:06:53.149997 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 12 00:06:53.930928 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 12 00:06:54.333538 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 12 00:06:54.333538 ignition[1390]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 12 00:06:54.341515 ignition[1390]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:06:54.341515 ignition[1390]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:06:54.341515 ignition[1390]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 12 00:06:54.341515 ignition[1390]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 12 00:06:54.341515 ignition[1390]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 12 00:06:54.341515 ignition[1390]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:06:54.341515 ignition[1390]: INFO : files: createResultFile: createFiles: op(e): [finished] 
writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:06:54.341515 ignition[1390]: INFO : files: files passed Jul 12 00:06:54.341515 ignition[1390]: INFO : Ignition finished successfully Jul 12 00:06:54.372975 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 12 00:06:54.392665 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 12 00:06:54.401456 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 12 00:06:54.411581 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 12 00:06:54.413998 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 12 00:06:54.443524 initrd-setup-root-after-ignition[1419]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:06:54.443524 initrd-setup-root-after-ignition[1419]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:06:54.451997 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:06:54.458168 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 00:06:54.464155 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 12 00:06:54.476078 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 12 00:06:54.526625 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 12 00:06:54.528756 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 12 00:06:54.536959 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 12 00:06:54.539275 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 12 00:06:54.541591 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 12 00:06:54.551150 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 12 00:06:54.597181 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 12 00:06:54.613983 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 12 00:06:54.638838 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:06:54.641606 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:06:54.646828 systemd[1]: Stopped target timers.target - Timer Units. Jul 12 00:06:54.651396 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 12 00:06:54.651639 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 12 00:06:54.655055 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 12 00:06:54.657800 systemd[1]: Stopped target basic.target - Basic System. Jul 12 00:06:54.667217 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 12 00:06:54.670068 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 00:06:54.682062 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 12 00:06:54.684977 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 12 00:06:54.692377 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 00:06:54.695520 systemd[1]: Stopped target sysinit.target - System Initialization. 
Jul 12 00:06:54.698637 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 12 00:06:54.704517 systemd[1]: Stopped target swap.target - Swaps. Jul 12 00:06:54.708322 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 12 00:06:54.708734 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 12 00:06:54.720604 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:06:54.723721 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:06:54.731551 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 12 00:06:54.734010 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:06:54.737426 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 12 00:06:54.737674 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 12 00:06:54.748396 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 12 00:06:54.748956 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 00:06:54.754193 systemd[1]: ignition-files.service: Deactivated successfully. Jul 12 00:06:54.754417 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 12 00:06:54.768994 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 12 00:06:54.774533 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 12 00:06:54.774980 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:06:54.790036 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 12 00:06:54.795411 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 12 00:06:54.797761 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:06:54.814103 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 12 00:06:54.814377 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 12 00:06:54.833795 ignition[1443]: INFO : Ignition 2.19.0 Jul 12 00:06:54.833795 ignition[1443]: INFO : Stage: umount Jul 12 00:06:54.838264 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 12 00:06:54.859338 ignition[1443]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:06:54.859338 ignition[1443]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:06:54.859338 ignition[1443]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:06:54.859338 ignition[1443]: INFO : PUT result: OK Jul 12 00:06:54.838532 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 12 00:06:54.885843 ignition[1443]: INFO : umount: umount passed Jul 12 00:06:54.885843 ignition[1443]: INFO : Ignition finished successfully Jul 12 00:06:54.881282 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 12 00:06:54.883602 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 12 00:06:54.883955 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 12 00:06:54.889590 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 12 00:06:54.889841 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 12 00:06:54.894654 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 12 00:06:54.894870 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 12 00:06:54.902637 systemd[1]: ignition-fetch.service: Deactivated successfully. 
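[Editor's note] Every Ignition stage above opens with "PUT http://169.254.169.254/latest/api/token: attempt #1 ... PUT result: OK": the IMDSv2 session-token handshake, a PUT for a token followed by token-authenticated GETs. A minimal standard-library sketch of the same exchange; the token endpoint and header names are standard IMDSv2, and the dated metadata path is the one coreos-metadata uses later in this log.

    # Minimal IMDSv2 sketch mirroring the PUT-then-GET pattern in the log.
    import urllib.request

    token_req = urllib.request.Request(
        "http://169.254.169.254/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req, timeout=2).read().decode()

    meta_req = urllib.request.Request(
        "http://169.254.169.254/2021-01-03/meta-data/instance-id",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(meta_req, timeout=2).read().decode())
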
Jul 12 00:06:54.902795 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 12 00:06:54.907067 systemd[1]: Stopped target network.target - Network. Jul 12 00:06:54.910231 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 12 00:06:54.910410 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 00:06:54.922709 systemd[1]: Stopped target paths.target - Path Units. Jul 12 00:06:54.924830 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 12 00:06:54.927298 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:06:54.930223 systemd[1]: Stopped target slices.target - Slice Units. Jul 12 00:06:54.932679 systemd[1]: Stopped target sockets.target - Socket Units. Jul 12 00:06:54.935040 systemd[1]: iscsid.socket: Deactivated successfully. Jul 12 00:06:54.935139 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 00:06:54.937499 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 12 00:06:54.937586 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 00:06:54.941292 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 12 00:06:54.941397 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 12 00:06:54.950900 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 12 00:06:54.951015 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 12 00:06:54.952677 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 12 00:06:54.955557 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 12 00:06:54.987124 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 12 00:06:54.989678 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 12 00:06:54.989865 systemd-networkd[1201]: eth0: DHCPv6 lease lost Jul 12 00:06:55.013492 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 12 00:06:55.014034 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 12 00:06:55.027311 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 12 00:06:55.027482 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:06:55.047899 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 12 00:06:55.053245 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 12 00:06:55.053375 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 00:06:55.062100 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:06:55.062217 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:06:55.076836 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 12 00:06:55.076992 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 12 00:06:55.079616 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 12 00:06:55.079772 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:06:55.083306 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:06:55.109630 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 12 00:06:55.109911 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 12 00:06:55.122587 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Jul 12 00:06:55.123899 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 12 00:06:55.136132 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 00:06:55.137525 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:06:55.145640 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 12 00:06:55.147003 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 12 00:06:55.153860 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 12 00:06:55.153965 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:06:55.157274 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 00:06:55.157406 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 12 00:06:55.170032 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 12 00:06:55.170163 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 12 00:06:55.173903 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:06:55.174416 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:06:55.194144 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 12 00:06:55.196737 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 12 00:06:55.196882 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:06:55.201062 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:06:55.201195 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:06:55.207731 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 12 00:06:55.208000 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 12 00:06:55.224468 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 12 00:06:55.225268 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 12 00:06:55.235564 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 12 00:06:55.263127 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 12 00:06:55.291104 systemd[1]: Switching root. Jul 12 00:06:55.337811 systemd-journald[250]: Journal stopped Jul 12 00:06:57.272844 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). Jul 12 00:06:57.273762 kernel: SELinux: policy capability network_peer_controls=1 Jul 12 00:06:57.273807 kernel: SELinux: policy capability open_perms=1 Jul 12 00:06:57.273840 kernel: SELinux: policy capability extended_socket_class=1 Jul 12 00:06:57.273879 kernel: SELinux: policy capability always_check_network=0 Jul 12 00:06:57.273911 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 12 00:06:57.273941 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 12 00:06:57.273978 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 12 00:06:57.274012 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 12 00:06:57.274043 kernel: audit: type=1403 audit(1752278815.587:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 12 00:06:57.274081 systemd[1]: Successfully loaded SELinux policy in 55.010ms. Jul 12 00:06:57.274126 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.664ms. 
Jul 12 00:06:57.274159 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 12 00:06:57.274191 systemd[1]: Detected virtualization amazon. Jul 12 00:06:57.274223 systemd[1]: Detected architecture arm64. Jul 12 00:06:57.274263 systemd[1]: Detected first boot. Jul 12 00:06:57.274295 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:06:57.274331 zram_generator::config[1485]: No configuration found. Jul 12 00:06:57.274364 systemd[1]: Populated /etc with preset unit settings. Jul 12 00:06:57.274394 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 12 00:06:57.274423 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 12 00:06:57.274457 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 12 00:06:57.274491 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 12 00:06:57.274526 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 12 00:06:57.274557 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 12 00:06:57.274592 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 12 00:06:57.274623 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 12 00:06:57.274655 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 12 00:06:57.274685 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 12 00:06:57.280841 systemd[1]: Created slice user.slice - User and Session Slice. Jul 12 00:06:57.280881 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:06:57.280913 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:06:57.280947 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 12 00:06:57.280979 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 12 00:06:57.281019 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 12 00:06:57.281052 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 00:06:57.281083 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 12 00:06:57.281115 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:06:57.281148 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 12 00:06:57.281179 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 12 00:06:57.281210 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 12 00:06:57.281244 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 12 00:06:57.281274 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:06:57.281306 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 00:06:57.281339 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:06:57.281371 systemd[1]: Reached target swap.target - Swaps. 
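[Editor's note] "Initializing machine ID from VM UUID" above means systemd seeded /etc/machine-id on first boot from the hypervisor-provided DMI product UUID rather than generating a random one. A sketch of where that UUID lives; the sysfs path is the standard DMI location (root-only to read on most systems), and the dash-stripping below only illustrates the 32-hex-character shape of /etc/machine-id, not systemd's exact derivation.

    # Sketch: the VM UUID that systemd's machine-id setup reads on EC2.
    from pathlib import Path

    uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    machine_id_shape = uuid.replace("-", "").lower()  # same 32-hex shape as /etc/machine-id
    print(uuid, "->", machine_id_shape)
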
Jul 12 00:06:57.281401 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 12 00:06:57.281431 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 12 00:06:57.281464 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:06:57.281495 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:06:57.281525 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:06:57.281560 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 12 00:06:57.281596 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 12 00:06:57.281628 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 12 00:06:57.281657 systemd[1]: Mounting media.mount - External Media Directory... Jul 12 00:06:57.281686 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 12 00:06:57.281757 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 12 00:06:57.281791 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 12 00:06:57.281822 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 12 00:06:57.281859 systemd[1]: Reached target machines.target - Containers. Jul 12 00:06:57.281892 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 12 00:06:57.281921 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:06:57.281955 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:06:57.281984 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 12 00:06:57.282013 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:06:57.282042 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:06:57.282074 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:06:57.282103 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 12 00:06:57.282135 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:06:57.282165 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 12 00:06:57.282194 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 12 00:06:57.282226 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 12 00:06:57.282255 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 12 00:06:57.282286 systemd[1]: Stopped systemd-fsck-usr.service. Jul 12 00:06:57.282318 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 00:06:57.282347 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 00:06:57.282380 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 12 00:06:57.282412 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 12 00:06:57.282442 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 00:06:57.282476 systemd[1]: verity-setup.service: Deactivated successfully. 
Jul 12 00:06:57.282508 systemd[1]: Stopped verity-setup.service. Jul 12 00:06:57.282538 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 12 00:06:57.282567 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 12 00:06:57.282595 systemd[1]: Mounted media.mount - External Media Directory. Jul 12 00:06:57.282635 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 12 00:06:57.282668 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 12 00:06:57.286761 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 12 00:06:57.286890 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:06:57.286924 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 00:06:57.286957 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 12 00:06:57.286997 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:06:57.287027 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:06:57.287059 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:06:57.287089 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:06:57.287121 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 12 00:06:57.287150 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 00:06:57.287236 systemd-journald[1567]: Collecting audit messages is disabled. Jul 12 00:06:57.287293 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 12 00:06:57.287324 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 12 00:06:57.287353 kernel: fuse: init (API version 7.39) Jul 12 00:06:57.287385 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 00:06:57.287415 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 00:06:57.287449 systemd-journald[1567]: Journal started Jul 12 00:06:57.287503 systemd-journald[1567]: Runtime Journal (/run/log/journal/ec20f6b342176723b6bc2c5ed83b6b33) is 8.0M, max 75.3M, 67.3M free. Jul 12 00:06:56.648363 systemd[1]: Queued start job for default target multi-user.target. Jul 12 00:06:56.675013 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 12 00:06:56.675824 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 12 00:06:57.293995 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 12 00:06:57.310118 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 12 00:06:57.321055 kernel: loop: module loaded Jul 12 00:06:57.321143 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 12 00:06:57.328762 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:06:57.341727 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 12 00:06:57.358029 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:06:57.358132 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jul 12 00:06:57.379606 kernel: ACPI: bus type drm_connector registered Jul 12 00:06:57.382837 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 12 00:06:57.395133 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 00:06:57.394878 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 12 00:06:57.399566 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:06:57.399980 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:06:57.403511 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 00:06:57.403933 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 12 00:06:57.407448 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:06:57.408855 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:06:57.412795 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 00:06:57.416674 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 12 00:06:57.424806 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 12 00:06:57.479646 kernel: loop0: detected capacity change from 0 to 114328 Jul 12 00:06:57.506429 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 12 00:06:57.510207 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 12 00:06:57.529442 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 12 00:06:57.541238 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 12 00:06:57.545008 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 12 00:06:57.559426 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 12 00:06:57.562294 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:06:57.576039 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:06:57.582390 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 12 00:06:57.589996 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 12 00:06:57.627740 kernel: loop1: detected capacity change from 0 to 114432 Jul 12 00:06:57.637571 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 12 00:06:57.647931 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 12 00:06:57.656743 systemd-journald[1567]: Time spent on flushing to /var/log/journal/ec20f6b342176723b6bc2c5ed83b6b33 is 52.857ms for 912 entries. Jul 12 00:06:57.656743 systemd-journald[1567]: System Journal (/var/log/journal/ec20f6b342176723b6bc2c5ed83b6b33) is 8.0M, max 195.6M, 187.6M free. Jul 12 00:06:57.720492 systemd-journald[1567]: Received client request to flush runtime journal. Jul 12 00:06:57.720581 kernel: loop2: detected capacity change from 0 to 207008 Jul 12 00:06:57.725678 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 12 00:06:57.755803 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:06:57.778089 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:06:57.791026 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Jul 12 00:06:57.817944 udevadm[1632]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 12 00:06:57.834653 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 12 00:06:57.847333 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 00:06:57.928105 systemd-tmpfiles[1634]: ACLs are not supported, ignoring. Jul 12 00:06:57.928839 systemd-tmpfiles[1634]: ACLs are not supported, ignoring. Jul 12 00:06:57.944212 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:06:57.992797 kernel: loop3: detected capacity change from 0 to 52536 Jul 12 00:06:58.129189 kernel: loop4: detected capacity change from 0 to 114328 Jul 12 00:06:58.158234 kernel: loop5: detected capacity change from 0 to 114432 Jul 12 00:06:58.188185 kernel: loop6: detected capacity change from 0 to 207008 Jul 12 00:06:58.225122 kernel: loop7: detected capacity change from 0 to 52536 Jul 12 00:06:58.240062 (sd-merge)[1639]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 12 00:06:58.244393 (sd-merge)[1639]: Merged extensions into '/usr'. Jul 12 00:06:58.256765 systemd[1]: Reloading requested from client PID 1594 ('systemd-sysext') (unit systemd-sysext.service)... Jul 12 00:06:58.256988 systemd[1]: Reloading... Jul 12 00:06:58.388725 ldconfig[1591]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 12 00:06:58.451730 zram_generator::config[1666]: No configuration found. Jul 12 00:06:58.767382 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:06:58.879156 systemd[1]: Reloading finished in 620 ms. Jul 12 00:06:58.919306 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 12 00:06:58.922325 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 12 00:06:58.925534 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 12 00:06:58.945991 systemd[1]: Starting ensure-sysext.service... Jul 12 00:06:58.949981 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:06:58.975190 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:06:58.985094 systemd[1]: Reloading requested from client PID 1720 ('systemctl') (unit ensure-sysext.service)... Jul 12 00:06:58.985121 systemd[1]: Reloading... Jul 12 00:06:59.023868 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 12 00:06:59.024594 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 12 00:06:59.031389 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 00:06:59.032170 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. Jul 12 00:06:59.032341 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. Jul 12 00:06:59.043427 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. 
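[Editor's note] The "(sd-merge)" lines above are systemd-sysext overlaying the four extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-ami) onto /usr; each earlier "loopN: detected capacity change" is one of those squashfs images being attached (the kernel's squashfs banner appears just before them). A sketch of the minimal tree such an extension must carry to be merged, per the documented sysext contract: an extension-release file whose fields are compatible with the host os-release. Directory and file names here are illustrative.

    # Sketch: minimal tree for a systemd-sysext extension named "kubernetes".
    # systemd-sysext refuses to merge unless the extension-release file's
    # ID (or ID=_any) and version/level fields match the host os-release.
    from pathlib import Path

    root = Path("kubernetes-ext")
    rel = root / "usr/lib/extension-release.d/extension-release.kubernetes"
    rel.parent.mkdir(parents=True, exist_ok=True)
    rel.write_text("ID=flatcar\nSYSEXT_LEVEL=1.0\n")
    (root / "usr/bin").mkdir(parents=True, exist_ok=True)
    # Payload binaries (kubelet, kubectl, ...) would go under usr/bin; the
    # tree is then packed into the .raw image Ignition downloaded earlier.
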
Jul 12 00:06:59.043450 systemd-tmpfiles[1721]: Skipping /boot Jul 12 00:06:59.072110 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:06:59.072923 systemd-tmpfiles[1721]: Skipping /boot Jul 12 00:06:59.104199 systemd-udevd[1722]: Using default interface naming scheme 'v255'. Jul 12 00:06:59.202622 zram_generator::config[1749]: No configuration found. Jul 12 00:06:59.374406 (udev-worker)[1756]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:06:59.557824 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:06:59.571188 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1753) Jul 12 00:06:59.746134 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 12 00:06:59.747375 systemd[1]: Reloading finished in 761 ms. Jul 12 00:06:59.783158 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:06:59.787070 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:06:59.950614 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 12 00:06:59.957769 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 12 00:06:59.964603 systemd[1]: Finished ensure-sysext.service. Jul 12 00:06:59.985064 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:06:59.993049 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 12 00:07:00.000642 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:07:00.014810 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 12 00:07:00.028131 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:07:00.035398 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:07:00.042250 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:07:00.050663 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:07:00.053416 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:07:00.067102 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 12 00:07:00.074449 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 12 00:07:00.083552 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:07:00.093639 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 00:07:00.096140 systemd[1]: Reached target time-set.target - System Time Set. Jul 12 00:07:00.102917 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 12 00:07:00.136495 lvm[1925]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:07:00.137185 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:07:00.141458 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 12 00:07:00.144204 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:07:00.148065 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:07:00.167217 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:07:00.167603 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:07:00.203912 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:07:00.204236 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:07:00.208087 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:07:00.213253 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 12 00:07:00.236233 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:07:00.237272 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:07:00.245765 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 12 00:07:00.250062 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:07:00.267371 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 12 00:07:00.291752 lvm[1951]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:07:00.297674 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 12 00:07:00.308128 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 12 00:07:00.316175 augenrules[1953]: No rules Jul 12 00:07:00.321959 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:07:00.333320 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 12 00:07:00.337011 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:07:00.341907 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 12 00:07:00.352732 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 12 00:07:00.363957 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 12 00:07:00.397507 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 12 00:07:00.415938 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 12 00:07:00.473810 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:07:00.529181 systemd-networkd[1936]: lo: Link UP Jul 12 00:07:00.529204 systemd-networkd[1936]: lo: Gained carrier Jul 12 00:07:00.531894 systemd-networkd[1936]: Enumeration completed Jul 12 00:07:00.532843 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 00:07:00.537114 systemd-networkd[1936]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:07:00.537136 systemd-networkd[1936]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 12 00:07:00.542378 systemd-networkd[1936]: eth0: Link UP Jul 12 00:07:00.543129 systemd-networkd[1936]: eth0: Gained carrier Jul 12 00:07:00.543164 systemd-networkd[1936]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:07:00.545914 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 12 00:07:00.557835 systemd-networkd[1936]: eth0: DHCPv4 address 172.31.18.25/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 12 00:07:00.559545 systemd-resolved[1937]: Positive Trust Anchors: Jul 12 00:07:00.560294 systemd-resolved[1937]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:07:00.560363 systemd-resolved[1937]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:07:00.574291 systemd-resolved[1937]: Defaulting to hostname 'linux'. Jul 12 00:07:00.577520 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:07:00.580292 systemd[1]: Reached target network.target - Network. Jul 12 00:07:00.582411 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:07:00.585107 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:07:00.587593 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 12 00:07:00.590449 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 12 00:07:00.593556 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 12 00:07:00.596170 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 12 00:07:00.599023 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 12 00:07:00.601938 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 00:07:00.602014 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:07:00.604078 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:07:00.607275 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 12 00:07:00.612427 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 12 00:07:00.626297 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 12 00:07:00.629763 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 12 00:07:00.632898 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:07:00.635340 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:07:00.637880 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:07:00.637943 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
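[Editor's note] The DHCPv4 line above ("172.31.18.25/20, gateway 172.31.16.1 acquired from 172.31.16.1") fully determines the VPC subnet; a quick standard-library check, using only the values from the log, confirms the gateway sits inside the same /20.

    # Check the DHCPv4 lease shown in the log: address, prefix, gateway.
    import ipaddress

    net = ipaddress.ip_interface("172.31.18.25/20").network
    print(net)                                         # 172.31.16.0/20
    print(ipaddress.ip_address("172.31.16.1") in net)  # True: gateway is in-subnet
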
Jul 12 00:07:00.646024 systemd[1]: Starting containerd.service - containerd container runtime... Jul 12 00:07:00.653161 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 12 00:07:00.660926 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 12 00:07:00.674117 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 12 00:07:00.681112 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 12 00:07:00.683559 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 12 00:07:00.690594 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 12 00:07:00.699140 systemd[1]: Started ntpd.service - Network Time Service. Jul 12 00:07:00.711884 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 12 00:07:00.724945 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 12 00:07:00.741923 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 12 00:07:00.749212 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 12 00:07:00.764105 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 12 00:07:00.771129 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 00:07:00.772121 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 00:07:00.777074 systemd[1]: Starting update-engine.service - Update Engine... Jul 12 00:07:00.786035 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 12 00:07:00.820008 jq[1983]: false Jul 12 00:07:00.856913 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 00:07:00.858818 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 12 00:07:00.862490 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 00:07:00.864407 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 12 00:07:00.893190 extend-filesystems[1984]: Found loop4 Jul 12 00:07:00.893190 extend-filesystems[1984]: Found loop5 Jul 12 00:07:00.893190 extend-filesystems[1984]: Found loop6 Jul 12 00:07:00.893190 extend-filesystems[1984]: Found loop7 Jul 12 00:07:00.893190 extend-filesystems[1984]: Found nvme0n1 Jul 12 00:07:00.893190 extend-filesystems[1984]: Found nvme0n1p1 Jul 12 00:07:00.893190 extend-filesystems[1984]: Found nvme0n1p2 Jul 12 00:07:00.893190 extend-filesystems[1984]: Found nvme0n1p3 Jul 12 00:07:00.893190 extend-filesystems[1984]: Found usr Jul 12 00:07:00.893190 extend-filesystems[1984]: Found nvme0n1p4 Jul 12 00:07:00.893190 extend-filesystems[1984]: Found nvme0n1p6 Jul 12 00:07:00.893190 extend-filesystems[1984]: Found nvme0n1p7 Jul 12 00:07:00.893190 extend-filesystems[1984]: Found nvme0n1p9 Jul 12 00:07:00.893190 extend-filesystems[1984]: Checking size of /dev/nvme0n1p9 Jul 12 00:07:00.921370 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
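[Editor's note] The extend-filesystems lines above walk the block devices (loop4..loop7, nvme0n1 and its partitions) before checking /dev/nvme0n1p9 for a resize. A rough sysfs-based equivalent of that enumeration, for illustration only; note that the size files under /sys/class/block count 512-byte sectors regardless of the device's logical block size.

    # Rough lsblk-style scan, similar to extend-filesystems' device walk.
    from pathlib import Path

    for dev in sorted(Path("/sys/class/block").iterdir()):
        size_file = dev / "size"
        if size_file.exists():
            sectors = int(size_file.read_text())
            print(f"{dev.name:12} {sectors * 512 / 2**30:8.2f} GiB")
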
Jul 12 00:07:00.971097 jq[1995]: true Jul 12 00:07:00.921030 dbus-daemon[1982]: [system] SELinux support is enabled Jul 12 00:07:00.942582 dbus-daemon[1982]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1936 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 12 00:07:00.972993 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 00:07:00.984647 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 12 00:07:00.973064 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 12 00:07:00.976887 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 00:07:01.027283 tar[2004]: linux-arm64/LICENSE Jul 12 00:07:01.027283 tar[2004]: linux-arm64/helm Jul 12 00:07:00.976928 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 12 00:07:01.001836 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 12 00:07:01.066887 extend-filesystems[1984]: Resized partition /dev/nvme0n1p9 Jul 12 00:07:01.080525 extend-filesystems[2027]: resize2fs 1.47.1 (20-May-2024) Jul 12 00:07:01.104220 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 12 00:07:01.104360 jq[2014]: true Jul 12 00:07:01.096981 ntpd[1986]: ntpd 4.2.8p17@1.4004-o Fri Jul 11 22:05:17 UTC 2025 (1): Starting Jul 12 00:07:01.100459 (ntainerd)[2015]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 12 00:07:01.110746 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: ntpd 4.2.8p17@1.4004-o Fri Jul 11 22:05:17 UTC 2025 (1): Starting Jul 12 00:07:01.110746 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 12 00:07:01.110746 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: ---------------------------------------------------- Jul 12 00:07:01.110746 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: ntp-4 is maintained by Network Time Foundation, Jul 12 00:07:01.110746 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 12 00:07:01.110746 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: corporation. Support and training for ntp-4 are Jul 12 00:07:01.110746 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: available at https://www.nwtime.org/support Jul 12 00:07:01.110746 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: ---------------------------------------------------- Jul 12 00:07:01.110746 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: proto: precision = 0.096 usec (-23) Jul 12 00:07:01.097030 ntpd[1986]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 12 00:07:01.108745 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 00:07:01.123267 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: basedate set to 2025-06-29 Jul 12 00:07:01.123267 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: gps base set to 2025-06-29 (week 2373) Jul 12 00:07:01.097051 ntpd[1986]: ---------------------------------------------------- Jul 12 00:07:01.110912 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
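[Editor's note] The tar[2004] lines above ("linux-arm64/LICENSE", "linux-arm64/helm") are prepare-helm.service, written and enabled by Ignition earlier, unpacking the helm tarball the files stage downloaded to /opt. A sketch of that unpack step; the /opt/bin destination comes from the unit's description in the log ("Unpack helm to /opt/bin"), and the member path matches the tar listing above.

    # Sketch: extract the helm binary from the Ignition-downloaded tarball,
    # as prepare-helm.service does.
    import tarfile

    with tarfile.open("/opt/helm-v3.17.0-linux-arm64.tar.gz") as tf:
        member = tf.getmember("linux-arm64/helm")
        member.name = "helm"              # drop the leading directory
        tf.extract(member, path="/opt/bin")
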
Jul 12 00:07:01.097097 ntpd[1986]: ntp-4 is maintained by Network Time Foundation, Jul 12 00:07:01.097121 ntpd[1986]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 12 00:07:01.097140 ntpd[1986]: corporation. Support and training for ntp-4 are Jul 12 00:07:01.097160 ntpd[1986]: available at https://www.nwtime.org/support Jul 12 00:07:01.097183 ntpd[1986]: ---------------------------------------------------- Jul 12 00:07:01.107519 ntpd[1986]: proto: precision = 0.096 usec (-23) Jul 12 00:07:01.119241 ntpd[1986]: basedate set to 2025-06-29 Jul 12 00:07:01.119275 ntpd[1986]: gps base set to 2025-06-29 (week 2373) Jul 12 00:07:01.126589 ntpd[1986]: Listen and drop on 0 v6wildcard [::]:123 Jul 12 00:07:01.127084 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: Listen and drop on 0 v6wildcard [::]:123 Jul 12 00:07:01.149735 ntpd[1986]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 12 00:07:01.149954 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 12 00:07:01.150096 ntpd[1986]: Listen normally on 2 lo 127.0.0.1:123 Jul 12 00:07:01.150541 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: Listen normally on 2 lo 127.0.0.1:123 Jul 12 00:07:01.150541 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: Listen normally on 3 eth0 172.31.18.25:123 Jul 12 00:07:01.150541 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: Listen normally on 4 lo [::1]:123 Jul 12 00:07:01.150541 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: bind(21) AF_INET6 fe80::480:f3ff:fe39:743d%2#123 flags 0x11 failed: Cannot assign requested address Jul 12 00:07:01.150541 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: unable to create socket on eth0 (5) for fe80::480:f3ff:fe39:743d%2#123 Jul 12 00:07:01.150541 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: failed to init interface for address fe80::480:f3ff:fe39:743d%2 Jul 12 00:07:01.150541 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: Listening on routing socket on fd #21 for interface updates Jul 12 00:07:01.150187 ntpd[1986]: Listen normally on 3 eth0 172.31.18.25:123 Jul 12 00:07:01.150266 ntpd[1986]: Listen normally on 4 lo [::1]:123 Jul 12 00:07:01.150364 ntpd[1986]: bind(21) AF_INET6 fe80::480:f3ff:fe39:743d%2#123 flags 0x11 failed: Cannot assign requested address Jul 12 00:07:01.150405 ntpd[1986]: unable to create socket on eth0 (5) for fe80::480:f3ff:fe39:743d%2#123 Jul 12 00:07:01.150437 ntpd[1986]: failed to init interface for address fe80::480:f3ff:fe39:743d%2 Jul 12 00:07:01.150496 ntpd[1986]: Listening on routing socket on fd #21 for interface updates Jul 12 00:07:01.191775 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 12 00:07:01.191864 update_engine[1994]: I20250712 00:07:01.187171 1994 main.cc:92] Flatcar Update Engine starting Jul 12 00:07:01.201750 extend-filesystems[2027]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 12 00:07:01.201750 extend-filesystems[2027]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 12 00:07:01.201750 extend-filesystems[2027]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
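[Editor's note] The resize2fs report above grew /dev/nvme0n1p9 online from 553472 to 1489915 blocks of 4 KiB, i.e. from roughly 2.1 GiB to roughly 5.7 GiB; the arithmetic below uses only the figures from the log.

    # Convert the resize2fs block counts from the log into GiB.
    before, after, block = 553472, 1489915, 4096
    print(before * block / 2**30)   # ~2.11 GiB
    print(after * block / 2**30)    # ~5.68 GiB
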
Jul 12 00:07:01.223114 extend-filesystems[1984]: Resized filesystem in /dev/nvme0n1p9 Jul 12 00:07:01.234377 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 12 00:07:01.234377 ntpd[1986]: 12 Jul 00:07:01 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 12 00:07:01.231482 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 12 00:07:01.231550 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 12 00:07:01.244259 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 00:07:01.260992 update_engine[1994]: I20250712 00:07:01.248522 1994 update_check_scheduler.cc:74] Next update check in 10m17s Jul 12 00:07:01.245837 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 12 00:07:01.249154 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 12 00:07:01.251869 systemd[1]: Started update-engine.service - Update Engine. Jul 12 00:07:01.260914 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 12 00:07:01.274040 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 12 00:07:01.283826 coreos-metadata[1981]: Jul 12 00:07:01.282 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 12 00:07:01.286723 coreos-metadata[1981]: Jul 12 00:07:01.286 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 12 00:07:01.295869 coreos-metadata[1981]: Jul 12 00:07:01.292 INFO Fetch successful Jul 12 00:07:01.295869 coreos-metadata[1981]: Jul 12 00:07:01.292 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 12 00:07:01.299104 coreos-metadata[1981]: Jul 12 00:07:01.298 INFO Fetch successful Jul 12 00:07:01.299104 coreos-metadata[1981]: Jul 12 00:07:01.298 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 12 00:07:01.303867 coreos-metadata[1981]: Jul 12 00:07:01.301 INFO Fetch successful Jul 12 00:07:01.303867 coreos-metadata[1981]: Jul 12 00:07:01.301 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 12 00:07:01.312955 coreos-metadata[1981]: Jul 12 00:07:01.305 INFO Fetch successful Jul 12 00:07:01.312955 coreos-metadata[1981]: Jul 12 00:07:01.306 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 12 00:07:01.312955 coreos-metadata[1981]: Jul 12 00:07:01.310 INFO Fetch failed with 404: resource not found Jul 12 00:07:01.312955 coreos-metadata[1981]: Jul 12 00:07:01.311 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 12 00:07:01.315528 coreos-metadata[1981]: Jul 12 00:07:01.315 INFO Fetch successful Jul 12 00:07:01.315528 coreos-metadata[1981]: Jul 12 00:07:01.315 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 12 00:07:01.321273 coreos-metadata[1981]: Jul 12 00:07:01.317 INFO Fetch successful Jul 12 00:07:01.321273 coreos-metadata[1981]: Jul 12 00:07:01.318 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 12 00:07:01.321741 coreos-metadata[1981]: Jul 12 00:07:01.321 INFO Fetch successful Jul 12 00:07:01.321844 coreos-metadata[1981]: Jul 12 00:07:01.321 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 12 00:07:01.326934 coreos-metadata[1981]: Jul 12 00:07:01.322 INFO Fetch successful Jul 12 00:07:01.326934 coreos-metadata[1981]: Jul 12 00:07:01.322 INFO Fetching 
http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 12 00:07:01.327125 coreos-metadata[1981]: Jul 12 00:07:01.326 INFO Fetch successful Jul 12 00:07:01.433793 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1753) Jul 12 00:07:01.441836 bash[2062]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:07:01.445352 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 12 00:07:01.456044 systemd[1]: Starting sshkeys.service... Jul 12 00:07:01.460031 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 12 00:07:01.463042 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 12 00:07:01.525609 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 12 00:07:01.536281 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 12 00:07:01.586285 systemd-logind[1993]: Watching system buttons on /dev/input/event0 (Power Button) Jul 12 00:07:01.586344 systemd-logind[1993]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 12 00:07:01.591415 systemd-logind[1993]: New seat seat0. Jul 12 00:07:01.603541 systemd[1]: Started systemd-logind.service - User Login Management. Jul 12 00:07:01.857867 containerd[2015]: time="2025-07-12T00:07:01.857126148Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 12 00:07:01.920517 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 12 00:07:01.921366 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 12 00:07:01.931891 dbus-daemon[1982]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2022 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 12 00:07:01.964538 systemd[1]: Starting polkit.service - Authorization Manager... Jul 12 00:07:01.994629 coreos-metadata[2072]: Jul 12 00:07:01.992 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 12 00:07:01.997174 coreos-metadata[2072]: Jul 12 00:07:01.996 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 12 00:07:01.997118 polkitd[2151]: Started polkitd version 121 Jul 12 00:07:02.001743 coreos-metadata[2072]: Jul 12 00:07:01.998 INFO Fetch successful Jul 12 00:07:02.001743 coreos-metadata[2072]: Jul 12 00:07:01.998 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 12 00:07:02.004862 coreos-metadata[2072]: Jul 12 00:07:02.003 INFO Fetch successful Jul 12 00:07:02.010844 unknown[2072]: wrote ssh authorized keys file for user: core Jul 12 00:07:02.040192 containerd[2015]: time="2025-07-12T00:07:02.040131417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:02.054855 containerd[2015]: time="2025-07-12T00:07:02.052151109Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:07:02.054855 containerd[2015]: time="2025-07-12T00:07:02.052227777Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 12 00:07:02.054855 containerd[2015]: time="2025-07-12T00:07:02.052263597Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 12 00:07:02.054855 containerd[2015]: time="2025-07-12T00:07:02.052588833Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 12 00:07:02.054855 containerd[2015]: time="2025-07-12T00:07:02.052629489Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:02.054855 containerd[2015]: time="2025-07-12T00:07:02.052798485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:07:02.054855 containerd[2015]: time="2025-07-12T00:07:02.052834197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:02.054855 containerd[2015]: time="2025-07-12T00:07:02.053176749Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:07:02.054855 containerd[2015]: time="2025-07-12T00:07:02.053213961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:02.054855 containerd[2015]: time="2025-07-12T00:07:02.053250261Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:07:02.054855 containerd[2015]: time="2025-07-12T00:07:02.053277345Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:02.055526 containerd[2015]: time="2025-07-12T00:07:02.053458377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:02.062639 polkitd[2151]: Loading rules from directory /etc/polkit-1/rules.d Jul 12 00:07:02.062825 polkitd[2151]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 12 00:07:02.067358 polkitd[2151]: Finished loading, compiling and executing 2 rules Jul 12 00:07:02.072462 containerd[2015]: time="2025-07-12T00:07:02.068806341Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:02.072462 containerd[2015]: time="2025-07-12T00:07:02.070962189Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:07:02.072462 containerd[2015]: time="2025-07-12T00:07:02.071777541Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jul 12 00:07:02.074108 containerd[2015]: time="2025-07-12T00:07:02.074056893Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 12 00:07:02.074130 locksmithd[2042]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:07:02.075296 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 12 00:07:02.076920 systemd[1]: Started polkit.service - Authorization Manager. Jul 12 00:07:02.075948 polkitd[2151]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 12 00:07:02.081113 containerd[2015]: time="2025-07-12T00:07:02.080751393Z" level=info msg="metadata content store policy set" policy=shared Jul 12 00:07:02.100192 ntpd[1986]: bind(24) AF_INET6 fe80::480:f3ff:fe39:743d%2#123 flags 0x11 failed: Cannot assign requested address Jul 12 00:07:02.109226 update-ssh-keys[2159]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:07:02.109576 containerd[2015]: time="2025-07-12T00:07:02.099780633Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 12 00:07:02.109576 containerd[2015]: time="2025-07-12T00:07:02.099882393Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 12 00:07:02.109576 containerd[2015]: time="2025-07-12T00:07:02.099920757Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 12 00:07:02.109576 containerd[2015]: time="2025-07-12T00:07:02.099957081Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 12 00:07:02.109576 containerd[2015]: time="2025-07-12T00:07:02.099997929Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 12 00:07:02.109576 containerd[2015]: time="2025-07-12T00:07:02.100257789Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 12 00:07:02.109576 containerd[2015]: time="2025-07-12T00:07:02.100888161Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 12 00:07:02.109576 containerd[2015]: time="2025-07-12T00:07:02.101138241Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 12 00:07:02.109576 containerd[2015]: time="2025-07-12T00:07:02.101177661Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 12 00:07:02.109576 containerd[2015]: time="2025-07-12T00:07:02.101209173Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 12 00:07:02.109576 containerd[2015]: time="2025-07-12T00:07:02.101243889Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 12 00:07:02.109576 containerd[2015]: time="2025-07-12T00:07:02.101275761Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 12 00:07:02.109576 containerd[2015]: time="2025-07-12T00:07:02.101309025Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 12 00:07:02.109576 containerd[2015]: time="2025-07-12T00:07:02.101342661Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jul 12 00:07:02.100249 ntpd[1986]: unable to create socket on eth0 (6) for fe80::480:f3ff:fe39:743d%2#123 Jul 12 00:07:02.128430 containerd[2015]: time="2025-07-12T00:07:02.101375685Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 12 00:07:02.128430 containerd[2015]: time="2025-07-12T00:07:02.101407209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 12 00:07:02.128430 containerd[2015]: time="2025-07-12T00:07:02.101438205Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 12 00:07:02.128430 containerd[2015]: time="2025-07-12T00:07:02.101466237Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 12 00:07:02.128430 containerd[2015]: time="2025-07-12T00:07:02.101508789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 12 00:07:02.128430 containerd[2015]: time="2025-07-12T00:07:02.101540433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 12 00:07:02.128430 containerd[2015]: time="2025-07-12T00:07:02.101569653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 12 00:07:02.128430 containerd[2015]: time="2025-07-12T00:07:02.101601309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 12 00:07:02.128430 containerd[2015]: time="2025-07-12T00:07:02.101638737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 12 00:07:02.112788 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 12 00:07:02.100280 ntpd[1986]: failed to init interface for address fe80::480:f3ff:fe39:743d%2 Jul 12 00:07:02.140145 containerd[2015]: time="2025-07-12T00:07:02.101670273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 12 00:07:02.140145 containerd[2015]: time="2025-07-12T00:07:02.134940789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 12 00:07:02.140145 containerd[2015]: time="2025-07-12T00:07:02.138886257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 12 00:07:02.140145 containerd[2015]: time="2025-07-12T00:07:02.138956337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 12 00:07:02.140145 containerd[2015]: time="2025-07-12T00:07:02.139022877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 12 00:07:02.140145 containerd[2015]: time="2025-07-12T00:07:02.139064421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..."
type=io.containerd.grpc.v1 Jul 12 00:07:02.140145 containerd[2015]: time="2025-07-12T00:07:02.139235685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 12 00:07:02.140145 containerd[2015]: time="2025-07-12T00:07:02.139906065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 12 00:07:02.140145 containerd[2015]: time="2025-07-12T00:07:02.139999521Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 12 00:07:02.140145 containerd[2015]: time="2025-07-12T00:07:02.140076693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 12 00:07:02.140805 systemd[1]: Finished sshkeys.service. Jul 12 00:07:02.147266 containerd[2015]: time="2025-07-12T00:07:02.140109681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 12 00:07:02.147266 containerd[2015]: time="2025-07-12T00:07:02.146853849Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 12 00:07:02.154424 containerd[2015]: time="2025-07-12T00:07:02.150273633Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 12 00:07:02.154424 containerd[2015]: time="2025-07-12T00:07:02.150373317Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 12 00:07:02.154424 containerd[2015]: time="2025-07-12T00:07:02.150408681Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 12 00:07:02.154424 containerd[2015]: time="2025-07-12T00:07:02.150473805Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 12 00:07:02.154424 containerd[2015]: time="2025-07-12T00:07:02.150501141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 12 00:07:02.154424 containerd[2015]: time="2025-07-12T00:07:02.150577941Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 12 00:07:02.154424 containerd[2015]: time="2025-07-12T00:07:02.150629877Z" level=info msg="NRI interface is disabled by configuration." Jul 12 00:07:02.154424 containerd[2015]: time="2025-07-12T00:07:02.150660585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 12 00:07:02.155387 systemd-hostnamed[2022]: Hostname set to (transient) Jul 12 00:07:02.155389 systemd-resolved[1937]: System hostname changed to 'ip-172-31-18-25'. 
Jul 12 00:07:02.156270 containerd[2015]: time="2025-07-12T00:07:02.154993521Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 12 00:07:02.157068 containerd[2015]: time="2025-07-12T00:07:02.156605121Z" level=info msg="Connect containerd service" Jul 12 00:07:02.157068 containerd[2015]: time="2025-07-12T00:07:02.156734385Z" level=info msg="using legacy CRI server" Jul 12 00:07:02.157068 containerd[2015]: time="2025-07-12T00:07:02.156758373Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 12 00:07:02.162012 containerd[2015]: time="2025-07-12T00:07:02.159734013Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 12 00:07:02.164513 containerd[2015]: time="2025-07-12T00:07:02.164460969Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:07:02.169728 containerd[2015]: 
time="2025-07-12T00:07:02.167017857Z" level=info msg="Start subscribing containerd event" Jul 12 00:07:02.169728 containerd[2015]: time="2025-07-12T00:07:02.167114133Z" level=info msg="Start recovering state" Jul 12 00:07:02.169728 containerd[2015]: time="2025-07-12T00:07:02.167242737Z" level=info msg="Start event monitor" Jul 12 00:07:02.169728 containerd[2015]: time="2025-07-12T00:07:02.167269149Z" level=info msg="Start snapshots syncer" Jul 12 00:07:02.169728 containerd[2015]: time="2025-07-12T00:07:02.167291925Z" level=info msg="Start cni network conf syncer for default" Jul 12 00:07:02.169728 containerd[2015]: time="2025-07-12T00:07:02.167312709Z" level=info msg="Start streaming server" Jul 12 00:07:02.169728 containerd[2015]: time="2025-07-12T00:07:02.167566749Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 00:07:02.169728 containerd[2015]: time="2025-07-12T00:07:02.167660181Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 12 00:07:02.186576 containerd[2015]: time="2025-07-12T00:07:02.184031541Z" level=info msg="containerd successfully booted in 0.328304s" Jul 12 00:07:02.184893 systemd[1]: Started containerd.service - containerd container runtime. Jul 12 00:07:02.574975 systemd-networkd[1936]: eth0: Gained IPv6LL Jul 12 00:07:02.583027 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 12 00:07:02.587318 systemd[1]: Reached target network-online.target - Network is Online. Jul 12 00:07:02.604221 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 12 00:07:02.617034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:02.624615 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 12 00:07:02.732044 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 12 00:07:02.785071 amazon-ssm-agent[2186]: Initializing new seelog logger Jul 12 00:07:02.787755 amazon-ssm-agent[2186]: New Seelog Logger Creation Complete Jul 12 00:07:02.788037 amazon-ssm-agent[2186]: 2025/07/12 00:07:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:02.788735 amazon-ssm-agent[2186]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:02.789321 amazon-ssm-agent[2186]: 2025/07/12 00:07:02 processing appconfig overrides Jul 12 00:07:02.790970 amazon-ssm-agent[2186]: 2025/07/12 00:07:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:02.791225 amazon-ssm-agent[2186]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:02.791426 amazon-ssm-agent[2186]: 2025/07/12 00:07:02 processing appconfig overrides Jul 12 00:07:02.791937 amazon-ssm-agent[2186]: 2025-07-12 00:07:02 INFO Proxy environment variables: Jul 12 00:07:02.793127 amazon-ssm-agent[2186]: 2025/07/12 00:07:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:02.793127 amazon-ssm-agent[2186]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:02.793127 amazon-ssm-agent[2186]: 2025/07/12 00:07:02 processing appconfig overrides Jul 12 00:07:02.801713 amazon-ssm-agent[2186]: 2025/07/12 00:07:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:02.801713 amazon-ssm-agent[2186]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 12 00:07:02.801713 amazon-ssm-agent[2186]: 2025/07/12 00:07:02 processing appconfig overrides Jul 12 00:07:02.857784 tar[2004]: linux-arm64/README.md Jul 12 00:07:02.892800 amazon-ssm-agent[2186]: 2025-07-12 00:07:02 INFO https_proxy: Jul 12 00:07:02.901423 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 12 00:07:02.993308 amazon-ssm-agent[2186]: 2025-07-12 00:07:02 INFO http_proxy: Jul 12 00:07:03.093720 amazon-ssm-agent[2186]: 2025-07-12 00:07:02 INFO no_proxy: Jul 12 00:07:03.098116 sshd_keygen[2011]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:07:03.147175 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 12 00:07:03.159523 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 12 00:07:03.174231 systemd[1]: Started sshd@0-172.31.18.25:22-139.178.89.65:37524.service - OpenSSH per-connection server daemon (139.178.89.65:37524). Jul 12 00:07:03.190938 amazon-ssm-agent[2186]: 2025-07-12 00:07:02 INFO Checking if agent identity type OnPrem can be assumed Jul 12 00:07:03.198274 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:07:03.198851 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 12 00:07:03.213180 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 12 00:07:03.259846 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 12 00:07:03.273445 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 12 00:07:03.284273 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 12 00:07:03.287232 systemd[1]: Reached target getty.target - Login Prompts. Jul 12 00:07:03.296111 amazon-ssm-agent[2186]: 2025-07-12 00:07:02 INFO Checking if agent identity type EC2 can be assumed Jul 12 00:07:03.394571 amazon-ssm-agent[2186]: 2025-07-12 00:07:02 INFO Agent will take identity from EC2 Jul 12 00:07:03.423576 sshd[2215]: Accepted publickey for core from 139.178.89.65 port 37524 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:07:03.426203 sshd[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:03.452577 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 12 00:07:03.462944 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 12 00:07:03.474007 systemd-logind[1993]: New session 1 of user core. Jul 12 00:07:03.493811 amazon-ssm-agent[2186]: 2025-07-12 00:07:02 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 12 00:07:03.508757 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 12 00:07:03.527342 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 12 00:07:03.552216 (systemd)[2226]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:07:03.593307 amazon-ssm-agent[2186]: 2025-07-12 00:07:02 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 12 00:07:03.692841 amazon-ssm-agent[2186]: 2025-07-12 00:07:02 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 12 00:07:03.790417 systemd[2226]: Queued start job for default target default.target. Jul 12 00:07:03.792080 amazon-ssm-agent[2186]: 2025-07-12 00:07:02 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 12 00:07:03.803569 systemd[2226]: Created slice app.slice - User Application Slice. Jul 12 00:07:03.804305 systemd[2226]: Reached target paths.target - Paths. 
Jul 12 00:07:03.804498 systemd[2226]: Reached target timers.target - Timers. Jul 12 00:07:03.808946 systemd[2226]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 12 00:07:03.839231 systemd[2226]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 12 00:07:03.839785 systemd[2226]: Reached target sockets.target - Sockets. Jul 12 00:07:03.840005 systemd[2226]: Reached target basic.target - Basic System. Jul 12 00:07:03.840096 systemd[2226]: Reached target default.target - Main User Target. Jul 12 00:07:03.840169 systemd[2226]: Startup finished in 273ms. Jul 12 00:07:03.840889 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 12 00:07:03.851961 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 12 00:07:03.892409 amazon-ssm-agent[2186]: 2025-07-12 00:07:02 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 12 00:07:03.994157 amazon-ssm-agent[2186]: 2025-07-12 00:07:02 INFO [amazon-ssm-agent] Starting Core Agent Jul 12 00:07:04.022250 systemd[1]: Started sshd@1-172.31.18.25:22-139.178.89.65:37534.service - OpenSSH per-connection server daemon (139.178.89.65:37534). Jul 12 00:07:04.092338 amazon-ssm-agent[2186]: 2025-07-12 00:07:02 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jul 12 00:07:04.192496 amazon-ssm-agent[2186]: 2025-07-12 00:07:02 INFO [Registrar] Starting registrar module Jul 12 00:07:04.239009 amazon-ssm-agent[2186]: 2025-07-12 00:07:02 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 12 00:07:04.239009 amazon-ssm-agent[2186]: 2025-07-12 00:07:04 INFO [EC2Identity] EC2 registration was successful. Jul 12 00:07:04.239396 amazon-ssm-agent[2186]: 2025-07-12 00:07:04 INFO [CredentialRefresher] credentialRefresher has started Jul 12 00:07:04.239396 amazon-ssm-agent[2186]: 2025-07-12 00:07:04 INFO [CredentialRefresher] Starting credentials refresher loop Jul 12 00:07:04.239396 amazon-ssm-agent[2186]: 2025-07-12 00:07:04 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 12 00:07:04.241489 sshd[2239]: Accepted publickey for core from 139.178.89.65 port 37534 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:07:04.244263 sshd[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:04.252743 systemd-logind[1993]: New session 2 of user core. Jul 12 00:07:04.262026 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 12 00:07:04.293012 amazon-ssm-agent[2186]: 2025-07-12 00:07:04 INFO [CredentialRefresher] Next credential rotation will be in 31.066656014366668 minutes Jul 12 00:07:04.395968 sshd[2239]: pam_unix(sshd:session): session closed for user core Jul 12 00:07:04.403630 systemd[1]: sshd@1-172.31.18.25:22-139.178.89.65:37534.service: Deactivated successfully. Jul 12 00:07:04.407255 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 00:07:04.408576 systemd-logind[1993]: Session 2 logged out. Waiting for processes to exit. Jul 12 00:07:04.411905 systemd-logind[1993]: Removed session 2. Jul 12 00:07:04.435868 systemd[1]: Started sshd@2-172.31.18.25:22-139.178.89.65:37550.service - OpenSSH per-connection server daemon (139.178.89.65:37550). 
Jul 12 00:07:04.618598 sshd[2246]: Accepted publickey for core from 139.178.89.65 port 37550 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:07:04.621583 sshd[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:04.635024 systemd-logind[1993]: New session 3 of user core. Jul 12 00:07:04.641078 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 12 00:07:04.776077 sshd[2246]: pam_unix(sshd:session): session closed for user core Jul 12 00:07:04.782369 systemd[1]: sshd@2-172.31.18.25:22-139.178.89.65:37550.service: Deactivated successfully. Jul 12 00:07:04.788623 systemd[1]: session-3.scope: Deactivated successfully. Jul 12 00:07:04.792745 systemd-logind[1993]: Session 3 logged out. Waiting for processes to exit. Jul 12 00:07:04.794640 systemd-logind[1993]: Removed session 3. Jul 12 00:07:05.100185 ntpd[1986]: Listen normally on 7 eth0 [fe80::480:f3ff:fe39:743d%2]:123 Jul 12 00:07:05.271877 amazon-ssm-agent[2186]: 2025-07-12 00:07:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 12 00:07:05.372328 amazon-ssm-agent[2186]: 2025-07-12 00:07:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2253) started Jul 12 00:07:05.472917 amazon-ssm-agent[2186]: 2025-07-12 00:07:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 12 00:07:06.801023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:06.804762 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 12 00:07:06.807684 systemd[1]: Startup finished in 1.178s (kernel) + 8.769s (initrd) + 11.275s (userspace) = 21.223s. Jul 12 00:07:06.816416 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:07:08.267664 systemd-resolved[1937]: Clock change detected. Flushing caches. Jul 12 00:07:08.929829 kubelet[2268]: E0712 00:07:08.929696 2268 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:07:08.933124 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:07:08.933576 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:07:08.934498 systemd[1]: kubelet.service: Consumed 1.444s CPU time. Jul 12 00:07:14.982004 systemd[1]: Started sshd@3-172.31.18.25:22-139.178.89.65:60486.service - OpenSSH per-connection server daemon (139.178.89.65:60486). Jul 12 00:07:15.154602 sshd[2280]: Accepted publickey for core from 139.178.89.65 port 60486 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:07:15.157242 sshd[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:15.165542 systemd-logind[1993]: New session 4 of user core. Jul 12 00:07:15.173787 systemd[1]: Started session-4.scope - Session 4 of User core.
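
The kubelet failure above (pid 2268) opens a crash loop that recurs through this log (pids 2514, 2676 and 2756): the unit is enabled before /var/lib/kubelet/config.yaml exists, exits with status 1, and systemd keeps scheduling restarts until bootstrap tooling writes the file; the start that finally sticks, near the end of the log, follows a systemctl reload issued from the install session. A Python sketch of that precondition, with a hypothetical minimal KubeletConfiguration stub of the kind kubeadm generates; the field values are illustrative, though the systemd cgroup driver and the static pod path do match the node config dumped later in this log:

import pathlib

CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

# Minimal KubeletConfiguration stub; kubeadm normally writes a much fuller
# version of this file during "kubeadm init" / "kubeadm join".
STUB = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
"""

if not CONFIG.exists():
    # Exactly the condition behind the "no such file or directory" error
    # the kubelet keeps logging until bootstrap completes.
    CONFIG.parent.mkdir(parents=True, exist_ok=True)
    CONFIG.write_text(STUB)
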
Jul 12 00:07:15.301756 sshd[2280]: pam_unix(sshd:session): session closed for user core Jul 12 00:07:15.308872 systemd[1]: sshd@3-172.31.18.25:22-139.178.89.65:60486.service: Deactivated successfully. Jul 12 00:07:15.312367 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:07:15.314056 systemd-logind[1993]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:07:15.316602 systemd-logind[1993]: Removed session 4. Jul 12 00:07:15.344045 systemd[1]: Started sshd@4-172.31.18.25:22-139.178.89.65:60494.service - OpenSSH per-connection server daemon (139.178.89.65:60494). Jul 12 00:07:15.523275 sshd[2287]: Accepted publickey for core from 139.178.89.65 port 60494 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:07:15.526342 sshd[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:15.534935 systemd-logind[1993]: New session 5 of user core. Jul 12 00:07:15.543789 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 12 00:07:15.663127 sshd[2287]: pam_unix(sshd:session): session closed for user core Jul 12 00:07:15.669365 systemd-logind[1993]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:07:15.671890 systemd[1]: sshd@4-172.31.18.25:22-139.178.89.65:60494.service: Deactivated successfully. Jul 12 00:07:15.675282 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:07:15.677373 systemd-logind[1993]: Removed session 5. Jul 12 00:07:15.704994 systemd[1]: Started sshd@5-172.31.18.25:22-139.178.89.65:60502.service - OpenSSH per-connection server daemon (139.178.89.65:60502). Jul 12 00:07:15.877807 sshd[2294]: Accepted publickey for core from 139.178.89.65 port 60502 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:07:15.880403 sshd[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:15.889335 systemd-logind[1993]: New session 6 of user core. Jul 12 00:07:15.900748 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 12 00:07:16.026170 sshd[2294]: pam_unix(sshd:session): session closed for user core Jul 12 00:07:16.032132 systemd-logind[1993]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:07:16.035021 systemd[1]: sshd@5-172.31.18.25:22-139.178.89.65:60502.service: Deactivated successfully. Jul 12 00:07:16.038191 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:07:16.040757 systemd-logind[1993]: Removed session 6. Jul 12 00:07:16.067013 systemd[1]: Started sshd@6-172.31.18.25:22-139.178.89.65:60516.service - OpenSSH per-connection server daemon (139.178.89.65:60516). Jul 12 00:07:16.238527 sshd[2301]: Accepted publickey for core from 139.178.89.65 port 60516 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:07:16.241152 sshd[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:16.250751 systemd-logind[1993]: New session 7 of user core. Jul 12 00:07:16.260796 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 12 00:07:16.383632 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 12 00:07:16.384865 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:07:16.405191 sudo[2304]: pam_unix(sudo:session): session closed for user root Jul 12 00:07:16.429862 sshd[2301]: pam_unix(sshd:session): session closed for user core Jul 12 00:07:16.437963 systemd[1]: sshd@6-172.31.18.25:22-139.178.89.65:60516.service: Deactivated successfully. Jul 12 00:07:16.441601 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:07:16.443092 systemd-logind[1993]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:07:16.445337 systemd-logind[1993]: Removed session 7. Jul 12 00:07:16.470984 systemd[1]: Started sshd@7-172.31.18.25:22-139.178.89.65:60528.service - OpenSSH per-connection server daemon (139.178.89.65:60528). Jul 12 00:07:16.647985 sshd[2309]: Accepted publickey for core from 139.178.89.65 port 60528 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:07:16.650973 sshd[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:16.660535 systemd-logind[1993]: New session 8 of user core. Jul 12 00:07:16.666785 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 12 00:07:16.771547 sudo[2313]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 12 00:07:16.772171 sudo[2313]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:07:16.779269 sudo[2313]: pam_unix(sudo:session): session closed for user root Jul 12 00:07:16.789473 sudo[2312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 12 00:07:16.790141 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:07:16.811121 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 12 00:07:16.824501 auditctl[2316]: No rules Jul 12 00:07:16.826284 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:07:16.827580 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 12 00:07:16.834122 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:07:16.891171 augenrules[2334]: No rules Jul 12 00:07:16.894602 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:07:16.896682 sudo[2312]: pam_unix(sudo:session): session closed for user root Jul 12 00:07:16.920615 sshd[2309]: pam_unix(sshd:session): session closed for user core Jul 12 00:07:16.928619 systemd[1]: sshd@7-172.31.18.25:22-139.178.89.65:60528.service: Deactivated successfully. Jul 12 00:07:16.933778 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:07:16.936992 systemd-logind[1993]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:07:16.939233 systemd-logind[1993]: Removed session 8. Jul 12 00:07:16.959995 systemd[1]: Started sshd@8-172.31.18.25:22-139.178.89.65:60542.service - OpenSSH per-connection server daemon (139.178.89.65:60542). Jul 12 00:07:17.041125 systemd[1]: Started sshd@9-172.31.18.25:22-172.105.128.12:2048.service - OpenSSH per-connection server daemon (172.105.128.12:2048). 
Jul 12 00:07:17.140029 sshd[2342]: Accepted publickey for core from 139.178.89.65 port 60542 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:07:17.142647 sshd[2342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:17.150951 systemd-logind[1993]: New session 9 of user core. Jul 12 00:07:17.161818 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 12 00:07:17.264875 sudo[2347]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:07:17.266312 sudo[2347]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:07:17.780959 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 12 00:07:17.795155 (dockerd)[2363]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 12 00:07:18.228480 dockerd[2363]: time="2025-07-12T00:07:18.228354653Z" level=info msg="Starting up" Jul 12 00:07:18.376987 sshd[2345]: Connection closed by 172.105.128.12 port 2048 [preauth] Jul 12 00:07:18.378578 systemd[1]: var-lib-docker-metacopy\x2dcheck4106867724-merged.mount: Deactivated successfully. Jul 12 00:07:18.382938 systemd[1]: sshd@9-172.31.18.25:22-172.105.128.12:2048.service: Deactivated successfully. Jul 12 00:07:18.397609 dockerd[2363]: time="2025-07-12T00:07:18.397550850Z" level=info msg="Loading containers: start." Jul 12 00:07:18.490687 systemd[1]: Started sshd@10-172.31.18.25:22-172.105.128.12:2056.service - OpenSSH per-connection server daemon (172.105.128.12:2056). Jul 12 00:07:18.581492 kernel: Initializing XFRM netlink socket Jul 12 00:07:18.620255 (udev-worker)[2388]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:07:18.715346 systemd-networkd[1936]: docker0: Link UP Jul 12 00:07:18.747773 dockerd[2363]: time="2025-07-12T00:07:18.747278816Z" level=info msg="Loading containers: done." Jul 12 00:07:18.771625 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck283564614-merged.mount: Deactivated successfully. Jul 12 00:07:18.784751 dockerd[2363]: time="2025-07-12T00:07:18.784670984Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 00:07:18.785507 dockerd[2363]: time="2025-07-12T00:07:18.785106908Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 12 00:07:18.785507 dockerd[2363]: time="2025-07-12T00:07:18.785362316Z" level=info msg="Daemon has completed initialization" Jul 12 00:07:18.867177 dockerd[2363]: time="2025-07-12T00:07:18.866885336Z" level=info msg="API listen on /run/docker.sock" Jul 12 00:07:18.869350 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 12 00:07:19.097945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 00:07:19.107119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:19.532872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
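
With "API listen on /run/docker.sock", the engine started from the install session is serving (version 26.1.0 on overlay2, per the daemon's own lines above). A quick liveness probe using the Docker SDK for Python; this assumes the docker package is available on the host, which nothing in this log shows:

import docker

# from_env() falls back to the default unix socket the daemon announced
# (/run/docker.sock, also reachable via the legacy /var/run path).
client = docker.from_env()

assert client.ping()                  # True once the API answers
print(client.version()["Version"])    # expected to report 26.1.0 here
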
Jul 12 00:07:19.537914 (kubelet)[2514]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:07:19.659821 kubelet[2514]: E0712 00:07:19.659647 2514 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:07:19.672593 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:07:19.673174 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:07:19.984281 sshd[2406]: Connection closed by 172.105.128.12 port 2056 [preauth] Jul 12 00:07:19.986547 systemd[1]: sshd@10-172.31.18.25:22-172.105.128.12:2056.service: Deactivated successfully. Jul 12 00:07:20.109021 containerd[2015]: time="2025-07-12T00:07:20.108535446Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 12 00:07:20.140024 systemd[1]: Started sshd@11-172.31.18.25:22-172.105.128.12:2060.service - OpenSSH per-connection server daemon (172.105.128.12:2060). Jul 12 00:07:20.799826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1976999818.mount: Deactivated successfully. Jul 12 00:07:21.642948 sshd[2525]: Connection closed by 172.105.128.12 port 2060 [preauth] Jul 12 00:07:21.646806 systemd[1]: sshd@11-172.31.18.25:22-172.105.128.12:2060.service: Deactivated successfully. Jul 12 00:07:22.206918 containerd[2015]: time="2025-07-12T00:07:22.206835201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:22.209144 containerd[2015]: time="2025-07-12T00:07:22.209065977Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328194" Jul 12 00:07:22.211316 containerd[2015]: time="2025-07-12T00:07:22.211213281Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:22.217256 containerd[2015]: time="2025-07-12T00:07:22.217130277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:22.219656 containerd[2015]: time="2025-07-12T00:07:22.219598629Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 2.110987139s" Jul 12 00:07:22.220160 containerd[2015]: time="2025-07-12T00:07:22.219803361Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 12 00:07:22.221400 containerd[2015]: time="2025-07-12T00:07:22.221227857Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 12 00:07:23.687209 containerd[2015]: time="2025-07-12T00:07:23.687018072Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:23.690232 containerd[2015]: time="2025-07-12T00:07:23.690093420Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529228" Jul 12 00:07:23.691567 containerd[2015]: time="2025-07-12T00:07:23.691431288Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:23.699690 containerd[2015]: time="2025-07-12T00:07:23.699525564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:23.703269 containerd[2015]: time="2025-07-12T00:07:23.702942804Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.481375179s" Jul 12 00:07:23.703269 containerd[2015]: time="2025-07-12T00:07:23.703046532Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 12 00:07:23.704740 containerd[2015]: time="2025-07-12T00:07:23.704152500Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 12 00:07:24.884534 containerd[2015]: time="2025-07-12T00:07:24.883408142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:24.885943 containerd[2015]: time="2025-07-12T00:07:24.885862838Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484141" Jul 12 00:07:24.887679 containerd[2015]: time="2025-07-12T00:07:24.887588186Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:24.899509 containerd[2015]: time="2025-07-12T00:07:24.898919630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:24.901972 containerd[2015]: time="2025-07-12T00:07:24.901896398Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.197631998s" Jul 12 00:07:24.902235 containerd[2015]: time="2025-07-12T00:07:24.902193194Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 12 00:07:24.903091 containerd[2015]: time="2025-07-12T00:07:24.903000362Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 
12 00:07:26.202351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2136511766.mount: Deactivated successfully. Jul 12 00:07:26.723750 containerd[2015]: time="2025-07-12T00:07:26.722362587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:26.725111 containerd[2015]: time="2025-07-12T00:07:26.725050491Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378406" Jul 12 00:07:26.726298 containerd[2015]: time="2025-07-12T00:07:26.726241815Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:26.730017 containerd[2015]: time="2025-07-12T00:07:26.729945231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:26.731808 containerd[2015]: time="2025-07-12T00:07:26.731747583Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.828496301s" Jul 12 00:07:26.732035 containerd[2015]: time="2025-07-12T00:07:26.731995443Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 12 00:07:26.733225 containerd[2015]: time="2025-07-12T00:07:26.733042851Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 12 00:07:27.303664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount209836532.mount: Deactivated successfully. 
Jul 12 00:07:28.646544 containerd[2015]: time="2025-07-12T00:07:28.646053317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:28.648645 containerd[2015]: time="2025-07-12T00:07:28.648550877Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 12 00:07:28.651312 containerd[2015]: time="2025-07-12T00:07:28.651214061Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:28.658285 containerd[2015]: time="2025-07-12T00:07:28.658166837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:28.661228 containerd[2015]: time="2025-07-12T00:07:28.660953993Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.92752143s" Jul 12 00:07:28.661228 containerd[2015]: time="2025-07-12T00:07:28.661027889Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 12 00:07:28.661995 containerd[2015]: time="2025-07-12T00:07:28.661942253Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:07:29.194519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2824425249.mount: Deactivated successfully. 
Jul 12 00:07:29.208141 containerd[2015]: time="2025-07-12T00:07:29.208045828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:29.211220 containerd[2015]: time="2025-07-12T00:07:29.211135624Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 12 00:07:29.213819 containerd[2015]: time="2025-07-12T00:07:29.213726340Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:29.219012 containerd[2015]: time="2025-07-12T00:07:29.218893240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:29.221027 containerd[2015]: time="2025-07-12T00:07:29.220788628Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 558.645759ms" Jul 12 00:07:29.221027 containerd[2015]: time="2025-07-12T00:07:29.220852816Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 12 00:07:29.222109 containerd[2015]: time="2025-07-12T00:07:29.221808352Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 12 00:07:29.769060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 12 00:07:29.779768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:29.815124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2552745559.mount: Deactivated successfully. Jul 12 00:07:30.482160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:30.500997 (kubelet)[2676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:07:30.618760 kubelet[2676]: E0712 00:07:30.618651 2676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:07:30.629092 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:07:30.629847 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:07:32.347995 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jul 12 00:07:32.547064 containerd[2015]: time="2025-07-12T00:07:32.546981296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:32.549437 containerd[2015]: time="2025-07-12T00:07:32.549363680Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" Jul 12 00:07:32.551600 containerd[2015]: time="2025-07-12T00:07:32.551542256Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:32.558326 containerd[2015]: time="2025-07-12T00:07:32.558223688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:32.560813 containerd[2015]: time="2025-07-12T00:07:32.560760428Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.33889384s" Jul 12 00:07:32.561298 containerd[2015]: time="2025-07-12T00:07:32.560968088Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 12 00:07:40.847392 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 12 00:07:40.857670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:41.230894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:41.241224 (kubelet)[2756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:07:41.353690 kubelet[2756]: E0712 00:07:41.353601 2756 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:07:41.361426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:07:41.361862 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:07:41.750170 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:41.759051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:41.819237 systemd[1]: Reloading requested from client PID 2770 ('systemctl') (unit session-9.scope)... Jul 12 00:07:41.819293 systemd[1]: Reloading... Jul 12 00:07:42.129500 zram_generator::config[2810]: No configuration found. Jul 12 00:07:42.381782 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:07:42.555377 systemd[1]: Reloading finished in 735 ms. Jul 12 00:07:42.658939 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:42.666845 systemd[1]: kubelet.service: Deactivated successfully. 
Jul 12 00:07:42.667224 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:42.674113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:42.991624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:43.008027 (kubelet)[2875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:07:43.081376 kubelet[2875]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:07:43.081376 kubelet[2875]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 00:07:43.081376 kubelet[2875]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:07:43.081976 kubelet[2875]: I0712 00:07:43.081433 2875 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:07:45.052731 kubelet[2875]: I0712 00:07:45.052674 2875 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 12 00:07:45.055504 kubelet[2875]: I0712 00:07:45.053347 2875 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:07:45.055504 kubelet[2875]: I0712 00:07:45.053911 2875 server.go:954] "Client rotation is on, will bootstrap in background" Jul 12 00:07:45.103744 kubelet[2875]: E0712 00:07:45.103672 2875 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.25:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:45.107403 kubelet[2875]: I0712 00:07:45.107356 2875 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:07:45.119806 kubelet[2875]: E0712 00:07:45.119742 2875 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:07:45.120039 kubelet[2875]: I0712 00:07:45.120010 2875 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:07:45.125890 kubelet[2875]: I0712 00:07:45.125850 2875 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:07:45.126572 kubelet[2875]: I0712 00:07:45.126522 2875 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:07:45.127032 kubelet[2875]: I0712 00:07:45.126701 2875 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-25","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:07:45.127390 kubelet[2875]: I0712 00:07:45.127366 2875 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:07:45.127544 kubelet[2875]: I0712 00:07:45.127525 2875 container_manager_linux.go:304] "Creating device plugin manager" Jul 12 00:07:45.128012 kubelet[2875]: I0712 00:07:45.127986 2875 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:07:45.134546 kubelet[2875]: I0712 00:07:45.134500 2875 kubelet.go:446] "Attempting to sync node with API server" Jul 12 00:07:45.134756 kubelet[2875]: I0712 00:07:45.134734 2875 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:07:45.134870 kubelet[2875]: I0712 00:07:45.134851 2875 kubelet.go:352] "Adding apiserver pod source" Jul 12 00:07:45.134972 kubelet[2875]: I0712 00:07:45.134952 2875 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:07:45.137937 kubelet[2875]: W0712 00:07:45.137849 2875 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-25&limit=500&resourceVersion=0": dial tcp 172.31.18.25:6443: connect: connection refused Jul 12 00:07:45.138092 kubelet[2875]: E0712 00:07:45.137958 2875 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-25&limit=500&resourceVersion=0\": dial tcp 172.31.18.25:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:45.140543 kubelet[2875]: I0712 
00:07:45.140504 2875 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:07:45.141839 kubelet[2875]: I0712 00:07:45.141785 2875 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:07:45.144044 kubelet[2875]: W0712 00:07:45.142220 2875 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:07:45.144044 kubelet[2875]: I0712 00:07:45.143912 2875 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:07:45.144044 kubelet[2875]: I0712 00:07:45.143968 2875 server.go:1287] "Started kubelet" Jul 12 00:07:45.144277 kubelet[2875]: W0712 00:07:45.144184 2875 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.25:6443: connect: connection refused Jul 12 00:07:45.144277 kubelet[2875]: E0712 00:07:45.144262 2875 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.25:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:45.153763 kubelet[2875]: E0712 00:07:45.153218 2875 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.25:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.25:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-25.18515858085432c7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-25,UID:ip-172-31-18-25,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-25,},FirstTimestamp:2025-07-12 00:07:45.143935687 +0000 UTC m=+2.129628684,LastTimestamp:2025-07-12 00:07:45.143935687 +0000 UTC m=+2.129628684,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-25,}" Jul 12 00:07:45.155496 kubelet[2875]: I0712 00:07:45.154331 2875 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:07:45.156176 kubelet[2875]: I0712 00:07:45.156141 2875 server.go:479] "Adding debug handlers to kubelet server" Jul 12 00:07:45.157258 kubelet[2875]: I0712 00:07:45.157184 2875 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:07:45.158233 kubelet[2875]: I0712 00:07:45.158148 2875 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:07:45.158878 kubelet[2875]: I0712 00:07:45.158814 2875 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:07:45.162162 kubelet[2875]: I0712 00:07:45.162119 2875 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:07:45.168821 kubelet[2875]: I0712 00:07:45.168770 2875 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:07:45.171320 kubelet[2875]: W0712 00:07:45.171243 2875 reflector.go:569] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.CSIDriver: Get "https://172.31.18.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.25:6443: connect: connection refused Jul 12 00:07:45.172771 kubelet[2875]: E0712 00:07:45.171605 2875 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-25\" not found" Jul 12 00:07:45.173005 kubelet[2875]: I0712 00:07:45.171633 2875 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:07:45.173496 kubelet[2875]: I0712 00:07:45.171739 2875 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:07:45.173496 kubelet[2875]: E0712 00:07:45.172191 2875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-25?timeout=10s\": dial tcp 172.31.18.25:6443: connect: connection refused" interval="200ms" Jul 12 00:07:45.173733 kubelet[2875]: E0712 00:07:45.172717 2875 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.25:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:45.173838 kubelet[2875]: I0712 00:07:45.172639 2875 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:07:45.174136 kubelet[2875]: I0712 00:07:45.174091 2875 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:07:45.176009 kubelet[2875]: E0712 00:07:45.175969 2875 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:07:45.177198 kubelet[2875]: I0712 00:07:45.176365 2875 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:07:45.211890 kubelet[2875]: I0712 00:07:45.211480 2875 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:07:45.211890 kubelet[2875]: I0712 00:07:45.211520 2875 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:07:45.211890 kubelet[2875]: I0712 00:07:45.211554 2875 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:07:45.216923 kubelet[2875]: I0712 00:07:45.216681 2875 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:07:45.218571 kubelet[2875]: I0712 00:07:45.217869 2875 policy_none.go:49] "None policy: Start" Jul 12 00:07:45.218571 kubelet[2875]: I0712 00:07:45.217914 2875 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:07:45.218571 kubelet[2875]: I0712 00:07:45.217939 2875 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:07:45.219516 kubelet[2875]: I0712 00:07:45.218981 2875 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:07:45.219516 kubelet[2875]: I0712 00:07:45.219034 2875 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 12 00:07:45.219516 kubelet[2875]: I0712 00:07:45.219068 2875 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 12 00:07:45.219516 kubelet[2875]: I0712 00:07:45.219084 2875 kubelet.go:2382] "Starting kubelet main sync loop" Jul 12 00:07:45.219516 kubelet[2875]: E0712 00:07:45.219152 2875 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:07:45.227939 kubelet[2875]: W0712 00:07:45.227858 2875 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.25:6443: connect: connection refused Jul 12 00:07:45.228207 kubelet[2875]: E0712 00:07:45.228167 2875 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.25:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:45.237876 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 12 00:07:45.256175 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 12 00:07:45.263875 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 12 00:07:45.274171 kubelet[2875]: E0712 00:07:45.274118 2875 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-25\" not found" Jul 12 00:07:45.276666 kubelet[2875]: I0712 00:07:45.275754 2875 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:07:45.276666 kubelet[2875]: I0712 00:07:45.276059 2875 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:07:45.276666 kubelet[2875]: I0712 00:07:45.276079 2875 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:07:45.276666 kubelet[2875]: I0712 00:07:45.276436 2875 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:07:45.281841 kubelet[2875]: E0712 00:07:45.281805 2875 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 12 00:07:45.282304 kubelet[2875]: E0712 00:07:45.282266 2875 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-25\" not found" Jul 12 00:07:45.339819 systemd[1]: Created slice kubepods-burstable-pod41717711fc286e3b5cc768732cc682cf.slice - libcontainer container kubepods-burstable-pod41717711fc286e3b5cc768732cc682cf.slice. Jul 12 00:07:45.356083 kubelet[2875]: E0712 00:07:45.355985 2875 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-25\" not found" node="ip-172-31-18-25" Jul 12 00:07:45.362216 systemd[1]: Created slice kubepods-burstable-pod5c05e2c05bcdba019dcb7a8b46f9ea24.slice - libcontainer container kubepods-burstable-pod5c05e2c05bcdba019dcb7a8b46f9ea24.slice. 
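
[Editor's aside] Every reflector error above is the same symptom: kubelet's informers list-and-watch against https://172.31.18.25:6443, which is the kube-apiserver that this very kubelet is about to start as a static pod, so each attempt ends in `connect: connection refused` and client-go retries with backoff until the apiserver container is up. A rough sketch of that wait-with-backoff pattern; the address comes from the log, while the helper and its timings are assumptions:

    // Sketch of the retry pattern visible in the log; client-go's real
    // backoff logic differs in detail.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func waitForAPIServer(addr string, maxBackoff time.Duration) {
        backoff := 500 * time.Millisecond
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver reachable at", addr)
                return
            }
            // Mirrors the repeated "dial tcp ... connect: connection refused"
            // entries while the static pod is still being created.
            fmt.Printf("dial %s: %v (retrying in %s)\n", addr, err, backoff)
            time.Sleep(backoff)
            if backoff *= 2; backoff > maxBackoff {
                backoff = maxBackoff
            }
        }
    }

    func main() {
        waitForAPIServer("172.31.18.25:6443", 30*time.Second)
    }
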
Jul 12 00:07:45.374665 kubelet[2875]: E0712 00:07:45.374620 2875 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-25\" not found" node="ip-172-31-18-25" Jul 12 00:07:45.375909 kubelet[2875]: E0712 00:07:45.375862 2875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-25?timeout=10s\": dial tcp 172.31.18.25:6443: connect: connection refused" interval="400ms" Jul 12 00:07:45.376150 kubelet[2875]: I0712 00:07:45.376090 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41717711fc286e3b5cc768732cc682cf-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-25\" (UID: \"41717711fc286e3b5cc768732cc682cf\") " pod="kube-system/kube-apiserver-ip-172-31-18-25" Jul 12 00:07:45.376218 kubelet[2875]: I0712 00:07:45.376151 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c05e2c05bcdba019dcb7a8b46f9ea24-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-25\" (UID: \"5c05e2c05bcdba019dcb7a8b46f9ea24\") " pod="kube-system/kube-controller-manager-ip-172-31-18-25" Jul 12 00:07:45.376218 kubelet[2875]: I0712 00:07:45.376198 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4333b0c8eea339c4d62e835b2231f463-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-25\" (UID: \"4333b0c8eea339c4d62e835b2231f463\") " pod="kube-system/kube-scheduler-ip-172-31-18-25" Jul 12 00:07:45.376327 kubelet[2875]: I0712 00:07:45.376239 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41717711fc286e3b5cc768732cc682cf-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-25\" (UID: \"41717711fc286e3b5cc768732cc682cf\") " pod="kube-system/kube-apiserver-ip-172-31-18-25" Jul 12 00:07:45.376327 kubelet[2875]: I0712 00:07:45.376283 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c05e2c05bcdba019dcb7a8b46f9ea24-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-25\" (UID: \"5c05e2c05bcdba019dcb7a8b46f9ea24\") " pod="kube-system/kube-controller-manager-ip-172-31-18-25" Jul 12 00:07:45.376327 kubelet[2875]: I0712 00:07:45.376319 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5c05e2c05bcdba019dcb7a8b46f9ea24-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-25\" (UID: \"5c05e2c05bcdba019dcb7a8b46f9ea24\") " pod="kube-system/kube-controller-manager-ip-172-31-18-25" Jul 12 00:07:45.376512 kubelet[2875]: I0712 00:07:45.376352 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c05e2c05bcdba019dcb7a8b46f9ea24-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-25\" (UID: \"5c05e2c05bcdba019dcb7a8b46f9ea24\") " pod="kube-system/kube-controller-manager-ip-172-31-18-25" Jul 12 00:07:45.376512 kubelet[2875]: I0712 00:07:45.376388 2875 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5c05e2c05bcdba019dcb7a8b46f9ea24-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-25\" (UID: \"5c05e2c05bcdba019dcb7a8b46f9ea24\") " pod="kube-system/kube-controller-manager-ip-172-31-18-25" Jul 12 00:07:45.376512 kubelet[2875]: I0712 00:07:45.376423 2875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41717711fc286e3b5cc768732cc682cf-ca-certs\") pod \"kube-apiserver-ip-172-31-18-25\" (UID: \"41717711fc286e3b5cc768732cc682cf\") " pod="kube-system/kube-apiserver-ip-172-31-18-25" Jul 12 00:07:45.378983 kubelet[2875]: I0712 00:07:45.378945 2875 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-25" Jul 12 00:07:45.381237 kubelet[2875]: E0712 00:07:45.381124 2875 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.25:6443/api/v1/nodes\": dial tcp 172.31.18.25:6443: connect: connection refused" node="ip-172-31-18-25" Jul 12 00:07:45.381966 systemd[1]: Created slice kubepods-burstable-pod4333b0c8eea339c4d62e835b2231f463.slice - libcontainer container kubepods-burstable-pod4333b0c8eea339c4d62e835b2231f463.slice. Jul 12 00:07:45.387956 kubelet[2875]: E0712 00:07:45.387619 2875 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-25\" not found" node="ip-172-31-18-25" Jul 12 00:07:45.584760 kubelet[2875]: I0712 00:07:45.584305 2875 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-25" Jul 12 00:07:45.585196 kubelet[2875]: E0712 00:07:45.585148 2875 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.25:6443/api/v1/nodes\": dial tcp 172.31.18.25:6443: connect: connection refused" node="ip-172-31-18-25" Jul 12 00:07:45.658392 containerd[2015]: time="2025-07-12T00:07:45.658324713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-25,Uid:41717711fc286e3b5cc768732cc682cf,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:45.677216 containerd[2015]: time="2025-07-12T00:07:45.676825773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-25,Uid:5c05e2c05bcdba019dcb7a8b46f9ea24,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:45.693752 containerd[2015]: time="2025-07-12T00:07:45.693253353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-25,Uid:4333b0c8eea339c4d62e835b2231f463,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:45.776913 kubelet[2875]: E0712 00:07:45.776846 2875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-25?timeout=10s\": dial tcp 172.31.18.25:6443: connect: connection refused" interval="800ms" Jul 12 00:07:45.988553 kubelet[2875]: I0712 00:07:45.988309 2875 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-25" Jul 12 00:07:45.989507 kubelet[2875]: E0712 00:07:45.988970 2875 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.25:6443/api/v1/nodes\": dial tcp 172.31.18.25:6443: connect: connection refused" node="ip-172-31-18-25" Jul 12 00:07:46.086714 kubelet[2875]: W0712 00:07:46.086602 2875 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-25&limit=500&resourceVersion=0": dial tcp 172.31.18.25:6443: connect: connection refused Jul 12 00:07:46.086714 kubelet[2875]: E0712 00:07:46.086695 2875 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-25&limit=500&resourceVersion=0\": dial tcp 172.31.18.25:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:46.114946 kubelet[2875]: W0712 00:07:46.114815 2875 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.25:6443: connect: connection refused Jul 12 00:07:46.115126 kubelet[2875]: E0712 00:07:46.114953 2875 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.25:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:46.181583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2026751732.mount: Deactivated successfully. Jul 12 00:07:46.201001 containerd[2015]: time="2025-07-12T00:07:46.200923340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:07:46.203083 containerd[2015]: time="2025-07-12T00:07:46.203006960Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:07:46.205028 containerd[2015]: time="2025-07-12T00:07:46.204914912Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 12 00:07:46.207142 containerd[2015]: time="2025-07-12T00:07:46.207013160Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:07:46.209167 containerd[2015]: time="2025-07-12T00:07:46.209118380Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:07:46.212505 containerd[2015]: time="2025-07-12T00:07:46.212053592Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:07:46.216021 containerd[2015]: time="2025-07-12T00:07:46.215903216Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:07:46.227364 containerd[2015]: time="2025-07-12T00:07:46.227268548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:07:46.230988 containerd[2015]: 
time="2025-07-12T00:07:46.230553020Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 553.600263ms" Jul 12 00:07:46.233136 containerd[2015]: time="2025-07-12T00:07:46.233039348Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 574.588035ms" Jul 12 00:07:46.235848 containerd[2015]: time="2025-07-12T00:07:46.235711652Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 542.323635ms" Jul 12 00:07:46.464808 containerd[2015]: time="2025-07-12T00:07:46.464603757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:46.464808 containerd[2015]: time="2025-07-12T00:07:46.464757525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:46.465229 containerd[2015]: time="2025-07-12T00:07:46.464793693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:46.465229 containerd[2015]: time="2025-07-12T00:07:46.465020865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:46.473098 containerd[2015]: time="2025-07-12T00:07:46.472924089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:46.473098 containerd[2015]: time="2025-07-12T00:07:46.472998309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:46.473633 containerd[2015]: time="2025-07-12T00:07:46.473133249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:46.473633 containerd[2015]: time="2025-07-12T00:07:46.473186385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:46.476415 containerd[2015]: time="2025-07-12T00:07:46.476190801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:46.476415 containerd[2015]: time="2025-07-12T00:07:46.476246001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:46.476995 containerd[2015]: time="2025-07-12T00:07:46.473437653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:46.477592 containerd[2015]: time="2025-07-12T00:07:46.477406245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:46.530790 systemd[1]: Started cri-containerd-b1e5399f1eb14935f73887a819eec570cb06cdb060cd13a4bd11773f8a8f02b4.scope - libcontainer container b1e5399f1eb14935f73887a819eec570cb06cdb060cd13a4bd11773f8a8f02b4. Jul 12 00:07:46.550784 systemd[1]: Started cri-containerd-7f146c48427e99f75023d7511de0287adf0694c7ea632f5f40920bcdc31a99de.scope - libcontainer container 7f146c48427e99f75023d7511de0287adf0694c7ea632f5f40920bcdc31a99de. Jul 12 00:07:46.559861 systemd[1]: Started cri-containerd-a1e6dce25b5a417a9f60f1ea34e4c59034e7d5833077ab3a572e23d0469c3597.scope - libcontainer container a1e6dce25b5a417a9f60f1ea34e4c59034e7d5833077ab3a572e23d0469c3597. Jul 12 00:07:46.578352 kubelet[2875]: E0712 00:07:46.577987 2875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-25?timeout=10s\": dial tcp 172.31.18.25:6443: connect: connection refused" interval="1.6s" Jul 12 00:07:46.610123 kubelet[2875]: W0712 00:07:46.609989 2875 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.25:6443: connect: connection refused Jul 12 00:07:46.610290 kubelet[2875]: E0712 00:07:46.610179 2875 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.25:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:46.653755 containerd[2015]: time="2025-07-12T00:07:46.653610310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-25,Uid:5c05e2c05bcdba019dcb7a8b46f9ea24,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1e5399f1eb14935f73887a819eec570cb06cdb060cd13a4bd11773f8a8f02b4\"" Jul 12 00:07:46.673251 kubelet[2875]: W0712 00:07:46.673100 2875 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.25:6443: connect: connection refused Jul 12 00:07:46.673251 kubelet[2875]: E0712 00:07:46.673180 2875 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.25:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:46.679843 containerd[2015]: time="2025-07-12T00:07:46.679768702Z" level=info msg="CreateContainer within sandbox \"b1e5399f1eb14935f73887a819eec570cb06cdb060cd13a4bd11773f8a8f02b4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:07:46.718027 containerd[2015]: time="2025-07-12T00:07:46.714971122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-25,Uid:41717711fc286e3b5cc768732cc682cf,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"7f146c48427e99f75023d7511de0287adf0694c7ea632f5f40920bcdc31a99de\"" Jul 12 00:07:46.726779 containerd[2015]: time="2025-07-12T00:07:46.726583355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-25,Uid:4333b0c8eea339c4d62e835b2231f463,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1e6dce25b5a417a9f60f1ea34e4c59034e7d5833077ab3a572e23d0469c3597\"" Jul 12 00:07:46.739079 containerd[2015]: time="2025-07-12T00:07:46.738953219Z" level=info msg="CreateContainer within sandbox \"a1e6dce25b5a417a9f60f1ea34e4c59034e7d5833077ab3a572e23d0469c3597\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:07:46.742499 containerd[2015]: time="2025-07-12T00:07:46.742085159Z" level=info msg="CreateContainer within sandbox \"7f146c48427e99f75023d7511de0287adf0694c7ea632f5f40920bcdc31a99de\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:07:46.752699 update_engine[1994]: I20250712 00:07:46.752578 1994 update_attempter.cc:509] Updating boot flags... Jul 12 00:07:46.764332 containerd[2015]: time="2025-07-12T00:07:46.764128883Z" level=info msg="CreateContainer within sandbox \"b1e5399f1eb14935f73887a819eec570cb06cdb060cd13a4bd11773f8a8f02b4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"978c2be460bbd59ca4b8417a7f11a3502413100f40ae3614c28d8cc56cae6b53\"" Jul 12 00:07:46.767515 containerd[2015]: time="2025-07-12T00:07:46.766747391Z" level=info msg="StartContainer for \"978c2be460bbd59ca4b8417a7f11a3502413100f40ae3614c28d8cc56cae6b53\"" Jul 12 00:07:46.796177 kubelet[2875]: I0712 00:07:46.795547 2875 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-25" Jul 12 00:07:46.796177 kubelet[2875]: E0712 00:07:46.796115 2875 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.25:6443/api/v1/nodes\": dial tcp 172.31.18.25:6443: connect: connection refused" node="ip-172-31-18-25" Jul 12 00:07:46.821756 containerd[2015]: time="2025-07-12T00:07:46.821691719Z" level=info msg="CreateContainer within sandbox \"7f146c48427e99f75023d7511de0287adf0694c7ea632f5f40920bcdc31a99de\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c3e15db1bd49db703b307164011712dd007c8f5e17d5d3f62450f626863ea69a\"" Jul 12 00:07:46.823177 containerd[2015]: time="2025-07-12T00:07:46.823099727Z" level=info msg="StartContainer for \"c3e15db1bd49db703b307164011712dd007c8f5e17d5d3f62450f626863ea69a\"" Jul 12 00:07:46.835408 containerd[2015]: time="2025-07-12T00:07:46.835174835Z" level=info msg="CreateContainer within sandbox \"a1e6dce25b5a417a9f60f1ea34e4c59034e7d5833077ab3a572e23d0469c3597\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d95737530ed11b31399aa5568ad4f892e8dac78f660007eecb691daece3da67e\"" Jul 12 00:07:46.838621 containerd[2015]: time="2025-07-12T00:07:46.837945635Z" level=info msg="StartContainer for \"d95737530ed11b31399aa5568ad4f892e8dac78f660007eecb691daece3da67e\"" Jul 12 00:07:46.876093 systemd[1]: Started cri-containerd-978c2be460bbd59ca4b8417a7f11a3502413100f40ae3614c28d8cc56cae6b53.scope - libcontainer container 978c2be460bbd59ca4b8417a7f11a3502413100f40ae3614c28d8cc56cae6b53. 
Jul 12 00:07:46.913801 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3071) Jul 12 00:07:46.993954 systemd[1]: Started cri-containerd-c3e15db1bd49db703b307164011712dd007c8f5e17d5d3f62450f626863ea69a.scope - libcontainer container c3e15db1bd49db703b307164011712dd007c8f5e17d5d3f62450f626863ea69a. Jul 12 00:07:47.083957 containerd[2015]: time="2025-07-12T00:07:47.083628200Z" level=info msg="StartContainer for \"978c2be460bbd59ca4b8417a7f11a3502413100f40ae3614c28d8cc56cae6b53\" returns successfully" Jul 12 00:07:47.084682 systemd[1]: Started cri-containerd-d95737530ed11b31399aa5568ad4f892e8dac78f660007eecb691daece3da67e.scope - libcontainer container d95737530ed11b31399aa5568ad4f892e8dac78f660007eecb691daece3da67e. Jul 12 00:07:47.163546 kubelet[2875]: E0712 00:07:47.162673 2875 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.25:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:47.251603 containerd[2015]: time="2025-07-12T00:07:47.245335497Z" level=info msg="StartContainer for \"c3e15db1bd49db703b307164011712dd007c8f5e17d5d3f62450f626863ea69a\" returns successfully" Jul 12 00:07:47.271043 kubelet[2875]: E0712 00:07:47.271003 2875 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-25\" not found" node="ip-172-31-18-25" Jul 12 00:07:47.314977 kubelet[2875]: E0712 00:07:47.313019 2875 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-25\" not found" node="ip-172-31-18-25" Jul 12 00:07:47.498229 containerd[2015]: time="2025-07-12T00:07:47.498093166Z" level=info msg="StartContainer for \"d95737530ed11b31399aa5568ad4f892e8dac78f660007eecb691daece3da67e\" returns successfully" Jul 12 00:07:48.346438 kubelet[2875]: E0712 00:07:48.346384 2875 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-25\" not found" node="ip-172-31-18-25" Jul 12 00:07:48.350472 kubelet[2875]: E0712 00:07:48.349507 2875 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-25\" not found" node="ip-172-31-18-25" Jul 12 00:07:48.398935 kubelet[2875]: I0712 00:07:48.398872 2875 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-25" Jul 12 00:07:49.348053 kubelet[2875]: E0712 00:07:49.347982 2875 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-25\" not found" node="ip-172-31-18-25" Jul 12 00:07:50.348783 kubelet[2875]: E0712 00:07:50.348726 2875 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-25\" not found" node="ip-172-31-18-25" Jul 12 00:07:51.598225 kubelet[2875]: E0712 00:07:51.598148 2875 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-25\" not found" node="ip-172-31-18-25" Jul 12 00:07:51.605601 kubelet[2875]: I0712 00:07:51.605535 2875 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-25" Jul 12 00:07:51.605601 kubelet[2875]: E0712 
00:07:51.605601 2875 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-18-25\": node \"ip-172-31-18-25\" not found" Jul 12 00:07:51.671480 kubelet[2875]: I0712 00:07:51.670971 2875 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-25" Jul 12 00:07:51.774728 kubelet[2875]: E0712 00:07:51.774673 2875 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-25\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-18-25" Jul 12 00:07:51.775128 kubelet[2875]: I0712 00:07:51.774967 2875 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-25" Jul 12 00:07:51.788760 kubelet[2875]: E0712 00:07:51.788695 2875 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-25\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-18-25" Jul 12 00:07:51.788760 kubelet[2875]: I0712 00:07:51.788752 2875 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-25" Jul 12 00:07:51.794203 kubelet[2875]: E0712 00:07:51.794146 2875 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-25\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-18-25" Jul 12 00:07:52.044491 kubelet[2875]: I0712 00:07:52.043625 2875 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-25" Jul 12 00:07:52.048232 kubelet[2875]: E0712 00:07:52.048161 2875 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-25\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-18-25" Jul 12 00:07:52.154274 kubelet[2875]: I0712 00:07:52.154194 2875 apiserver.go:52] "Watching apiserver" Jul 12 00:07:52.173414 kubelet[2875]: I0712 00:07:52.173341 2875 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:07:53.994408 systemd[1]: Reloading requested from client PID 3244 ('systemctl') (unit session-9.scope)... Jul 12 00:07:53.994462 systemd[1]: Reloading... Jul 12 00:07:54.251508 zram_generator::config[3290]: No configuration found. Jul 12 00:07:54.536785 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:07:54.771118 systemd[1]: Reloading finished in 776 ms. Jul 12 00:07:54.859379 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:54.875730 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:07:54.876226 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:54.876340 systemd[1]: kubelet.service: Consumed 2.962s CPU time, 130.0M memory peak, 0B memory swap peak. Jul 12 00:07:54.886138 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:55.271830 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
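
[Editor's aside] "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" is a startup race, not a persistent misconfiguration: that PriorityClass is created by the apiserver's own bootstrap logic shortly after it becomes healthy, after which mirror pod creation for the static pods succeeds. A small client-go probe for the class; the kubeconfig path here is an assumption, not from the log:

    // Sketch: check whether the bootstrap PriorityClass exists yet.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pc, err := cs.SchedulingV1().PriorityClasses().Get(
            context.TODO(), "system-node-critical", metav1.GetOptions{})
        if err != nil {
            // A NotFound here reproduces the race in the log above.
            log.Fatal(err)
        }
        fmt.Printf("%s exists with value %d\n", pc.Name, pc.Value)
    }
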
Jul 12 00:07:55.282100 (kubelet)[3344]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:07:55.407919 kubelet[3344]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:07:55.407919 kubelet[3344]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 00:07:55.407919 kubelet[3344]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:07:55.408676 kubelet[3344]: I0712 00:07:55.407787 3344 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:07:55.434520 kubelet[3344]: I0712 00:07:55.434412 3344 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 12 00:07:55.434520 kubelet[3344]: I0712 00:07:55.434509 3344 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:07:55.435121 kubelet[3344]: I0712 00:07:55.435064 3344 server.go:954] "Client rotation is on, will bootstrap in background" Jul 12 00:07:55.439680 kubelet[3344]: I0712 00:07:55.437959 3344 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 12 00:07:55.443234 kubelet[3344]: I0712 00:07:55.443161 3344 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:07:55.452522 kubelet[3344]: E0712 00:07:55.452051 3344 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:07:55.452522 kubelet[3344]: I0712 00:07:55.452123 3344 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:07:55.459428 kubelet[3344]: I0712 00:07:55.459331 3344 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:07:55.460020 kubelet[3344]: I0712 00:07:55.459936 3344 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:07:55.460813 kubelet[3344]: I0712 00:07:55.460008 3344 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-25","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:07:55.460813 kubelet[3344]: I0712 00:07:55.460373 3344 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:07:55.460813 kubelet[3344]: I0712 00:07:55.460400 3344 container_manager_linux.go:304] "Creating device plugin manager" Jul 12 00:07:55.460813 kubelet[3344]: I0712 00:07:55.460527 3344 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:07:55.463106 kubelet[3344]: I0712 00:07:55.462565 3344 kubelet.go:446] "Attempting to sync node with API server" Jul 12 00:07:55.463279 kubelet[3344]: I0712 00:07:55.463135 3344 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:07:55.463279 kubelet[3344]: I0712 00:07:55.463177 3344 kubelet.go:352] "Adding apiserver pod source" Jul 12 00:07:55.463279 kubelet[3344]: I0712 00:07:55.463201 3344 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:07:55.471681 kubelet[3344]: I0712 00:07:55.471611 3344 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:07:55.474571 kubelet[3344]: I0712 00:07:55.474522 3344 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:07:55.480504 kubelet[3344]: I0712 00:07:55.476654 3344 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:07:55.480504 kubelet[3344]: I0712 00:07:55.476741 3344 server.go:1287] "Started kubelet" Jul 12 00:07:55.496520 kubelet[3344]: I0712 00:07:55.493851 3344 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:07:55.504954 kubelet[3344]: I0712 00:07:55.504779 3344 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Jul 12 00:07:55.507485 kubelet[3344]: I0712 00:07:55.507404 3344 server.go:479] "Adding debug handlers to kubelet server" Jul 12 00:07:55.509671 kubelet[3344]: I0712 00:07:55.509567 3344 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:07:55.510216 kubelet[3344]: I0712 00:07:55.510157 3344 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:07:55.511751 kubelet[3344]: I0712 00:07:55.510662 3344 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:07:55.515344 kubelet[3344]: I0712 00:07:55.514060 3344 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:07:55.515344 kubelet[3344]: E0712 00:07:55.514499 3344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-25\" not found" Jul 12 00:07:55.516551 kubelet[3344]: I0712 00:07:55.515688 3344 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:07:55.516551 kubelet[3344]: I0712 00:07:55.515934 3344 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:07:55.548239 kubelet[3344]: I0712 00:07:55.548071 3344 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:07:55.548239 kubelet[3344]: I0712 00:07:55.548118 3344 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:07:55.549140 kubelet[3344]: I0712 00:07:55.548306 3344 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:07:55.600050 kubelet[3344]: I0712 00:07:55.599969 3344 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:07:55.608913 kubelet[3344]: I0712 00:07:55.608844 3344 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:07:55.609263 kubelet[3344]: I0712 00:07:55.609113 3344 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 12 00:07:55.609263 kubelet[3344]: I0712 00:07:55.609157 3344 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 12 00:07:55.609263 kubelet[3344]: I0712 00:07:55.609172 3344 kubelet.go:2382] "Starting kubelet main sync loop" Jul 12 00:07:55.609532 kubelet[3344]: E0712 00:07:55.609275 3344 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:07:55.709802 kubelet[3344]: E0712 00:07:55.709701 3344 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:07:55.785159 kubelet[3344]: I0712 00:07:55.782347 3344 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:07:55.785159 kubelet[3344]: I0712 00:07:55.782395 3344 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:07:55.785159 kubelet[3344]: I0712 00:07:55.782439 3344 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:07:55.785159 kubelet[3344]: I0712 00:07:55.783235 3344 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:07:55.785159 kubelet[3344]: I0712 00:07:55.783267 3344 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:07:55.785159 kubelet[3344]: I0712 00:07:55.783306 3344 policy_none.go:49] "None policy: Start" Jul 12 00:07:55.785159 kubelet[3344]: I0712 00:07:55.783330 3344 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:07:55.785159 kubelet[3344]: I0712 00:07:55.783357 3344 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:07:55.785159 kubelet[3344]: I0712 00:07:55.783615 3344 state_mem.go:75] "Updated machine memory state" Jul 12 00:07:55.797234 kubelet[3344]: I0712 00:07:55.797143 3344 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:07:55.800995 kubelet[3344]: I0712 00:07:55.800576 3344 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:07:55.803300 kubelet[3344]: I0712 00:07:55.803200 3344 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:07:55.808143 kubelet[3344]: I0712 00:07:55.808065 3344 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:07:55.818998 kubelet[3344]: E0712 00:07:55.817359 3344 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 12 00:07:55.911106 kubelet[3344]: I0712 00:07:55.911022 3344 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-25" Jul 12 00:07:55.915431 kubelet[3344]: I0712 00:07:55.913499 3344 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-25" Jul 12 00:07:55.915431 kubelet[3344]: I0712 00:07:55.913499 3344 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-25" Jul 12 00:07:55.932145 kubelet[3344]: I0712 00:07:55.929151 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5c05e2c05bcdba019dcb7a8b46f9ea24-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-25\" (UID: \"5c05e2c05bcdba019dcb7a8b46f9ea24\") " pod="kube-system/kube-controller-manager-ip-172-31-18-25" Jul 12 00:07:55.932145 kubelet[3344]: I0712 00:07:55.929244 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c05e2c05bcdba019dcb7a8b46f9ea24-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-25\" (UID: \"5c05e2c05bcdba019dcb7a8b46f9ea24\") " pod="kube-system/kube-controller-manager-ip-172-31-18-25" Jul 12 00:07:55.932145 kubelet[3344]: I0712 00:07:55.929293 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41717711fc286e3b5cc768732cc682cf-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-25\" (UID: \"41717711fc286e3b5cc768732cc682cf\") " pod="kube-system/kube-apiserver-ip-172-31-18-25" Jul 12 00:07:55.932145 kubelet[3344]: I0712 00:07:55.929341 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41717711fc286e3b5cc768732cc682cf-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-25\" (UID: \"41717711fc286e3b5cc768732cc682cf\") " pod="kube-system/kube-apiserver-ip-172-31-18-25" Jul 12 00:07:55.932145 kubelet[3344]: I0712 00:07:55.929383 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c05e2c05bcdba019dcb7a8b46f9ea24-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-25\" (UID: \"5c05e2c05bcdba019dcb7a8b46f9ea24\") " pod="kube-system/kube-controller-manager-ip-172-31-18-25" Jul 12 00:07:55.932550 kubelet[3344]: I0712 00:07:55.929419 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5c05e2c05bcdba019dcb7a8b46f9ea24-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-25\" (UID: \"5c05e2c05bcdba019dcb7a8b46f9ea24\") " pod="kube-system/kube-controller-manager-ip-172-31-18-25" Jul 12 00:07:55.932550 kubelet[3344]: I0712 00:07:55.929478 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c05e2c05bcdba019dcb7a8b46f9ea24-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-25\" (UID: \"5c05e2c05bcdba019dcb7a8b46f9ea24\") " pod="kube-system/kube-controller-manager-ip-172-31-18-25" Jul 12 00:07:55.932550 kubelet[3344]: I0712 00:07:55.929520 3344 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4333b0c8eea339c4d62e835b2231f463-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-25\" (UID: \"4333b0c8eea339c4d62e835b2231f463\") " pod="kube-system/kube-scheduler-ip-172-31-18-25" Jul 12 00:07:55.932550 kubelet[3344]: I0712 00:07:55.929559 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41717711fc286e3b5cc768732cc682cf-ca-certs\") pod \"kube-apiserver-ip-172-31-18-25\" (UID: \"41717711fc286e3b5cc768732cc682cf\") " pod="kube-system/kube-apiserver-ip-172-31-18-25" Jul 12 00:07:55.935123 kubelet[3344]: I0712 00:07:55.935067 3344 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-25" Jul 12 00:07:55.955697 kubelet[3344]: I0712 00:07:55.955624 3344 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-18-25" Jul 12 00:07:55.955835 kubelet[3344]: I0712 00:07:55.955768 3344 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-25" Jul 12 00:07:56.492747 kubelet[3344]: I0712 00:07:56.492689 3344 apiserver.go:52] "Watching apiserver" Jul 12 00:07:56.516749 kubelet[3344]: I0712 00:07:56.516649 3344 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:07:56.707649 kubelet[3344]: I0712 00:07:56.707597 3344 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-25" Jul 12 00:07:56.728394 kubelet[3344]: E0712 00:07:56.728320 3344 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-25\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-25" Jul 12 00:07:56.834045 kubelet[3344]: I0712 00:07:56.833822 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-25" podStartSLOduration=1.833797737 podStartE2EDuration="1.833797737s" podCreationTimestamp="2025-07-12 00:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:07:56.798747285 +0000 UTC m=+1.505588997" watchObservedRunningTime="2025-07-12 00:07:56.833797737 +0000 UTC m=+1.540639437" Jul 12 00:07:56.897582 kubelet[3344]: I0712 00:07:56.896076 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-25" podStartSLOduration=1.896053245 podStartE2EDuration="1.896053245s" podCreationTimestamp="2025-07-12 00:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:07:56.848242473 +0000 UTC m=+1.555084161" watchObservedRunningTime="2025-07-12 00:07:56.896053245 +0000 UTC m=+1.602894945" Jul 12 00:07:56.963155 kubelet[3344]: I0712 00:07:56.963036 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-25" podStartSLOduration=1.963011649 podStartE2EDuration="1.963011649s" podCreationTimestamp="2025-07-12 00:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:07:56.899175837 +0000 UTC m=+1.606017537" watchObservedRunningTime="2025-07-12 00:07:56.963011649 +0000 UTC m=+1.669853349" Jul 12 00:07:58.607952 
kubelet[3344]: I0712 00:07:58.607900 3344 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:07:58.608971 containerd[2015]: time="2025-07-12T00:07:58.608912614Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 12 00:07:58.609712 kubelet[3344]: I0712 00:07:58.609350 3344 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 00:07:59.538164 systemd[1]: Created slice kubepods-besteffort-podafd13df5_c30e_4d29_a94f_ac585741a8ce.slice - libcontainer container kubepods-besteffort-podafd13df5_c30e_4d29_a94f_ac585741a8ce.slice. Jul 12 00:07:59.555498 kubelet[3344]: I0712 00:07:59.555167 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afd13df5-c30e-4d29-a94f-ac585741a8ce-xtables-lock\") pod \"kube-proxy-mzxc8\" (UID: \"afd13df5-c30e-4d29-a94f-ac585741a8ce\") " pod="kube-system/kube-proxy-mzxc8" Jul 12 00:07:59.555498 kubelet[3344]: I0712 00:07:59.555237 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/afd13df5-c30e-4d29-a94f-ac585741a8ce-kube-proxy\") pod \"kube-proxy-mzxc8\" (UID: \"afd13df5-c30e-4d29-a94f-ac585741a8ce\") " pod="kube-system/kube-proxy-mzxc8" Jul 12 00:07:59.555498 kubelet[3344]: I0712 00:07:59.555278 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afd13df5-c30e-4d29-a94f-ac585741a8ce-lib-modules\") pod \"kube-proxy-mzxc8\" (UID: \"afd13df5-c30e-4d29-a94f-ac585741a8ce\") " pod="kube-system/kube-proxy-mzxc8" Jul 12 00:07:59.555498 kubelet[3344]: I0712 00:07:59.555326 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzzck\" (UniqueName: \"kubernetes.io/projected/afd13df5-c30e-4d29-a94f-ac585741a8ce-kube-api-access-pzzck\") pod \"kube-proxy-mzxc8\" (UID: \"afd13df5-c30e-4d29-a94f-ac585741a8ce\") " pod="kube-system/kube-proxy-mzxc8" Jul 12 00:07:59.683229 systemd[1]: Created slice kubepods-besteffort-pod9ad41972_0ab9_4f16_9db0_2adc1223d608.slice - libcontainer container kubepods-besteffort-pod9ad41972_0ab9_4f16_9db0_2adc1223d608.slice. 
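The slice names systemd reports just above follow a fixed pattern: the literal prefix kubepods, the pod's QoS class (besteffort here), and the pod UID with its dashes escaped to underscores so the UID forms a valid systemd unit-name component. A minimal sketch of that mapping, inferred from the names in this log (hypothetical helper, not the kubelet's own cgroup-manager code):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName reconstructs names like
    // kubepods-besteffort-podafd13df5_c30e_4d29_a94f_ac585741a8ce.slice
    // from a pod's QoS class and UID, matching the entries above.
    func sliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice",
            qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("besteffort", "afd13df5-c30e-4d29-a94f-ac585741a8ce"))
        fmt.Println(sliceName("besteffort", "9ad41972-0ab9-4f16-9db0-2adc1223d608"))
    }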
Jul 12 00:07:59.757668 kubelet[3344]: I0712 00:07:59.757534 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9ad41972-0ab9-4f16-9db0-2adc1223d608-var-lib-calico\") pod \"tigera-operator-747864d56d-vcjjv\" (UID: \"9ad41972-0ab9-4f16-9db0-2adc1223d608\") " pod="tigera-operator/tigera-operator-747864d56d-vcjjv" Jul 12 00:07:59.757668 kubelet[3344]: I0712 00:07:59.757600 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxtcp\" (UniqueName: \"kubernetes.io/projected/9ad41972-0ab9-4f16-9db0-2adc1223d608-kube-api-access-pxtcp\") pod \"tigera-operator-747864d56d-vcjjv\" (UID: \"9ad41972-0ab9-4f16-9db0-2adc1223d608\") " pod="tigera-operator/tigera-operator-747864d56d-vcjjv" Jul 12 00:07:59.853683 containerd[2015]: time="2025-07-12T00:07:59.853537860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mzxc8,Uid:afd13df5-c30e-4d29-a94f-ac585741a8ce,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:59.916243 containerd[2015]: time="2025-07-12T00:07:59.916001268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:59.916243 containerd[2015]: time="2025-07-12T00:07:59.916094244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:59.916243 containerd[2015]: time="2025-07-12T00:07:59.916121172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:59.916977 containerd[2015]: time="2025-07-12T00:07:59.916401084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:59.973803 systemd[1]: Started cri-containerd-53fdb524e5b877406c0662a946499bbfb8e16ff110666d553d8aa49e08a05535.scope - libcontainer container 53fdb524e5b877406c0662a946499bbfb8e16ff110666d553d8aa49e08a05535. Jul 12 00:07:59.990246 containerd[2015]: time="2025-07-12T00:07:59.990168696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-vcjjv,Uid:9ad41972-0ab9-4f16-9db0-2adc1223d608,Namespace:tigera-operator,Attempt:0,}" Jul 12 00:08:00.030152 containerd[2015]: time="2025-07-12T00:08:00.029687229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mzxc8,Uid:afd13df5-c30e-4d29-a94f-ac585741a8ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"53fdb524e5b877406c0662a946499bbfb8e16ff110666d553d8aa49e08a05535\"" Jul 12 00:08:00.038044 containerd[2015]: time="2025-07-12T00:08:00.037723713Z" level=info msg="CreateContainer within sandbox \"53fdb524e5b877406c0662a946499bbfb8e16ff110666d553d8aa49e08a05535\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:08:00.056854 containerd[2015]: time="2025-07-12T00:08:00.049089789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:00.056854 containerd[2015]: time="2025-07-12T00:08:00.049435905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:00.056854 containerd[2015]: time="2025-07-12T00:08:00.049544493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:00.056854 containerd[2015]: time="2025-07-12T00:08:00.049928253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:00.080816 containerd[2015]: time="2025-07-12T00:08:00.080702241Z" level=info msg="CreateContainer within sandbox \"53fdb524e5b877406c0662a946499bbfb8e16ff110666d553d8aa49e08a05535\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3e55670ea534bf0b14b2f45bd70f3dd2fa1c99dd7504481ba6326f7cb31e7794\"" Jul 12 00:08:00.085798 containerd[2015]: time="2025-07-12T00:08:00.085302885Z" level=info msg="StartContainer for \"3e55670ea534bf0b14b2f45bd70f3dd2fa1c99dd7504481ba6326f7cb31e7794\"" Jul 12 00:08:00.097800 systemd[1]: Started cri-containerd-aea4347560666649834cb372eb9ca94fbdf8b79c1ad222ce0bc9c852f9c2fec8.scope - libcontainer container aea4347560666649834cb372eb9ca94fbdf8b79c1ad222ce0bc9c852f9c2fec8. Jul 12 00:08:00.159150 systemd[1]: Started cri-containerd-3e55670ea534bf0b14b2f45bd70f3dd2fa1c99dd7504481ba6326f7cb31e7794.scope - libcontainer container 3e55670ea534bf0b14b2f45bd70f3dd2fa1c99dd7504481ba6326f7cb31e7794. Jul 12 00:08:00.225417 containerd[2015]: time="2025-07-12T00:08:00.225326902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-vcjjv,Uid:9ad41972-0ab9-4f16-9db0-2adc1223d608,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"aea4347560666649834cb372eb9ca94fbdf8b79c1ad222ce0bc9c852f9c2fec8\"" Jul 12 00:08:00.229479 containerd[2015]: time="2025-07-12T00:08:00.229139674Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 12 00:08:00.252706 containerd[2015]: time="2025-07-12T00:08:00.252641770Z" level=info msg="StartContainer for \"3e55670ea534bf0b14b2f45bd70f3dd2fa1c99dd7504481ba6326f7cb31e7794\" returns successfully" Jul 12 00:08:00.708628 systemd[1]: run-containerd-runc-k8s.io-53fdb524e5b877406c0662a946499bbfb8e16ff110666d553d8aa49e08a05535-runc.TcdFOY.mount: Deactivated successfully. Jul 12 00:08:01.484552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2611122502.mount: Deactivated successfully. 
Jul 12 00:08:02.297615 containerd[2015]: time="2025-07-12T00:08:02.297514080Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:02.299594 containerd[2015]: time="2025-07-12T00:08:02.299505336Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 12 00:08:02.301863 containerd[2015]: time="2025-07-12T00:08:02.301754172Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:02.307030 containerd[2015]: time="2025-07-12T00:08:02.306946836Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:02.309362 containerd[2015]: time="2025-07-12T00:08:02.309101580Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.079880414s" Jul 12 00:08:02.309362 containerd[2015]: time="2025-07-12T00:08:02.309193452Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 12 00:08:02.315212 containerd[2015]: time="2025-07-12T00:08:02.314767776Z" level=info msg="CreateContainer within sandbox \"aea4347560666649834cb372eb9ca94fbdf8b79c1ad222ce0bc9c852f9c2fec8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 12 00:08:02.354492 containerd[2015]: time="2025-07-12T00:08:02.354213876Z" level=info msg="CreateContainer within sandbox \"aea4347560666649834cb372eb9ca94fbdf8b79c1ad222ce0bc9c852f9c2fec8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8b331184649e5b24885401cca9ae64c1b9bdb9f711bfde7c50d4aa01c26d2c26\"" Jul 12 00:08:02.358677 containerd[2015]: time="2025-07-12T00:08:02.358333416Z" level=info msg="StartContainer for \"8b331184649e5b24885401cca9ae64c1b9bdb9f711bfde7c50d4aa01c26d2c26\"" Jul 12 00:08:02.420685 systemd[1]: run-containerd-runc-k8s.io-8b331184649e5b24885401cca9ae64c1b9bdb9f711bfde7c50d4aa01c26d2c26-runc.3TBg84.mount: Deactivated successfully. Jul 12 00:08:02.431830 systemd[1]: Started cri-containerd-8b331184649e5b24885401cca9ae64c1b9bdb9f711bfde7c50d4aa01c26d2c26.scope - libcontainer container 8b331184649e5b24885401cca9ae64c1b9bdb9f711bfde7c50d4aa01c26d2c26. 
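The pull above records the same image twice: once under its repo tag (quay.io/tigera/operator:v1.38.3) and once pinned by repo digest (@sha256:dbf1bad0...), which is the content-addressed form containerd resolves the tag to. A simplified sketch of splitting such a reference into name, tag, and digest; real references follow the full distribution grammar (registry ports, nested repos), so this is illustrative only:

    package main

    import (
        "fmt"
        "strings"
    )

    // splitRef naively splits "name:tag" / "name@algo:digest" references.
    func splitRef(ref string) (name, tag, digest string) {
        if at := strings.Index(ref, "@"); at != -1 {
            ref, digest = ref[:at], ref[at+1:]
        }
        if c := strings.LastIndex(ref, ":"); c != -1 && !strings.Contains(ref[c:], "/") {
            ref, tag = ref[:c], ref[c+1:]
        }
        return ref, tag, digest
    }

    func main() {
        fmt.Println(splitRef("quay.io/tigera/operator:v1.38.3"))
        fmt.Println(splitRef("quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121"))
    }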
Jul 12 00:08:02.489112 containerd[2015]: time="2025-07-12T00:08:02.488923381Z" level=info msg="StartContainer for \"8b331184649e5b24885401cca9ae64c1b9bdb9f711bfde7c50d4aa01c26d2c26\" returns successfully" Jul 12 00:08:02.744307 kubelet[3344]: I0712 00:08:02.743817 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mzxc8" podStartSLOduration=3.74379353 podStartE2EDuration="3.74379353s" podCreationTimestamp="2025-07-12 00:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:00.774784788 +0000 UTC m=+5.481626476" watchObservedRunningTime="2025-07-12 00:08:02.74379353 +0000 UTC m=+7.450635230" Jul 12 00:08:05.123496 kubelet[3344]: I0712 00:08:05.123191 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-vcjjv" podStartSLOduration=4.03965676 podStartE2EDuration="6.123168506s" podCreationTimestamp="2025-07-12 00:07:59 +0000 UTC" firstStartedPulling="2025-07-12 00:08:00.228115798 +0000 UTC m=+4.934957486" lastFinishedPulling="2025-07-12 00:08:02.311627556 +0000 UTC m=+7.018469232" observedRunningTime="2025-07-12 00:08:02.745644278 +0000 UTC m=+7.452485966" watchObservedRunningTime="2025-07-12 00:08:05.123168506 +0000 UTC m=+9.830010182" Jul 12 00:08:09.385560 sudo[2347]: pam_unix(sudo:session): session closed for user root Jul 12 00:08:09.411722 sshd[2342]: pam_unix(sshd:session): session closed for user core Jul 12 00:08:09.422034 systemd-logind[1993]: Session 9 logged out. Waiting for processes to exit. Jul 12 00:08:09.424872 systemd[1]: sshd@8-172.31.18.25:22-139.178.89.65:60542.service: Deactivated successfully. Jul 12 00:08:09.437654 systemd[1]: session-9.scope: Deactivated successfully. Jul 12 00:08:09.438012 systemd[1]: session-9.scope: Consumed 12.813s CPU time, 150.8M memory peak, 0B memory swap peak. Jul 12 00:08:09.440732 systemd-logind[1993]: Removed session 9. Jul 12 00:08:24.049785 systemd[1]: Created slice kubepods-besteffort-podf4f90b7d_b721_4dc8_a40f_9f8d5c52bdde.slice - libcontainer container kubepods-besteffort-podf4f90b7d_b721_4dc8_a40f_9f8d5c52bdde.slice. 
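The startup-latency entries above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). When the pull timestamps are the zero sentinel (0001-01-01), as for kube-proxy, the two durations are equal. A worked check against the tigera-operator entry, using the monotonic m=+ offsets it logs:

    package main

    import "fmt"

    func main() {
        // Monotonic offsets ("m=+...") from the tigera-operator entry above.
        firstStartedPulling := 4.934957486
        lastFinishedPulling := 7.018469232
        e2e := 6.123168506 // watchObservedRunningTime - podCreationTimestamp

        pull := lastFinishedPulling - firstStartedPulling
        slo := e2e - pull
        fmt.Printf("pull=%.9fs slo=%.9fs\n", pull, slo)
        // slo=4.039656760s, matching podStartSLOduration=4.03965676.
    }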
Jul 12 00:08:24.127198 kubelet[3344]: I0712 00:08:24.126983 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4f90b7d-b721-4dc8-a40f-9f8d5c52bdde-tigera-ca-bundle\") pod \"calico-typha-fdc848d69-hqs5h\" (UID: \"f4f90b7d-b721-4dc8-a40f-9f8d5c52bdde\") " pod="calico-system/calico-typha-fdc848d69-hqs5h" Jul 12 00:08:24.127198 kubelet[3344]: I0712 00:08:24.127052 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f4f90b7d-b721-4dc8-a40f-9f8d5c52bdde-typha-certs\") pod \"calico-typha-fdc848d69-hqs5h\" (UID: \"f4f90b7d-b721-4dc8-a40f-9f8d5c52bdde\") " pod="calico-system/calico-typha-fdc848d69-hqs5h" Jul 12 00:08:24.127198 kubelet[3344]: I0712 00:08:24.127093 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm62k\" (UniqueName: \"kubernetes.io/projected/f4f90b7d-b721-4dc8-a40f-9f8d5c52bdde-kube-api-access-gm62k\") pod \"calico-typha-fdc848d69-hqs5h\" (UID: \"f4f90b7d-b721-4dc8-a40f-9f8d5c52bdde\") " pod="calico-system/calico-typha-fdc848d69-hqs5h" Jul 12 00:08:24.358734 containerd[2015]: time="2025-07-12T00:08:24.358669209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fdc848d69-hqs5h,Uid:f4f90b7d-b721-4dc8-a40f-9f8d5c52bdde,Namespace:calico-system,Attempt:0,}" Jul 12 00:08:24.362204 kubelet[3344]: I0712 00:08:24.361775 3344 status_manager.go:890] "Failed to get status for pod" podUID="70533d21-eef5-40bb-b613-c7acefe4e521" pod="calico-system/calico-node-dczhx" err="pods \"calico-node-dczhx\" is forbidden: User \"system:node:ip-172-31-18-25\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-18-25' and this object" Jul 12 00:08:24.368591 systemd[1]: Created slice kubepods-besteffort-pod70533d21_eef5_40bb_b613_c7acefe4e521.slice - libcontainer container kubepods-besteffort-pod70533d21_eef5_40bb_b613_c7acefe4e521.slice. 
Jul 12 00:08:24.430317 kubelet[3344]: I0712 00:08:24.429495 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/70533d21-eef5-40bb-b613-c7acefe4e521-cni-net-dir\") pod \"calico-node-dczhx\" (UID: \"70533d21-eef5-40bb-b613-c7acefe4e521\") " pod="calico-system/calico-node-dczhx" Jul 12 00:08:24.430317 kubelet[3344]: I0712 00:08:24.429588 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/70533d21-eef5-40bb-b613-c7acefe4e521-policysync\") pod \"calico-node-dczhx\" (UID: \"70533d21-eef5-40bb-b613-c7acefe4e521\") " pod="calico-system/calico-node-dczhx" Jul 12 00:08:24.430317 kubelet[3344]: I0712 00:08:24.429632 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70533d21-eef5-40bb-b613-c7acefe4e521-tigera-ca-bundle\") pod \"calico-node-dczhx\" (UID: \"70533d21-eef5-40bb-b613-c7acefe4e521\") " pod="calico-system/calico-node-dczhx" Jul 12 00:08:24.430317 kubelet[3344]: I0712 00:08:24.429671 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/70533d21-eef5-40bb-b613-c7acefe4e521-cni-bin-dir\") pod \"calico-node-dczhx\" (UID: \"70533d21-eef5-40bb-b613-c7acefe4e521\") " pod="calico-system/calico-node-dczhx" Jul 12 00:08:24.430317 kubelet[3344]: I0712 00:08:24.429711 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/70533d21-eef5-40bb-b613-c7acefe4e521-var-lib-calico\") pod \"calico-node-dczhx\" (UID: \"70533d21-eef5-40bb-b613-c7acefe4e521\") " pod="calico-system/calico-node-dczhx" Jul 12 00:08:24.430911 kubelet[3344]: I0712 00:08:24.429751 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70533d21-eef5-40bb-b613-c7acefe4e521-xtables-lock\") pod \"calico-node-dczhx\" (UID: \"70533d21-eef5-40bb-b613-c7acefe4e521\") " pod="calico-system/calico-node-dczhx" Jul 12 00:08:24.430911 kubelet[3344]: I0712 00:08:24.429792 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/70533d21-eef5-40bb-b613-c7acefe4e521-flexvol-driver-host\") pod \"calico-node-dczhx\" (UID: \"70533d21-eef5-40bb-b613-c7acefe4e521\") " pod="calico-system/calico-node-dczhx" Jul 12 00:08:24.430911 kubelet[3344]: I0712 00:08:24.429837 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/70533d21-eef5-40bb-b613-c7acefe4e521-cni-log-dir\") pod \"calico-node-dczhx\" (UID: \"70533d21-eef5-40bb-b613-c7acefe4e521\") " pod="calico-system/calico-node-dczhx" Jul 12 00:08:24.430911 kubelet[3344]: I0712 00:08:24.429900 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70533d21-eef5-40bb-b613-c7acefe4e521-lib-modules\") pod \"calico-node-dczhx\" (UID: \"70533d21-eef5-40bb-b613-c7acefe4e521\") " pod="calico-system/calico-node-dczhx" Jul 12 00:08:24.430911 kubelet[3344]: I0712 00:08:24.429938 3344 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/70533d21-eef5-40bb-b613-c7acefe4e521-var-run-calico\") pod \"calico-node-dczhx\" (UID: \"70533d21-eef5-40bb-b613-c7acefe4e521\") " pod="calico-system/calico-node-dczhx" Jul 12 00:08:24.431170 kubelet[3344]: I0712 00:08:24.429978 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfr9s\" (UniqueName: \"kubernetes.io/projected/70533d21-eef5-40bb-b613-c7acefe4e521-kube-api-access-nfr9s\") pod \"calico-node-dczhx\" (UID: \"70533d21-eef5-40bb-b613-c7acefe4e521\") " pod="calico-system/calico-node-dczhx" Jul 12 00:08:24.431170 kubelet[3344]: I0712 00:08:24.430020 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/70533d21-eef5-40bb-b613-c7acefe4e521-node-certs\") pod \"calico-node-dczhx\" (UID: \"70533d21-eef5-40bb-b613-c7acefe4e521\") " pod="calico-system/calico-node-dczhx" Jul 12 00:08:24.438508 containerd[2015]: time="2025-07-12T00:08:24.436867402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:24.438508 containerd[2015]: time="2025-07-12T00:08:24.436967314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:24.438508 containerd[2015]: time="2025-07-12T00:08:24.437043682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:24.439687 containerd[2015]: time="2025-07-12T00:08:24.439513222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:24.507373 systemd[1]: Started cri-containerd-93e0381c9dff5f62180eb35d93e71d877f3a2d06f22b69c1b49ebc9885af9174.scope - libcontainer container 93e0381c9dff5f62180eb35d93e71d877f3a2d06f22b69c1b49ebc9885af9174. Jul 12 00:08:24.536217 kubelet[3344]: E0712 00:08:24.535047 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.536217 kubelet[3344]: W0712 00:08:24.535112 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.536217 kubelet[3344]: E0712 00:08:24.535199 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.536217 kubelet[3344]: E0712 00:08:24.535788 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.536217 kubelet[3344]: W0712 00:08:24.535839 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.536217 kubelet[3344]: E0712 00:08:24.535870 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:24.538140 kubelet[3344]: E0712 00:08:24.537755 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.538140 kubelet[3344]: W0712 00:08:24.537798 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.538760 kubelet[3344]: E0712 00:08:24.538608 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.540484 kubelet[3344]: E0712 00:08:24.539572 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.540484 kubelet[3344]: W0712 00:08:24.539630 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.540887 kubelet[3344]: E0712 00:08:24.540834 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.542690 kubelet[3344]: E0712 00:08:24.542639 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.542690 kubelet[3344]: W0712 00:08:24.542677 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.543028 kubelet[3344]: E0712 00:08:24.542875 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.545060 kubelet[3344]: E0712 00:08:24.544717 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.545060 kubelet[3344]: W0712 00:08:24.544750 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.545060 kubelet[3344]: E0712 00:08:24.544964 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.545797 kubelet[3344]: E0712 00:08:24.545660 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.545797 kubelet[3344]: W0712 00:08:24.545690 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.545797 kubelet[3344]: E0712 00:08:24.545752 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:24.546609 kubelet[3344]: E0712 00:08:24.546438 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.546609 kubelet[3344]: W0712 00:08:24.546498 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.547020 kubelet[3344]: E0712 00:08:24.546835 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.548165 kubelet[3344]: E0712 00:08:24.547535 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.548606 kubelet[3344]: W0712 00:08:24.548326 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.548606 kubelet[3344]: E0712 00:08:24.548544 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.549350 kubelet[3344]: E0712 00:08:24.549207 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.549350 kubelet[3344]: W0712 00:08:24.549239 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.549845 kubelet[3344]: E0712 00:08:24.549635 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.550681 kubelet[3344]: E0712 00:08:24.550372 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.550681 kubelet[3344]: W0712 00:08:24.550404 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.550910 kubelet[3344]: E0712 00:08:24.550877 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.551311 kubelet[3344]: E0712 00:08:24.551284 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.551760 kubelet[3344]: W0712 00:08:24.551496 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.551760 kubelet[3344]: E0712 00:08:24.551572 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:24.552183 kubelet[3344]: E0712 00:08:24.552158 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.552289 kubelet[3344]: W0712 00:08:24.552265 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.552961 kubelet[3344]: E0712 00:08:24.552698 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.552961 kubelet[3344]: W0712 00:08:24.552723 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.552961 kubelet[3344]: E0712 00:08:24.552754 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.553549 kubelet[3344]: E0712 00:08:24.553519 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.553764 kubelet[3344]: W0712 00:08:24.553644 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.553764 kubelet[3344]: E0712 00:08:24.553679 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.553764 kubelet[3344]: E0712 00:08:24.553727 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.557561 kubelet[3344]: E0712 00:08:24.557382 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.557561 kubelet[3344]: W0712 00:08:24.557413 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.557561 kubelet[3344]: E0712 00:08:24.557441 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.575108 kubelet[3344]: E0712 00:08:24.574959 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.575108 kubelet[3344]: W0712 00:08:24.575003 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.575108 kubelet[3344]: E0712 00:08:24.575036 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:24.583848 kubelet[3344]: E0712 00:08:24.583383 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-748mg" podUID="955d28b9-9d88-48e7-9db2-62374412839c" Jul 12 00:08:24.604031 kubelet[3344]: E0712 00:08:24.603964 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.604438 kubelet[3344]: W0712 00:08:24.604380 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.604892 kubelet[3344]: E0712 00:08:24.604636 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.606265 kubelet[3344]: E0712 00:08:24.605845 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.606265 kubelet[3344]: W0712 00:08:24.605883 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.606265 kubelet[3344]: E0712 00:08:24.605961 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.608076 kubelet[3344]: E0712 00:08:24.607934 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.608076 kubelet[3344]: W0712 00:08:24.607994 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.608076 kubelet[3344]: E0712 00:08:24.608029 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.610159 kubelet[3344]: E0712 00:08:24.609089 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.610159 kubelet[3344]: W0712 00:08:24.609126 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.610159 kubelet[3344]: E0712 00:08:24.609186 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:24.613663 kubelet[3344]: E0712 00:08:24.613597 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.613663 kubelet[3344]: W0712 00:08:24.613642 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.613883 kubelet[3344]: E0712 00:08:24.613675 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.617014 kubelet[3344]: E0712 00:08:24.614854 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.617014 kubelet[3344]: W0712 00:08:24.614899 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.617014 kubelet[3344]: E0712 00:08:24.614933 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.620279 kubelet[3344]: E0712 00:08:24.618918 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.620279 kubelet[3344]: W0712 00:08:24.618960 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.620279 kubelet[3344]: E0712 00:08:24.618994 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.621643 kubelet[3344]: E0712 00:08:24.620675 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.621643 kubelet[3344]: W0712 00:08:24.620855 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.621643 kubelet[3344]: E0712 00:08:24.620894 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.623745 kubelet[3344]: E0712 00:08:24.623227 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.623745 kubelet[3344]: W0712 00:08:24.623268 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.623745 kubelet[3344]: E0712 00:08:24.623299 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:24.626126 kubelet[3344]: E0712 00:08:24.625790 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.626126 kubelet[3344]: W0712 00:08:24.625832 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.626126 kubelet[3344]: E0712 00:08:24.625871 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.627151 kubelet[3344]: E0712 00:08:24.626982 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.627151 kubelet[3344]: W0712 00:08:24.627024 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.627151 kubelet[3344]: E0712 00:08:24.627090 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.628171 kubelet[3344]: E0712 00:08:24.628049 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.628171 kubelet[3344]: W0712 00:08:24.628105 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.628581 kubelet[3344]: E0712 00:08:24.628136 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.631911 kubelet[3344]: E0712 00:08:24.631838 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.631911 kubelet[3344]: W0712 00:08:24.631883 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.631911 kubelet[3344]: E0712 00:08:24.631920 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.633265 kubelet[3344]: E0712 00:08:24.633213 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.633265 kubelet[3344]: W0712 00:08:24.633250 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.633613 kubelet[3344]: E0712 00:08:24.633283 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:24.635296 kubelet[3344]: E0712 00:08:24.635235 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.635478 kubelet[3344]: W0712 00:08:24.635276 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.635478 kubelet[3344]: E0712 00:08:24.635344 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.636832 kubelet[3344]: E0712 00:08:24.636779 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.636832 kubelet[3344]: W0712 00:08:24.636820 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.637163 kubelet[3344]: E0712 00:08:24.636853 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.638869 kubelet[3344]: E0712 00:08:24.638818 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.638869 kubelet[3344]: W0712 00:08:24.638857 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.639067 kubelet[3344]: E0712 00:08:24.638890 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.639309 kubelet[3344]: E0712 00:08:24.639267 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.639309 kubelet[3344]: W0712 00:08:24.639298 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.639549 kubelet[3344]: E0712 00:08:24.639324 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.640715 kubelet[3344]: E0712 00:08:24.640664 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.640715 kubelet[3344]: W0712 00:08:24.640705 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.640905 kubelet[3344]: E0712 00:08:24.640739 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:24.641206 kubelet[3344]: E0712 00:08:24.641170 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.641206 kubelet[3344]: W0712 00:08:24.641200 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.641350 kubelet[3344]: E0712 00:08:24.641226 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.642640 kubelet[3344]: E0712 00:08:24.642592 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.642640 kubelet[3344]: W0712 00:08:24.642633 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.642946 kubelet[3344]: E0712 00:08:24.642666 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.642946 kubelet[3344]: I0712 00:08:24.642724 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/955d28b9-9d88-48e7-9db2-62374412839c-registration-dir\") pod \"csi-node-driver-748mg\" (UID: \"955d28b9-9d88-48e7-9db2-62374412839c\") " pod="calico-system/csi-node-driver-748mg" Jul 12 00:08:24.644904 kubelet[3344]: E0712 00:08:24.644846 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.644904 kubelet[3344]: W0712 00:08:24.644894 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.646547 kubelet[3344]: E0712 00:08:24.644946 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.646547 kubelet[3344]: I0712 00:08:24.645014 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/955d28b9-9d88-48e7-9db2-62374412839c-socket-dir\") pod \"csi-node-driver-748mg\" (UID: \"955d28b9-9d88-48e7-9db2-62374412839c\") " pod="calico-system/csi-node-driver-748mg" Jul 12 00:08:24.646997 kubelet[3344]: E0712 00:08:24.646947 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.646997 kubelet[3344]: W0712 00:08:24.646993 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.647418 kubelet[3344]: E0712 00:08:24.647162 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:24.647418 kubelet[3344]: I0712 00:08:24.647235 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/955d28b9-9d88-48e7-9db2-62374412839c-kubelet-dir\") pod \"csi-node-driver-748mg\" (UID: \"955d28b9-9d88-48e7-9db2-62374412839c\") " pod="calico-system/csi-node-driver-748mg" Jul 12 00:08:24.648865 kubelet[3344]: E0712 00:08:24.648811 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.648865 kubelet[3344]: W0712 00:08:24.648853 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.649232 kubelet[3344]: E0712 00:08:24.649057 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.650731 kubelet[3344]: E0712 00:08:24.650677 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.650731 kubelet[3344]: W0712 00:08:24.650719 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.651655 kubelet[3344]: E0712 00:08:24.650886 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.651655 kubelet[3344]: E0712 00:08:24.651163 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.651655 kubelet[3344]: W0712 00:08:24.651183 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.651655 kubelet[3344]: E0712 00:08:24.651248 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.651655 kubelet[3344]: I0712 00:08:24.651589 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgttn\" (UniqueName: \"kubernetes.io/projected/955d28b9-9d88-48e7-9db2-62374412839c-kube-api-access-mgttn\") pod \"csi-node-driver-748mg\" (UID: \"955d28b9-9d88-48e7-9db2-62374412839c\") " pod="calico-system/csi-node-driver-748mg" Jul 12 00:08:24.652849 kubelet[3344]: E0712 00:08:24.652797 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.652849 kubelet[3344]: W0712 00:08:24.652836 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.653175 kubelet[3344]: E0712 00:08:24.653004 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:24.654824 kubelet[3344]: E0712 00:08:24.654772 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.654824 kubelet[3344]: W0712 00:08:24.654810 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.655126 kubelet[3344]: E0712 00:08:24.654843 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.656204 kubelet[3344]: E0712 00:08:24.656145 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.656204 kubelet[3344]: W0712 00:08:24.656187 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.656204 kubelet[3344]: E0712 00:08:24.656233 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.657849 kubelet[3344]: E0712 00:08:24.657799 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.657849 kubelet[3344]: W0712 00:08:24.657840 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.658324 kubelet[3344]: E0712 00:08:24.657873 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.658629 kubelet[3344]: E0712 00:08:24.658584 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.658629 kubelet[3344]: W0712 00:08:24.658621 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.658849 kubelet[3344]: E0712 00:08:24.658653 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.661023 kubelet[3344]: E0712 00:08:24.660643 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.661023 kubelet[3344]: W0712 00:08:24.660682 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.661023 kubelet[3344]: E0712 00:08:24.660715 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:24.663316 kubelet[3344]: E0712 00:08:24.661611 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.663316 kubelet[3344]: W0712 00:08:24.661638 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.663316 kubelet[3344]: E0712 00:08:24.661669 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.663316 kubelet[3344]: I0712 00:08:24.661712 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/955d28b9-9d88-48e7-9db2-62374412839c-varrun\") pod \"csi-node-driver-748mg\" (UID: \"955d28b9-9d88-48e7-9db2-62374412839c\") " pod="calico-system/csi-node-driver-748mg" Jul 12 00:08:24.663863 kubelet[3344]: E0712 00:08:24.663814 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.663863 kubelet[3344]: W0712 00:08:24.663857 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.664054 kubelet[3344]: E0712 00:08:24.663891 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.664301 kubelet[3344]: E0712 00:08:24.664263 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.664301 kubelet[3344]: W0712 00:08:24.664293 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.664467 kubelet[3344]: E0712 00:08:24.664319 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.680503 containerd[2015]: time="2025-07-12T00:08:24.679476911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dczhx,Uid:70533d21-eef5-40bb-b613-c7acefe4e521,Namespace:calico-system,Attempt:0,}" Jul 12 00:08:24.752836 containerd[2015]: time="2025-07-12T00:08:24.752075123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:24.753307 containerd[2015]: time="2025-07-12T00:08:24.753029279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:24.753931 containerd[2015]: time="2025-07-12T00:08:24.753500903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:24.753931 containerd[2015]: time="2025-07-12T00:08:24.753696227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:24.765246 kubelet[3344]: E0712 00:08:24.764103 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.765246 kubelet[3344]: W0712 00:08:24.764174 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.765246 kubelet[3344]: E0712 00:08:24.764205 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.765916 kubelet[3344]: E0712 00:08:24.765883 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.766116 kubelet[3344]: W0712 00:08:24.766086 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.766595 kubelet[3344]: E0712 00:08:24.766334 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.769090 kubelet[3344]: E0712 00:08:24.768940 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.769090 kubelet[3344]: W0712 00:08:24.768974 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.769090 kubelet[3344]: E0712 00:08:24.769035 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.770906 kubelet[3344]: E0712 00:08:24.770729 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.770906 kubelet[3344]: W0712 00:08:24.770894 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.772303 kubelet[3344]: E0712 00:08:24.771397 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.772303 kubelet[3344]: W0712 00:08:24.771415 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.772303 kubelet[3344]: E0712 00:08:24.771442 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.772303 kubelet[3344]: E0712 00:08:24.771519 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:24.772303 kubelet[3344]: E0712 00:08:24.771826 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.772303 kubelet[3344]: W0712 00:08:24.771845 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.772303 kubelet[3344]: E0712 00:08:24.771885 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.773050 kubelet[3344]: E0712 00:08:24.772784 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.773050 kubelet[3344]: W0712 00:08:24.772808 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.773050 kubelet[3344]: E0712 00:08:24.772847 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.774503 kubelet[3344]: E0712 00:08:24.773814 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.774503 kubelet[3344]: W0712 00:08:24.773857 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.774503 kubelet[3344]: E0712 00:08:24.773944 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.775181 kubelet[3344]: E0712 00:08:24.775132 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.775181 kubelet[3344]: W0712 00:08:24.775171 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.775325 kubelet[3344]: E0712 00:08:24.775205 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.778759 kubelet[3344]: E0712 00:08:24.778463 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.778759 kubelet[3344]: W0712 00:08:24.778507 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.778759 kubelet[3344]: E0712 00:08:24.778540 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:24.781738 kubelet[3344]: E0712 00:08:24.781646 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.781738 kubelet[3344]: W0712 00:08:24.781686 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.782258 kubelet[3344]: E0712 00:08:24.782043 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.784275 kubelet[3344]: E0712 00:08:24.784015 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.784275 kubelet[3344]: W0712 00:08:24.784055 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.784275 kubelet[3344]: E0712 00:08:24.784194 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.787262 kubelet[3344]: E0712 00:08:24.786852 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.787262 kubelet[3344]: W0712 00:08:24.786894 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.787262 kubelet[3344]: E0712 00:08:24.787110 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.791987 kubelet[3344]: E0712 00:08:24.791917 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.791987 kubelet[3344]: W0712 00:08:24.791957 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.791987 kubelet[3344]: E0712 00:08:24.792025 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.794426 kubelet[3344]: E0712 00:08:24.794370 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.794426 kubelet[3344]: W0712 00:08:24.794412 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.794805 kubelet[3344]: E0712 00:08:24.794622 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:24.796420 kubelet[3344]: E0712 00:08:24.796237 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.796420 kubelet[3344]: W0712 00:08:24.796278 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.797241 kubelet[3344]: E0712 00:08:24.797072 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.799873 kubelet[3344]: E0712 00:08:24.799690 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.799873 kubelet[3344]: W0712 00:08:24.799732 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.800513 kubelet[3344]: E0712 00:08:24.800111 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.807189 kubelet[3344]: E0712 00:08:24.803599 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.807189 kubelet[3344]: W0712 00:08:24.805865 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.807189 kubelet[3344]: E0712 00:08:24.806440 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.808097 kubelet[3344]: E0712 00:08:24.808052 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.808097 kubelet[3344]: W0712 00:08:24.808091 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.808506 kubelet[3344]: E0712 00:08:24.808171 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.809764 kubelet[3344]: E0712 00:08:24.809681 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.810176 kubelet[3344]: W0712 00:08:24.809920 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.810176 kubelet[3344]: E0712 00:08:24.810018 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:24.810528 kubelet[3344]: E0712 00:08:24.810476 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.810528 kubelet[3344]: W0712 00:08:24.810508 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.810528 kubelet[3344]: E0712 00:08:24.810652 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.810985 kubelet[3344]: E0712 00:08:24.810879 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.810985 kubelet[3344]: W0712 00:08:24.810901 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.810985 kubelet[3344]: E0712 00:08:24.810941 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.812894 kubelet[3344]: E0712 00:08:24.811815 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.812894 kubelet[3344]: W0712 00:08:24.811847 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.812894 kubelet[3344]: E0712 00:08:24.811891 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.814967 kubelet[3344]: E0712 00:08:24.814283 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.814967 kubelet[3344]: W0712 00:08:24.814319 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.814967 kubelet[3344]: E0712 00:08:24.814352 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:24.817567 kubelet[3344]: E0712 00:08:24.817249 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.817567 kubelet[3344]: W0712 00:08:24.817284 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.817567 kubelet[3344]: E0712 00:08:24.817317 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:08:24.833793 systemd[1]: Started cri-containerd-d7070102558028da7921e570c00d70327627c8166ab266d7e8b33eecbbfc369d.scope - libcontainer container d7070102558028da7921e570c00d70327627c8166ab266d7e8b33eecbbfc369d. Jul 12 00:08:24.848341 kubelet[3344]: E0712 00:08:24.848302 3344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:08:24.848756 kubelet[3344]: W0712 00:08:24.848559 3344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:08:24.848920 kubelet[3344]: E0712 00:08:24.848893 3344 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:08:25.011504 containerd[2015]: time="2025-07-12T00:08:25.008947209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dczhx,Uid:70533d21-eef5-40bb-b613-c7acefe4e521,Namespace:calico-system,Attempt:0,} returns sandbox id \"d7070102558028da7921e570c00d70327627c8166ab266d7e8b33eecbbfc369d\"" Jul 12 00:08:25.020074 containerd[2015]: time="2025-07-12T00:08:25.018964125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 12 00:08:25.106757 containerd[2015]: time="2025-07-12T00:08:25.105888717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fdc848d69-hqs5h,Uid:f4f90b7d-b721-4dc8-a40f-9f8d5c52bdde,Namespace:calico-system,Attempt:0,} returns sandbox id \"93e0381c9dff5f62180eb35d93e71d877f3a2d06f22b69c1b49ebc9885af9174\"" Jul 12 00:08:26.257279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount453850594.mount: Deactivated successfully. 
Jul 12 00:08:26.453533 containerd[2015]: time="2025-07-12T00:08:26.452886732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:26.455058 containerd[2015]: time="2025-07-12T00:08:26.454995456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5636360" Jul 12 00:08:26.457390 containerd[2015]: time="2025-07-12T00:08:26.457312524Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:26.463721 containerd[2015]: time="2025-07-12T00:08:26.463614648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:26.465131 containerd[2015]: time="2025-07-12T00:08:26.464579376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.444674319s" Jul 12 00:08:26.465131 containerd[2015]: time="2025-07-12T00:08:26.464644056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 12 00:08:26.467290 containerd[2015]: time="2025-07-12T00:08:26.466939224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 12 00:08:26.470714 containerd[2015]: time="2025-07-12T00:08:26.470643132Z" level=info msg="CreateContainer within sandbox \"d7070102558028da7921e570c00d70327627c8166ab266d7e8b33eecbbfc369d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 12 00:08:26.501176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2437898965.mount: Deactivated successfully. Jul 12 00:08:26.508607 containerd[2015]: time="2025-07-12T00:08:26.508386840Z" level=info msg="CreateContainer within sandbox \"d7070102558028da7921e570c00d70327627c8166ab266d7e8b33eecbbfc369d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"87f03130f9ded37efa22ad770f8652f32ad2579af09088aa62a48ca158a88655\"" Jul 12 00:08:26.510744 containerd[2015]: time="2025-07-12T00:08:26.509818704Z" level=info msg="StartContainer for \"87f03130f9ded37efa22ad770f8652f32ad2579af09088aa62a48ca158a88655\"" Jul 12 00:08:26.583781 systemd[1]: Started cri-containerd-87f03130f9ded37efa22ad770f8652f32ad2579af09088aa62a48ca158a88655.scope - libcontainer container 87f03130f9ded37efa22ad770f8652f32ad2579af09088aa62a48ca158a88655. 
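For reference, the PullImage/Pulled pair above is containerd's CRI service resolving pod2daemon-flexvol by tag and then reporting the resolved digest, size, and pull duration. A sketch of the same pull through containerd's public Go client follows; the socket path and the k8s.io namespace (where CRI-driven images are kept) are conventional defaults and an assumption, since the log does not state them.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket; adjust for non-standard installs.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin keeps kubelet-driven pulls in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	image, err := client.Pull(ctx,
		"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	// Corresponds to the "returns image reference" line in the log.
	fmt.Println("pulled", image.Name(), "digest:", image.Target().Digest)
}
```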
Jul 12 00:08:26.612210 kubelet[3344]: E0712 00:08:26.612112 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-748mg" podUID="955d28b9-9d88-48e7-9db2-62374412839c" Jul 12 00:08:26.677936 containerd[2015]: time="2025-07-12T00:08:26.677404273Z" level=info msg="StartContainer for \"87f03130f9ded37efa22ad770f8652f32ad2579af09088aa62a48ca158a88655\" returns successfully" Jul 12 00:08:26.720639 systemd[1]: cri-containerd-87f03130f9ded37efa22ad770f8652f32ad2579af09088aa62a48ca158a88655.scope: Deactivated successfully. Jul 12 00:08:26.916864 containerd[2015]: time="2025-07-12T00:08:26.916778234Z" level=info msg="shim disconnected" id=87f03130f9ded37efa22ad770f8652f32ad2579af09088aa62a48ca158a88655 namespace=k8s.io Jul 12 00:08:26.916864 containerd[2015]: time="2025-07-12T00:08:26.916854818Z" level=warning msg="cleaning up after shim disconnected" id=87f03130f9ded37efa22ad770f8652f32ad2579af09088aa62a48ca158a88655 namespace=k8s.io Jul 12 00:08:26.917472 containerd[2015]: time="2025-07-12T00:08:26.916876814Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:08:27.253879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87f03130f9ded37efa22ad770f8652f32ad2579af09088aa62a48ca158a88655-rootfs.mount: Deactivated successfully. Jul 12 00:08:28.612952 kubelet[3344]: E0712 00:08:28.609869 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-748mg" podUID="955d28b9-9d88-48e7-9db2-62374412839c" Jul 12 00:08:28.885708 containerd[2015]: time="2025-07-12T00:08:28.884836456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:28.887478 containerd[2015]: time="2025-07-12T00:08:28.887367316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=31717828" Jul 12 00:08:28.890506 containerd[2015]: time="2025-07-12T00:08:28.890360956Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:28.900547 containerd[2015]: time="2025-07-12T00:08:28.897075028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:28.900739 containerd[2015]: time="2025-07-12T00:08:28.898413784Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.431406064s" Jul 12 00:08:28.900893 containerd[2015]: time="2025-07-12T00:08:28.900849940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 12 00:08:28.905305 containerd[2015]: 
time="2025-07-12T00:08:28.905237704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 12 00:08:28.943553 containerd[2015]: time="2025-07-12T00:08:28.940918288Z" level=info msg="CreateContainer within sandbox \"93e0381c9dff5f62180eb35d93e71d877f3a2d06f22b69c1b49ebc9885af9174\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 12 00:08:28.996640 containerd[2015]: time="2025-07-12T00:08:28.995065480Z" level=info msg="CreateContainer within sandbox \"93e0381c9dff5f62180eb35d93e71d877f3a2d06f22b69c1b49ebc9885af9174\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"04c857cde514c61e142e9162a72c924af1b2772afb5690162b4bd97dc7ac1b19\"" Jul 12 00:08:28.995435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3163573622.mount: Deactivated successfully. Jul 12 00:08:29.000509 containerd[2015]: time="2025-07-12T00:08:28.997882624Z" level=info msg="StartContainer for \"04c857cde514c61e142e9162a72c924af1b2772afb5690162b4bd97dc7ac1b19\"" Jul 12 00:08:29.071652 systemd[1]: Started cri-containerd-04c857cde514c61e142e9162a72c924af1b2772afb5690162b4bd97dc7ac1b19.scope - libcontainer container 04c857cde514c61e142e9162a72c924af1b2772afb5690162b4bd97dc7ac1b19. Jul 12 00:08:29.348162 containerd[2015]: time="2025-07-12T00:08:29.348092462Z" level=info msg="StartContainer for \"04c857cde514c61e142e9162a72c924af1b2772afb5690162b4bd97dc7ac1b19\" returns successfully" Jul 12 00:08:29.898168 kubelet[3344]: I0712 00:08:29.897504 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-fdc848d69-hqs5h" podStartSLOduration=3.10971955 podStartE2EDuration="6.897442241s" podCreationTimestamp="2025-07-12 00:08:23 +0000 UTC" firstStartedPulling="2025-07-12 00:08:25.116547165 +0000 UTC m=+29.823388853" lastFinishedPulling="2025-07-12 00:08:28.904269868 +0000 UTC m=+33.611111544" observedRunningTime="2025-07-12 00:08:29.897177437 +0000 UTC m=+34.604019137" watchObservedRunningTime="2025-07-12 00:08:29.897442241 +0000 UTC m=+34.604283941" Jul 12 00:08:30.610132 kubelet[3344]: E0712 00:08:30.610074 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-748mg" podUID="955d28b9-9d88-48e7-9db2-62374412839c" Jul 12 00:08:30.858136 kubelet[3344]: I0712 00:08:30.858089 3344 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:08:31.765593 containerd[2015]: time="2025-07-12T00:08:31.765467082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:31.768497 containerd[2015]: time="2025-07-12T00:08:31.768399306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 12 00:08:31.771090 containerd[2015]: time="2025-07-12T00:08:31.771010986Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:31.777261 containerd[2015]: time="2025-07-12T00:08:31.777165762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:31.778817 
containerd[2015]: time="2025-07-12T00:08:31.778744494Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.87342351s" Jul 12 00:08:31.779121 containerd[2015]: time="2025-07-12T00:08:31.778984170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 12 00:08:31.783729 containerd[2015]: time="2025-07-12T00:08:31.783549918Z" level=info msg="CreateContainer within sandbox \"d7070102558028da7921e570c00d70327627c8166ab266d7e8b33eecbbfc369d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 12 00:08:31.814405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4279082894.mount: Deactivated successfully. Jul 12 00:08:31.818414 containerd[2015]: time="2025-07-12T00:08:31.818314579Z" level=info msg="CreateContainer within sandbox \"d7070102558028da7921e570c00d70327627c8166ab266d7e8b33eecbbfc369d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d13d2dc37bcf123c510df5761f5c2534ddd5195618544775fc4d26870f2a333a\"" Jul 12 00:08:31.819908 containerd[2015]: time="2025-07-12T00:08:31.819840415Z" level=info msg="StartContainer for \"d13d2dc37bcf123c510df5761f5c2534ddd5195618544775fc4d26870f2a333a\"" Jul 12 00:08:31.884876 systemd[1]: run-containerd-runc-k8s.io-d13d2dc37bcf123c510df5761f5c2534ddd5195618544775fc4d26870f2a333a-runc.RrMQUV.mount: Deactivated successfully. Jul 12 00:08:31.894071 systemd[1]: Started cri-containerd-d13d2dc37bcf123c510df5761f5c2534ddd5195618544775fc4d26870f2a333a.scope - libcontainer container d13d2dc37bcf123c510df5761f5c2534ddd5195618544775fc4d26870f2a333a. Jul 12 00:08:31.951895 containerd[2015]: time="2025-07-12T00:08:31.951284731Z" level=info msg="StartContainer for \"d13d2dc37bcf123c510df5761f5c2534ddd5195618544775fc4d26870f2a333a\" returns successfully" Jul 12 00:08:32.610079 kubelet[3344]: E0712 00:08:32.609993 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-748mg" podUID="955d28b9-9d88-48e7-9db2-62374412839c" Jul 12 00:08:32.694418 kubelet[3344]: I0712 00:08:32.693828 3344 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:08:33.138737 containerd[2015]: time="2025-07-12T00:08:33.138660437Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:08:33.144308 systemd[1]: cri-containerd-d13d2dc37bcf123c510df5761f5c2534ddd5195618544775fc4d26870f2a333a.scope: Deactivated successfully. Jul 12 00:08:33.190305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d13d2dc37bcf123c510df5761f5c2534ddd5195618544775fc4d26870f2a333a-rootfs.mount: Deactivated successfully. 
Jul 12 00:08:33.211561 kubelet[3344]: I0712 00:08:33.211034 3344 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 12 00:08:33.297384 systemd[1]: Created slice kubepods-burstable-pod0725903c_a273_456a_a2eb_c24032ec4754.slice - libcontainer container kubepods-burstable-pod0725903c_a273_456a_a2eb_c24032ec4754.slice. Jul 12 00:08:33.321805 systemd[1]: Created slice kubepods-besteffort-pod54897f3e_0cc0_4a5e_8247_2e1792f1abf8.slice - libcontainer container kubepods-besteffort-pod54897f3e_0cc0_4a5e_8247_2e1792f1abf8.slice. Jul 12 00:08:33.355099 systemd[1]: Created slice kubepods-besteffort-pod32232079_fc02_426e_a296_066d8c1e6445.slice - libcontainer container kubepods-besteffort-pod32232079_fc02_426e_a296_066d8c1e6445.slice. Jul 12 00:08:33.373870 kubelet[3344]: I0712 00:08:33.363795 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32232079-fc02-426e-a296-066d8c1e6445-tigera-ca-bundle\") pod \"calico-kube-controllers-596f7dcbbd-zlhwz\" (UID: \"32232079-fc02-426e-a296-066d8c1e6445\") " pod="calico-system/calico-kube-controllers-596f7dcbbd-zlhwz" Jul 12 00:08:33.373870 kubelet[3344]: I0712 00:08:33.363873 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv75z\" (UniqueName: \"kubernetes.io/projected/32232079-fc02-426e-a296-066d8c1e6445-kube-api-access-jv75z\") pod \"calico-kube-controllers-596f7dcbbd-zlhwz\" (UID: \"32232079-fc02-426e-a296-066d8c1e6445\") " pod="calico-system/calico-kube-controllers-596f7dcbbd-zlhwz" Jul 12 00:08:33.373870 kubelet[3344]: I0712 00:08:33.363936 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc2nj\" (UniqueName: \"kubernetes.io/projected/0725903c-a273-456a-a2eb-c24032ec4754-kube-api-access-bc2nj\") pod \"coredns-668d6bf9bc-z5c86\" (UID: \"0725903c-a273-456a-a2eb-c24032ec4754\") " pod="kube-system/coredns-668d6bf9bc-z5c86" Jul 12 00:08:33.373870 kubelet[3344]: I0712 00:08:33.363987 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g52w\" (UniqueName: \"kubernetes.io/projected/54897f3e-0cc0-4a5e-8247-2e1792f1abf8-kube-api-access-6g52w\") pod \"whisker-98d66569b-k8l5m\" (UID: \"54897f3e-0cc0-4a5e-8247-2e1792f1abf8\") " pod="calico-system/whisker-98d66569b-k8l5m" Jul 12 00:08:33.373870 kubelet[3344]: I0712 00:08:33.364032 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54897f3e-0cc0-4a5e-8247-2e1792f1abf8-whisker-ca-bundle\") pod \"whisker-98d66569b-k8l5m\" (UID: \"54897f3e-0cc0-4a5e-8247-2e1792f1abf8\") " pod="calico-system/whisker-98d66569b-k8l5m" Jul 12 00:08:33.374906 kubelet[3344]: I0712 00:08:33.364082 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0725903c-a273-456a-a2eb-c24032ec4754-config-volume\") pod \"coredns-668d6bf9bc-z5c86\" (UID: \"0725903c-a273-456a-a2eb-c24032ec4754\") " pod="kube-system/coredns-668d6bf9bc-z5c86" Jul 12 00:08:33.374906 kubelet[3344]: I0712 00:08:33.364132 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/54897f3e-0cc0-4a5e-8247-2e1792f1abf8-whisker-backend-key-pair\") pod \"whisker-98d66569b-k8l5m\" (UID: \"54897f3e-0cc0-4a5e-8247-2e1792f1abf8\") " pod="calico-system/whisker-98d66569b-k8l5m" Jul 12 00:08:33.379653 systemd[1]: Created slice kubepods-besteffort-podf1aedc09_28b8_4374_aef6_1a1d5f40a7ca.slice - libcontainer container kubepods-besteffort-podf1aedc09_28b8_4374_aef6_1a1d5f40a7ca.slice. Jul 12 00:08:33.407551 systemd[1]: Created slice kubepods-burstable-pod751d0e7c_ad8a_4efe_bafc_24b1a7d7ef96.slice - libcontainer container kubepods-burstable-pod751d0e7c_ad8a_4efe_bafc_24b1a7d7ef96.slice. Jul 12 00:08:33.426024 systemd[1]: Created slice kubepods-besteffort-pod9395332f_6218_4a3b_9efb_4b6737b7fd9d.slice - libcontainer container kubepods-besteffort-pod9395332f_6218_4a3b_9efb_4b6737b7fd9d.slice. Jul 12 00:08:33.452035 systemd[1]: Created slice kubepods-besteffort-poda2fe5d4f_2f51_400a_adc0_e4e9bcdc9e97.slice - libcontainer container kubepods-besteffort-poda2fe5d4f_2f51_400a_adc0_e4e9bcdc9e97.slice. Jul 12 00:08:33.468010 kubelet[3344]: I0712 00:08:33.464694 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/751d0e7c-ad8a-4efe-bafc-24b1a7d7ef96-config-volume\") pod \"coredns-668d6bf9bc-hscg5\" (UID: \"751d0e7c-ad8a-4efe-bafc-24b1a7d7ef96\") " pod="kube-system/coredns-668d6bf9bc-hscg5" Jul 12 00:08:33.468010 kubelet[3344]: I0712 00:08:33.464763 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzdvk\" (UniqueName: \"kubernetes.io/projected/751d0e7c-ad8a-4efe-bafc-24b1a7d7ef96-kube-api-access-tzdvk\") pod \"coredns-668d6bf9bc-hscg5\" (UID: \"751d0e7c-ad8a-4efe-bafc-24b1a7d7ef96\") " pod="kube-system/coredns-668d6bf9bc-hscg5" Jul 12 00:08:33.468010 kubelet[3344]: I0712 00:08:33.464823 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9395332f-6218-4a3b-9efb-4b6737b7fd9d-calico-apiserver-certs\") pod \"calico-apiserver-f6d8df55-scvhw\" (UID: \"9395332f-6218-4a3b-9efb-4b6737b7fd9d\") " pod="calico-apiserver/calico-apiserver-f6d8df55-scvhw" Jul 12 00:08:33.468010 kubelet[3344]: I0712 00:08:33.464863 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n46tv\" (UniqueName: \"kubernetes.io/projected/f1aedc09-28b8-4374-aef6-1a1d5f40a7ca-kube-api-access-n46tv\") pod \"calico-apiserver-f6d8df55-xspm9\" (UID: \"f1aedc09-28b8-4374-aef6-1a1d5f40a7ca\") " pod="calico-apiserver/calico-apiserver-f6d8df55-xspm9" Jul 12 00:08:33.468010 kubelet[3344]: I0712 00:08:33.464902 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba9bbb6e-9361-4175-929c-1ae629fa9bce-config\") pod \"goldmane-768f4c5c69-vcjmb\" (UID: \"ba9bbb6e-9361-4175-929c-1ae629fa9bce\") " pod="calico-system/goldmane-768f4c5c69-vcjmb" Jul 12 00:08:33.467424 systemd[1]: Created slice kubepods-besteffort-podba9bbb6e_9361_4175_929c_1ae629fa9bce.slice - libcontainer container kubepods-besteffort-podba9bbb6e_9361_4175_929c_1ae629fa9bce.slice. 
Jul 12 00:08:33.468803 kubelet[3344]: I0712 00:08:33.464963 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ba9bbb6e-9361-4175-929c-1ae629fa9bce-goldmane-key-pair\") pod \"goldmane-768f4c5c69-vcjmb\" (UID: \"ba9bbb6e-9361-4175-929c-1ae629fa9bce\") " pod="calico-system/goldmane-768f4c5c69-vcjmb" Jul 12 00:08:33.468803 kubelet[3344]: I0712 00:08:33.465030 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxvgp\" (UniqueName: \"kubernetes.io/projected/ba9bbb6e-9361-4175-929c-1ae629fa9bce-kube-api-access-qxvgp\") pod \"goldmane-768f4c5c69-vcjmb\" (UID: \"ba9bbb6e-9361-4175-929c-1ae629fa9bce\") " pod="calico-system/goldmane-768f4c5c69-vcjmb" Jul 12 00:08:33.468803 kubelet[3344]: I0712 00:08:33.465092 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba9bbb6e-9361-4175-929c-1ae629fa9bce-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-vcjmb\" (UID: \"ba9bbb6e-9361-4175-929c-1ae629fa9bce\") " pod="calico-system/goldmane-768f4c5c69-vcjmb" Jul 12 00:08:33.468803 kubelet[3344]: I0712 00:08:33.465181 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m94kr\" (UniqueName: \"kubernetes.io/projected/a2fe5d4f-2f51-400a-adc0-e4e9bcdc9e97-kube-api-access-m94kr\") pod \"calico-apiserver-55ff68f59d-twxd6\" (UID: \"a2fe5d4f-2f51-400a-adc0-e4e9bcdc9e97\") " pod="calico-apiserver/calico-apiserver-55ff68f59d-twxd6" Jul 12 00:08:33.468803 kubelet[3344]: I0712 00:08:33.465224 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f1aedc09-28b8-4374-aef6-1a1d5f40a7ca-calico-apiserver-certs\") pod \"calico-apiserver-f6d8df55-xspm9\" (UID: \"f1aedc09-28b8-4374-aef6-1a1d5f40a7ca\") " pod="calico-apiserver/calico-apiserver-f6d8df55-xspm9" Jul 12 00:08:33.469100 kubelet[3344]: I0712 00:08:33.465266 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a2fe5d4f-2f51-400a-adc0-e4e9bcdc9e97-calico-apiserver-certs\") pod \"calico-apiserver-55ff68f59d-twxd6\" (UID: \"a2fe5d4f-2f51-400a-adc0-e4e9bcdc9e97\") " pod="calico-apiserver/calico-apiserver-55ff68f59d-twxd6" Jul 12 00:08:33.469100 kubelet[3344]: I0712 00:08:33.465307 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzt84\" (UniqueName: \"kubernetes.io/projected/9395332f-6218-4a3b-9efb-4b6737b7fd9d-kube-api-access-wzt84\") pod \"calico-apiserver-f6d8df55-scvhw\" (UID: \"9395332f-6218-4a3b-9efb-4b6737b7fd9d\") " pod="calico-apiserver/calico-apiserver-f6d8df55-scvhw" Jul 12 00:08:33.618588 containerd[2015]: time="2025-07-12T00:08:33.616702531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z5c86,Uid:0725903c-a273-456a-a2eb-c24032ec4754,Namespace:kube-system,Attempt:0,}" Jul 12 00:08:33.637870 containerd[2015]: time="2025-07-12T00:08:33.637774040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-98d66569b-k8l5m,Uid:54897f3e-0cc0-4a5e-8247-2e1792f1abf8,Namespace:calico-system,Attempt:0,}" Jul 12 00:08:33.675334 containerd[2015]: time="2025-07-12T00:08:33.675142280Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596f7dcbbd-zlhwz,Uid:32232079-fc02-426e-a296-066d8c1e6445,Namespace:calico-system,Attempt:0,}" Jul 12 00:08:33.694230 containerd[2015]: time="2025-07-12T00:08:33.694159292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6d8df55-xspm9,Uid:f1aedc09-28b8-4374-aef6-1a1d5f40a7ca,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:08:33.718244 containerd[2015]: time="2025-07-12T00:08:33.718185356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hscg5,Uid:751d0e7c-ad8a-4efe-bafc-24b1a7d7ef96,Namespace:kube-system,Attempt:0,}" Jul 12 00:08:33.741751 containerd[2015]: time="2025-07-12T00:08:33.741618416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6d8df55-scvhw,Uid:9395332f-6218-4a3b-9efb-4b6737b7fd9d,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:08:33.783526 containerd[2015]: time="2025-07-12T00:08:33.783133808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-vcjmb,Uid:ba9bbb6e-9361-4175-929c-1ae629fa9bce,Namespace:calico-system,Attempt:0,}" Jul 12 00:08:33.783526 containerd[2015]: time="2025-07-12T00:08:33.783273860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55ff68f59d-twxd6,Uid:a2fe5d4f-2f51-400a-adc0-e4e9bcdc9e97,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:08:34.140059 containerd[2015]: time="2025-07-12T00:08:34.139909074Z" level=info msg="shim disconnected" id=d13d2dc37bcf123c510df5761f5c2534ddd5195618544775fc4d26870f2a333a namespace=k8s.io Jul 12 00:08:34.141080 containerd[2015]: time="2025-07-12T00:08:34.140180370Z" level=warning msg="cleaning up after shim disconnected" id=d13d2dc37bcf123c510df5761f5c2534ddd5195618544775fc4d26870f2a333a namespace=k8s.io Jul 12 00:08:34.141080 containerd[2015]: time="2025-07-12T00:08:34.140247426Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:08:34.627304 systemd[1]: Created slice kubepods-besteffort-pod955d28b9_9d88_48e7_9db2_62374412839c.slice - libcontainer container kubepods-besteffort-pod955d28b9_9d88_48e7_9db2_62374412839c.slice. 
Jul 12 00:08:34.636482 containerd[2015]: time="2025-07-12T00:08:34.635173724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-748mg,Uid:955d28b9-9d88-48e7-9db2-62374412839c,Namespace:calico-system,Attempt:0,}" Jul 12 00:08:34.816066 containerd[2015]: time="2025-07-12T00:08:34.816000297Z" level=error msg="Failed to destroy network for sandbox \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.817742 containerd[2015]: time="2025-07-12T00:08:34.817671933Z" level=error msg="Failed to destroy network for sandbox \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.819919 containerd[2015]: time="2025-07-12T00:08:34.819843285Z" level=error msg="encountered an error cleaning up failed sandbox \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.820182 containerd[2015]: time="2025-07-12T00:08:34.820137789Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596f7dcbbd-zlhwz,Uid:32232079-fc02-426e-a296-066d8c1e6445,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.822317 kubelet[3344]: E0712 00:08:34.822257 3344 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.822919 containerd[2015]: time="2025-07-12T00:08:34.822683949Z" level=error msg="Failed to destroy network for sandbox \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.825130 kubelet[3344]: E0712 00:08:34.823143 3344 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596f7dcbbd-zlhwz" Jul 12 00:08:34.825130 kubelet[3344]: E0712 00:08:34.823227 3344 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596f7dcbbd-zlhwz" Jul 12 00:08:34.825130 kubelet[3344]: E0712 00:08:34.823345 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-596f7dcbbd-zlhwz_calico-system(32232079-fc02-426e-a296-066d8c1e6445)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-596f7dcbbd-zlhwz_calico-system(32232079-fc02-426e-a296-066d8c1e6445)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-596f7dcbbd-zlhwz" podUID="32232079-fc02-426e-a296-066d8c1e6445" Jul 12 00:08:34.825617 containerd[2015]: time="2025-07-12T00:08:34.823912665Z" level=error msg="Failed to destroy network for sandbox \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.827985 containerd[2015]: time="2025-07-12T00:08:34.826557405Z" level=error msg="encountered an error cleaning up failed sandbox \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.827985 containerd[2015]: time="2025-07-12T00:08:34.826688313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-vcjmb,Uid:ba9bbb6e-9361-4175-929c-1ae629fa9bce,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.827985 containerd[2015]: time="2025-07-12T00:08:34.827435001Z" level=error msg="encountered an error cleaning up failed sandbox \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.827985 containerd[2015]: time="2025-07-12T00:08:34.827620533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-98d66569b-k8l5m,Uid:54897f3e-0cc0-4a5e-8247-2e1792f1abf8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 
00:08:34.828392 kubelet[3344]: E0712 00:08:34.827342 3344 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.828392 kubelet[3344]: E0712 00:08:34.827416 3344 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-vcjmb" Jul 12 00:08:34.830052 kubelet[3344]: E0712 00:08:34.828530 3344 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-vcjmb" Jul 12 00:08:34.830052 kubelet[3344]: E0712 00:08:34.829184 3344 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.830191 containerd[2015]: time="2025-07-12T00:08:34.829537665Z" level=error msg="encountered an error cleaning up failed sandbox \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.830191 containerd[2015]: time="2025-07-12T00:08:34.829621065Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6d8df55-scvhw,Uid:9395332f-6218-4a3b-9efb-4b6737b7fd9d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.832994 kubelet[3344]: E0712 00:08:34.828722 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-vcjmb_calico-system(ba9bbb6e-9361-4175-929c-1ae629fa9bce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-vcjmb_calico-system(ba9bbb6e-9361-4175-929c-1ae629fa9bce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-vcjmb" podUID="ba9bbb6e-9361-4175-929c-1ae629fa9bce" Jul 12 00:08:34.832994 kubelet[3344]: E0712 00:08:34.829253 3344 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-98d66569b-k8l5m" Jul 12 00:08:34.832994 kubelet[3344]: E0712 00:08:34.831970 3344 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-98d66569b-k8l5m" Jul 12 00:08:34.834355 kubelet[3344]: E0712 00:08:34.832188 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-98d66569b-k8l5m_calico-system(54897f3e-0cc0-4a5e-8247-2e1792f1abf8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-98d66569b-k8l5m_calico-system(54897f3e-0cc0-4a5e-8247-2e1792f1abf8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-98d66569b-k8l5m" podUID="54897f3e-0cc0-4a5e-8247-2e1792f1abf8" Jul 12 00:08:34.834355 kubelet[3344]: E0712 00:08:34.832580 3344 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.834355 kubelet[3344]: E0712 00:08:34.832638 3344 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f6d8df55-scvhw" Jul 12 00:08:34.835316 kubelet[3344]: E0712 00:08:34.832669 3344 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f6d8df55-scvhw" Jul 12 00:08:34.835316 kubelet[3344]: E0712 00:08:34.832808 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-f6d8df55-scvhw_calico-apiserver(9395332f-6218-4a3b-9efb-4b6737b7fd9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f6d8df55-scvhw_calico-apiserver(9395332f-6218-4a3b-9efb-4b6737b7fd9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f6d8df55-scvhw" podUID="9395332f-6218-4a3b-9efb-4b6737b7fd9d" Jul 12 00:08:34.838488 containerd[2015]: time="2025-07-12T00:08:34.838091578Z" level=error msg="Failed to destroy network for sandbox \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.842442 containerd[2015]: time="2025-07-12T00:08:34.841184662Z" level=error msg="encountered an error cleaning up failed sandbox \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.842442 containerd[2015]: time="2025-07-12T00:08:34.842012902Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6d8df55-xspm9,Uid:f1aedc09-28b8-4374-aef6-1a1d5f40a7ca,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.843385 kubelet[3344]: E0712 00:08:34.843073 3344 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.844326 kubelet[3344]: E0712 00:08:34.843411 3344 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f6d8df55-xspm9" Jul 12 00:08:34.844326 kubelet[3344]: E0712 00:08:34.843477 3344 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f6d8df55-xspm9" Jul 12 00:08:34.844326 kubelet[3344]: 
E0712 00:08:34.843563 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f6d8df55-xspm9_calico-apiserver(f1aedc09-28b8-4374-aef6-1a1d5f40a7ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f6d8df55-xspm9_calico-apiserver(f1aedc09-28b8-4374-aef6-1a1d5f40a7ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f6d8df55-xspm9" podUID="f1aedc09-28b8-4374-aef6-1a1d5f40a7ca" Jul 12 00:08:34.846014 containerd[2015]: time="2025-07-12T00:08:34.845661934Z" level=error msg="Failed to destroy network for sandbox \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.850880 containerd[2015]: time="2025-07-12T00:08:34.849142594Z" level=error msg="encountered an error cleaning up failed sandbox \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.850880 containerd[2015]: time="2025-07-12T00:08:34.850342978Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z5c86,Uid:0725903c-a273-456a-a2eb-c24032ec4754,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.851374 kubelet[3344]: E0712 00:08:34.851302 3344 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.851806 kubelet[3344]: E0712 00:08:34.851381 3344 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z5c86" Jul 12 00:08:34.851806 kubelet[3344]: E0712 00:08:34.851415 3344 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-z5c86" Jul 12 00:08:34.851955 kubelet[3344]: E0712 00:08:34.851786 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-z5c86_kube-system(0725903c-a273-456a-a2eb-c24032ec4754)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-z5c86_kube-system(0725903c-a273-456a-a2eb-c24032ec4754)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-z5c86" podUID="0725903c-a273-456a-a2eb-c24032ec4754" Jul 12 00:08:34.856068 containerd[2015]: time="2025-07-12T00:08:34.855662170Z" level=error msg="Failed to destroy network for sandbox \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.858773 containerd[2015]: time="2025-07-12T00:08:34.858544534Z" level=error msg="encountered an error cleaning up failed sandbox \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.859853 containerd[2015]: time="2025-07-12T00:08:34.858909946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hscg5,Uid:751d0e7c-ad8a-4efe-bafc-24b1a7d7ef96,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.859986 kubelet[3344]: E0712 00:08:34.859684 3344 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.861647 kubelet[3344]: E0712 00:08:34.861544 3344 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hscg5" Jul 12 00:08:34.861929 kubelet[3344]: E0712 00:08:34.861754 3344 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hscg5" Jul 12 00:08:34.861929 kubelet[3344]: E0712 00:08:34.861847 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hscg5_kube-system(751d0e7c-ad8a-4efe-bafc-24b1a7d7ef96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hscg5_kube-system(751d0e7c-ad8a-4efe-bafc-24b1a7d7ef96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hscg5" podUID="751d0e7c-ad8a-4efe-bafc-24b1a7d7ef96" Jul 12 00:08:34.865090 containerd[2015]: time="2025-07-12T00:08:34.864906058Z" level=error msg="Failed to destroy network for sandbox \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.866220 containerd[2015]: time="2025-07-12T00:08:34.866127922Z" level=error msg="encountered an error cleaning up failed sandbox \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.866354 containerd[2015]: time="2025-07-12T00:08:34.866246134Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55ff68f59d-twxd6,Uid:a2fe5d4f-2f51-400a-adc0-e4e9bcdc9e97,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.867903 kubelet[3344]: E0712 00:08:34.867525 3344 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:34.867903 kubelet[3344]: E0712 00:08:34.867645 3344 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55ff68f59d-twxd6" Jul 12 00:08:34.867903 kubelet[3344]: E0712 00:08:34.867704 3344 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55ff68f59d-twxd6" Jul 12 00:08:34.868174 kubelet[3344]: E0712 00:08:34.867795 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55ff68f59d-twxd6_calico-apiserver(a2fe5d4f-2f51-400a-adc0-e4e9bcdc9e97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55ff68f59d-twxd6_calico-apiserver(a2fe5d4f-2f51-400a-adc0-e4e9bcdc9e97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55ff68f59d-twxd6" podUID="a2fe5d4f-2f51-400a-adc0-e4e9bcdc9e97" Jul 12 00:08:34.894588 kubelet[3344]: I0712 00:08:34.894015 3344 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Jul 12 00:08:34.899442 containerd[2015]: time="2025-07-12T00:08:34.899300494Z" level=info msg="StopPodSandbox for \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\"" Jul 12 00:08:34.900204 kubelet[3344]: I0712 00:08:34.900165 3344 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Jul 12 00:08:34.902052 containerd[2015]: time="2025-07-12T00:08:34.899880706Z" level=info msg="Ensure that sandbox 4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf in task-service has been cleanup successfully" Jul 12 00:08:34.904581 containerd[2015]: time="2025-07-12T00:08:34.904488214Z" level=info msg="StopPodSandbox for \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\"" Jul 12 00:08:34.905124 containerd[2015]: time="2025-07-12T00:08:34.904776346Z" level=info msg="Ensure that sandbox b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b in task-service has been cleanup successfully" Jul 12 00:08:34.910272 kubelet[3344]: I0712 00:08:34.910229 3344 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Jul 12 00:08:34.912298 containerd[2015]: time="2025-07-12T00:08:34.911866498Z" level=info msg="StopPodSandbox for \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\"" Jul 12 00:08:34.912298 containerd[2015]: time="2025-07-12T00:08:34.912169102Z" level=info msg="Ensure that sandbox 3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5 in task-service has been cleanup successfully" Jul 12 00:08:34.923586 kubelet[3344]: I0712 00:08:34.923387 3344 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Jul 12 00:08:34.925579 containerd[2015]: time="2025-07-12T00:08:34.925505722Z" level=info msg="StopPodSandbox for \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\"" Jul 12 00:08:34.925883 containerd[2015]: time="2025-07-12T00:08:34.925835074Z" level=info msg="Ensure that sandbox eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f in task-service has been cleanup successfully" Jul 12 
00:08:34.935987 kubelet[3344]: I0712 00:08:34.935384 3344 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Jul 12 00:08:34.938345 containerd[2015]: time="2025-07-12T00:08:34.937628758Z" level=info msg="StopPodSandbox for \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\"" Jul 12 00:08:34.940761 kubelet[3344]: I0712 00:08:34.940719 3344 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Jul 12 00:08:34.943406 containerd[2015]: time="2025-07-12T00:08:34.943347730Z" level=info msg="Ensure that sandbox 815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295 in task-service has been cleanup successfully" Jul 12 00:08:34.948573 containerd[2015]: time="2025-07-12T00:08:34.948501586Z" level=info msg="StopPodSandbox for \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\"" Jul 12 00:08:34.952872 containerd[2015]: time="2025-07-12T00:08:34.952797898Z" level=info msg="Ensure that sandbox 7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6 in task-service has been cleanup successfully" Jul 12 00:08:34.988018 containerd[2015]: time="2025-07-12T00:08:34.987927034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 12 00:08:35.000672 kubelet[3344]: I0712 00:08:35.000137 3344 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Jul 12 00:08:35.017588 containerd[2015]: time="2025-07-12T00:08:35.017509974Z" level=info msg="StopPodSandbox for \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\"" Jul 12 00:08:35.017995 containerd[2015]: time="2025-07-12T00:08:35.017832582Z" level=info msg="Ensure that sandbox 42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d in task-service has been cleanup successfully" Jul 12 00:08:35.037081 kubelet[3344]: I0712 00:08:35.036181 3344 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" Jul 12 00:08:35.040967 containerd[2015]: time="2025-07-12T00:08:35.040728475Z" level=info msg="StopPodSandbox for \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\"" Jul 12 00:08:35.048369 containerd[2015]: time="2025-07-12T00:08:35.048071143Z" level=info msg="Ensure that sandbox d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213 in task-service has been cleanup successfully" Jul 12 00:08:35.090854 containerd[2015]: time="2025-07-12T00:08:35.090408175Z" level=error msg="Failed to destroy network for sandbox \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:35.093864 containerd[2015]: time="2025-07-12T00:08:35.093083179Z" level=error msg="encountered an error cleaning up failed sandbox \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:35.093864 containerd[2015]: 
time="2025-07-12T00:08:35.093184531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-748mg,Uid:955d28b9-9d88-48e7-9db2-62374412839c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:35.094093 kubelet[3344]: E0712 00:08:35.093546 3344 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:35.094093 kubelet[3344]: E0712 00:08:35.093630 3344 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-748mg" Jul 12 00:08:35.094093 kubelet[3344]: E0712 00:08:35.093665 3344 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-748mg" Jul 12 00:08:35.095979 kubelet[3344]: E0712 00:08:35.093769 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-748mg_calico-system(955d28b9-9d88-48e7-9db2-62374412839c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-748mg_calico-system(955d28b9-9d88-48e7-9db2-62374412839c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-748mg" podUID="955d28b9-9d88-48e7-9db2-62374412839c" Jul 12 00:08:35.169148 containerd[2015]: time="2025-07-12T00:08:35.168952459Z" level=error msg="StopPodSandbox for \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\" failed" error="failed to destroy network for sandbox \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:35.172835 kubelet[3344]: E0712 00:08:35.169280 3344 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Jul 12 00:08:35.172835 kubelet[3344]: E0712 00:08:35.169367 3344 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6"} Jul 12 00:08:35.172835 kubelet[3344]: E0712 00:08:35.169492 3344 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba9bbb6e-9361-4175-929c-1ae629fa9bce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:35.172835 kubelet[3344]: E0712 00:08:35.169534 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba9bbb6e-9361-4175-929c-1ae629fa9bce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-vcjmb" podUID="ba9bbb6e-9361-4175-929c-1ae629fa9bce" Jul 12 00:08:35.197064 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b-shm.mount: Deactivated successfully. Jul 12 00:08:35.197258 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5-shm.mount: Deactivated successfully. Jul 12 00:08:35.197393 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6-shm.mount: Deactivated successfully. Jul 12 00:08:35.198613 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f-shm.mount: Deactivated successfully. 
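Every failure in the block above reduces to the same root cause, spelled out in the error string itself: the Calico CNI plugin stats /var/lib/calico/nodename, the file the calico/node container writes once it is up and has registered the node, and at this point the file cannot exist because the node image is still being pulled (see the PullImage line at 00:08:34.987927034Z). Until it appears, every CNI ADD and DEL on the node is refused, kubelet logs "Error syncing pod, skipping" for each affected pod, and systemd cleans up the orphaned per-sandbox shm mounts. A minimal sketch of that guard, assuming only the path and message quoted in the log (illustrative, not Calico's actual source):

    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    const nodenameFile = "/var/lib/calico/nodename" // path exactly as reported in the log

    // nodename mimics the check behind the repeated failures above: read the
    // file calico/node writes at startup, and fail with a hint if it is absent.
    func nodename() (string, error) {
        data, err := os.ReadFile(nodenameFile)
        if errors.Is(err, os.ErrNotExist) {
            return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
        }
        if err != nil {
            return "", err
        }
        return string(data), nil
    }

    func main() {
        name, err := nodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err) // the message threaded through every kubelet error above
            os.Exit(1)
        }
        fmt.Println("node:", name)
    }

Run on a node in this state it prints exactly the hint seen above; once calico/node has written the file, it prints the node name instead.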
Jul 12 00:08:35.235060 containerd[2015]: time="2025-07-12T00:08:35.234909523Z" level=error msg="StopPodSandbox for \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\" failed" error="failed to destroy network for sandbox \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:35.235683 kubelet[3344]: E0712 00:08:35.235530 3344 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Jul 12 00:08:35.235808 kubelet[3344]: E0712 00:08:35.235717 3344 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d"} Jul 12 00:08:35.235897 kubelet[3344]: E0712 00:08:35.235846 3344 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9395332f-6218-4a3b-9efb-4b6737b7fd9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:35.236290 kubelet[3344]: E0712 00:08:35.235890 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9395332f-6218-4a3b-9efb-4b6737b7fd9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f6d8df55-scvhw" podUID="9395332f-6218-4a3b-9efb-4b6737b7fd9d" Jul 12 00:08:35.256139 containerd[2015]: time="2025-07-12T00:08:35.256073276Z" level=error msg="StopPodSandbox for \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\" failed" error="failed to destroy network for sandbox \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:35.256441 containerd[2015]: time="2025-07-12T00:08:35.256073252Z" level=error msg="StopPodSandbox for \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\" failed" error="failed to destroy network for sandbox \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:35.256669 kubelet[3344]: E0712 00:08:35.256599 3344 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Jul 12 00:08:35.256772 kubelet[3344]: E0712 00:08:35.256680 3344 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f"} Jul 12 00:08:35.256772 kubelet[3344]: E0712 00:08:35.256740 3344 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0725903c-a273-456a-a2eb-c24032ec4754\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:35.256964 kubelet[3344]: E0712 00:08:35.256792 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0725903c-a273-456a-a2eb-c24032ec4754\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-z5c86" podUID="0725903c-a273-456a-a2eb-c24032ec4754" Jul 12 00:08:35.257496 kubelet[3344]: E0712 00:08:35.257124 3344 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Jul 12 00:08:35.257496 kubelet[3344]: E0712 00:08:35.257350 3344 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf"} Jul 12 00:08:35.259327 kubelet[3344]: E0712 00:08:35.258972 3344 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a2fe5d4f-2f51-400a-adc0-e4e9bcdc9e97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:35.259327 kubelet[3344]: E0712 00:08:35.259081 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a2fe5d4f-2f51-400a-adc0-e4e9bcdc9e97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55ff68f59d-twxd6" podUID="a2fe5d4f-2f51-400a-adc0-e4e9bcdc9e97" Jul 12 00:08:35.262538 containerd[2015]: time="2025-07-12T00:08:35.262026476Z" level=error msg="StopPodSandbox for \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\" failed" error="failed to destroy network for sandbox \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:35.262682 kubelet[3344]: E0712 00:08:35.262513 3344 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Jul 12 00:08:35.262682 kubelet[3344]: E0712 00:08:35.262591 3344 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5"} Jul 12 00:08:35.262682 kubelet[3344]: E0712 00:08:35.262645 3344 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"54897f3e-0cc0-4a5e-8247-2e1792f1abf8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:35.262951 kubelet[3344]: E0712 00:08:35.262682 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"54897f3e-0cc0-4a5e-8247-2e1792f1abf8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-98d66569b-k8l5m" podUID="54897f3e-0cc0-4a5e-8247-2e1792f1abf8" Jul 12 00:08:35.267143 containerd[2015]: time="2025-07-12T00:08:35.267045824Z" level=error msg="StopPodSandbox for \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\" failed" error="failed to destroy network for sandbox \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:35.268006 kubelet[3344]: E0712 00:08:35.267508 3344 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Jul 12 00:08:35.268006 kubelet[3344]: E0712 00:08:35.267610 3344 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b"} Jul 12 00:08:35.268006 kubelet[3344]: E0712 00:08:35.267694 3344 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"32232079-fc02-426e-a296-066d8c1e6445\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:35.268006 kubelet[3344]: E0712 00:08:35.267759 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"32232079-fc02-426e-a296-066d8c1e6445\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-596f7dcbbd-zlhwz" podUID="32232079-fc02-426e-a296-066d8c1e6445" Jul 12 00:08:35.270697 containerd[2015]: time="2025-07-12T00:08:35.270587168Z" level=error msg="StopPodSandbox for \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\" failed" error="failed to destroy network for sandbox \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:35.271359 kubelet[3344]: E0712 00:08:35.270932 3344 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Jul 12 00:08:35.271359 kubelet[3344]: E0712 00:08:35.271007 3344 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295"} Jul 12 00:08:35.271359 kubelet[3344]: E0712 00:08:35.271064 3344 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"751d0e7c-ad8a-4efe-bafc-24b1a7d7ef96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:35.271359 kubelet[3344]: E0712 00:08:35.271103 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"KillPodSandbox\" for \"751d0e7c-ad8a-4efe-bafc-24b1a7d7ef96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hscg5" podUID="751d0e7c-ad8a-4efe-bafc-24b1a7d7ef96" Jul 12 00:08:35.272125 containerd[2015]: time="2025-07-12T00:08:35.272060480Z" level=error msg="StopPodSandbox for \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\" failed" error="failed to destroy network for sandbox \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:35.272760 kubelet[3344]: E0712 00:08:35.272358 3344 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" Jul 12 00:08:35.272760 kubelet[3344]: E0712 00:08:35.272423 3344 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213"} Jul 12 00:08:35.272760 kubelet[3344]: E0712 00:08:35.272528 3344 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f1aedc09-28b8-4374-aef6-1a1d5f40a7ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:35.272760 kubelet[3344]: E0712 00:08:35.272575 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f1aedc09-28b8-4374-aef6-1a1d5f40a7ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f6d8df55-xspm9" podUID="f1aedc09-28b8-4374-aef6-1a1d5f40a7ca" Jul 12 00:08:36.043002 kubelet[3344]: I0712 00:08:36.042965 3344 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Jul 12 00:08:36.046927 containerd[2015]: time="2025-07-12T00:08:36.045046915Z" level=info msg="StopPodSandbox for \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\"" Jul 12 00:08:36.046927 containerd[2015]: time="2025-07-12T00:08:36.045342307Z" level=info msg="Ensure that sandbox 46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22 in 
task-service has been cleanup successfully" Jul 12 00:08:36.092195 containerd[2015]: time="2025-07-12T00:08:36.091839536Z" level=error msg="StopPodSandbox for \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\" failed" error="failed to destroy network for sandbox \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:08:36.092777 kubelet[3344]: E0712 00:08:36.092202 3344 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Jul 12 00:08:36.093167 kubelet[3344]: E0712 00:08:36.092282 3344 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22"} Jul 12 00:08:36.093648 kubelet[3344]: E0712 00:08:36.093315 3344 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"955d28b9-9d88-48e7-9db2-62374412839c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:08:36.093648 kubelet[3344]: E0712 00:08:36.093406 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"955d28b9-9d88-48e7-9db2-62374412839c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-748mg" podUID="955d28b9-9d88-48e7-9db2-62374412839c" Jul 12 00:08:42.123955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4058459152.mount: Deactivated successfully. 
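Note the symmetry in the block above: sandbox teardown goes through the same CNI plugin, so every StopPodSandbox / KillPodSandbox attempt fails on the identical stat, and a sandbox that could not be set up cannot be cleanly destroyed either. The affected pods simply re-enter kubelet's sync loop until calico/node is running. Reduced to a sketch (kubelet's real pod workers use a work queue with capped exponential backoff; the pod name is taken from the log, the timings are invented for illustration):

    package main

    import (
        "fmt"
        "time"
    )

    // syncPod stands in for the CreatePodSandbox / KillPodSandbox attempts in
    // the log; it keeps failing until the CNI nodename check starts passing.
    func syncPod(pod string, cniReady func() bool) error {
        if !cniReady() {
            return fmt.Errorf("failed to %q for %q", "CreatePodSandbox", pod)
        }
        return nil
    }

    func main() {
        start := time.Now()
        // Stand-in for calico/node coming up a few seconds later, as it does in the log.
        cniReady := func() bool { return time.Since(start) > 3*time.Second }

        backoff := 250 * time.Millisecond
        for {
            if err := syncPod("csi-node-driver-748mg_calico-system", cniReady); err != nil {
                fmt.Printf("Error syncing pod, skipping: %v; retrying in %v\n", err, backoff)
                time.Sleep(backoff)
                backoff *= 2
                if backoff > 5*time.Second {
                    backoff = 5 * time.Second // the real kubelet caps its backoff as well
                }
                continue
            }
            fmt.Println("sync succeeded")
            return
        }
    }

By 00:08:42 the image pull completes and these retries start succeeding, which is what the next block shows.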
Jul 12 00:08:42.198560 containerd[2015]: time="2025-07-12T00:08:42.198063590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:42.200406 containerd[2015]: time="2025-07-12T00:08:42.200329034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 12 00:08:42.202835 containerd[2015]: time="2025-07-12T00:08:42.202742078Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:42.207662 containerd[2015]: time="2025-07-12T00:08:42.207578870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:42.209350 containerd[2015]: time="2025-07-12T00:08:42.209150426Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 7.221138072s" Jul 12 00:08:42.209350 containerd[2015]: time="2025-07-12T00:08:42.209211002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 12 00:08:42.249714 containerd[2015]: time="2025-07-12T00:08:42.249601754Z" level=info msg="CreateContainer within sandbox \"d7070102558028da7921e570c00d70327627c8166ab266d7e8b33eecbbfc369d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 12 00:08:42.307271 containerd[2015]: time="2025-07-12T00:08:42.307104987Z" level=info msg="CreateContainer within sandbox \"d7070102558028da7921e570c00d70327627c8166ab266d7e8b33eecbbfc369d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bb42760c5439fb23f1def3b250a130200888e18f9856183fb37b2cd287038814\"" Jul 12 00:08:42.308665 containerd[2015]: time="2025-07-12T00:08:42.308557827Z" level=info msg="StartContainer for \"bb42760c5439fb23f1def3b250a130200888e18f9856183fb37b2cd287038814\"" Jul 12 00:08:42.369802 systemd[1]: Started cri-containerd-bb42760c5439fb23f1def3b250a130200888e18f9856183fb37b2cd287038814.scope - libcontainer container bb42760c5439fb23f1def3b250a130200888e18f9856183fb37b2cd287038814. Jul 12 00:08:42.439748 containerd[2015]: time="2025-07-12T00:08:42.439323063Z" level=info msg="StartContainer for \"bb42760c5439fb23f1def3b250a130200888e18f9856183fb37b2cd287038814\" returns successfully" Jul 12 00:08:42.698306 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 12 00:08:42.698538 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
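Two details are worth pulling out of the block above. First, the pull itself: containerd reads 152544909 bytes for ghcr.io/flatcar/calico/node:v3.30.2 in 7.221138072s, roughly 21 MB/s; the recorded size of 152544771 bytes is the digest-addressed image size, which differs slightly from the bytes transferred. A quick check of that arithmetic, with the constants copied from the log:

    package main

    import "fmt"

    func main() {
        const bytesRead = 152544909.0 // "active requests=0, bytes read=152544909"
        const seconds = 7.221138072   // "... in 7.221138072s"
        fmt.Printf("effective pull rate: %.1f MB/s (%.1f MiB/s)\n",
            bytesRead/seconds/1e6, bytesRead/seconds/(1<<20))
    }

Second, the kernel WireGuard banner printed right after the calico-node container starts is consistent with Calico probing for WireGuard support on startup, which loads the module; on its own it does not mean WireGuard encryption is enabled.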
Jul 12 00:08:42.899881 containerd[2015]: time="2025-07-12T00:08:42.898841190Z" level=info msg="StopPodSandbox for \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\"" Jul 12 00:08:43.144864 kubelet[3344]: I0712 00:08:43.144687 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dczhx" podStartSLOduration=1.9503722460000001 podStartE2EDuration="19.144331851s" podCreationTimestamp="2025-07-12 00:08:24 +0000 UTC" firstStartedPulling="2025-07-12 00:08:25.016816389 +0000 UTC m=+29.723658077" lastFinishedPulling="2025-07-12 00:08:42.210776006 +0000 UTC m=+46.917617682" observedRunningTime="2025-07-12 00:08:43.140500767 +0000 UTC m=+47.847342479" watchObservedRunningTime="2025-07-12 00:08:43.144331851 +0000 UTC m=+47.851173539" Jul 12 00:08:43.194867 systemd[1]: run-containerd-runc-k8s.io-bb42760c5439fb23f1def3b250a130200888e18f9856183fb37b2cd287038814-runc.8uPUY4.mount: Deactivated successfully. Jul 12 00:08:43.354587 containerd[2015]: 2025-07-12 00:08:43.198 [INFO][4571] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Jul 12 00:08:43.354587 containerd[2015]: 2025-07-12 00:08:43.200 [INFO][4571] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" iface="eth0" netns="/var/run/netns/cni-92d6e0c8-6e3e-74ba-6887-54274316cce5" Jul 12 00:08:43.354587 containerd[2015]: 2025-07-12 00:08:43.201 [INFO][4571] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" iface="eth0" netns="/var/run/netns/cni-92d6e0c8-6e3e-74ba-6887-54274316cce5" Jul 12 00:08:43.354587 containerd[2015]: 2025-07-12 00:08:43.203 [INFO][4571] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" iface="eth0" netns="/var/run/netns/cni-92d6e0c8-6e3e-74ba-6887-54274316cce5" Jul 12 00:08:43.354587 containerd[2015]: 2025-07-12 00:08:43.203 [INFO][4571] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Jul 12 00:08:43.354587 containerd[2015]: 2025-07-12 00:08:43.203 [INFO][4571] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Jul 12 00:08:43.354587 containerd[2015]: 2025-07-12 00:08:43.313 [INFO][4594] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" HandleID="k8s-pod-network.3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Workload="ip--172--31--18--25-k8s-whisker--98d66569b--k8l5m-eth0" Jul 12 00:08:43.354587 containerd[2015]: 2025-07-12 00:08:43.313 [INFO][4594] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:43.354587 containerd[2015]: 2025-07-12 00:08:43.313 [INFO][4594] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:43.354587 containerd[2015]: 2025-07-12 00:08:43.330 [WARNING][4594] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" HandleID="k8s-pod-network.3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Workload="ip--172--31--18--25-k8s-whisker--98d66569b--k8l5m-eth0" Jul 12 00:08:43.354587 containerd[2015]: 2025-07-12 00:08:43.331 [INFO][4594] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" HandleID="k8s-pod-network.3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Workload="ip--172--31--18--25-k8s-whisker--98d66569b--k8l5m-eth0" Jul 12 00:08:43.354587 containerd[2015]: 2025-07-12 00:08:43.341 [INFO][4594] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:43.354587 containerd[2015]: 2025-07-12 00:08:43.349 [INFO][4571] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Jul 12 00:08:43.356610 containerd[2015]: time="2025-07-12T00:08:43.355535092Z" level=info msg="TearDown network for sandbox \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\" successfully" Jul 12 00:08:43.356610 containerd[2015]: time="2025-07-12T00:08:43.355584148Z" level=info msg="StopPodSandbox for \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\" returns successfully" Jul 12 00:08:43.369702 systemd[1]: run-netns-cni\x2d92d6e0c8\x2d6e3e\x2d74ba\x2d6887\x2d54274316cce5.mount: Deactivated successfully. Jul 12 00:08:43.466098 kubelet[3344]: I0712 00:08:43.464727 3344 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g52w\" (UniqueName: \"kubernetes.io/projected/54897f3e-0cc0-4a5e-8247-2e1792f1abf8-kube-api-access-6g52w\") pod \"54897f3e-0cc0-4a5e-8247-2e1792f1abf8\" (UID: \"54897f3e-0cc0-4a5e-8247-2e1792f1abf8\") " Jul 12 00:08:43.466098 kubelet[3344]: I0712 00:08:43.464829 3344 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54897f3e-0cc0-4a5e-8247-2e1792f1abf8-whisker-ca-bundle\") pod \"54897f3e-0cc0-4a5e-8247-2e1792f1abf8\" (UID: \"54897f3e-0cc0-4a5e-8247-2e1792f1abf8\") " Jul 12 00:08:43.466098 kubelet[3344]: I0712 00:08:43.464889 3344 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/54897f3e-0cc0-4a5e-8247-2e1792f1abf8-whisker-backend-key-pair\") pod \"54897f3e-0cc0-4a5e-8247-2e1792f1abf8\" (UID: \"54897f3e-0cc0-4a5e-8247-2e1792f1abf8\") " Jul 12 00:08:43.468953 kubelet[3344]: I0712 00:08:43.467580 3344 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54897f3e-0cc0-4a5e-8247-2e1792f1abf8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "54897f3e-0cc0-4a5e-8247-2e1792f1abf8" (UID: "54897f3e-0cc0-4a5e-8247-2e1792f1abf8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:08:43.477321 kubelet[3344]: I0712 00:08:43.475437 3344 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54897f3e-0cc0-4a5e-8247-2e1792f1abf8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "54897f3e-0cc0-4a5e-8247-2e1792f1abf8" (UID: "54897f3e-0cc0-4a5e-8247-2e1792f1abf8"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:08:43.481852 systemd[1]: var-lib-kubelet-pods-54897f3e\x2d0cc0\x2d4a5e\x2d8247\x2d2e1792f1abf8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 12 00:08:43.485356 kubelet[3344]: I0712 00:08:43.485280 3344 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54897f3e-0cc0-4a5e-8247-2e1792f1abf8-kube-api-access-6g52w" (OuterVolumeSpecName: "kube-api-access-6g52w") pod "54897f3e-0cc0-4a5e-8247-2e1792f1abf8" (UID: "54897f3e-0cc0-4a5e-8247-2e1792f1abf8"). InnerVolumeSpecName "kube-api-access-6g52w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:08:43.566346 kubelet[3344]: I0712 00:08:43.566279 3344 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54897f3e-0cc0-4a5e-8247-2e1792f1abf8-whisker-ca-bundle\") on node \"ip-172-31-18-25\" DevicePath \"\"" Jul 12 00:08:43.566346 kubelet[3344]: I0712 00:08:43.566343 3344 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/54897f3e-0cc0-4a5e-8247-2e1792f1abf8-whisker-backend-key-pair\") on node \"ip-172-31-18-25\" DevicePath \"\"" Jul 12 00:08:43.566642 kubelet[3344]: I0712 00:08:43.566370 3344 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g52w\" (UniqueName: \"kubernetes.io/projected/54897f3e-0cc0-4a5e-8247-2e1792f1abf8-kube-api-access-6g52w\") on node \"ip-172-31-18-25\" DevicePath \"\"" Jul 12 00:08:43.624255 systemd[1]: Removed slice kubepods-besteffort-pod54897f3e_0cc0_4a5e_8247_2e1792f1abf8.slice - libcontainer container kubepods-besteffort-pod54897f3e_0cc0_4a5e_8247_2e1792f1abf8.slice. Jul 12 00:08:44.133236 systemd[1]: var-lib-kubelet-pods-54897f3e\x2d0cc0\x2d4a5e\x2d8247\x2d2e1792f1abf8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6g52w.mount: Deactivated successfully. Jul 12 00:08:44.237354 systemd[1]: Created slice kubepods-besteffort-pod7f01a831_3a10_4478_bdae_4c67d7eebb43.slice - libcontainer container kubepods-besteffort-pod7f01a831_3a10_4478_bdae_4c67d7eebb43.slice. 
Jul 12 00:08:44.374792 kubelet[3344]: I0712 00:08:44.374348 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cpzh\" (UniqueName: \"kubernetes.io/projected/7f01a831-3a10-4478-bdae-4c67d7eebb43-kube-api-access-9cpzh\") pod \"whisker-66dc8f8c8d-fx6zl\" (UID: \"7f01a831-3a10-4478-bdae-4c67d7eebb43\") " pod="calico-system/whisker-66dc8f8c8d-fx6zl" Jul 12 00:08:44.374792 kubelet[3344]: I0712 00:08:44.374442 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f01a831-3a10-4478-bdae-4c67d7eebb43-whisker-ca-bundle\") pod \"whisker-66dc8f8c8d-fx6zl\" (UID: \"7f01a831-3a10-4478-bdae-4c67d7eebb43\") " pod="calico-system/whisker-66dc8f8c8d-fx6zl" Jul 12 00:08:44.374792 kubelet[3344]: I0712 00:08:44.374684 3344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7f01a831-3a10-4478-bdae-4c67d7eebb43-whisker-backend-key-pair\") pod \"whisker-66dc8f8c8d-fx6zl\" (UID: \"7f01a831-3a10-4478-bdae-4c67d7eebb43\") " pod="calico-system/whisker-66dc8f8c8d-fx6zl" Jul 12 00:08:44.545681 containerd[2015]: time="2025-07-12T00:08:44.545119326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66dc8f8c8d-fx6zl,Uid:7f01a831-3a10-4478-bdae-4c67d7eebb43,Namespace:calico-system,Attempt:0,}" Jul 12 00:08:44.907551 (udev-worker)[4556]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:08:44.909925 systemd-networkd[1936]: cali5bd57d8cf99: Link UP Jul 12 00:08:44.913870 systemd-networkd[1936]: cali5bd57d8cf99: Gained carrier Jul 12 00:08:44.950370 containerd[2015]: 2025-07-12 00:08:44.660 [INFO][4657] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 00:08:44.950370 containerd[2015]: 2025-07-12 00:08:44.691 [INFO][4657] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--25-k8s-whisker--66dc8f8c8d--fx6zl-eth0 whisker-66dc8f8c8d- calico-system 7f01a831-3a10-4478-bdae-4c67d7eebb43 955 0 2025-07-12 00:08:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66dc8f8c8d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-18-25 whisker-66dc8f8c8d-fx6zl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5bd57d8cf99 [] [] }} ContainerID="88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c" Namespace="calico-system" Pod="whisker-66dc8f8c8d-fx6zl" WorkloadEndpoint="ip--172--31--18--25-k8s-whisker--66dc8f8c8d--fx6zl-" Jul 12 00:08:44.950370 containerd[2015]: 2025-07-12 00:08:44.691 [INFO][4657] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c" Namespace="calico-system" Pod="whisker-66dc8f8c8d-fx6zl" WorkloadEndpoint="ip--172--31--18--25-k8s-whisker--66dc8f8c8d--fx6zl-eth0" Jul 12 00:08:44.950370 containerd[2015]: 2025-07-12 00:08:44.797 [INFO][4700] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c" HandleID="k8s-pod-network.88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c" Workload="ip--172--31--18--25-k8s-whisker--66dc8f8c8d--fx6zl-eth0" Jul 12 00:08:44.950370 
containerd[2015]: 2025-07-12 00:08:44.797 [INFO][4700] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c" HandleID="k8s-pod-network.88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c" Workload="ip--172--31--18--25-k8s-whisker--66dc8f8c8d--fx6zl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000281aa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-25", "pod":"whisker-66dc8f8c8d-fx6zl", "timestamp":"2025-07-12 00:08:44.797198971 +0000 UTC"}, Hostname:"ip-172-31-18-25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:44.950370 containerd[2015]: 2025-07-12 00:08:44.797 [INFO][4700] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:44.950370 containerd[2015]: 2025-07-12 00:08:44.797 [INFO][4700] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:44.950370 containerd[2015]: 2025-07-12 00:08:44.797 [INFO][4700] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-25' Jul 12 00:08:44.950370 containerd[2015]: 2025-07-12 00:08:44.827 [INFO][4700] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c" host="ip-172-31-18-25" Jul 12 00:08:44.950370 containerd[2015]: 2025-07-12 00:08:44.842 [INFO][4700] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-25" Jul 12 00:08:44.950370 containerd[2015]: 2025-07-12 00:08:44.851 [INFO][4700] ipam/ipam.go 511: Trying affinity for 192.168.56.192/26 host="ip-172-31-18-25" Jul 12 00:08:44.950370 containerd[2015]: 2025-07-12 00:08:44.856 [INFO][4700] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.192/26 host="ip-172-31-18-25" Jul 12 00:08:44.950370 containerd[2015]: 2025-07-12 00:08:44.860 [INFO][4700] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.192/26 host="ip-172-31-18-25" Jul 12 00:08:44.950370 containerd[2015]: 2025-07-12 00:08:44.862 [INFO][4700] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.56.192/26 handle="k8s-pod-network.88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c" host="ip-172-31-18-25" Jul 12 00:08:44.950370 containerd[2015]: 2025-07-12 00:08:44.865 [INFO][4700] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c Jul 12 00:08:44.950370 containerd[2015]: 2025-07-12 00:08:44.872 [INFO][4700] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.56.192/26 handle="k8s-pod-network.88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c" host="ip-172-31-18-25" Jul 12 00:08:44.950370 containerd[2015]: 2025-07-12 00:08:44.885 [INFO][4700] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.56.193/26] block=192.168.56.192/26 handle="k8s-pod-network.88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c" host="ip-172-31-18-25" Jul 12 00:08:44.950370 containerd[2015]: 2025-07-12 00:08:44.886 [INFO][4700] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.193/26] handle="k8s-pod-network.88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c" host="ip-172-31-18-25" Jul 12 00:08:44.950370 containerd[2015]: 2025-07-12 00:08:44.886 [INFO][4700] ipam/ipam_plugin.go 
374: Released host-wide IPAM lock. Jul 12 00:08:44.950370 containerd[2015]: 2025-07-12 00:08:44.886 [INFO][4700] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.193/26] IPv6=[] ContainerID="88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c" HandleID="k8s-pod-network.88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c" Workload="ip--172--31--18--25-k8s-whisker--66dc8f8c8d--fx6zl-eth0" Jul 12 00:08:44.965230 containerd[2015]: 2025-07-12 00:08:44.893 [INFO][4657] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c" Namespace="calico-system" Pod="whisker-66dc8f8c8d-fx6zl" WorkloadEndpoint="ip--172--31--18--25-k8s-whisker--66dc8f8c8d--fx6zl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-whisker--66dc8f8c8d--fx6zl-eth0", GenerateName:"whisker-66dc8f8c8d-", Namespace:"calico-system", SelfLink:"", UID:"7f01a831-3a10-4478-bdae-4c67d7eebb43", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66dc8f8c8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"", Pod:"whisker-66dc8f8c8d-fx6zl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.56.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5bd57d8cf99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:44.965230 containerd[2015]: 2025-07-12 00:08:44.893 [INFO][4657] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.193/32] ContainerID="88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c" Namespace="calico-system" Pod="whisker-66dc8f8c8d-fx6zl" WorkloadEndpoint="ip--172--31--18--25-k8s-whisker--66dc8f8c8d--fx6zl-eth0" Jul 12 00:08:44.965230 containerd[2015]: 2025-07-12 00:08:44.893 [INFO][4657] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5bd57d8cf99 ContainerID="88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c" Namespace="calico-system" Pod="whisker-66dc8f8c8d-fx6zl" WorkloadEndpoint="ip--172--31--18--25-k8s-whisker--66dc8f8c8d--fx6zl-eth0" Jul 12 00:08:44.965230 containerd[2015]: 2025-07-12 00:08:44.915 [INFO][4657] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c" Namespace="calico-system" Pod="whisker-66dc8f8c8d-fx6zl" WorkloadEndpoint="ip--172--31--18--25-k8s-whisker--66dc8f8c8d--fx6zl-eth0" Jul 12 00:08:44.965230 containerd[2015]: 2025-07-12 00:08:44.918 [INFO][4657] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c" Namespace="calico-system" Pod="whisker-66dc8f8c8d-fx6zl" 
WorkloadEndpoint="ip--172--31--18--25-k8s-whisker--66dc8f8c8d--fx6zl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-whisker--66dc8f8c8d--fx6zl-eth0", GenerateName:"whisker-66dc8f8c8d-", Namespace:"calico-system", SelfLink:"", UID:"7f01a831-3a10-4478-bdae-4c67d7eebb43", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66dc8f8c8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c", Pod:"whisker-66dc8f8c8d-fx6zl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.56.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5bd57d8cf99", MAC:"c2:7f:8b:0d:50:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:44.965230 containerd[2015]: 2025-07-12 00:08:44.943 [INFO][4657] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c" Namespace="calico-system" Pod="whisker-66dc8f8c8d-fx6zl" WorkloadEndpoint="ip--172--31--18--25-k8s-whisker--66dc8f8c8d--fx6zl-eth0" Jul 12 00:08:45.037176 containerd[2015]: time="2025-07-12T00:08:45.035589760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:45.042393 containerd[2015]: time="2025-07-12T00:08:45.037479172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:45.042393 containerd[2015]: time="2025-07-12T00:08:45.038916340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:45.042393 containerd[2015]: time="2025-07-12T00:08:45.039103540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:45.126824 systemd[1]: Started cri-containerd-88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c.scope - libcontainer container 88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c. 
Jul 12 00:08:45.301764 containerd[2015]: time="2025-07-12T00:08:45.301670825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66dc8f8c8d-fx6zl,Uid:7f01a831-3a10-4478-bdae-4c67d7eebb43,Namespace:calico-system,Attempt:0,} returns sandbox id \"88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c\"" Jul 12 00:08:45.311301 containerd[2015]: time="2025-07-12T00:08:45.310422486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 12 00:08:45.618851 kubelet[3344]: I0712 00:08:45.618773 3344 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54897f3e-0cc0-4a5e-8247-2e1792f1abf8" path="/var/lib/kubelet/pods/54897f3e-0cc0-4a5e-8247-2e1792f1abf8/volumes" Jul 12 00:08:45.623104 containerd[2015]: time="2025-07-12T00:08:45.623042335Z" level=info msg="StopPodSandbox for \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\"" Jul 12 00:08:45.899486 containerd[2015]: 2025-07-12 00:08:45.777 [INFO][4813] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Jul 12 00:08:45.899486 containerd[2015]: 2025-07-12 00:08:45.778 [INFO][4813] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" iface="eth0" netns="/var/run/netns/cni-2c76e801-5f5b-928e-ae38-570c3460d9f2" Jul 12 00:08:45.899486 containerd[2015]: 2025-07-12 00:08:45.778 [INFO][4813] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" iface="eth0" netns="/var/run/netns/cni-2c76e801-5f5b-928e-ae38-570c3460d9f2" Jul 12 00:08:45.899486 containerd[2015]: 2025-07-12 00:08:45.779 [INFO][4813] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" iface="eth0" netns="/var/run/netns/cni-2c76e801-5f5b-928e-ae38-570c3460d9f2" Jul 12 00:08:45.899486 containerd[2015]: 2025-07-12 00:08:45.779 [INFO][4813] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Jul 12 00:08:45.899486 containerd[2015]: 2025-07-12 00:08:45.779 [INFO][4813] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Jul 12 00:08:45.899486 containerd[2015]: 2025-07-12 00:08:45.868 [INFO][4824] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" HandleID="k8s-pod-network.7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Workload="ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0" Jul 12 00:08:45.899486 containerd[2015]: 2025-07-12 00:08:45.868 [INFO][4824] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:45.899486 containerd[2015]: 2025-07-12 00:08:45.868 [INFO][4824] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:45.899486 containerd[2015]: 2025-07-12 00:08:45.884 [WARNING][4824] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
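
The pod_startup_latency_tracker entry a little earlier (00:08:43, for calico-node-dczhx) is internally consistent and worth decoding: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window, which kubelet measures on the monotonic clock (the m=+ offsets). Recomputed from the logged values, sketch only:

    from decimal import Decimal

    # Values copied from the pod_startup_latency_tracker entry above.
    created = Decimal("24")              # podCreationTimestamp, seconds past 00:08
    running = Decimal("43.144331851")    # watchObservedRunningTime, seconds past 00:08
    pull_a  = Decimal("29.723658077")    # firstStartedPulling, monotonic m=+ offset
    pull_b  = Decimal("46.917617682")    # lastFinishedPulling, monotonic m=+ offset

    e2e = running - created              # 19.144331851 s -> podStartE2EDuration
    slo = e2e - (pull_b - pull_a)        # 1.950372246 s  -> podStartSLOduration
    print(e2e, slo)

So of the 19.1s end-to-end startup, about 17.2s was image pulling, leaving roughly 1.95s charged against the startup SLO.
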
Ignoring ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" HandleID="k8s-pod-network.7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Workload="ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0" Jul 12 00:08:45.899486 containerd[2015]: 2025-07-12 00:08:45.884 [INFO][4824] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" HandleID="k8s-pod-network.7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Workload="ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0" Jul 12 00:08:45.899486 containerd[2015]: 2025-07-12 00:08:45.889 [INFO][4824] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:45.899486 containerd[2015]: 2025-07-12 00:08:45.892 [INFO][4813] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Jul 12 00:08:45.907962 containerd[2015]: time="2025-07-12T00:08:45.901642256Z" level=info msg="TearDown network for sandbox \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\" successfully" Jul 12 00:08:45.907962 containerd[2015]: time="2025-07-12T00:08:45.901687976Z" level=info msg="StopPodSandbox for \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\" returns successfully" Jul 12 00:08:45.907962 containerd[2015]: time="2025-07-12T00:08:45.903502064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-vcjmb,Uid:ba9bbb6e-9361-4175-929c-1ae629fa9bce,Namespace:calico-system,Attempt:1,}" Jul 12 00:08:45.912083 systemd[1]: run-netns-cni\x2d2c76e801\x2d5f5b\x2d928e\x2dae38\x2d570c3460d9f2.mount: Deactivated successfully. Jul 12 00:08:46.394495 kernel: bpftool[4873]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 12 00:08:46.448008 systemd-networkd[1936]: cali00c9a9d485a: Link UP Jul 12 00:08:46.453211 systemd-networkd[1936]: cali00c9a9d485a: Gained carrier Jul 12 00:08:46.487145 systemd-networkd[1936]: cali5bd57d8cf99: Gained IPv6LL Jul 12 00:08:46.509090 containerd[2015]: 2025-07-12 00:08:46.219 [INFO][4835] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0 goldmane-768f4c5c69- calico-system ba9bbb6e-9361-4175-929c-1ae629fa9bce 964 0 2025-07-12 00:08:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-18-25 goldmane-768f4c5c69-vcjmb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali00c9a9d485a [] [] }} ContainerID="8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44" Namespace="calico-system" Pod="goldmane-768f4c5c69-vcjmb" WorkloadEndpoint="ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-" Jul 12 00:08:46.509090 containerd[2015]: 2025-07-12 00:08:46.220 [INFO][4835] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44" Namespace="calico-system" Pod="goldmane-768f4c5c69-vcjmb" WorkloadEndpoint="ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0" Jul 12 00:08:46.509090 containerd[2015]: 2025-07-12 00:08:46.340 [INFO][4851] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44" HandleID="k8s-pod-network.8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44" Workload="ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0" Jul 12 00:08:46.509090 containerd[2015]: 2025-07-12 00:08:46.340 [INFO][4851] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44" HandleID="k8s-pod-network.8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44" Workload="ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3ad0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-25", "pod":"goldmane-768f4c5c69-vcjmb", "timestamp":"2025-07-12 00:08:46.340111855 +0000 UTC"}, Hostname:"ip-172-31-18-25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:46.509090 containerd[2015]: 2025-07-12 00:08:46.340 [INFO][4851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:46.509090 containerd[2015]: 2025-07-12 00:08:46.340 [INFO][4851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:46.509090 containerd[2015]: 2025-07-12 00:08:46.341 [INFO][4851] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-25' Jul 12 00:08:46.509090 containerd[2015]: 2025-07-12 00:08:46.360 [INFO][4851] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44" host="ip-172-31-18-25" Jul 12 00:08:46.509090 containerd[2015]: 2025-07-12 00:08:46.370 [INFO][4851] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-25" Jul 12 00:08:46.509090 containerd[2015]: 2025-07-12 00:08:46.384 [INFO][4851] ipam/ipam.go 511: Trying affinity for 192.168.56.192/26 host="ip-172-31-18-25" Jul 12 00:08:46.509090 containerd[2015]: 2025-07-12 00:08:46.389 [INFO][4851] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.192/26 host="ip-172-31-18-25" Jul 12 00:08:46.509090 containerd[2015]: 2025-07-12 00:08:46.394 [INFO][4851] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.192/26 host="ip-172-31-18-25" Jul 12 00:08:46.509090 containerd[2015]: 2025-07-12 00:08:46.394 [INFO][4851] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.56.192/26 handle="k8s-pod-network.8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44" host="ip-172-31-18-25" Jul 12 00:08:46.509090 containerd[2015]: 2025-07-12 00:08:46.397 [INFO][4851] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44 Jul 12 00:08:46.509090 containerd[2015]: 2025-07-12 00:08:46.406 [INFO][4851] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.56.192/26 handle="k8s-pod-network.8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44" host="ip-172-31-18-25" Jul 12 00:08:46.509090 containerd[2015]: 2025-07-12 00:08:46.419 [INFO][4851] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.56.194/26] block=192.168.56.192/26 handle="k8s-pod-network.8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44" host="ip-172-31-18-25" Jul 12 00:08:46.509090 containerd[2015]: 2025-07-12 00:08:46.419 [INFO][4851] ipam/ipam.go 878: 
Auto-assigned 1 out of 1 IPv4s: [192.168.56.194/26] handle="k8s-pod-network.8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44" host="ip-172-31-18-25" Jul 12 00:08:46.509090 containerd[2015]: 2025-07-12 00:08:46.419 [INFO][4851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:46.509090 containerd[2015]: 2025-07-12 00:08:46.419 [INFO][4851] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.194/26] IPv6=[] ContainerID="8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44" HandleID="k8s-pod-network.8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44" Workload="ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0" Jul 12 00:08:46.511368 containerd[2015]: 2025-07-12 00:08:46.424 [INFO][4835] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44" Namespace="calico-system" Pod="goldmane-768f4c5c69-vcjmb" WorkloadEndpoint="ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"ba9bbb6e-9361-4175-929c-1ae629fa9bce", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"", Pod:"goldmane-768f4c5c69-vcjmb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.56.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali00c9a9d485a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:46.511368 containerd[2015]: 2025-07-12 00:08:46.425 [INFO][4835] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.194/32] ContainerID="8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44" Namespace="calico-system" Pod="goldmane-768f4c5c69-vcjmb" WorkloadEndpoint="ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0" Jul 12 00:08:46.511368 containerd[2015]: 2025-07-12 00:08:46.425 [INFO][4835] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali00c9a9d485a ContainerID="8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44" Namespace="calico-system" Pod="goldmane-768f4c5c69-vcjmb" WorkloadEndpoint="ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0" Jul 12 00:08:46.511368 containerd[2015]: 2025-07-12 00:08:46.447 [INFO][4835] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44" Namespace="calico-system" Pod="goldmane-768f4c5c69-vcjmb" WorkloadEndpoint="ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0" Jul 12 
00:08:46.511368 containerd[2015]: 2025-07-12 00:08:46.454 [INFO][4835] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44" Namespace="calico-system" Pod="goldmane-768f4c5c69-vcjmb" WorkloadEndpoint="ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"ba9bbb6e-9361-4175-929c-1ae629fa9bce", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44", Pod:"goldmane-768f4c5c69-vcjmb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.56.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali00c9a9d485a", MAC:"c6:22:a0:fe:f2:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:46.511368 containerd[2015]: 2025-07-12 00:08:46.501 [INFO][4835] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44" Namespace="calico-system" Pod="goldmane-768f4c5c69-vcjmb" WorkloadEndpoint="ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0" Jul 12 00:08:46.593576 containerd[2015]: time="2025-07-12T00:08:46.592825616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:46.593576 containerd[2015]: time="2025-07-12T00:08:46.593048156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:46.593576 containerd[2015]: time="2025-07-12T00:08:46.593074004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:46.593576 containerd[2015]: time="2025-07-12T00:08:46.593224220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:46.618808 containerd[2015]: time="2025-07-12T00:08:46.618750212Z" level=info msg="StopPodSandbox for \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\"" Jul 12 00:08:46.625350 containerd[2015]: time="2025-07-12T00:08:46.625276724Z" level=info msg="StopPodSandbox for \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\"" Jul 12 00:08:46.627768 containerd[2015]: time="2025-07-12T00:08:46.626659724Z" level=info msg="StopPodSandbox for \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\"" Jul 12 00:08:46.627915 containerd[2015]: time="2025-07-12T00:08:46.627110756Z" level=info msg="StopPodSandbox for \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\"" Jul 12 00:08:46.760813 systemd[1]: Started cri-containerd-8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44.scope - libcontainer container 8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44. Jul 12 00:08:47.024440 containerd[2015]: time="2025-07-12T00:08:47.023697570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-vcjmb,Uid:ba9bbb6e-9361-4175-929c-1ae629fa9bce,Namespace:calico-system,Attempt:1,} returns sandbox id \"8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44\"" Jul 12 00:08:47.325307 containerd[2015]: 2025-07-12 00:08:47.047 [INFO][4949] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Jul 12 00:08:47.325307 containerd[2015]: 2025-07-12 00:08:47.047 [INFO][4949] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" iface="eth0" netns="/var/run/netns/cni-99b50662-66d9-7da5-6fdd-69601e1fa6c4" Jul 12 00:08:47.325307 containerd[2015]: 2025-07-12 00:08:47.047 [INFO][4949] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" iface="eth0" netns="/var/run/netns/cni-99b50662-66d9-7da5-6fdd-69601e1fa6c4" Jul 12 00:08:47.325307 containerd[2015]: 2025-07-12 00:08:47.049 [INFO][4949] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" iface="eth0" netns="/var/run/netns/cni-99b50662-66d9-7da5-6fdd-69601e1fa6c4" Jul 12 00:08:47.325307 containerd[2015]: 2025-07-12 00:08:47.051 [INFO][4949] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Jul 12 00:08:47.325307 containerd[2015]: 2025-07-12 00:08:47.051 [INFO][4949] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Jul 12 00:08:47.325307 containerd[2015]: 2025-07-12 00:08:47.256 [INFO][4993] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" HandleID="k8s-pod-network.815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0" Jul 12 00:08:47.325307 containerd[2015]: 2025-07-12 00:08:47.256 [INFO][4993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:47.325307 containerd[2015]: 2025-07-12 00:08:47.256 [INFO][4993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:47.325307 containerd[2015]: 2025-07-12 00:08:47.296 [WARNING][4993] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" HandleID="k8s-pod-network.815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0" Jul 12 00:08:47.325307 containerd[2015]: 2025-07-12 00:08:47.296 [INFO][4993] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" HandleID="k8s-pod-network.815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0" Jul 12 00:08:47.325307 containerd[2015]: 2025-07-12 00:08:47.301 [INFO][4993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:47.325307 containerd[2015]: 2025-07-12 00:08:47.313 [INFO][4949] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Jul 12 00:08:47.330818 containerd[2015]: time="2025-07-12T00:08:47.327418952Z" level=info msg="TearDown network for sandbox \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\" successfully" Jul 12 00:08:47.330818 containerd[2015]: time="2025-07-12T00:08:47.327536660Z" level=info msg="StopPodSandbox for \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\" returns successfully" Jul 12 00:08:47.337080 systemd[1]: run-netns-cni\x2d99b50662\x2d66d9\x2d7da5\x2d6fdd\x2d69601e1fa6c4.mount: Deactivated successfully. Jul 12 00:08:47.355497 containerd[2015]: time="2025-07-12T00:08:47.355000496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hscg5,Uid:751d0e7c-ad8a-4efe-bafc-24b1a7d7ef96,Namespace:kube-system,Attempt:1,}" Jul 12 00:08:47.413383 containerd[2015]: 2025-07-12 00:08:47.147 [INFO][4939] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Jul 12 00:08:47.413383 containerd[2015]: 2025-07-12 00:08:47.148 [INFO][4939] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" iface="eth0" netns="/var/run/netns/cni-7fc631a2-b5d4-db10-6c0f-9d32a0f78fad" Jul 12 00:08:47.413383 containerd[2015]: 2025-07-12 00:08:47.150 [INFO][4939] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" iface="eth0" netns="/var/run/netns/cni-7fc631a2-b5d4-db10-6c0f-9d32a0f78fad" Jul 12 00:08:47.413383 containerd[2015]: 2025-07-12 00:08:47.156 [INFO][4939] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" iface="eth0" netns="/var/run/netns/cni-7fc631a2-b5d4-db10-6c0f-9d32a0f78fad" Jul 12 00:08:47.413383 containerd[2015]: 2025-07-12 00:08:47.157 [INFO][4939] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Jul 12 00:08:47.413383 containerd[2015]: 2025-07-12 00:08:47.160 [INFO][4939] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Jul 12 00:08:47.413383 containerd[2015]: 2025-07-12 00:08:47.337 [INFO][5011] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" HandleID="k8s-pod-network.b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Workload="ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0" Jul 12 00:08:47.413383 containerd[2015]: 2025-07-12 00:08:47.342 [INFO][5011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:47.413383 containerd[2015]: 2025-07-12 00:08:47.342 [INFO][5011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:47.413383 containerd[2015]: 2025-07-12 00:08:47.378 [WARNING][5011] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" HandleID="k8s-pod-network.b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Workload="ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0" Jul 12 00:08:47.413383 containerd[2015]: 2025-07-12 00:08:47.380 [INFO][5011] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" HandleID="k8s-pod-network.b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Workload="ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0" Jul 12 00:08:47.413383 containerd[2015]: 2025-07-12 00:08:47.389 [INFO][5011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:47.413383 containerd[2015]: 2025-07-12 00:08:47.399 [INFO][4939] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Jul 12 00:08:47.421562 containerd[2015]: time="2025-07-12T00:08:47.413486132Z" level=info msg="TearDown network for sandbox \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\" successfully" Jul 12 00:08:47.421562 containerd[2015]: time="2025-07-12T00:08:47.413526176Z" level=info msg="StopPodSandbox for \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\" returns successfully" Jul 12 00:08:47.441032 systemd[1]: run-netns-cni\x2d7fc631a2\x2db5d4\x2ddb10\x2d6c0f\x2d9d32a0f78fad.mount: Deactivated successfully. 
Jul 12 00:08:47.443351 containerd[2015]: time="2025-07-12T00:08:47.442796852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596f7dcbbd-zlhwz,Uid:32232079-fc02-426e-a296-066d8c1e6445,Namespace:calico-system,Attempt:1,}" Jul 12 00:08:47.467606 systemd-networkd[1936]: vxlan.calico: Link UP Jul 12 00:08:47.467642 systemd-networkd[1936]: vxlan.calico: Gained carrier Jul 12 00:08:47.511747 containerd[2015]: 2025-07-12 00:08:47.134 [INFO][4953] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Jul 12 00:08:47.511747 containerd[2015]: 2025-07-12 00:08:47.138 [INFO][4953] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" iface="eth0" netns="/var/run/netns/cni-18da466b-0fc2-00da-3e8c-ca151e018403" Jul 12 00:08:47.511747 containerd[2015]: 2025-07-12 00:08:47.138 [INFO][4953] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" iface="eth0" netns="/var/run/netns/cni-18da466b-0fc2-00da-3e8c-ca151e018403" Jul 12 00:08:47.511747 containerd[2015]: 2025-07-12 00:08:47.148 [INFO][4953] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" iface="eth0" netns="/var/run/netns/cni-18da466b-0fc2-00da-3e8c-ca151e018403" Jul 12 00:08:47.511747 containerd[2015]: 2025-07-12 00:08:47.148 [INFO][4953] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Jul 12 00:08:47.511747 containerd[2015]: 2025-07-12 00:08:47.148 [INFO][4953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Jul 12 00:08:47.511747 containerd[2015]: 2025-07-12 00:08:47.359 [INFO][5008] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" HandleID="k8s-pod-network.4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Workload="ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0" Jul 12 00:08:47.511747 containerd[2015]: 2025-07-12 00:08:47.362 [INFO][5008] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:47.511747 containerd[2015]: 2025-07-12 00:08:47.389 [INFO][5008] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:47.511747 containerd[2015]: 2025-07-12 00:08:47.453 [WARNING][5008] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" HandleID="k8s-pod-network.4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Workload="ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0" Jul 12 00:08:47.511747 containerd[2015]: 2025-07-12 00:08:47.453 [INFO][5008] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" HandleID="k8s-pod-network.4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Workload="ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0" Jul 12 00:08:47.511747 containerd[2015]: 2025-07-12 00:08:47.462 [INFO][5008] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:08:47.511747 containerd[2015]: 2025-07-12 00:08:47.505 [INFO][4953] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Jul 12 00:08:47.514941 containerd[2015]: time="2025-07-12T00:08:47.514720028Z" level=info msg="TearDown network for sandbox \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\" successfully" Jul 12 00:08:47.517516 containerd[2015]: time="2025-07-12T00:08:47.514776860Z" level=info msg="StopPodSandbox for \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\" returns successfully" Jul 12 00:08:47.529876 containerd[2015]: time="2025-07-12T00:08:47.529812441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55ff68f59d-twxd6,Uid:a2fe5d4f-2f51-400a-adc0-e4e9bcdc9e97,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:08:47.559475 containerd[2015]: 2025-07-12 00:08:47.193 [INFO][4952] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Jul 12 00:08:47.559475 containerd[2015]: 2025-07-12 00:08:47.193 [INFO][4952] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" iface="eth0" netns="/var/run/netns/cni-9cf89160-733e-b0c9-cfe4-07cc328d600b" Jul 12 00:08:47.559475 containerd[2015]: 2025-07-12 00:08:47.195 [INFO][4952] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" iface="eth0" netns="/var/run/netns/cni-9cf89160-733e-b0c9-cfe4-07cc328d600b" Jul 12 00:08:47.559475 containerd[2015]: 2025-07-12 00:08:47.202 [INFO][4952] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" iface="eth0" netns="/var/run/netns/cni-9cf89160-733e-b0c9-cfe4-07cc328d600b" Jul 12 00:08:47.559475 containerd[2015]: 2025-07-12 00:08:47.202 [INFO][4952] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Jul 12 00:08:47.559475 containerd[2015]: 2025-07-12 00:08:47.202 [INFO][4952] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Jul 12 00:08:47.559475 containerd[2015]: 2025-07-12 00:08:47.410 [INFO][5019] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" HandleID="k8s-pod-network.42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" Jul 12 00:08:47.559475 containerd[2015]: 2025-07-12 00:08:47.412 [INFO][5019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:47.559475 containerd[2015]: 2025-07-12 00:08:47.464 [INFO][5019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:47.559475 containerd[2015]: 2025-07-12 00:08:47.531 [WARNING][5019] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" HandleID="k8s-pod-network.42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" Jul 12 00:08:47.559475 containerd[2015]: 2025-07-12 00:08:47.531 [INFO][5019] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" HandleID="k8s-pod-network.42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" Jul 12 00:08:47.559475 containerd[2015]: 2025-07-12 00:08:47.536 [INFO][5019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:47.559475 containerd[2015]: 2025-07-12 00:08:47.549 [INFO][4952] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Jul 12 00:08:47.562643 containerd[2015]: time="2025-07-12T00:08:47.562428117Z" level=info msg="TearDown network for sandbox \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\" successfully" Jul 12 00:08:47.563113 containerd[2015]: time="2025-07-12T00:08:47.563042397Z" level=info msg="StopPodSandbox for \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\" returns successfully" Jul 12 00:08:47.566610 containerd[2015]: time="2025-07-12T00:08:47.565743453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6d8df55-scvhw,Uid:9395332f-6218-4a3b-9efb-4b6737b7fd9d,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:08:47.616297 systemd[1]: run-netns-cni\x2d18da466b\x2d0fc2\x2d00da\x2d3e8c\x2dca151e018403.mount: Deactivated successfully. Jul 12 00:08:47.616730 systemd[1]: run-netns-cni\x2d9cf89160\x2d733e\x2db0c9\x2dcfe4\x2d07cc328d600b.mount: Deactivated successfully. Jul 12 00:08:47.643308 (udev-worker)[4554]: Network interface NamePolicy= disabled on kernel command line. 
Jul 12 00:08:48.087052 systemd-networkd[1936]: cali00c9a9d485a: Gained IPv6LL
Jul 12 00:08:48.271581 systemd-networkd[1936]: cali86846d2c1a9: Link UP
Jul 12 00:08:48.294715 systemd-networkd[1936]: cali86846d2c1a9: Gained carrier
Jul 12 00:08:48.397882 containerd[2015]: 2025-07-12 00:08:47.810 [INFO][5036] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0 coredns-668d6bf9bc- kube-system 751d0e7c-ad8a-4efe-bafc-24b1a7d7ef96 975 0 2025-07-12 00:07:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-25 coredns-668d6bf9bc-hscg5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali86846d2c1a9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9" Namespace="kube-system" Pod="coredns-668d6bf9bc-hscg5" WorkloadEndpoint="ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-"
Jul 12 00:08:48.397882 containerd[2015]: 2025-07-12 00:08:47.811 [INFO][5036] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9" Namespace="kube-system" Pod="coredns-668d6bf9bc-hscg5" WorkloadEndpoint="ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0"
Jul 12 00:08:48.397882 containerd[2015]: 2025-07-12 00:08:48.037 [INFO][5099] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9" HandleID="k8s-pod-network.3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0"
Jul 12 00:08:48.397882 containerd[2015]: 2025-07-12 00:08:48.039 [INFO][5099] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9" HandleID="k8s-pod-network.3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030b480), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-25", "pod":"coredns-668d6bf9bc-hscg5", "timestamp":"2025-07-12 00:08:48.037374007 +0000 UTC"}, Hostname:"ip-172-31-18-25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 12 00:08:48.397882 containerd[2015]: 2025-07-12 00:08:48.040 [INFO][5099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:08:48.397882 containerd[2015]: 2025-07-12 00:08:48.040 [INFO][5099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:08:48.397882 containerd[2015]: 2025-07-12 00:08:48.040 [INFO][5099] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-25'
Jul 12 00:08:48.397882 containerd[2015]: 2025-07-12 00:08:48.075 [INFO][5099] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9" host="ip-172-31-18-25"
Jul 12 00:08:48.397882 containerd[2015]: 2025-07-12 00:08:48.103 [INFO][5099] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-25"
Jul 12 00:08:48.397882 containerd[2015]: 2025-07-12 00:08:48.128 [INFO][5099] ipam/ipam.go 511: Trying affinity for 192.168.56.192/26 host="ip-172-31-18-25"
Jul 12 00:08:48.397882 containerd[2015]: 2025-07-12 00:08:48.145 [INFO][5099] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.192/26 host="ip-172-31-18-25"
Jul 12 00:08:48.397882 containerd[2015]: 2025-07-12 00:08:48.155 [INFO][5099] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.192/26 host="ip-172-31-18-25"
Jul 12 00:08:48.397882 containerd[2015]: 2025-07-12 00:08:48.156 [INFO][5099] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.56.192/26 handle="k8s-pod-network.3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9" host="ip-172-31-18-25"
Jul 12 00:08:48.397882 containerd[2015]: 2025-07-12 00:08:48.162 [INFO][5099] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9
Jul 12 00:08:48.397882 containerd[2015]: 2025-07-12 00:08:48.181 [INFO][5099] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.56.192/26 handle="k8s-pod-network.3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9" host="ip-172-31-18-25"
Jul 12 00:08:48.397882 containerd[2015]: 2025-07-12 00:08:48.202 [INFO][5099] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.56.195/26] block=192.168.56.192/26 handle="k8s-pod-network.3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9" host="ip-172-31-18-25"
Jul 12 00:08:48.397882 containerd[2015]: 2025-07-12 00:08:48.203 [INFO][5099] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.195/26] handle="k8s-pod-network.3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9" host="ip-172-31-18-25"
Jul 12 00:08:48.397882 containerd[2015]: 2025-07-12 00:08:48.205 [INFO][5099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:08:48.397882 containerd[2015]: 2025-07-12 00:08:48.210 [INFO][5099] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.195/26] IPv6=[] ContainerID="3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9" HandleID="k8s-pod-network.3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0"
Jul 12 00:08:48.403367 containerd[2015]: 2025-07-12 00:08:48.233 [INFO][5036] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9" Namespace="kube-system" Pod="coredns-668d6bf9bc-hscg5" WorkloadEndpoint="ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"751d0e7c-ad8a-4efe-bafc-24b1a7d7ef96", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"", Pod:"coredns-668d6bf9bc-hscg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86846d2c1a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 00:08:48.403367 containerd[2015]: 2025-07-12 00:08:48.234 [INFO][5036] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.195/32] ContainerID="3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9" Namespace="kube-system" Pod="coredns-668d6bf9bc-hscg5" WorkloadEndpoint="ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0"
Jul 12 00:08:48.403367 containerd[2015]: 2025-07-12 00:08:48.234 [INFO][5036] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86846d2c1a9 ContainerID="3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9" Namespace="kube-system" Pod="coredns-668d6bf9bc-hscg5" WorkloadEndpoint="ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0"
Jul 12 00:08:48.403367 containerd[2015]: 2025-07-12 00:08:48.299 [INFO][5036] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9" Namespace="kube-system" Pod="coredns-668d6bf9bc-hscg5" WorkloadEndpoint="ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0"
Jul 12 00:08:48.403367 containerd[2015]: 2025-07-12 00:08:48.315 [INFO][5036] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9" Namespace="kube-system" Pod="coredns-668d6bf9bc-hscg5" WorkloadEndpoint="ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"751d0e7c-ad8a-4efe-bafc-24b1a7d7ef96", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9", Pod:"coredns-668d6bf9bc-hscg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86846d2c1a9", MAC:"66:1a:a6:f7:13:65", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 00:08:48.403367 containerd[2015]: 2025-07-12 00:08:48.381 [INFO][5036] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9" Namespace="kube-system" Pod="coredns-668d6bf9bc-hscg5" WorkloadEndpoint="ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0"
Jul 12 00:08:48.430968 systemd-networkd[1936]: calid5974a7b5b9: Link UP
Jul 12 00:08:48.431416 systemd-networkd[1936]: calid5974a7b5b9: Gained carrier
Jul 12 00:08:48.477613 containerd[2015]: 2025-07-12 00:08:47.863 [INFO][5048] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0 calico-kube-controllers-596f7dcbbd- calico-system 32232079-fc02-426e-a296-066d8c1e6445 978 0 2025-07-12 00:08:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:596f7dcbbd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-25 calico-kube-controllers-596f7dcbbd-zlhwz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid5974a7b5b9 [] [] }} ContainerID="9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191" Namespace="calico-system" Pod="calico-kube-controllers-596f7dcbbd-zlhwz" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-"
Jul 12 00:08:48.477613 containerd[2015]: 2025-07-12 00:08:47.865 [INFO][5048] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191" Namespace="calico-system" Pod="calico-kube-controllers-596f7dcbbd-zlhwz" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0"
Jul 12 00:08:48.477613 containerd[2015]: 2025-07-12 00:08:48.180 [INFO][5105] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191" HandleID="k8s-pod-network.9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191" Workload="ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0"
Jul 12 00:08:48.477613 containerd[2015]: 2025-07-12 00:08:48.181 [INFO][5105] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191" HandleID="k8s-pod-network.9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191" Workload="ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000371ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-25", "pod":"calico-kube-controllers-596f7dcbbd-zlhwz", "timestamp":"2025-07-12 00:08:48.18089066 +0000 UTC"}, Hostname:"ip-172-31-18-25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 12 00:08:48.477613 containerd[2015]: 2025-07-12 00:08:48.181 [INFO][5105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:08:48.477613 containerd[2015]: 2025-07-12 00:08:48.203 [INFO][5105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:08:48.477613 containerd[2015]: 2025-07-12 00:08:48.204 [INFO][5105] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-25'
Jul 12 00:08:48.477613 containerd[2015]: 2025-07-12 00:08:48.240 [INFO][5105] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191" host="ip-172-31-18-25"
Jul 12 00:08:48.477613 containerd[2015]: 2025-07-12 00:08:48.263 [INFO][5105] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-25"
Jul 12 00:08:48.477613 containerd[2015]: 2025-07-12 00:08:48.305 [INFO][5105] ipam/ipam.go 511: Trying affinity for 192.168.56.192/26 host="ip-172-31-18-25"
Jul 12 00:08:48.477613 containerd[2015]: 2025-07-12 00:08:48.314 [INFO][5105] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.192/26 host="ip-172-31-18-25"
Jul 12 00:08:48.477613 containerd[2015]: 2025-07-12 00:08:48.322 [INFO][5105] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.192/26 host="ip-172-31-18-25"
Jul 12 00:08:48.477613 containerd[2015]: 2025-07-12 00:08:48.322 [INFO][5105] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.56.192/26 handle="k8s-pod-network.9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191" host="ip-172-31-18-25"
Jul 12 00:08:48.477613 containerd[2015]: 2025-07-12 00:08:48.326 [INFO][5105] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191
Jul 12 00:08:48.477613 containerd[2015]: 2025-07-12 00:08:48.354 [INFO][5105] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.56.192/26 handle="k8s-pod-network.9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191" host="ip-172-31-18-25"
Jul 12 00:08:48.477613 containerd[2015]: 2025-07-12 00:08:48.390 [INFO][5105] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.56.196/26] block=192.168.56.192/26 handle="k8s-pod-network.9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191" host="ip-172-31-18-25"
Jul 12 00:08:48.477613 containerd[2015]: 2025-07-12 00:08:48.391 [INFO][5105] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.196/26] handle="k8s-pod-network.9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191" host="ip-172-31-18-25"
Jul 12 00:08:48.477613 containerd[2015]: 2025-07-12 00:08:48.391 [INFO][5105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:08:48.477613 containerd[2015]: 2025-07-12 00:08:48.391 [INFO][5105] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.196/26] IPv6=[] ContainerID="9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191" HandleID="k8s-pod-network.9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191" Workload="ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0"
Jul 12 00:08:48.479227 containerd[2015]: 2025-07-12 00:08:48.416 [INFO][5048] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191" Namespace="calico-system" Pod="calico-kube-controllers-596f7dcbbd-zlhwz" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0", GenerateName:"calico-kube-controllers-596f7dcbbd-", Namespace:"calico-system", SelfLink:"", UID:"32232079-fc02-426e-a296-066d8c1e6445", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"596f7dcbbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"", Pod:"calico-kube-controllers-596f7dcbbd-zlhwz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid5974a7b5b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 00:08:48.479227 containerd[2015]: 2025-07-12 00:08:48.416 [INFO][5048] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.196/32] ContainerID="9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191" Namespace="calico-system" Pod="calico-kube-controllers-596f7dcbbd-zlhwz" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0"
Jul 12 00:08:48.479227 containerd[2015]: 2025-07-12 00:08:48.416 [INFO][5048] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid5974a7b5b9 ContainerID="9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191" Namespace="calico-system" Pod="calico-kube-controllers-596f7dcbbd-zlhwz" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0"
Jul 12 00:08:48.479227 containerd[2015]: 2025-07-12 00:08:48.423 [INFO][5048] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191" Namespace="calico-system" Pod="calico-kube-controllers-596f7dcbbd-zlhwz" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0"
Jul 12 00:08:48.479227 containerd[2015]: 2025-07-12 00:08:48.424 [INFO][5048] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191" Namespace="calico-system" Pod="calico-kube-controllers-596f7dcbbd-zlhwz" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0", GenerateName:"calico-kube-controllers-596f7dcbbd-", Namespace:"calico-system", SelfLink:"", UID:"32232079-fc02-426e-a296-066d8c1e6445", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"596f7dcbbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191", Pod:"calico-kube-controllers-596f7dcbbd-zlhwz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid5974a7b5b9", MAC:"4a:fb:db:b2:93:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 00:08:48.479227 containerd[2015]: 2025-07-12 00:08:48.451 [INFO][5048] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191" Namespace="calico-system" Pod="calico-kube-controllers-596f7dcbbd-zlhwz" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0"
Jul 12 00:08:48.522514 containerd[2015]: time="2025-07-12T00:08:48.512124369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:08:48.523002 containerd[2015]: time="2025-07-12T00:08:48.522886377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614"
Jul 12 00:08:48.526601 containerd[2015]: time="2025-07-12T00:08:48.526501869Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:08:48.553753 containerd[2015]: time="2025-07-12T00:08:48.549225190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:08:48.556288 containerd[2015]: time="2025-07-12T00:08:48.555950182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 3.244507696s"
Jul 12 00:08:48.557749 containerd[2015]: time="2025-07-12T00:08:48.557396458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\""
Jul 12 00:08:48.567177 containerd[2015]: time="2025-07-12T00:08:48.567116722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Jul 12 00:08:48.594376 containerd[2015]: time="2025-07-12T00:08:48.591183874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:08:48.594376 containerd[2015]: time="2025-07-12T00:08:48.591295714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:08:48.594376 containerd[2015]: time="2025-07-12T00:08:48.591332698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:48.595778 containerd[2015]: time="2025-07-12T00:08:48.595709386Z" level=info msg="CreateContainer within sandbox \"88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Jul 12 00:08:48.608229 containerd[2015]: time="2025-07-12T00:08:48.597501226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:48.613652 containerd[2015]: time="2025-07-12T00:08:48.613042558Z" level=info msg="StopPodSandbox for \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\""
Jul 12 00:08:48.667015 containerd[2015]: time="2025-07-12T00:08:48.661353982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:08:48.667015 containerd[2015]: time="2025-07-12T00:08:48.661539514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:08:48.667015 containerd[2015]: time="2025-07-12T00:08:48.661668034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:48.667015 containerd[2015]: time="2025-07-12T00:08:48.661843990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:48.711553 systemd-networkd[1936]: calic7c9f353034: Link UP
Jul 12 00:08:48.737047 systemd-networkd[1936]: calic7c9f353034: Gained carrier
Jul 12 00:08:48.768078 containerd[2015]: time="2025-07-12T00:08:48.766718639Z" level=info msg="CreateContainer within sandbox \"88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a4dd080deda86138ccd19ef2efd3ffb177457e905baea3539b796039cf696697\""
Jul 12 00:08:48.782042 containerd[2015]: time="2025-07-12T00:08:48.781795367Z" level=info msg="StartContainer for \"a4dd080deda86138ccd19ef2efd3ffb177457e905baea3539b796039cf696697\""
Jul 12 00:08:48.803897 systemd[1]: Started cri-containerd-3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9.scope - libcontainer container 3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9.
Jul 12 00:08:48.820786 containerd[2015]: 2025-07-12 00:08:47.987 [INFO][5074] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0 calico-apiserver-55ff68f59d- calico-apiserver a2fe5d4f-2f51-400a-adc0-e4e9bcdc9e97 977 0 2025-07-12 00:08:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55ff68f59d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-25 calico-apiserver-55ff68f59d-twxd6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic7c9f353034 [] [] }} ContainerID="3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7" Namespace="calico-apiserver" Pod="calico-apiserver-55ff68f59d-twxd6" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-"
Jul 12 00:08:48.820786 containerd[2015]: 2025-07-12 00:08:47.989 [INFO][5074] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7" Namespace="calico-apiserver" Pod="calico-apiserver-55ff68f59d-twxd6" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0"
Jul 12 00:08:48.820786 containerd[2015]: 2025-07-12 00:08:48.318 [INFO][5119] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7" HandleID="k8s-pod-network.3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7" Workload="ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0"
Jul 12 00:08:48.820786 containerd[2015]: 2025-07-12 00:08:48.332 [INFO][5119] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7" HandleID="k8s-pod-network.3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7" Workload="ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034ae20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-25", "pod":"calico-apiserver-55ff68f59d-twxd6", "timestamp":"2025-07-12 00:08:48.318773012 +0000 UTC"}, Hostname:"ip-172-31-18-25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 12 00:08:48.820786 containerd[2015]: 2025-07-12 00:08:48.332 [INFO][5119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:08:48.820786 containerd[2015]: 2025-07-12 00:08:48.391 [INFO][5119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:08:48.820786 containerd[2015]: 2025-07-12 00:08:48.391 [INFO][5119] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-25'
Jul 12 00:08:48.820786 containerd[2015]: 2025-07-12 00:08:48.443 [INFO][5119] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7" host="ip-172-31-18-25"
Jul 12 00:08:48.820786 containerd[2015]: 2025-07-12 00:08:48.468 [INFO][5119] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-25"
Jul 12 00:08:48.820786 containerd[2015]: 2025-07-12 00:08:48.509 [INFO][5119] ipam/ipam.go 511: Trying affinity for 192.168.56.192/26 host="ip-172-31-18-25"
Jul 12 00:08:48.820786 containerd[2015]: 2025-07-12 00:08:48.517 [INFO][5119] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.192/26 host="ip-172-31-18-25"
Jul 12 00:08:48.820786 containerd[2015]: 2025-07-12 00:08:48.530 [INFO][5119] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.192/26 host="ip-172-31-18-25"
Jul 12 00:08:48.820786 containerd[2015]: 2025-07-12 00:08:48.532 [INFO][5119] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.56.192/26 handle="k8s-pod-network.3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7" host="ip-172-31-18-25"
Jul 12 00:08:48.820786 containerd[2015]: 2025-07-12 00:08:48.539 [INFO][5119] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7
Jul 12 00:08:48.820786 containerd[2015]: 2025-07-12 00:08:48.563 [INFO][5119] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.56.192/26 handle="k8s-pod-network.3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7" host="ip-172-31-18-25"
Jul 12 00:08:48.820786 containerd[2015]: 2025-07-12 00:08:48.600 [INFO][5119] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.56.197/26] block=192.168.56.192/26 handle="k8s-pod-network.3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7" host="ip-172-31-18-25"
Jul 12 00:08:48.820786 containerd[2015]: 2025-07-12 00:08:48.600 [INFO][5119] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.197/26] handle="k8s-pod-network.3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7" host="ip-172-31-18-25"
Jul 12 00:08:48.820786 containerd[2015]: 2025-07-12 00:08:48.603 [INFO][5119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:08:48.820786 containerd[2015]: 2025-07-12 00:08:48.604 [INFO][5119] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.197/26] IPv6=[] ContainerID="3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7" HandleID="k8s-pod-network.3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7" Workload="ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0"
Jul 12 00:08:48.823338 containerd[2015]: 2025-07-12 00:08:48.657 [INFO][5074] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7" Namespace="calico-apiserver" Pod="calico-apiserver-55ff68f59d-twxd6" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0", GenerateName:"calico-apiserver-55ff68f59d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a2fe5d4f-2f51-400a-adc0-e4e9bcdc9e97", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55ff68f59d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"", Pod:"calico-apiserver-55ff68f59d-twxd6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7c9f353034", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 00:08:48.823338 containerd[2015]: 2025-07-12 00:08:48.657 [INFO][5074] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.197/32] ContainerID="3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7" Namespace="calico-apiserver" Pod="calico-apiserver-55ff68f59d-twxd6" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0"
Jul 12 00:08:48.823338 containerd[2015]: 2025-07-12 00:08:48.657 [INFO][5074] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7c9f353034 ContainerID="3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7" Namespace="calico-apiserver" Pod="calico-apiserver-55ff68f59d-twxd6" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0"
Jul 12 00:08:48.823338 containerd[2015]: 2025-07-12 00:08:48.728 [INFO][5074] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7" Namespace="calico-apiserver" Pod="calico-apiserver-55ff68f59d-twxd6" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0"
Jul 12 00:08:48.823338 containerd[2015]: 2025-07-12 00:08:48.739 [INFO][5074] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7" Namespace="calico-apiserver" Pod="calico-apiserver-55ff68f59d-twxd6" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0", GenerateName:"calico-apiserver-55ff68f59d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a2fe5d4f-2f51-400a-adc0-e4e9bcdc9e97", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55ff68f59d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7", Pod:"calico-apiserver-55ff68f59d-twxd6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7c9f353034", MAC:"82:79:7a:11:bf:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 00:08:48.823338 containerd[2015]: 2025-07-12 00:08:48.794 [INFO][5074] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7" Namespace="calico-apiserver" Pod="calico-apiserver-55ff68f59d-twxd6" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0"
Jul 12 00:08:48.888786 systemd[1]: Started cri-containerd-9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191.scope - libcontainer container 9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191.
Jul 12 00:08:48.918334 systemd-networkd[1936]: vxlan.calico: Gained IPv6LL
Jul 12 00:08:48.939481 systemd-networkd[1936]: calidd3f6ed32e2: Link UP
Jul 12 00:08:48.951770 systemd-networkd[1936]: calidd3f6ed32e2: Gained carrier
Jul 12 00:08:49.026707 containerd[2015]: 2025-07-12 00:08:48.063 [INFO][5076] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0 calico-apiserver-f6d8df55- calico-apiserver 9395332f-6218-4a3b-9efb-4b6737b7fd9d 979 0 2025-07-12 00:08:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f6d8df55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-25 calico-apiserver-f6d8df55-scvhw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidd3f6ed32e2 [] [] }} ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Namespace="calico-apiserver" Pod="calico-apiserver-f6d8df55-scvhw" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-"
Jul 12 00:08:49.026707 containerd[2015]: 2025-07-12 00:08:48.063 [INFO][5076] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Namespace="calico-apiserver" Pod="calico-apiserver-f6d8df55-scvhw" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0"
Jul 12 00:08:49.026707 containerd[2015]: 2025-07-12 00:08:48.346 [INFO][5126] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" HandleID="k8s-pod-network.4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0"
Jul 12 00:08:49.026707 containerd[2015]: 2025-07-12 00:08:48.346 [INFO][5126] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" HandleID="k8s-pod-network.4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000343380), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-25", "pod":"calico-apiserver-f6d8df55-scvhw", "timestamp":"2025-07-12 00:08:48.346015905 +0000 UTC"}, Hostname:"ip-172-31-18-25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 12 00:08:49.026707 containerd[2015]: 2025-07-12 00:08:48.347 [INFO][5126] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:08:49.026707 containerd[2015]: 2025-07-12 00:08:48.603 [INFO][5126] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:08:49.026707 containerd[2015]: 2025-07-12 00:08:48.604 [INFO][5126] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-25'
Jul 12 00:08:49.026707 containerd[2015]: 2025-07-12 00:08:48.694 [INFO][5126] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" host="ip-172-31-18-25"
Jul 12 00:08:49.026707 containerd[2015]: 2025-07-12 00:08:48.743 [INFO][5126] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-25"
Jul 12 00:08:49.026707 containerd[2015]: 2025-07-12 00:08:48.791 [INFO][5126] ipam/ipam.go 511: Trying affinity for 192.168.56.192/26 host="ip-172-31-18-25"
Jul 12 00:08:49.026707 containerd[2015]: 2025-07-12 00:08:48.802 [INFO][5126] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.192/26 host="ip-172-31-18-25"
Jul 12 00:08:49.026707 containerd[2015]: 2025-07-12 00:08:48.824 [INFO][5126] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.192/26 host="ip-172-31-18-25"
Jul 12 00:08:49.026707 containerd[2015]: 2025-07-12 00:08:48.824 [INFO][5126] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.56.192/26 handle="k8s-pod-network.4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" host="ip-172-31-18-25"
Jul 12 00:08:49.026707 containerd[2015]: 2025-07-12 00:08:48.833 [INFO][5126] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f
Jul 12 00:08:49.026707 containerd[2015]: 2025-07-12 00:08:48.851 [INFO][5126] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.56.192/26 handle="k8s-pod-network.4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" host="ip-172-31-18-25"
Jul 12 00:08:49.026707 containerd[2015]: 2025-07-12 00:08:48.890 [INFO][5126] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.56.198/26] block=192.168.56.192/26 handle="k8s-pod-network.4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" host="ip-172-31-18-25"
Jul 12 00:08:49.026707 containerd[2015]: 2025-07-12 00:08:48.890 [INFO][5126] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.198/26] handle="k8s-pod-network.4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" host="ip-172-31-18-25"
Jul 12 00:08:49.026707 containerd[2015]: 2025-07-12 00:08:48.890 [INFO][5126] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:08:49.026707 containerd[2015]: 2025-07-12 00:08:48.890 [INFO][5126] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.198/26] IPv6=[] ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" HandleID="k8s-pod-network.4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0"
Jul 12 00:08:49.028738 containerd[2015]: 2025-07-12 00:08:48.914 [INFO][5076] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Namespace="calico-apiserver" Pod="calico-apiserver-f6d8df55-scvhw" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0", GenerateName:"calico-apiserver-f6d8df55-", Namespace:"calico-apiserver", SelfLink:"", UID:"9395332f-6218-4a3b-9efb-4b6737b7fd9d", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f6d8df55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"", Pod:"calico-apiserver-f6d8df55-scvhw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd3f6ed32e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 00:08:49.028738 containerd[2015]: 2025-07-12 00:08:48.915 [INFO][5076] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.198/32] ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Namespace="calico-apiserver" Pod="calico-apiserver-f6d8df55-scvhw" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0"
Jul 12 00:08:49.028738 containerd[2015]: 2025-07-12 00:08:48.915 [INFO][5076] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd3f6ed32e2 ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Namespace="calico-apiserver" Pod="calico-apiserver-f6d8df55-scvhw" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0"
Jul 12 00:08:49.028738 containerd[2015]: 2025-07-12 00:08:48.959 [INFO][5076] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Namespace="calico-apiserver" Pod="calico-apiserver-f6d8df55-scvhw" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0"
Jul 12 00:08:49.028738 containerd[2015]: 2025-07-12 00:08:48.963 [INFO][5076] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Namespace="calico-apiserver" Pod="calico-apiserver-f6d8df55-scvhw" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0", GenerateName:"calico-apiserver-f6d8df55-", Namespace:"calico-apiserver", SelfLink:"", UID:"9395332f-6218-4a3b-9efb-4b6737b7fd9d", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f6d8df55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f", Pod:"calico-apiserver-f6d8df55-scvhw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd3f6ed32e2", MAC:"46:78:e3:71:d7:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 00:08:49.028738 containerd[2015]: 2025-07-12 00:08:49.016 [INFO][5076] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Namespace="calico-apiserver" Pod="calico-apiserver-f6d8df55-scvhw" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0"
Jul 12 00:08:49.049889 containerd[2015]: time="2025-07-12T00:08:49.041134604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:08:49.049889 containerd[2015]: time="2025-07-12T00:08:49.041335484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:08:49.049889 containerd[2015]: time="2025-07-12T00:08:49.041378312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:49.049889 containerd[2015]: time="2025-07-12T00:08:49.041624048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:49.105186 systemd[1]: Started cri-containerd-a4dd080deda86138ccd19ef2efd3ffb177457e905baea3539b796039cf696697.scope - libcontainer container a4dd080deda86138ccd19ef2efd3ffb177457e905baea3539b796039cf696697.
Jul 12 00:08:49.203894 systemd[1]: Started cri-containerd-3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7.scope - libcontainer container 3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7.
Jul 12 00:08:49.243070 containerd[2015]: time="2025-07-12T00:08:49.242661597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hscg5,Uid:751d0e7c-ad8a-4efe-bafc-24b1a7d7ef96,Namespace:kube-system,Attempt:1,} returns sandbox id \"3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9\""
Jul 12 00:08:49.254255 containerd[2015]: time="2025-07-12T00:08:49.254097861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:08:49.254255 containerd[2015]: time="2025-07-12T00:08:49.254202153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:08:49.257533 containerd[2015]: time="2025-07-12T00:08:49.254229813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:49.257533 containerd[2015]: time="2025-07-12T00:08:49.254394753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:49.268483 containerd[2015]: time="2025-07-12T00:08:49.267288705Z" level=info msg="CreateContainer within sandbox \"3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 12 00:08:49.354435 containerd[2015]: time="2025-07-12T00:08:49.354350854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596f7dcbbd-zlhwz,Uid:32232079-fc02-426e-a296-066d8c1e6445,Namespace:calico-system,Attempt:1,} returns sandbox id \"9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191\""
Jul 12 00:08:49.399487 containerd[2015]: time="2025-07-12T00:08:49.398721418Z" level=info msg="CreateContainer within sandbox \"3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1687b1b94e4c336d1f4aa931645997755201047449f667b44cf3efb839650750\""
Jul 12 00:08:49.421915 containerd[2015]: time="2025-07-12T00:08:49.420708982Z" level=info msg="StartContainer for \"1687b1b94e4c336d1f4aa931645997755201047449f667b44cf3efb839650750\""
Jul 12 00:08:49.434537 systemd[1]: Started cri-containerd-4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f.scope - libcontainer container 4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f.
Jul 12 00:08:49.544620 containerd[2015]: 2025-07-12 00:08:49.310 [INFO][5221] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213"
Jul 12 00:08:49.544620 containerd[2015]: 2025-07-12 00:08:49.315 [INFO][5221] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" iface="eth0" netns="/var/run/netns/cni-9ee340c4-a81b-162c-2d11-05e23999965c"
Jul 12 00:08:49.544620 containerd[2015]: 2025-07-12 00:08:49.320 [INFO][5221] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" iface="eth0" netns="/var/run/netns/cni-9ee340c4-a81b-162c-2d11-05e23999965c"
Jul 12 00:08:49.544620 containerd[2015]: 2025-07-12 00:08:49.323 [INFO][5221] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" iface="eth0" netns="/var/run/netns/cni-9ee340c4-a81b-162c-2d11-05e23999965c"
Jul 12 00:08:49.544620 containerd[2015]: 2025-07-12 00:08:49.323 [INFO][5221] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213"
Jul 12 00:08:49.544620 containerd[2015]: 2025-07-12 00:08:49.323 [INFO][5221] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213"
Jul 12 00:08:49.544620 containerd[2015]: 2025-07-12 00:08:49.453 [INFO][5374] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" HandleID="k8s-pod-network.d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0"
Jul 12 00:08:49.544620 containerd[2015]: 2025-07-12 00:08:49.455 [INFO][5374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:08:49.544620 containerd[2015]: 2025-07-12 00:08:49.455 [INFO][5374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:08:49.544620 containerd[2015]: 2025-07-12 00:08:49.482 [WARNING][5374] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" HandleID="k8s-pod-network.d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0"
Jul 12 00:08:49.544620 containerd[2015]: 2025-07-12 00:08:49.483 [INFO][5374] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" HandleID="k8s-pod-network.d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0"
Jul 12 00:08:49.544620 containerd[2015]: 2025-07-12 00:08:49.489 [INFO][5374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:08:49.544620 containerd[2015]: 2025-07-12 00:08:49.502 [INFO][5221] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213"
Jul 12 00:08:49.550792 containerd[2015]: time="2025-07-12T00:08:49.550067231Z" level=info msg="TearDown network for sandbox \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\" successfully"
Jul 12 00:08:49.550792 containerd[2015]: time="2025-07-12T00:08:49.550128911Z" level=info msg="StopPodSandbox for \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\" returns successfully"
Jul 12 00:08:49.556014 containerd[2015]: time="2025-07-12T00:08:49.555921959Z" level=info msg="StartContainer for \"a4dd080deda86138ccd19ef2efd3ffb177457e905baea3539b796039cf696697\" returns successfully"
Jul 12 00:08:49.559409 containerd[2015]: time="2025-07-12T00:08:49.559143695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6d8df55-xspm9,Uid:f1aedc09-28b8-4374-aef6-1a1d5f40a7ca,Namespace:calico-apiserver,Attempt:1,}"
Jul 12 00:08:49.623969 containerd[2015]: time="2025-07-12T00:08:49.616880675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55ff68f59d-twxd6,Uid:a2fe5d4f-2f51-400a-adc0-e4e9bcdc9e97,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7\""
Jul 12 00:08:49.618433 systemd[1]: Started cri-containerd-1687b1b94e4c336d1f4aa931645997755201047449f667b44cf3efb839650750.scope - libcontainer container 1687b1b94e4c336d1f4aa931645997755201047449f667b44cf3efb839650750.
Jul 12 00:08:49.625365 systemd-networkd[1936]: cali86846d2c1a9: Gained IPv6LL
Jul 12 00:08:49.646716 systemd[1]: run-netns-cni\x2d9ee340c4\x2da81b\x2d162c\x2d2d11\x2d05e23999965c.mount: Deactivated successfully.
Jul 12 00:08:49.780845 containerd[2015]: time="2025-07-12T00:08:49.780398412Z" level=info msg="StartContainer for \"1687b1b94e4c336d1f4aa931645997755201047449f667b44cf3efb839650750\" returns successfully"
Jul 12 00:08:49.928644 containerd[2015]: time="2025-07-12T00:08:49.928378620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6d8df55-scvhw,Uid:9395332f-6218-4a3b-9efb-4b6737b7fd9d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f\""
Jul 12 00:08:50.006605 systemd-networkd[1936]: calid5974a7b5b9: Gained IPv6LL
Jul 12 00:08:50.131373 systemd-networkd[1936]: calif45ae8c1942: Link UP
Jul 12 00:08:50.134181 systemd-networkd[1936]: calif45ae8c1942: Gained carrier
Jul 12 00:08:50.193661 containerd[2015]: 2025-07-12 00:08:49.917 [INFO][5437] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0 calico-apiserver-f6d8df55- calico-apiserver f1aedc09-28b8-4374-aef6-1a1d5f40a7ca 1001 0 2025-07-12 00:08:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f6d8df55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-25 calico-apiserver-f6d8df55-xspm9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif45ae8c1942 [] [] }} ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Namespace="calico-apiserver" Pod="calico-apiserver-f6d8df55-xspm9" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-"
Jul 12 00:08:50.193661 containerd[2015]: 2025-07-12 00:08:49.917 [INFO][5437] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Namespace="calico-apiserver" Pod="calico-apiserver-f6d8df55-xspm9" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0"
Jul 12 00:08:50.193661 containerd[2015]: 2025-07-12 00:08:50.004 [INFO][5470] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" HandleID="k8s-pod-network.12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0"
Jul 12 00:08:50.193661 containerd[2015]: 2025-07-12 00:08:50.004 [INFO][5470] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" HandleID="k8s-pod-network.12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330150), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-25", "pod":"calico-apiserver-f6d8df55-xspm9", "timestamp":"2025-07-12 00:08:50.004401141 +0000 UTC"}, Hostname:"ip-172-31-18-25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 12 00:08:50.193661 containerd[2015]: 2025-07-12 00:08:50.004 [INFO][5470] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:08:50.193661 containerd[2015]: 2025-07-12 00:08:50.004 [INFO][5470] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:08:50.193661 containerd[2015]: 2025-07-12 00:08:50.004 [INFO][5470] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-25' Jul 12 00:08:50.193661 containerd[2015]: 2025-07-12 00:08:50.048 [INFO][5470] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" host="ip-172-31-18-25" Jul 12 00:08:50.193661 containerd[2015]: 2025-07-12 00:08:50.067 [INFO][5470] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-25" Jul 12 00:08:50.193661 containerd[2015]: 2025-07-12 00:08:50.079 [INFO][5470] ipam/ipam.go 511: Trying affinity for 192.168.56.192/26 host="ip-172-31-18-25" Jul 12 00:08:50.193661 containerd[2015]: 2025-07-12 00:08:50.085 [INFO][5470] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.192/26 host="ip-172-31-18-25" Jul 12 00:08:50.193661 containerd[2015]: 2025-07-12 00:08:50.090 [INFO][5470] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.192/26 host="ip-172-31-18-25" Jul 12 00:08:50.193661 containerd[2015]: 2025-07-12 00:08:50.090 [INFO][5470] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.56.192/26 handle="k8s-pod-network.12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" host="ip-172-31-18-25" Jul 12 00:08:50.193661 containerd[2015]: 2025-07-12 00:08:50.093 [INFO][5470] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e Jul 12 00:08:50.193661 containerd[2015]: 2025-07-12 00:08:50.103 [INFO][5470] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.56.192/26 handle="k8s-pod-network.12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" host="ip-172-31-18-25" Jul 12 00:08:50.193661 containerd[2015]: 2025-07-12 00:08:50.118 [INFO][5470] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.56.199/26] block=192.168.56.192/26 handle="k8s-pod-network.12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" host="ip-172-31-18-25" Jul 12 00:08:50.193661 containerd[2015]: 2025-07-12 00:08:50.118 [INFO][5470] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.199/26] handle="k8s-pod-network.12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" host="ip-172-31-18-25" Jul 12 00:08:50.193661 containerd[2015]: 2025-07-12 00:08:50.119 [INFO][5470] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
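The ipam.go trace above walks one complete Calico address assignment end to end: take the host-wide IPAM lock, confirm the host's affinity to the 192.168.56.192/26 block, load the block, claim the next free address out of its 64, write the block back to persist the claim, then release the lock. A minimal, self-contained Go sketch of that flow for orientation only — the types, the autoAssign name, and the one-affine-block-per-host simplification are ours, not Calico's, and there is no real datastore behind it:

    // ipam_sketch.go — illustrative model of the flow logged above; not Calico code.
    package main

    import (
        "fmt"
        "net"
        "sync"
    )

    // block models one /26 IPAM block: a CIDR plus IP -> handle claims.
    type block struct {
        cidr      *net.IPNet
        allocated map[string]string // e.g. "192.168.56.199" -> "k8s-pod-network.<id>"
    }

    type allocator struct {
        mu     sync.Mutex // stands in for the "host-wide IPAM lock"
        blocks map[string]*block
    }

    func (a *allocator) autoAssign(host, handle string) (net.IP, error) {
        a.mu.Lock()         // "About to acquire host-wide IPAM lock." / "Acquired..."
        defer a.mu.Unlock() // "Released host-wide IPAM lock."

        // "Trying affinity for 192.168.56.192/26": assume one affine block per host.
        b, ok := a.blocks[host]
        if !ok {
            return nil, fmt.Errorf("no affine block for host %s", host)
        }
        // "Attempting to assign 1 addresses from block": scan for a free address.
        base := b.cidr.IP.Mask(b.cidr.Mask)
        for i := 0; i < 64; i++ { // a /26 holds 64 addresses
            cand := make(net.IP, len(base))
            copy(cand, base)
            cand[len(cand)-1] += byte(i)
            if _, taken := b.allocated[cand.String()]; !taken {
                b.allocated[cand.String()] = handle // "Writing block in order to claim IPs"
                return cand, nil                    // "Successfully claimed IPs"
            }
        }
        return nil, fmt.Errorf("block %s exhausted", b.cidr)
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.56.192/26")
        a := &allocator{blocks: map[string]*block{
            "ip-172-31-18-25": {cidr: cidr, allocated: map[string]string{}},
        }}
        ip, err := a.autoAssign("ip-172-31-18-25", "k8s-pod-network.12d5a27d")
        fmt.Println(ip, err)
    }

On an empty block this sketch hands out .192 first; the live allocator is up to .199 here because the node's vxlan.calico tunnel address (192.168.56.192, visible in the ntpd lines further down) and earlier workloads already occupy the start of the block.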
Jul 12 00:08:50.193661 containerd[2015]: 2025-07-12 00:08:50.119 [INFO][5470] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.199/26] IPv6=[] ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" HandleID="k8s-pod-network.12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:08:50.197694 containerd[2015]: 2025-07-12 00:08:50.124 [INFO][5437] cni-plugin/k8s.go 418: Populated endpoint ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Namespace="calico-apiserver" Pod="calico-apiserver-f6d8df55-xspm9" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0", GenerateName:"calico-apiserver-f6d8df55-", Namespace:"calico-apiserver", SelfLink:"", UID:"f1aedc09-28b8-4374-aef6-1a1d5f40a7ca", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f6d8df55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"", Pod:"calico-apiserver-f6d8df55-xspm9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif45ae8c1942", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:50.197694 containerd[2015]: 2025-07-12 00:08:50.124 [INFO][5437] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.199/32] ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Namespace="calico-apiserver" Pod="calico-apiserver-f6d8df55-xspm9" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:08:50.197694 containerd[2015]: 2025-07-12 00:08:50.124 [INFO][5437] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif45ae8c1942 ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Namespace="calico-apiserver" Pod="calico-apiserver-f6d8df55-xspm9" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:08:50.197694 containerd[2015]: 2025-07-12 00:08:50.141 [INFO][5437] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Namespace="calico-apiserver" Pod="calico-apiserver-f6d8df55-xspm9" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:08:50.197694 containerd[2015]: 2025-07-12 00:08:50.144 [INFO][5437] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Namespace="calico-apiserver" Pod="calico-apiserver-f6d8df55-xspm9" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0", GenerateName:"calico-apiserver-f6d8df55-", Namespace:"calico-apiserver", SelfLink:"", UID:"f1aedc09-28b8-4374-aef6-1a1d5f40a7ca", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f6d8df55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e", Pod:"calico-apiserver-f6d8df55-xspm9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif45ae8c1942", MAC:"52:3e:b8:aa:63:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:50.197694 containerd[2015]: 2025-07-12 00:08:50.182 [INFO][5437] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Namespace="calico-apiserver" Pod="calico-apiserver-f6d8df55-xspm9" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:08:50.269762 kubelet[3344]: I0712 00:08:50.269220 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hscg5" podStartSLOduration=51.269183146 podStartE2EDuration="51.269183146s" podCreationTimestamp="2025-07-12 00:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:50.265968646 +0000 UTC m=+54.972810346" watchObservedRunningTime="2025-07-12 00:08:50.269183146 +0000 UTC m=+54.976024834" Jul 12 00:08:50.290847 containerd[2015]: time="2025-07-12T00:08:50.289694110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:50.290847 containerd[2015]: time="2025-07-12T00:08:50.289862758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:50.290847 containerd[2015]: time="2025-07-12T00:08:50.289921186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:50.293079 containerd[2015]: time="2025-07-12T00:08:50.292803022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:50.370502 systemd[1]: Started cri-containerd-12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e.scope - libcontainer container 12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e. Jul 12 00:08:50.389796 systemd-networkd[1936]: calidd3f6ed32e2: Gained IPv6LL Jul 12 00:08:50.497749 containerd[2015]: time="2025-07-12T00:08:50.497425019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6d8df55-xspm9,Uid:f1aedc09-28b8-4374-aef6-1a1d5f40a7ca,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e\"" Jul 12 00:08:50.521304 systemd-networkd[1936]: calic7c9f353034: Gained IPv6LL Jul 12 00:08:50.611710 containerd[2015]: time="2025-07-12T00:08:50.611339460Z" level=info msg="StopPodSandbox for \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\"" Jul 12 00:08:50.612702 containerd[2015]: time="2025-07-12T00:08:50.612169200Z" level=info msg="StopPodSandbox for \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\"" Jul 12 00:08:50.841839 containerd[2015]: 2025-07-12 00:08:50.736 [INFO][5546] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Jul 12 00:08:50.841839 containerd[2015]: 2025-07-12 00:08:50.741 [INFO][5546] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" iface="eth0" netns="/var/run/netns/cni-b7c519ab-b6bd-bc5e-29da-0b1061cb5ca4" Jul 12 00:08:50.841839 containerd[2015]: 2025-07-12 00:08:50.744 [INFO][5546] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" iface="eth0" netns="/var/run/netns/cni-b7c519ab-b6bd-bc5e-29da-0b1061cb5ca4" Jul 12 00:08:50.841839 containerd[2015]: 2025-07-12 00:08:50.744 [INFO][5546] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" iface="eth0" netns="/var/run/netns/cni-b7c519ab-b6bd-bc5e-29da-0b1061cb5ca4" Jul 12 00:08:50.841839 containerd[2015]: 2025-07-12 00:08:50.745 [INFO][5546] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Jul 12 00:08:50.841839 containerd[2015]: 2025-07-12 00:08:50.745 [INFO][5546] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Jul 12 00:08:50.841839 containerd[2015]: 2025-07-12 00:08:50.815 [INFO][5569] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" HandleID="k8s-pod-network.eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0" Jul 12 00:08:50.841839 containerd[2015]: 2025-07-12 00:08:50.815 [INFO][5569] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:50.841839 containerd[2015]: 2025-07-12 00:08:50.815 [INFO][5569] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:50.841839 containerd[2015]: 2025-07-12 00:08:50.830 [WARNING][5569] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" HandleID="k8s-pod-network.eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0" Jul 12 00:08:50.841839 containerd[2015]: 2025-07-12 00:08:50.830 [INFO][5569] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" HandleID="k8s-pod-network.eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0" Jul 12 00:08:50.841839 containerd[2015]: 2025-07-12 00:08:50.834 [INFO][5569] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:50.841839 containerd[2015]: 2025-07-12 00:08:50.838 [INFO][5546] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Jul 12 00:08:50.845894 containerd[2015]: time="2025-07-12T00:08:50.841989901Z" level=info msg="TearDown network for sandbox \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\" successfully" Jul 12 00:08:50.845894 containerd[2015]: time="2025-07-12T00:08:50.842030473Z" level=info msg="StopPodSandbox for \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\" returns successfully" Jul 12 00:08:50.855761 containerd[2015]: time="2025-07-12T00:08:50.853321729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z5c86,Uid:0725903c-a273-456a-a2eb-c24032ec4754,Namespace:kube-system,Attempt:1,}" Jul 12 00:08:50.856070 systemd[1]: run-netns-cni\x2db7c519ab\x2db6bd\x2dbc5e\x2d29da\x2d0b1061cb5ca4.mount: Deactivated successfully. Jul 12 00:08:50.876066 containerd[2015]: 2025-07-12 00:08:50.726 [INFO][5549] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Jul 12 00:08:50.876066 containerd[2015]: 2025-07-12 00:08:50.727 [INFO][5549] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" iface="eth0" netns="/var/run/netns/cni-7937fbe2-bbee-f95e-bed2-f9c15c9321ed" Jul 12 00:08:50.876066 containerd[2015]: 2025-07-12 00:08:50.730 [INFO][5549] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" iface="eth0" netns="/var/run/netns/cni-7937fbe2-bbee-f95e-bed2-f9c15c9321ed" Jul 12 00:08:50.876066 containerd[2015]: 2025-07-12 00:08:50.730 [INFO][5549] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" iface="eth0" netns="/var/run/netns/cni-7937fbe2-bbee-f95e-bed2-f9c15c9321ed" Jul 12 00:08:50.876066 containerd[2015]: 2025-07-12 00:08:50.730 [INFO][5549] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Jul 12 00:08:50.876066 containerd[2015]: 2025-07-12 00:08:50.731 [INFO][5549] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Jul 12 00:08:50.876066 containerd[2015]: 2025-07-12 00:08:50.819 [INFO][5563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" HandleID="k8s-pod-network.46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Workload="ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0" Jul 12 00:08:50.876066 containerd[2015]: 2025-07-12 00:08:50.819 [INFO][5563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:50.876066 containerd[2015]: 2025-07-12 00:08:50.834 [INFO][5563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:50.876066 containerd[2015]: 2025-07-12 00:08:50.852 [WARNING][5563] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" HandleID="k8s-pod-network.46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Workload="ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0" Jul 12 00:08:50.876066 containerd[2015]: 2025-07-12 00:08:50.853 [INFO][5563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" HandleID="k8s-pod-network.46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Workload="ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0" Jul 12 00:08:50.876066 containerd[2015]: 2025-07-12 00:08:50.865 [INFO][5563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:50.876066 containerd[2015]: 2025-07-12 00:08:50.872 [INFO][5549] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Jul 12 00:08:50.882017 containerd[2015]: time="2025-07-12T00:08:50.879233821Z" level=info msg="TearDown network for sandbox \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\" successfully" Jul 12 00:08:50.882017 containerd[2015]: time="2025-07-12T00:08:50.879279241Z" level=info msg="StopPodSandbox for \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\" returns successfully" Jul 12 00:08:50.888639 containerd[2015]: time="2025-07-12T00:08:50.886631437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-748mg,Uid:955d28b9-9d88-48e7-9db2-62374412839c,Namespace:calico-system,Attempt:1,}" Jul 12 00:08:50.891290 systemd[1]: run-netns-cni\x2d7937fbe2\x2dbbee\x2df95e\x2dbed2\x2df9c15c9321ed.mount: Deactivated successfully. 
Jul 12 00:08:51.208663 systemd-networkd[1936]: califf020edca4b: Link UP Jul 12 00:08:51.209155 systemd-networkd[1936]: califf020edca4b: Gained carrier Jul 12 00:08:51.274861 containerd[2015]: 2025-07-12 00:08:51.050 [INFO][5585] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0 csi-node-driver- calico-system 955d28b9-9d88-48e7-9db2-62374412839c 1025 0 2025-07-12 00:08:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-25 csi-node-driver-748mg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califf020edca4b [] [] }} ContainerID="6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5" Namespace="calico-system" Pod="csi-node-driver-748mg" WorkloadEndpoint="ip--172--31--18--25-k8s-csi--node--driver--748mg-" Jul 12 00:08:51.274861 containerd[2015]: 2025-07-12 00:08:51.050 [INFO][5585] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5" Namespace="calico-system" Pod="csi-node-driver-748mg" WorkloadEndpoint="ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0" Jul 12 00:08:51.274861 containerd[2015]: 2025-07-12 00:08:51.110 [INFO][5605] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5" HandleID="k8s-pod-network.6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5" Workload="ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0" Jul 12 00:08:51.274861 containerd[2015]: 2025-07-12 00:08:51.111 [INFO][5605] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5" HandleID="k8s-pod-network.6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5" Workload="ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3690), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-25", "pod":"csi-node-driver-748mg", "timestamp":"2025-07-12 00:08:51.11090407 +0000 UTC"}, Hostname:"ip-172-31-18-25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:51.274861 containerd[2015]: 2025-07-12 00:08:51.111 [INFO][5605] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:51.274861 containerd[2015]: 2025-07-12 00:08:51.111 [INFO][5605] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:51.274861 containerd[2015]: 2025-07-12 00:08:51.111 [INFO][5605] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-25' Jul 12 00:08:51.274861 containerd[2015]: 2025-07-12 00:08:51.129 [INFO][5605] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5" host="ip-172-31-18-25" Jul 12 00:08:51.274861 containerd[2015]: 2025-07-12 00:08:51.141 [INFO][5605] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-25" Jul 12 00:08:51.274861 containerd[2015]: 2025-07-12 00:08:51.151 [INFO][5605] ipam/ipam.go 511: Trying affinity for 192.168.56.192/26 host="ip-172-31-18-25" Jul 12 00:08:51.274861 containerd[2015]: 2025-07-12 00:08:51.155 [INFO][5605] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.192/26 host="ip-172-31-18-25" Jul 12 00:08:51.274861 containerd[2015]: 2025-07-12 00:08:51.161 [INFO][5605] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.192/26 host="ip-172-31-18-25" Jul 12 00:08:51.274861 containerd[2015]: 2025-07-12 00:08:51.161 [INFO][5605] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.56.192/26 handle="k8s-pod-network.6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5" host="ip-172-31-18-25" Jul 12 00:08:51.274861 containerd[2015]: 2025-07-12 00:08:51.164 [INFO][5605] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5 Jul 12 00:08:51.274861 containerd[2015]: 2025-07-12 00:08:51.175 [INFO][5605] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.56.192/26 handle="k8s-pod-network.6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5" host="ip-172-31-18-25" Jul 12 00:08:51.274861 containerd[2015]: 2025-07-12 00:08:51.187 [INFO][5605] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.56.200/26] block=192.168.56.192/26 handle="k8s-pod-network.6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5" host="ip-172-31-18-25" Jul 12 00:08:51.274861 containerd[2015]: 2025-07-12 00:08:51.188 [INFO][5605] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.200/26] handle="k8s-pod-network.6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5" host="ip-172-31-18-25" Jul 12 00:08:51.274861 containerd[2015]: 2025-07-12 00:08:51.188 [INFO][5605] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:08:51.274861 containerd[2015]: 2025-07-12 00:08:51.188 [INFO][5605] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.200/26] IPv6=[] ContainerID="6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5" HandleID="k8s-pod-network.6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5" Workload="ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0" Jul 12 00:08:51.278387 containerd[2015]: 2025-07-12 00:08:51.195 [INFO][5585] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5" Namespace="calico-system" Pod="csi-node-driver-748mg" WorkloadEndpoint="ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"955d28b9-9d88-48e7-9db2-62374412839c", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"", Pod:"csi-node-driver-748mg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.56.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califf020edca4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:51.278387 containerd[2015]: 2025-07-12 00:08:51.196 [INFO][5585] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.200/32] ContainerID="6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5" Namespace="calico-system" Pod="csi-node-driver-748mg" WorkloadEndpoint="ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0" Jul 12 00:08:51.278387 containerd[2015]: 2025-07-12 00:08:51.196 [INFO][5585] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf020edca4b ContainerID="6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5" Namespace="calico-system" Pod="csi-node-driver-748mg" WorkloadEndpoint="ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0" Jul 12 00:08:51.278387 containerd[2015]: 2025-07-12 00:08:51.207 [INFO][5585] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5" Namespace="calico-system" Pod="csi-node-driver-748mg" WorkloadEndpoint="ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0" Jul 12 00:08:51.278387 containerd[2015]: 2025-07-12 00:08:51.208 [INFO][5585] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5" 
Namespace="calico-system" Pod="csi-node-driver-748mg" WorkloadEndpoint="ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"955d28b9-9d88-48e7-9db2-62374412839c", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5", Pod:"csi-node-driver-748mg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.56.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califf020edca4b", MAC:"2e:ef:ac:5e:6a:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:51.278387 containerd[2015]: 2025-07-12 00:08:51.250 [INFO][5585] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5" Namespace="calico-system" Pod="csi-node-driver-748mg" WorkloadEndpoint="ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0" Jul 12 00:08:51.350380 systemd-networkd[1936]: calif45ae8c1942: Gained IPv6LL Jul 12 00:08:51.353648 containerd[2015]: time="2025-07-12T00:08:51.351148272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:51.353648 containerd[2015]: time="2025-07-12T00:08:51.351257868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:51.353648 containerd[2015]: time="2025-07-12T00:08:51.351294324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:51.353648 containerd[2015]: time="2025-07-12T00:08:51.351512736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:51.380482 systemd-networkd[1936]: calid9a6f49fbc6: Link UP Jul 12 00:08:51.386038 systemd-networkd[1936]: calid9a6f49fbc6: Gained carrier Jul 12 00:08:51.430843 containerd[2015]: 2025-07-12 00:08:51.041 [INFO][5576] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0 coredns-668d6bf9bc- kube-system 0725903c-a273-456a-a2eb-c24032ec4754 1026 0 2025-07-12 00:07:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-25 coredns-668d6bf9bc-z5c86 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid9a6f49fbc6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-z5c86" WorkloadEndpoint="ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-" Jul 12 00:08:51.430843 containerd[2015]: 2025-07-12 00:08:51.041 [INFO][5576] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-z5c86" WorkloadEndpoint="ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0" Jul 12 00:08:51.430843 containerd[2015]: 2025-07-12 00:08:51.114 [INFO][5600] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d" HandleID="k8s-pod-network.1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0" Jul 12 00:08:51.430843 containerd[2015]: 2025-07-12 00:08:51.115 [INFO][5600] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d" HandleID="k8s-pod-network.1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d880), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-25", "pod":"coredns-668d6bf9bc-z5c86", "timestamp":"2025-07-12 00:08:51.11435095 +0000 UTC"}, Hostname:"ip-172-31-18-25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:51.430843 containerd[2015]: 2025-07-12 00:08:51.115 [INFO][5600] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:51.430843 containerd[2015]: 2025-07-12 00:08:51.192 [INFO][5600] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:51.430843 containerd[2015]: 2025-07-12 00:08:51.192 [INFO][5600] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-25' Jul 12 00:08:51.430843 containerd[2015]: 2025-07-12 00:08:51.235 [INFO][5600] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d" host="ip-172-31-18-25" Jul 12 00:08:51.430843 containerd[2015]: 2025-07-12 00:08:51.276 [INFO][5600] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-25" Jul 12 00:08:51.430843 containerd[2015]: 2025-07-12 00:08:51.298 [INFO][5600] ipam/ipam.go 511: Trying affinity for 192.168.56.192/26 host="ip-172-31-18-25" Jul 12 00:08:51.430843 containerd[2015]: 2025-07-12 00:08:51.306 [INFO][5600] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.192/26 host="ip-172-31-18-25" Jul 12 00:08:51.430843 containerd[2015]: 2025-07-12 00:08:51.316 [INFO][5600] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.192/26 host="ip-172-31-18-25" Jul 12 00:08:51.430843 containerd[2015]: 2025-07-12 00:08:51.317 [INFO][5600] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.56.192/26 handle="k8s-pod-network.1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d" host="ip-172-31-18-25" Jul 12 00:08:51.430843 containerd[2015]: 2025-07-12 00:08:51.322 [INFO][5600] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d Jul 12 00:08:51.430843 containerd[2015]: 2025-07-12 00:08:51.335 [INFO][5600] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.56.192/26 handle="k8s-pod-network.1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d" host="ip-172-31-18-25" Jul 12 00:08:51.430843 containerd[2015]: 2025-07-12 00:08:51.368 [INFO][5600] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.56.201/26] block=192.168.56.192/26 handle="k8s-pod-network.1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d" host="ip-172-31-18-25" Jul 12 00:08:51.430843 containerd[2015]: 2025-07-12 00:08:51.368 [INFO][5600] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.201/26] handle="k8s-pod-network.1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d" host="ip-172-31-18-25" Jul 12 00:08:51.430843 containerd[2015]: 2025-07-12 00:08:51.368 [INFO][5600] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:08:51.430843 containerd[2015]: 2025-07-12 00:08:51.369 [INFO][5600] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.201/26] IPv6=[] ContainerID="1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d" HandleID="k8s-pod-network.1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0" Jul 12 00:08:51.439429 containerd[2015]: 2025-07-12 00:08:51.374 [INFO][5576] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-z5c86" WorkloadEndpoint="ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0725903c-a273-456a-a2eb-c24032ec4754", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"", Pod:"coredns-668d6bf9bc-z5c86", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9a6f49fbc6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:51.439429 containerd[2015]: 2025-07-12 00:08:51.375 [INFO][5576] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.201/32] ContainerID="1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-z5c86" WorkloadEndpoint="ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0" Jul 12 00:08:51.439429 containerd[2015]: 2025-07-12 00:08:51.375 [INFO][5576] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9a6f49fbc6 ContainerID="1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-z5c86" WorkloadEndpoint="ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0" Jul 12 00:08:51.439429 containerd[2015]: 2025-07-12 00:08:51.388 [INFO][5576] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-z5c86" 
WorkloadEndpoint="ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0" Jul 12 00:08:51.439429 containerd[2015]: 2025-07-12 00:08:51.388 [INFO][5576] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-z5c86" WorkloadEndpoint="ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0725903c-a273-456a-a2eb-c24032ec4754", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d", Pod:"coredns-668d6bf9bc-z5c86", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9a6f49fbc6", MAC:"b6:5a:bf:d2:9d:bf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:51.439429 containerd[2015]: 2025-07-12 00:08:51.419 [INFO][5576] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d" Namespace="kube-system" Pod="coredns-668d6bf9bc-z5c86" WorkloadEndpoint="ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0" Jul 12 00:08:51.433397 systemd[1]: Started cri-containerd-6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5.scope - libcontainer container 6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5. Jul 12 00:08:51.536920 containerd[2015]: time="2025-07-12T00:08:51.536204916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:51.536920 containerd[2015]: time="2025-07-12T00:08:51.536344212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:51.536920 containerd[2015]: time="2025-07-12T00:08:51.536392668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:51.536920 containerd[2015]: time="2025-07-12T00:08:51.536611884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:51.577710 containerd[2015]: time="2025-07-12T00:08:51.576256465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-748mg,Uid:955d28b9-9d88-48e7-9db2-62374412839c,Namespace:calico-system,Attempt:1,} returns sandbox id \"6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5\"" Jul 12 00:08:51.589343 systemd[1]: Started cri-containerd-1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d.scope - libcontainer container 1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d. Jul 12 00:08:51.669647 containerd[2015]: time="2025-07-12T00:08:51.669553969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z5c86,Uid:0725903c-a273-456a-a2eb-c24032ec4754,Namespace:kube-system,Attempt:1,} returns sandbox id \"1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d\"" Jul 12 00:08:51.678169 containerd[2015]: time="2025-07-12T00:08:51.678091621Z" level=info msg="CreateContainer within sandbox \"1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:08:51.704316 containerd[2015]: time="2025-07-12T00:08:51.704231833Z" level=info msg="CreateContainer within sandbox \"1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"470398d11660c412a2cc1d41545a7e170b55d48447c63324db7da179c126ad2e\"" Jul 12 00:08:51.707346 containerd[2015]: time="2025-07-12T00:08:51.706664545Z" level=info msg="StartContainer for \"470398d11660c412a2cc1d41545a7e170b55d48447c63324db7da179c126ad2e\"" Jul 12 00:08:51.828182 systemd[1]: Started cri-containerd-470398d11660c412a2cc1d41545a7e170b55d48447c63324db7da179c126ad2e.scope - libcontainer container 470398d11660c412a2cc1d41545a7e170b55d48447c63324db7da179c126ad2e. Jul 12 00:08:51.952346 containerd[2015]: time="2025-07-12T00:08:51.952244235Z" level=info msg="StartContainer for \"470398d11660c412a2cc1d41545a7e170b55d48447c63324db7da179c126ad2e\" returns successfully" Jul 12 00:08:52.375681 systemd-networkd[1936]: califf020edca4b: Gained IPv6LL Jul 12 00:08:52.462875 kubelet[3344]: I0712 00:08:52.462773 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-z5c86" podStartSLOduration=53.462747673 podStartE2EDuration="53.462747673s" podCreationTimestamp="2025-07-12 00:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:52.456972049 +0000 UTC m=+57.163813761" watchObservedRunningTime="2025-07-12 00:08:52.462747673 +0000 UTC m=+57.169589349" Jul 12 00:08:52.871805 systemd[1]: Started sshd@12-172.31.18.25:22-139.178.89.65:53786.service - OpenSSH per-connection server daemon (139.178.89.65:53786). Jul 12 00:08:52.887071 systemd-networkd[1936]: calid9a6f49fbc6: Gained IPv6LL Jul 12 00:08:53.091539 sshd[5762]: Accepted publickey for core from 139.178.89.65 port 53786 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:08:53.097236 sshd[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:53.114500 systemd-logind[1993]: New session 10 of user core. 
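A note on the kubelet pod_startup_latency_tracker entries: for coredns-668d6bf9bc-z5c86 above, podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (00:08:52.462747673 − 00:07:59 = 53.462747673 s; the creation timestamp carries whole-second precision), and podStartSLOduration equals the E2E figure because both pull timestamps are Go's zero time (0001-01-01), i.e. the image was already present and there is no pull window to subtract. The hscg5 entry earlier checks out the same way. For the goldmane pod further down, the pull window is subtracted: 31.37652398 − (54.495834543 − 47.072070806) ≈ 23.95276, matching the logged podStartSLOduration=23.952760231.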
Jul 12 00:08:53.123772 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 12 00:08:53.460041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1595315012.mount: Deactivated successfully. Jul 12 00:08:53.477665 sshd[5762]: pam_unix(sshd:session): session closed for user core Jul 12 00:08:53.486949 systemd-logind[1993]: Session 10 logged out. Waiting for processes to exit. Jul 12 00:08:53.488086 systemd[1]: sshd@12-172.31.18.25:22-139.178.89.65:53786.service: Deactivated successfully. Jul 12 00:08:53.496538 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 00:08:53.505529 systemd-logind[1993]: Removed session 10. Jul 12 00:08:54.468230 containerd[2015]: time="2025-07-12T00:08:54.465587523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:54.473178 containerd[2015]: time="2025-07-12T00:08:54.473120919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 12 00:08:54.476851 containerd[2015]: time="2025-07-12T00:08:54.476207931Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:54.487720 containerd[2015]: time="2025-07-12T00:08:54.487526043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:54.492005 containerd[2015]: time="2025-07-12T00:08:54.491822883Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 5.924635997s" Jul 12 00:08:54.492573 containerd[2015]: time="2025-07-12T00:08:54.492225411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 12 00:08:54.496562 containerd[2015]: time="2025-07-12T00:08:54.496117875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 12 00:08:54.501076 containerd[2015]: time="2025-07-12T00:08:54.500787423Z" level=info msg="CreateContainer within sandbox \"8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 12 00:08:54.556363 containerd[2015]: time="2025-07-12T00:08:54.556183023Z" level=info msg="CreateContainer within sandbox \"8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"33a722eb3d3e21a541b88d790282c029050a5e41ecd0fe980dae97340ca2483b\"" Jul 12 00:08:54.559532 containerd[2015]: time="2025-07-12T00:08:54.559163919Z" level=info msg="StartContainer for \"33a722eb3d3e21a541b88d790282c029050a5e41ecd0fe980dae97340ca2483b\"" Jul 12 00:08:54.676786 systemd[1]: Started cri-containerd-33a722eb3d3e21a541b88d790282c029050a5e41ecd0fe980dae97340ca2483b.scope - libcontainer container 33a722eb3d3e21a541b88d790282c029050a5e41ecd0fe980dae97340ca2483b. 
Jul 12 00:08:54.805715 containerd[2015]: time="2025-07-12T00:08:54.804903041Z" level=info msg="StartContainer for \"33a722eb3d3e21a541b88d790282c029050a5e41ecd0fe980dae97340ca2483b\" returns successfully" Jul 12 00:08:55.267124 ntpd[1986]: Listen normally on 8 vxlan.calico 192.168.56.192:123 Jul 12 00:08:55.268760 ntpd[1986]: 12 Jul 00:08:55 ntpd[1986]: Listen normally on 8 vxlan.calico 192.168.56.192:123 Jul 12 00:08:55.268760 ntpd[1986]: 12 Jul 00:08:55 ntpd[1986]: Listen normally on 9 cali5bd57d8cf99 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 12 00:08:55.268760 ntpd[1986]: 12 Jul 00:08:55 ntpd[1986]: Listen normally on 10 cali00c9a9d485a [fe80::ecee:eeff:feee:eeee%5]:123 Jul 12 00:08:55.268760 ntpd[1986]: 12 Jul 00:08:55 ntpd[1986]: Listen normally on 11 vxlan.calico [fe80::6447:37ff:fef1:c5db%6]:123 Jul 12 00:08:55.268760 ntpd[1986]: 12 Jul 00:08:55 ntpd[1986]: Listen normally on 12 cali86846d2c1a9 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 12 00:08:55.268760 ntpd[1986]: 12 Jul 00:08:55 ntpd[1986]: Listen normally on 13 calid5974a7b5b9 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 12 00:08:55.268760 ntpd[1986]: 12 Jul 00:08:55 ntpd[1986]: Listen normally on 14 calic7c9f353034 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 12 00:08:55.268760 ntpd[1986]: 12 Jul 00:08:55 ntpd[1986]: Listen normally on 15 calidd3f6ed32e2 [fe80::ecee:eeff:feee:eeee%12]:123 Jul 12 00:08:55.268760 ntpd[1986]: 12 Jul 00:08:55 ntpd[1986]: Listen normally on 16 calif45ae8c1942 [fe80::ecee:eeff:feee:eeee%13]:123 Jul 12 00:08:55.268760 ntpd[1986]: 12 Jul 00:08:55 ntpd[1986]: Listen normally on 17 califf020edca4b [fe80::ecee:eeff:feee:eeee%14]:123 Jul 12 00:08:55.268760 ntpd[1986]: 12 Jul 00:08:55 ntpd[1986]: Listen normally on 18 calid9a6f49fbc6 [fe80::ecee:eeff:feee:eeee%15]:123 Jul 12 00:08:55.267277 ntpd[1986]: Listen normally on 9 cali5bd57d8cf99 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 12 00:08:55.267370 ntpd[1986]: Listen normally on 10 cali00c9a9d485a [fe80::ecee:eeff:feee:eeee%5]:123 Jul 12 00:08:55.267441 ntpd[1986]: Listen normally on 11 vxlan.calico [fe80::6447:37ff:fef1:c5db%6]:123 Jul 12 00:08:55.267605 ntpd[1986]: Listen normally on 12 cali86846d2c1a9 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 12 00:08:55.267674 ntpd[1986]: Listen normally on 13 calid5974a7b5b9 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 12 00:08:55.267745 ntpd[1986]: Listen normally on 14 calic7c9f353034 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 12 00:08:55.267810 ntpd[1986]: Listen normally on 15 calidd3f6ed32e2 [fe80::ecee:eeff:feee:eeee%12]:123 Jul 12 00:08:55.267884 ntpd[1986]: Listen normally on 16 calif45ae8c1942 [fe80::ecee:eeff:feee:eeee%13]:123 Jul 12 00:08:55.267949 ntpd[1986]: Listen normally on 17 califf020edca4b [fe80::ecee:eeff:feee:eeee%14]:123 Jul 12 00:08:55.268013 ntpd[1986]: Listen normally on 18 calid9a6f49fbc6 [fe80::ecee:eeff:feee:eeee%15]:123 Jul 12 00:08:55.378100 kubelet[3344]: I0712 00:08:55.376550 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-vcjmb" podStartSLOduration=23.952760231 podStartE2EDuration="31.37652398s" podCreationTimestamp="2025-07-12 00:08:24 +0000 UTC" firstStartedPulling="2025-07-12 00:08:47.072070806 +0000 UTC m=+51.778912494" lastFinishedPulling="2025-07-12 00:08:54.495834543 +0000 UTC m=+59.202676243" observedRunningTime="2025-07-12 00:08:55.375294388 +0000 UTC m=+60.082136616" watchObservedRunningTime="2025-07-12 00:08:55.37652398 +0000 UTC m=+60.083365680" Jul 12 00:08:55.647835 containerd[2015]: time="2025-07-12T00:08:55.647743961Z" level=info 
msg="StopPodSandbox for \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\"" Jul 12 00:08:55.951254 containerd[2015]: 2025-07-12 00:08:55.774 [WARNING][5860] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"955d28b9-9d88-48e7-9db2-62374412839c", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5", Pod:"csi-node-driver-748mg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.56.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califf020edca4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:55.951254 containerd[2015]: 2025-07-12 00:08:55.776 [INFO][5860] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Jul 12 00:08:55.951254 containerd[2015]: 2025-07-12 00:08:55.776 [INFO][5860] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" iface="eth0" netns="" Jul 12 00:08:55.951254 containerd[2015]: 2025-07-12 00:08:55.777 [INFO][5860] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Jul 12 00:08:55.951254 containerd[2015]: 2025-07-12 00:08:55.777 [INFO][5860] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Jul 12 00:08:55.951254 containerd[2015]: 2025-07-12 00:08:55.898 [INFO][5869] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" HandleID="k8s-pod-network.46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Workload="ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0" Jul 12 00:08:55.951254 containerd[2015]: 2025-07-12 00:08:55.900 [INFO][5869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:55.951254 containerd[2015]: 2025-07-12 00:08:55.900 [INFO][5869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:55.951254 containerd[2015]: 2025-07-12 00:08:55.928 [WARNING][5869] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" HandleID="k8s-pod-network.46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Workload="ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0" Jul 12 00:08:55.951254 containerd[2015]: 2025-07-12 00:08:55.928 [INFO][5869] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" HandleID="k8s-pod-network.46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Workload="ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0" Jul 12 00:08:55.951254 containerd[2015]: 2025-07-12 00:08:55.936 [INFO][5869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:55.951254 containerd[2015]: 2025-07-12 00:08:55.943 [INFO][5860] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Jul 12 00:08:55.953902 containerd[2015]: time="2025-07-12T00:08:55.952541946Z" level=info msg="TearDown network for sandbox \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\" successfully" Jul 12 00:08:55.953902 containerd[2015]: time="2025-07-12T00:08:55.952605582Z" level=info msg="StopPodSandbox for \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\" returns successfully" Jul 12 00:08:55.953902 containerd[2015]: time="2025-07-12T00:08:55.953355606Z" level=info msg="RemovePodSandbox for \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\"" Jul 12 00:08:55.953902 containerd[2015]: time="2025-07-12T00:08:55.953414250Z" level=info msg="Forcibly stopping sandbox \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\"" Jul 12 00:08:56.238155 containerd[2015]: 2025-07-12 00:08:56.103 [WARNING][5884] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"955d28b9-9d88-48e7-9db2-62374412839c", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5", Pod:"csi-node-driver-748mg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.56.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califf020edca4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:56.238155 containerd[2015]: 2025-07-12 00:08:56.104 [INFO][5884] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Jul 12 00:08:56.238155 containerd[2015]: 2025-07-12 00:08:56.104 [INFO][5884] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" iface="eth0" netns="" Jul 12 00:08:56.238155 containerd[2015]: 2025-07-12 00:08:56.104 [INFO][5884] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Jul 12 00:08:56.238155 containerd[2015]: 2025-07-12 00:08:56.104 [INFO][5884] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Jul 12 00:08:56.238155 containerd[2015]: 2025-07-12 00:08:56.191 [INFO][5892] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" HandleID="k8s-pod-network.46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Workload="ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0" Jul 12 00:08:56.238155 containerd[2015]: 2025-07-12 00:08:56.191 [INFO][5892] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:56.238155 containerd[2015]: 2025-07-12 00:08:56.191 [INFO][5892] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:56.238155 containerd[2015]: 2025-07-12 00:08:56.220 [WARNING][5892] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" HandleID="k8s-pod-network.46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Workload="ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0" Jul 12 00:08:56.238155 containerd[2015]: 2025-07-12 00:08:56.220 [INFO][5892] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" HandleID="k8s-pod-network.46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Workload="ip--172--31--18--25-k8s-csi--node--driver--748mg-eth0" Jul 12 00:08:56.238155 containerd[2015]: 2025-07-12 00:08:56.223 [INFO][5892] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:56.238155 containerd[2015]: 2025-07-12 00:08:56.226 [INFO][5884] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22" Jul 12 00:08:56.238155 containerd[2015]: time="2025-07-12T00:08:56.237886708Z" level=info msg="TearDown network for sandbox \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\" successfully" Jul 12 00:08:56.252527 containerd[2015]: time="2025-07-12T00:08:56.252405352Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:56.253192 containerd[2015]: time="2025-07-12T00:08:56.252544504Z" level=info msg="RemovePodSandbox \"46caeb3e5e6341817859a71a61e744f41f67ff12067207ea609d61d14b570d22\" returns successfully" Jul 12 00:08:56.254188 containerd[2015]: time="2025-07-12T00:08:56.254111188Z" level=info msg="StopPodSandbox for \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\"" Jul 12 00:08:56.577691 containerd[2015]: 2025-07-12 00:08:56.431 [WARNING][5913] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0725903c-a273-456a-a2eb-c24032ec4754", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d", Pod:"coredns-668d6bf9bc-z5c86", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9a6f49fbc6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:56.577691 containerd[2015]: 2025-07-12 00:08:56.431 [INFO][5913] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Jul 12 00:08:56.577691 containerd[2015]: 2025-07-12 00:08:56.431 [INFO][5913] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" iface="eth0" netns="" Jul 12 00:08:56.577691 containerd[2015]: 2025-07-12 00:08:56.432 [INFO][5913] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Jul 12 00:08:56.577691 containerd[2015]: 2025-07-12 00:08:56.432 [INFO][5913] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Jul 12 00:08:56.577691 containerd[2015]: 2025-07-12 00:08:56.532 [INFO][5934] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" HandleID="k8s-pod-network.eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0" Jul 12 00:08:56.577691 containerd[2015]: 2025-07-12 00:08:56.532 [INFO][5934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:56.577691 containerd[2015]: 2025-07-12 00:08:56.532 [INFO][5934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:56.577691 containerd[2015]: 2025-07-12 00:08:56.554 [WARNING][5934] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" HandleID="k8s-pod-network.eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0" Jul 12 00:08:56.577691 containerd[2015]: 2025-07-12 00:08:56.554 [INFO][5934] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" HandleID="k8s-pod-network.eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0" Jul 12 00:08:56.577691 containerd[2015]: 2025-07-12 00:08:56.558 [INFO][5934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:56.577691 containerd[2015]: 2025-07-12 00:08:56.565 [INFO][5913] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Jul 12 00:08:56.579053 containerd[2015]: time="2025-07-12T00:08:56.578514857Z" level=info msg="TearDown network for sandbox \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\" successfully" Jul 12 00:08:56.579053 containerd[2015]: time="2025-07-12T00:08:56.578560505Z" level=info msg="StopPodSandbox for \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\" returns successfully" Jul 12 00:08:56.580551 containerd[2015]: time="2025-07-12T00:08:56.579686513Z" level=info msg="RemovePodSandbox for \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\"" Jul 12 00:08:56.580551 containerd[2015]: time="2025-07-12T00:08:56.579742901Z" level=info msg="Forcibly stopping sandbox \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\"" Jul 12 00:08:56.844222 containerd[2015]: 2025-07-12 00:08:56.740 [WARNING][5957] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0725903c-a273-456a-a2eb-c24032ec4754", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"1646c37f6f4e619232a8720fd2f22eeb4efb8d6a5dd882676f96c0e7c894cf4d", Pod:"coredns-668d6bf9bc-z5c86", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9a6f49fbc6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:56.844222 containerd[2015]: 2025-07-12 00:08:56.742 [INFO][5957] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Jul 12 00:08:56.844222 containerd[2015]: 2025-07-12 00:08:56.743 [INFO][5957] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" iface="eth0" netns="" Jul 12 00:08:56.844222 containerd[2015]: 2025-07-12 00:08:56.743 [INFO][5957] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Jul 12 00:08:56.844222 containerd[2015]: 2025-07-12 00:08:56.743 [INFO][5957] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Jul 12 00:08:56.844222 containerd[2015]: 2025-07-12 00:08:56.808 [INFO][5967] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" HandleID="k8s-pod-network.eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0" Jul 12 00:08:56.844222 containerd[2015]: 2025-07-12 00:08:56.808 [INFO][5967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:56.844222 containerd[2015]: 2025-07-12 00:08:56.808 [INFO][5967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:56.844222 containerd[2015]: 2025-07-12 00:08:56.833 [WARNING][5967] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" HandleID="k8s-pod-network.eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0" Jul 12 00:08:56.844222 containerd[2015]: 2025-07-12 00:08:56.833 [INFO][5967] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" HandleID="k8s-pod-network.eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--z5c86-eth0" Jul 12 00:08:56.844222 containerd[2015]: 2025-07-12 00:08:56.836 [INFO][5967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:56.844222 containerd[2015]: 2025-07-12 00:08:56.838 [INFO][5957] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f" Jul 12 00:08:56.846101 containerd[2015]: time="2025-07-12T00:08:56.844187911Z" level=info msg="TearDown network for sandbox \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\" successfully" Jul 12 00:08:56.856260 containerd[2015]: time="2025-07-12T00:08:56.855875383Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:56.857176 containerd[2015]: time="2025-07-12T00:08:56.856322923Z" level=info msg="RemovePodSandbox \"eeb70dec631aad994c1d070537a739aeb6b170435f4b0198bc4d04b24f449e3f\" returns successfully" Jul 12 00:08:56.858561 containerd[2015]: time="2025-07-12T00:08:56.858430099Z" level=info msg="StopPodSandbox for \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\"" Jul 12 00:08:57.033338 containerd[2015]: 2025-07-12 00:08:56.949 [WARNING][5982] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0", GenerateName:"calico-kube-controllers-596f7dcbbd-", Namespace:"calico-system", SelfLink:"", UID:"32232079-fc02-426e-a296-066d8c1e6445", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"596f7dcbbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191", Pod:"calico-kube-controllers-596f7dcbbd-zlhwz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid5974a7b5b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:57.033338 containerd[2015]: 2025-07-12 00:08:56.952 [INFO][5982] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Jul 12 00:08:57.033338 containerd[2015]: 2025-07-12 00:08:56.952 [INFO][5982] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" iface="eth0" netns="" Jul 12 00:08:57.033338 containerd[2015]: 2025-07-12 00:08:56.952 [INFO][5982] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Jul 12 00:08:57.033338 containerd[2015]: 2025-07-12 00:08:56.952 [INFO][5982] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Jul 12 00:08:57.033338 containerd[2015]: 2025-07-12 00:08:57.002 [INFO][5989] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" HandleID="k8s-pod-network.b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Workload="ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0" Jul 12 00:08:57.033338 containerd[2015]: 2025-07-12 00:08:57.003 [INFO][5989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:57.033338 containerd[2015]: 2025-07-12 00:08:57.003 [INFO][5989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:57.033338 containerd[2015]: 2025-07-12 00:08:57.022 [WARNING][5989] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" HandleID="k8s-pod-network.b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Workload="ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0" Jul 12 00:08:57.033338 containerd[2015]: 2025-07-12 00:08:57.022 [INFO][5989] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" HandleID="k8s-pod-network.b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Workload="ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0" Jul 12 00:08:57.033338 containerd[2015]: 2025-07-12 00:08:57.025 [INFO][5989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:57.033338 containerd[2015]: 2025-07-12 00:08:57.028 [INFO][5982] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Jul 12 00:08:57.033338 containerd[2015]: time="2025-07-12T00:08:57.033173812Z" level=info msg="TearDown network for sandbox \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\" successfully" Jul 12 00:08:57.033338 containerd[2015]: time="2025-07-12T00:08:57.033210340Z" level=info msg="StopPodSandbox for \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\" returns successfully" Jul 12 00:08:57.035699 containerd[2015]: time="2025-07-12T00:08:57.035055844Z" level=info msg="RemovePodSandbox for \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\"" Jul 12 00:08:57.035699 containerd[2015]: time="2025-07-12T00:08:57.035113564Z" level=info msg="Forcibly stopping sandbox \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\"" Jul 12 00:08:57.232482 containerd[2015]: 2025-07-12 00:08:57.116 [WARNING][6004] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0", GenerateName:"calico-kube-controllers-596f7dcbbd-", Namespace:"calico-system", SelfLink:"", UID:"32232079-fc02-426e-a296-066d8c1e6445", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"596f7dcbbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191", Pod:"calico-kube-controllers-596f7dcbbd-zlhwz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid5974a7b5b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:57.232482 containerd[2015]: 2025-07-12 00:08:57.116 [INFO][6004] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Jul 12 00:08:57.232482 containerd[2015]: 2025-07-12 00:08:57.116 [INFO][6004] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" iface="eth0" netns="" Jul 12 00:08:57.232482 containerd[2015]: 2025-07-12 00:08:57.116 [INFO][6004] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Jul 12 00:08:57.232482 containerd[2015]: 2025-07-12 00:08:57.116 [INFO][6004] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Jul 12 00:08:57.232482 containerd[2015]: 2025-07-12 00:08:57.189 [INFO][6011] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" HandleID="k8s-pod-network.b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Workload="ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0" Jul 12 00:08:57.232482 containerd[2015]: 2025-07-12 00:08:57.190 [INFO][6011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:57.232482 containerd[2015]: 2025-07-12 00:08:57.190 [INFO][6011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:57.232482 containerd[2015]: 2025-07-12 00:08:57.206 [WARNING][6011] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" HandleID="k8s-pod-network.b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Workload="ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0" Jul 12 00:08:57.232482 containerd[2015]: 2025-07-12 00:08:57.206 [INFO][6011] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" HandleID="k8s-pod-network.b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Workload="ip--172--31--18--25-k8s-calico--kube--controllers--596f7dcbbd--zlhwz-eth0" Jul 12 00:08:57.232482 containerd[2015]: 2025-07-12 00:08:57.214 [INFO][6011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:57.232482 containerd[2015]: 2025-07-12 00:08:57.223 [INFO][6004] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b" Jul 12 00:08:57.234969 containerd[2015]: time="2025-07-12T00:08:57.234665165Z" level=info msg="TearDown network for sandbox \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\" successfully" Jul 12 00:08:57.247566 containerd[2015]: time="2025-07-12T00:08:57.247335101Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:57.247566 containerd[2015]: time="2025-07-12T00:08:57.247468625Z" level=info msg="RemovePodSandbox \"b84d7f5ec0f7ef176cf7ab21997dd4345aa8f16c1704ed9c959131b2ee7ae27b\" returns successfully" Jul 12 00:08:57.250821 containerd[2015]: time="2025-07-12T00:08:57.250144277Z" level=info msg="StopPodSandbox for \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\"" Jul 12 00:08:57.510959 containerd[2015]: 2025-07-12 00:08:57.384 [WARNING][6026] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0", GenerateName:"calico-apiserver-55ff68f59d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a2fe5d4f-2f51-400a-adc0-e4e9bcdc9e97", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55ff68f59d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7", Pod:"calico-apiserver-55ff68f59d-twxd6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7c9f353034", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:57.510959 containerd[2015]: 2025-07-12 00:08:57.385 [INFO][6026] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Jul 12 00:08:57.510959 containerd[2015]: 2025-07-12 00:08:57.385 [INFO][6026] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" iface="eth0" netns="" Jul 12 00:08:57.510959 containerd[2015]: 2025-07-12 00:08:57.385 [INFO][6026] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Jul 12 00:08:57.510959 containerd[2015]: 2025-07-12 00:08:57.385 [INFO][6026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Jul 12 00:08:57.510959 containerd[2015]: 2025-07-12 00:08:57.459 [INFO][6033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" HandleID="k8s-pod-network.4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Workload="ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0" Jul 12 00:08:57.510959 containerd[2015]: 2025-07-12 00:08:57.460 [INFO][6033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:57.510959 containerd[2015]: 2025-07-12 00:08:57.460 [INFO][6033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:57.510959 containerd[2015]: 2025-07-12 00:08:57.490 [WARNING][6033] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" HandleID="k8s-pod-network.4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Workload="ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0" Jul 12 00:08:57.510959 containerd[2015]: 2025-07-12 00:08:57.490 [INFO][6033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" HandleID="k8s-pod-network.4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Workload="ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0" Jul 12 00:08:57.510959 containerd[2015]: 2025-07-12 00:08:57.495 [INFO][6033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:57.510959 containerd[2015]: 2025-07-12 00:08:57.501 [INFO][6026] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Jul 12 00:08:57.510959 containerd[2015]: time="2025-07-12T00:08:57.510769086Z" level=info msg="TearDown network for sandbox \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\" successfully" Jul 12 00:08:57.510959 containerd[2015]: time="2025-07-12T00:08:57.510806610Z" level=info msg="StopPodSandbox for \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\" returns successfully" Jul 12 00:08:57.515147 containerd[2015]: time="2025-07-12T00:08:57.512684358Z" level=info msg="RemovePodSandbox for \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\"" Jul 12 00:08:57.515147 containerd[2015]: time="2025-07-12T00:08:57.512737410Z" level=info msg="Forcibly stopping sandbox \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\"" Jul 12 00:08:57.852282 containerd[2015]: 2025-07-12 00:08:57.669 [WARNING][6048] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0", GenerateName:"calico-apiserver-55ff68f59d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a2fe5d4f-2f51-400a-adc0-e4e9bcdc9e97", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55ff68f59d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7", Pod:"calico-apiserver-55ff68f59d-twxd6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7c9f353034", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:57.852282 containerd[2015]: 2025-07-12 00:08:57.671 [INFO][6048] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Jul 12 00:08:57.852282 containerd[2015]: 2025-07-12 00:08:57.671 [INFO][6048] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" iface="eth0" netns="" Jul 12 00:08:57.852282 containerd[2015]: 2025-07-12 00:08:57.671 [INFO][6048] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Jul 12 00:08:57.852282 containerd[2015]: 2025-07-12 00:08:57.672 [INFO][6048] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Jul 12 00:08:57.852282 containerd[2015]: 2025-07-12 00:08:57.791 [INFO][6055] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" HandleID="k8s-pod-network.4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Workload="ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0" Jul 12 00:08:57.852282 containerd[2015]: 2025-07-12 00:08:57.791 [INFO][6055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:57.852282 containerd[2015]: 2025-07-12 00:08:57.791 [INFO][6055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:57.852282 containerd[2015]: 2025-07-12 00:08:57.828 [WARNING][6055] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" HandleID="k8s-pod-network.4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Workload="ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0" Jul 12 00:08:57.852282 containerd[2015]: 2025-07-12 00:08:57.828 [INFO][6055] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" HandleID="k8s-pod-network.4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Workload="ip--172--31--18--25-k8s-calico--apiserver--55ff68f59d--twxd6-eth0" Jul 12 00:08:57.852282 containerd[2015]: 2025-07-12 00:08:57.833 [INFO][6055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:57.852282 containerd[2015]: 2025-07-12 00:08:57.842 [INFO][6048] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf" Jul 12 00:08:57.855503 containerd[2015]: time="2025-07-12T00:08:57.854157536Z" level=info msg="TearDown network for sandbox \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\" successfully" Jul 12 00:08:57.867509 containerd[2015]: time="2025-07-12T00:08:57.867324836Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:57.867509 containerd[2015]: time="2025-07-12T00:08:57.867426392Z" level=info msg="RemovePodSandbox \"4c518dc947f49e8daa799b4b08e6cb7aa17da40b5603d76898d0598f8a7c4edf\" returns successfully" Jul 12 00:08:57.869525 containerd[2015]: time="2025-07-12T00:08:57.869052152Z" level=info msg="StopPodSandbox for \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\"" Jul 12 00:08:58.108152 containerd[2015]: 2025-07-12 00:08:57.991 [WARNING][6070] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"ba9bbb6e-9361-4175-929c-1ae629fa9bce", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44", Pod:"goldmane-768f4c5c69-vcjmb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.56.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali00c9a9d485a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:58.108152 containerd[2015]: 2025-07-12 00:08:57.992 [INFO][6070] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Jul 12 00:08:58.108152 containerd[2015]: 2025-07-12 00:08:57.992 [INFO][6070] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" iface="eth0" netns="" Jul 12 00:08:58.108152 containerd[2015]: 2025-07-12 00:08:57.992 [INFO][6070] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Jul 12 00:08:58.108152 containerd[2015]: 2025-07-12 00:08:57.992 [INFO][6070] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Jul 12 00:08:58.108152 containerd[2015]: 2025-07-12 00:08:58.062 [INFO][6077] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" HandleID="k8s-pod-network.7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Workload="ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0" Jul 12 00:08:58.108152 containerd[2015]: 2025-07-12 00:08:58.071 [INFO][6077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:58.108152 containerd[2015]: 2025-07-12 00:08:58.071 [INFO][6077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:58.108152 containerd[2015]: 2025-07-12 00:08:58.095 [WARNING][6077] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" HandleID="k8s-pod-network.7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Workload="ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0" Jul 12 00:08:58.108152 containerd[2015]: 2025-07-12 00:08:58.095 [INFO][6077] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" HandleID="k8s-pod-network.7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Workload="ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0" Jul 12 00:08:58.108152 containerd[2015]: 2025-07-12 00:08:58.098 [INFO][6077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:58.108152 containerd[2015]: 2025-07-12 00:08:58.102 [INFO][6070] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Jul 12 00:08:58.110616 containerd[2015]: time="2025-07-12T00:08:58.109206413Z" level=info msg="TearDown network for sandbox \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\" successfully" Jul 12 00:08:58.110616 containerd[2015]: time="2025-07-12T00:08:58.109968293Z" level=info msg="StopPodSandbox for \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\" returns successfully" Jul 12 00:08:58.111982 containerd[2015]: time="2025-07-12T00:08:58.111833789Z" level=info msg="RemovePodSandbox for \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\"" Jul 12 00:08:58.111982 containerd[2015]: time="2025-07-12T00:08:58.111922229Z" level=info msg="Forcibly stopping sandbox \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\"" Jul 12 00:08:58.310685 containerd[2015]: 2025-07-12 00:08:58.225 [WARNING][6092] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"ba9bbb6e-9361-4175-929c-1ae629fa9bce", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"8815bec3d11ce7c46bc60425ebbcbc68a33512a90e065b8ae99143a98b27af44", Pod:"goldmane-768f4c5c69-vcjmb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.56.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali00c9a9d485a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:58.310685 containerd[2015]: 2025-07-12 00:08:58.226 [INFO][6092] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Jul 12 00:08:58.310685 containerd[2015]: 2025-07-12 00:08:58.226 [INFO][6092] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" iface="eth0" netns="" Jul 12 00:08:58.310685 containerd[2015]: 2025-07-12 00:08:58.226 [INFO][6092] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Jul 12 00:08:58.310685 containerd[2015]: 2025-07-12 00:08:58.226 [INFO][6092] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Jul 12 00:08:58.310685 containerd[2015]: 2025-07-12 00:08:58.278 [INFO][6100] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" HandleID="k8s-pod-network.7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Workload="ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0" Jul 12 00:08:58.310685 containerd[2015]: 2025-07-12 00:08:58.278 [INFO][6100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:58.310685 containerd[2015]: 2025-07-12 00:08:58.279 [INFO][6100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:58.310685 containerd[2015]: 2025-07-12 00:08:58.296 [WARNING][6100] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" HandleID="k8s-pod-network.7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Workload="ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0" Jul 12 00:08:58.310685 containerd[2015]: 2025-07-12 00:08:58.296 [INFO][6100] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" HandleID="k8s-pod-network.7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Workload="ip--172--31--18--25-k8s-goldmane--768f4c5c69--vcjmb-eth0" Jul 12 00:08:58.310685 containerd[2015]: 2025-07-12 00:08:58.300 [INFO][6100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:58.310685 containerd[2015]: 2025-07-12 00:08:58.305 [INFO][6092] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6" Jul 12 00:08:58.312859 containerd[2015]: time="2025-07-12T00:08:58.310835418Z" level=info msg="TearDown network for sandbox \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\" successfully" Jul 12 00:08:58.326124 containerd[2015]: time="2025-07-12T00:08:58.326061702Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:58.327034 containerd[2015]: time="2025-07-12T00:08:58.326984190Z" level=info msg="RemovePodSandbox \"7bab3d2be81f573d25e94c5f63b282b35bc5e6df378a55116789685cb2672de6\" returns successfully" Jul 12 00:08:58.334731 containerd[2015]: time="2025-07-12T00:08:58.334031286Z" level=info msg="StopPodSandbox for \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\"" Jul 12 00:08:58.527559 containerd[2015]: time="2025-07-12T00:08:58.527317807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:58.532007 systemd[1]: Started sshd@13-172.31.18.25:22-139.178.89.65:53802.service - OpenSSH per-connection server daemon (139.178.89.65:53802). Jul 12 00:08:58.540846 containerd[2015]: time="2025-07-12T00:08:58.540333379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 12 00:08:58.549341 containerd[2015]: time="2025-07-12T00:08:58.548113555Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:58.563040 containerd[2015]: 2025-07-12 00:08:58.448 [WARNING][6114] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"751d0e7c-ad8a-4efe-bafc-24b1a7d7ef96", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9", Pod:"coredns-668d6bf9bc-hscg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86846d2c1a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:58.563040 containerd[2015]: 2025-07-12 00:08:58.448 [INFO][6114] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Jul 12 00:08:58.563040 containerd[2015]: 2025-07-12 00:08:58.448 [INFO][6114] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" iface="eth0" netns="" Jul 12 00:08:58.563040 containerd[2015]: 2025-07-12 00:08:58.448 [INFO][6114] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Jul 12 00:08:58.563040 containerd[2015]: 2025-07-12 00:08:58.448 [INFO][6114] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Jul 12 00:08:58.563040 containerd[2015]: 2025-07-12 00:08:58.512 [INFO][6122] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" HandleID="k8s-pod-network.815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0" Jul 12 00:08:58.563040 containerd[2015]: 2025-07-12 00:08:58.513 [INFO][6122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:58.563040 containerd[2015]: 2025-07-12 00:08:58.513 [INFO][6122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:58.563040 containerd[2015]: 2025-07-12 00:08:58.540 [WARNING][6122] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" HandleID="k8s-pod-network.815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0" Jul 12 00:08:58.563040 containerd[2015]: 2025-07-12 00:08:58.541 [INFO][6122] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" HandleID="k8s-pod-network.815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0" Jul 12 00:08:58.563040 containerd[2015]: 2025-07-12 00:08:58.546 [INFO][6122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:58.563040 containerd[2015]: 2025-07-12 00:08:58.555 [INFO][6114] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Jul 12 00:08:58.563938 containerd[2015]: time="2025-07-12T00:08:58.563080075Z" level=info msg="TearDown network for sandbox \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\" successfully" Jul 12 00:08:58.563938 containerd[2015]: time="2025-07-12T00:08:58.563121727Z" level=info msg="StopPodSandbox for \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\" returns successfully" Jul 12 00:08:58.565817 containerd[2015]: time="2025-07-12T00:08:58.565739959Z" level=info msg="RemovePodSandbox for \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\"" Jul 12 00:08:58.565817 containerd[2015]: time="2025-07-12T00:08:58.565810567Z" level=info msg="Forcibly stopping sandbox \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\"" Jul 12 00:08:58.569293 containerd[2015]: time="2025-07-12T00:08:58.569226487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:58.571261 containerd[2015]: time="2025-07-12T00:08:58.571084855Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 4.074905192s" Jul 12 00:08:58.571261 containerd[2015]: time="2025-07-12T00:08:58.571151875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 12 00:08:58.575076 containerd[2015]: time="2025-07-12T00:08:58.574499719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 12 00:08:58.631212 containerd[2015]: time="2025-07-12T00:08:58.629382992Z" level=info msg="CreateContainer within sandbox \"9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 12 00:08:58.700087 containerd[2015]: time="2025-07-12T00:08:58.700025108Z" level=info msg="CreateContainer within sandbox 
\"9b2c1b53aee8d9b945ed53c2df9d4512fe04a50aaab0d8ea6186d69f468ff191\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bb1ad9d6a4752f2f383afc5c73e4dbb8d81b97f16b9a1afb42d9882b54d40055\"" Jul 12 00:08:58.701756 containerd[2015]: time="2025-07-12T00:08:58.701692556Z" level=info msg="StartContainer for \"bb1ad9d6a4752f2f383afc5c73e4dbb8d81b97f16b9a1afb42d9882b54d40055\"" Jul 12 00:08:58.766000 sshd[6129]: Accepted publickey for core from 139.178.89.65 port 53802 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:08:58.771581 sshd[6129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:08:58.794235 systemd-logind[1993]: New session 11 of user core. Jul 12 00:08:58.801736 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 12 00:08:58.888831 systemd[1]: Started cri-containerd-bb1ad9d6a4752f2f383afc5c73e4dbb8d81b97f16b9a1afb42d9882b54d40055.scope - libcontainer container bb1ad9d6a4752f2f383afc5c73e4dbb8d81b97f16b9a1afb42d9882b54d40055. Jul 12 00:08:58.901001 containerd[2015]: 2025-07-12 00:08:58.726 [WARNING][6141] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"751d0e7c-ad8a-4efe-bafc-24b1a7d7ef96", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"3c4cd2036365096cab57f58c44a3400ee78ca2c5ed4f536d771450c05895dbe9", Pod:"coredns-668d6bf9bc-hscg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86846d2c1a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:58.901001 containerd[2015]: 2025-07-12 00:08:58.728 [INFO][6141] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Jul 12 00:08:58.901001 containerd[2015]: 2025-07-12 00:08:58.728 [INFO][6141] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" iface="eth0" netns="" Jul 12 00:08:58.901001 containerd[2015]: 2025-07-12 00:08:58.728 [INFO][6141] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Jul 12 00:08:58.901001 containerd[2015]: 2025-07-12 00:08:58.728 [INFO][6141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Jul 12 00:08:58.901001 containerd[2015]: 2025-07-12 00:08:58.857 [INFO][6149] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" HandleID="k8s-pod-network.815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0" Jul 12 00:08:58.901001 containerd[2015]: 2025-07-12 00:08:58.858 [INFO][6149] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:58.901001 containerd[2015]: 2025-07-12 00:08:58.858 [INFO][6149] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:58.901001 containerd[2015]: 2025-07-12 00:08:58.886 [WARNING][6149] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" HandleID="k8s-pod-network.815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0" Jul 12 00:08:58.901001 containerd[2015]: 2025-07-12 00:08:58.886 [INFO][6149] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" HandleID="k8s-pod-network.815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Workload="ip--172--31--18--25-k8s-coredns--668d6bf9bc--hscg5-eth0" Jul 12 00:08:58.901001 containerd[2015]: 2025-07-12 00:08:58.891 [INFO][6149] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:58.901001 containerd[2015]: 2025-07-12 00:08:58.897 [INFO][6141] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295" Jul 12 00:08:58.904418 containerd[2015]: time="2025-07-12T00:08:58.901042317Z" level=info msg="TearDown network for sandbox \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\" successfully" Jul 12 00:08:58.912267 containerd[2015]: time="2025-07-12T00:08:58.912145761Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:08:58.912579 containerd[2015]: time="2025-07-12T00:08:58.912303057Z" level=info msg="RemovePodSandbox \"815ac5bb3c5af9824d2690e00c676ad609b79750f007b22af2c714c26c974295\" returns successfully" Jul 12 00:08:58.913846 containerd[2015]: time="2025-07-12T00:08:58.913788201Z" level=info msg="StopPodSandbox for \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\"" Jul 12 00:08:59.041402 containerd[2015]: time="2025-07-12T00:08:59.040533726Z" level=info msg="StartContainer for \"bb1ad9d6a4752f2f383afc5c73e4dbb8d81b97f16b9a1afb42d9882b54d40055\" returns successfully" Jul 12 00:08:59.236888 containerd[2015]: 2025-07-12 00:08:59.083 [WARNING][6186] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" WorkloadEndpoint="ip--172--31--18--25-k8s-whisker--98d66569b--k8l5m-eth0" Jul 12 00:08:59.236888 containerd[2015]: 2025-07-12 00:08:59.085 [INFO][6186] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Jul 12 00:08:59.236888 containerd[2015]: 2025-07-12 00:08:59.085 [INFO][6186] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" iface="eth0" netns="" Jul 12 00:08:59.236888 containerd[2015]: 2025-07-12 00:08:59.085 [INFO][6186] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Jul 12 00:08:59.236888 containerd[2015]: 2025-07-12 00:08:59.085 [INFO][6186] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Jul 12 00:08:59.236888 containerd[2015]: 2025-07-12 00:08:59.193 [INFO][6210] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" HandleID="k8s-pod-network.3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Workload="ip--172--31--18--25-k8s-whisker--98d66569b--k8l5m-eth0" Jul 12 00:08:59.236888 containerd[2015]: 2025-07-12 00:08:59.194 [INFO][6210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:59.236888 containerd[2015]: 2025-07-12 00:08:59.194 [INFO][6210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:59.236888 containerd[2015]: 2025-07-12 00:08:59.222 [WARNING][6210] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" HandleID="k8s-pod-network.3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Workload="ip--172--31--18--25-k8s-whisker--98d66569b--k8l5m-eth0" Jul 12 00:08:59.236888 containerd[2015]: 2025-07-12 00:08:59.223 [INFO][6210] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" HandleID="k8s-pod-network.3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Workload="ip--172--31--18--25-k8s-whisker--98d66569b--k8l5m-eth0" Jul 12 00:08:59.236888 containerd[2015]: 2025-07-12 00:08:59.227 [INFO][6210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:59.236888 containerd[2015]: 2025-07-12 00:08:59.232 [INFO][6186] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Jul 12 00:08:59.239854 containerd[2015]: time="2025-07-12T00:08:59.236900683Z" level=info msg="TearDown network for sandbox \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\" successfully" Jul 12 00:08:59.239854 containerd[2015]: time="2025-07-12T00:08:59.236939239Z" level=info msg="StopPodSandbox for \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\" returns successfully" Jul 12 00:08:59.239854 containerd[2015]: time="2025-07-12T00:08:59.237917299Z" level=info msg="RemovePodSandbox for \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\"" Jul 12 00:08:59.239854 containerd[2015]: time="2025-07-12T00:08:59.237972883Z" level=info msg="Forcibly stopping sandbox \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\"" Jul 12 00:08:59.244813 sshd[6129]: pam_unix(sshd:session): session closed for user core Jul 12 00:08:59.263290 systemd[1]: sshd@13-172.31.18.25:22-139.178.89.65:53802.service: Deactivated successfully. Jul 12 00:08:59.273840 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 00:08:59.278624 systemd-logind[1993]: Session 11 logged out. Waiting for processes to exit. Jul 12 00:08:59.283404 systemd-logind[1993]: Removed session 11. Jul 12 00:08:59.503858 containerd[2015]: 2025-07-12 00:08:59.362 [WARNING][6239] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" WorkloadEndpoint="ip--172--31--18--25-k8s-whisker--98d66569b--k8l5m-eth0" Jul 12 00:08:59.503858 containerd[2015]: 2025-07-12 00:08:59.362 [INFO][6239] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Jul 12 00:08:59.503858 containerd[2015]: 2025-07-12 00:08:59.362 [INFO][6239] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" iface="eth0" netns="" Jul 12 00:08:59.503858 containerd[2015]: 2025-07-12 00:08:59.362 [INFO][6239] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Jul 12 00:08:59.503858 containerd[2015]: 2025-07-12 00:08:59.362 [INFO][6239] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Jul 12 00:08:59.503858 containerd[2015]: 2025-07-12 00:08:59.437 [INFO][6248] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" HandleID="k8s-pod-network.3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Workload="ip--172--31--18--25-k8s-whisker--98d66569b--k8l5m-eth0" Jul 12 00:08:59.503858 containerd[2015]: 2025-07-12 00:08:59.437 [INFO][6248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:59.503858 containerd[2015]: 2025-07-12 00:08:59.439 [INFO][6248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:59.503858 containerd[2015]: 2025-07-12 00:08:59.475 [WARNING][6248] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" HandleID="k8s-pod-network.3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Workload="ip--172--31--18--25-k8s-whisker--98d66569b--k8l5m-eth0" Jul 12 00:08:59.503858 containerd[2015]: 2025-07-12 00:08:59.475 [INFO][6248] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" HandleID="k8s-pod-network.3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Workload="ip--172--31--18--25-k8s-whisker--98d66569b--k8l5m-eth0" Jul 12 00:08:59.503858 containerd[2015]: 2025-07-12 00:08:59.480 [INFO][6248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:59.503858 containerd[2015]: 2025-07-12 00:08:59.489 [INFO][6239] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5" Jul 12 00:08:59.503858 containerd[2015]: time="2025-07-12T00:08:59.502696028Z" level=info msg="TearDown network for sandbox \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\" successfully" Jul 12 00:08:59.516866 containerd[2015]: time="2025-07-12T00:08:59.516165476Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:59.516866 containerd[2015]: time="2025-07-12T00:08:59.516270920Z" level=info msg="RemovePodSandbox \"3060ce9fd7bedb6252caeb937ad12c0ddbe4961af40dc799144bbf27a1b035d5\" returns successfully" Jul 12 00:08:59.518564 containerd[2015]: time="2025-07-12T00:08:59.518144972Z" level=info msg="StopPodSandbox for \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\"" Jul 12 00:08:59.672790 kubelet[3344]: I0712 00:08:59.671767 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-596f7dcbbd-zlhwz" podStartSLOduration=26.476816388 podStartE2EDuration="35.671742657s" podCreationTimestamp="2025-07-12 00:08:24 +0000 UTC" firstStartedPulling="2025-07-12 00:08:49.377960518 +0000 UTC m=+54.084802206" lastFinishedPulling="2025-07-12 00:08:58.572886799 +0000 UTC m=+63.279728475" observedRunningTime="2025-07-12 00:08:59.465727172 +0000 UTC m=+64.172568860" watchObservedRunningTime="2025-07-12 00:08:59.671742657 +0000 UTC m=+64.378584333" Jul 12 00:08:59.742199 containerd[2015]: 2025-07-12 00:08:59.642 [WARNING][6282] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0", GenerateName:"calico-apiserver-f6d8df55-", Namespace:"calico-apiserver", SelfLink:"", UID:"f1aedc09-28b8-4374-aef6-1a1d5f40a7ca", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f6d8df55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e", Pod:"calico-apiserver-f6d8df55-xspm9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif45ae8c1942", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:59.742199 containerd[2015]: 2025-07-12 00:08:59.642 [INFO][6282] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" Jul 12 00:08:59.742199 containerd[2015]: 2025-07-12 00:08:59.642 [INFO][6282] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" iface="eth0" netns="" Jul 12 00:08:59.742199 containerd[2015]: 2025-07-12 00:08:59.642 [INFO][6282] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" Jul 12 00:08:59.742199 containerd[2015]: 2025-07-12 00:08:59.642 [INFO][6282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" Jul 12 00:08:59.742199 containerd[2015]: 2025-07-12 00:08:59.711 [INFO][6293] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" HandleID="k8s-pod-network.d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:08:59.742199 containerd[2015]: 2025-07-12 00:08:59.711 [INFO][6293] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:59.742199 containerd[2015]: 2025-07-12 00:08:59.711 [INFO][6293] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:59.742199 containerd[2015]: 2025-07-12 00:08:59.732 [WARNING][6293] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" HandleID="k8s-pod-network.d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:08:59.742199 containerd[2015]: 2025-07-12 00:08:59.732 [INFO][6293] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" HandleID="k8s-pod-network.d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:08:59.742199 containerd[2015]: 2025-07-12 00:08:59.735 [INFO][6293] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:59.742199 containerd[2015]: 2025-07-12 00:08:59.738 [INFO][6282] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" Jul 12 00:08:59.743108 containerd[2015]: time="2025-07-12T00:08:59.742373313Z" level=info msg="TearDown network for sandbox \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\" successfully" Jul 12 00:08:59.743108 containerd[2015]: time="2025-07-12T00:08:59.742600209Z" level=info msg="StopPodSandbox for \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\" returns successfully" Jul 12 00:08:59.743906 containerd[2015]: time="2025-07-12T00:08:59.743544477Z" level=info msg="RemovePodSandbox for \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\"" Jul 12 00:08:59.743906 containerd[2015]: time="2025-07-12T00:08:59.743598885Z" level=info msg="Forcibly stopping sandbox \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\"" Jul 12 00:08:59.895704 containerd[2015]: 2025-07-12 00:08:59.819 [WARNING][6308] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0", GenerateName:"calico-apiserver-f6d8df55-", Namespace:"calico-apiserver", SelfLink:"", UID:"f1aedc09-28b8-4374-aef6-1a1d5f40a7ca", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f6d8df55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e", Pod:"calico-apiserver-f6d8df55-xspm9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif45ae8c1942", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:59.895704 containerd[2015]: 2025-07-12 00:08:59.820 [INFO][6308] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" Jul 12 00:08:59.895704 containerd[2015]: 2025-07-12 00:08:59.820 [INFO][6308] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" iface="eth0" netns="" Jul 12 00:08:59.895704 containerd[2015]: 2025-07-12 00:08:59.820 [INFO][6308] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" Jul 12 00:08:59.895704 containerd[2015]: 2025-07-12 00:08:59.820 [INFO][6308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" Jul 12 00:08:59.895704 containerd[2015]: 2025-07-12 00:08:59.873 [INFO][6315] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" HandleID="k8s-pod-network.d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:08:59.895704 containerd[2015]: 2025-07-12 00:08:59.874 [INFO][6315] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:59.895704 containerd[2015]: 2025-07-12 00:08:59.874 [INFO][6315] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:59.895704 containerd[2015]: 2025-07-12 00:08:59.887 [WARNING][6315] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" HandleID="k8s-pod-network.d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:08:59.895704 containerd[2015]: 2025-07-12 00:08:59.887 [INFO][6315] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" HandleID="k8s-pod-network.d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:08:59.895704 containerd[2015]: 2025-07-12 00:08:59.890 [INFO][6315] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:59.895704 containerd[2015]: 2025-07-12 00:08:59.893 [INFO][6308] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213" Jul 12 00:08:59.896754 containerd[2015]: time="2025-07-12T00:08:59.896577838Z" level=info msg="TearDown network for sandbox \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\" successfully" Jul 12 00:08:59.903992 containerd[2015]: time="2025-07-12T00:08:59.903911062Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:08:59.904978 containerd[2015]: time="2025-07-12T00:08:59.904009486Z" level=info msg="RemovePodSandbox \"d283e721680536faf7da0489e4054b0b456030c86cc63d36c27843b2a45cc213\" returns successfully" Jul 12 00:08:59.904978 containerd[2015]: time="2025-07-12T00:08:59.904652122Z" level=info msg="StopPodSandbox for \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\"" Jul 12 00:09:00.053085 containerd[2015]: 2025-07-12 00:08:59.980 [WARNING][6329] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0", GenerateName:"calico-apiserver-f6d8df55-", Namespace:"calico-apiserver", SelfLink:"", UID:"9395332f-6218-4a3b-9efb-4b6737b7fd9d", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f6d8df55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f", Pod:"calico-apiserver-f6d8df55-scvhw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd3f6ed32e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:00.053085 containerd[2015]: 2025-07-12 00:08:59.981 [INFO][6329] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Jul 12 00:09:00.053085 containerd[2015]: 2025-07-12 00:08:59.981 [INFO][6329] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" iface="eth0" netns="" Jul 12 00:09:00.053085 containerd[2015]: 2025-07-12 00:08:59.981 [INFO][6329] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Jul 12 00:09:00.053085 containerd[2015]: 2025-07-12 00:08:59.981 [INFO][6329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Jul 12 00:09:00.053085 containerd[2015]: 2025-07-12 00:09:00.028 [INFO][6336] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" HandleID="k8s-pod-network.42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" Jul 12 00:09:00.053085 containerd[2015]: 2025-07-12 00:09:00.028 [INFO][6336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:00.053085 containerd[2015]: 2025-07-12 00:09:00.028 [INFO][6336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:00.053085 containerd[2015]: 2025-07-12 00:09:00.042 [WARNING][6336] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" HandleID="k8s-pod-network.42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" Jul 12 00:09:00.053085 containerd[2015]: 2025-07-12 00:09:00.042 [INFO][6336] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" HandleID="k8s-pod-network.42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" Jul 12 00:09:00.053085 containerd[2015]: 2025-07-12 00:09:00.045 [INFO][6336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:00.053085 containerd[2015]: 2025-07-12 00:09:00.048 [INFO][6329] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Jul 12 00:09:00.053085 containerd[2015]: time="2025-07-12T00:09:00.053210035Z" level=info msg="TearDown network for sandbox \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\" successfully" Jul 12 00:09:00.053085 containerd[2015]: time="2025-07-12T00:09:00.053252407Z" level=info msg="StopPodSandbox for \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\" returns successfully" Jul 12 00:09:00.055657 containerd[2015]: time="2025-07-12T00:09:00.054724387Z" level=info msg="RemovePodSandbox for \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\"" Jul 12 00:09:00.055657 containerd[2015]: time="2025-07-12T00:09:00.054775951Z" level=info msg="Forcibly stopping sandbox \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\"" Jul 12 00:09:00.187888 containerd[2015]: 2025-07-12 00:09:00.119 [WARNING][6351] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0", GenerateName:"calico-apiserver-f6d8df55-", Namespace:"calico-apiserver", SelfLink:"", UID:"9395332f-6218-4a3b-9efb-4b6737b7fd9d", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f6d8df55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-25", ContainerID:"4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f", Pod:"calico-apiserver-f6d8df55-scvhw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd3f6ed32e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:09:00.187888 containerd[2015]: 2025-07-12 00:09:00.119 [INFO][6351] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Jul 12 00:09:00.187888 containerd[2015]: 2025-07-12 00:09:00.119 [INFO][6351] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" iface="eth0" netns="" Jul 12 00:09:00.187888 containerd[2015]: 2025-07-12 00:09:00.119 [INFO][6351] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Jul 12 00:09:00.187888 containerd[2015]: 2025-07-12 00:09:00.119 [INFO][6351] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Jul 12 00:09:00.187888 containerd[2015]: 2025-07-12 00:09:00.159 [INFO][6358] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" HandleID="k8s-pod-network.42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" Jul 12 00:09:00.187888 containerd[2015]: 2025-07-12 00:09:00.159 [INFO][6358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:00.187888 containerd[2015]: 2025-07-12 00:09:00.159 [INFO][6358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:00.187888 containerd[2015]: 2025-07-12 00:09:00.179 [WARNING][6358] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" HandleID="k8s-pod-network.42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" Jul 12 00:09:00.187888 containerd[2015]: 2025-07-12 00:09:00.179 [INFO][6358] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" HandleID="k8s-pod-network.42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" Jul 12 00:09:00.187888 containerd[2015]: 2025-07-12 00:09:00.182 [INFO][6358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:00.187888 containerd[2015]: 2025-07-12 00:09:00.184 [INFO][6351] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d" Jul 12 00:09:00.187888 containerd[2015]: time="2025-07-12T00:09:00.187797763Z" level=info msg="TearDown network for sandbox \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\" successfully" Jul 12 00:09:00.196066 containerd[2015]: time="2025-07-12T00:09:00.195989407Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:09:00.196297 containerd[2015]: time="2025-07-12T00:09:00.196092091Z" level=info msg="RemovePodSandbox \"42f5d6a184287c7d6a862e736b2579b893d1a582ec1dcb57a1aede86294e656d\" returns successfully" Jul 12 00:09:03.183312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount106057085.mount: Deactivated successfully. 
Jul 12 00:09:03.215530 containerd[2015]: time="2025-07-12T00:09:03.215039050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:03.217442 containerd[2015]: time="2025-07-12T00:09:03.217323166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 12 00:09:03.220949 containerd[2015]: time="2025-07-12T00:09:03.220669642Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:03.227494 containerd[2015]: time="2025-07-12T00:09:03.227337491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:03.229998 containerd[2015]: time="2025-07-12T00:09:03.229693955Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 4.655112516s" Jul 12 00:09:03.229998 containerd[2015]: time="2025-07-12T00:09:03.229775627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 12 00:09:03.233472 containerd[2015]: time="2025-07-12T00:09:03.232649927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:09:03.238213 containerd[2015]: time="2025-07-12T00:09:03.238078031Z" level=info msg="CreateContainer within sandbox \"88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 12 00:09:03.269180 containerd[2015]: time="2025-07-12T00:09:03.268630103Z" level=info msg="CreateContainer within sandbox \"88954e2e922ad028cf116821ddb3d17d1cc7ac3be4737654541fbf802817d48c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"8f6a2c1de3e82d5a7777d89cf4c886fce9d4e8bc4c7cee4e9e952dc00cabe2e7\"" Jul 12 00:09:03.273525 containerd[2015]: time="2025-07-12T00:09:03.271380371Z" level=info msg="StartContainer for \"8f6a2c1de3e82d5a7777d89cf4c886fce9d4e8bc4c7cee4e9e952dc00cabe2e7\"" Jul 12 00:09:03.351861 systemd[1]: Started cri-containerd-8f6a2c1de3e82d5a7777d89cf4c886fce9d4e8bc4c7cee4e9e952dc00cabe2e7.scope - libcontainer container 8f6a2c1de3e82d5a7777d89cf4c886fce9d4e8bc4c7cee4e9e952dc00cabe2e7. Jul 12 00:09:03.427798 containerd[2015]: time="2025-07-12T00:09:03.427636440Z" level=info msg="StartContainer for \"8f6a2c1de3e82d5a7777d89cf4c886fce9d4e8bc4c7cee4e9e952dc00cabe2e7\" returns successfully" Jul 12 00:09:04.289853 systemd[1]: Started sshd@14-172.31.18.25:22-139.178.89.65:42866.service - OpenSSH per-connection server daemon (139.178.89.65:42866). 
Jul 12 00:09:04.519010 sshd[6437]: Accepted publickey for core from 139.178.89.65 port 42866 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:04.527645 sshd[6437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:04.549635 systemd-logind[1993]: New session 12 of user core. Jul 12 00:09:04.562068 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 12 00:09:04.931215 sshd[6437]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:04.942725 systemd[1]: sshd@14-172.31.18.25:22-139.178.89.65:42866.service: Deactivated successfully. Jul 12 00:09:04.949141 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 00:09:04.955698 systemd-logind[1993]: Session 12 logged out. Waiting for processes to exit. Jul 12 00:09:04.980058 systemd[1]: Started sshd@15-172.31.18.25:22-139.178.89.65:42872.service - OpenSSH per-connection server daemon (139.178.89.65:42872). Jul 12 00:09:04.983799 systemd-logind[1993]: Removed session 12. Jul 12 00:09:05.201724 sshd[6455]: Accepted publickey for core from 139.178.89.65 port 42872 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:05.206697 sshd[6455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:05.219533 systemd-logind[1993]: New session 13 of user core. Jul 12 00:09:05.226775 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 12 00:09:05.713426 sshd[6455]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:05.734290 systemd[1]: sshd@15-172.31.18.25:22-139.178.89.65:42872.service: Deactivated successfully. Jul 12 00:09:05.747379 systemd[1]: session-13.scope: Deactivated successfully. Jul 12 00:09:05.754076 systemd-logind[1993]: Session 13 logged out. Waiting for processes to exit. Jul 12 00:09:05.785630 systemd[1]: Started sshd@16-172.31.18.25:22-139.178.89.65:42888.service - OpenSSH per-connection server daemon (139.178.89.65:42888). Jul 12 00:09:05.790914 systemd-logind[1993]: Removed session 13. Jul 12 00:09:05.995708 sshd[6466]: Accepted publickey for core from 139.178.89.65 port 42888 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:05.999348 sshd[6466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:06.014574 systemd-logind[1993]: New session 14 of user core. Jul 12 00:09:06.020813 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 12 00:09:06.391827 sshd[6466]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:06.404429 systemd[1]: sshd@16-172.31.18.25:22-139.178.89.65:42888.service: Deactivated successfully. Jul 12 00:09:06.405591 systemd-logind[1993]: Session 14 logged out. Waiting for processes to exit. Jul 12 00:09:06.412472 systemd[1]: session-14.scope: Deactivated successfully. Jul 12 00:09:06.417611 systemd-logind[1993]: Removed session 14. 
Jul 12 00:09:06.632012 containerd[2015]: time="2025-07-12T00:09:06.631918635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:06.634140 containerd[2015]: time="2025-07-12T00:09:06.634039275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 12 00:09:06.636804 containerd[2015]: time="2025-07-12T00:09:06.636683127Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:06.642702 containerd[2015]: time="2025-07-12T00:09:06.642276351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:06.644519 containerd[2015]: time="2025-07-12T00:09:06.644269131Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 3.411545824s" Jul 12 00:09:06.644519 containerd[2015]: time="2025-07-12T00:09:06.644339655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:09:06.647491 containerd[2015]: time="2025-07-12T00:09:06.647189415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:09:06.651289 containerd[2015]: time="2025-07-12T00:09:06.650892976Z" level=info msg="CreateContainer within sandbox \"3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:09:06.680320 containerd[2015]: time="2025-07-12T00:09:06.680246572Z" level=info msg="CreateContainer within sandbox \"3d302a88f88d7458a767731d55a5d1496b472cd099f09a06c8a6e8cde3f1eaa7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"87c7edd41f345d149e849a7ebbec6003913c46c528ef5720d39c38e15f4bad71\"" Jul 12 00:09:06.681345 containerd[2015]: time="2025-07-12T00:09:06.681281440Z" level=info msg="StartContainer for \"87c7edd41f345d149e849a7ebbec6003913c46c528ef5720d39c38e15f4bad71\"" Jul 12 00:09:06.771043 systemd[1]: Started cri-containerd-87c7edd41f345d149e849a7ebbec6003913c46c528ef5720d39c38e15f4bad71.scope - libcontainer container 87c7edd41f345d149e849a7ebbec6003913c46c528ef5720d39c38e15f4bad71. 
Jul 12 00:09:06.847493 containerd[2015]: time="2025-07-12T00:09:06.847281208Z" level=info msg="StartContainer for \"87c7edd41f345d149e849a7ebbec6003913c46c528ef5720d39c38e15f4bad71\" returns successfully" Jul 12 00:09:06.961476 containerd[2015]: time="2025-07-12T00:09:06.961262885Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:06.963635 containerd[2015]: time="2025-07-12T00:09:06.963354677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 12 00:09:06.971478 containerd[2015]: time="2025-07-12T00:09:06.971350421Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 324.087002ms" Jul 12 00:09:06.971478 containerd[2015]: time="2025-07-12T00:09:06.971467193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:09:06.974799 containerd[2015]: time="2025-07-12T00:09:06.974703137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:09:06.975905 containerd[2015]: time="2025-07-12T00:09:06.975828485Z" level=info msg="CreateContainer within sandbox \"4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:09:07.014799 containerd[2015]: time="2025-07-12T00:09:07.014724865Z" level=info msg="CreateContainer within sandbox \"4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"08f83fb1d3d7dccc5a9fc5c46591db476a1102aceb71396eda39643db71f73ae\"" Jul 12 00:09:07.025283 containerd[2015]: time="2025-07-12T00:09:07.025197601Z" level=info msg="StartContainer for \"08f83fb1d3d7dccc5a9fc5c46591db476a1102aceb71396eda39643db71f73ae\"" Jul 12 00:09:07.091210 systemd[1]: Started cri-containerd-08f83fb1d3d7dccc5a9fc5c46591db476a1102aceb71396eda39643db71f73ae.scope - libcontainer container 08f83fb1d3d7dccc5a9fc5c46591db476a1102aceb71396eda39643db71f73ae. 
Jul 12 00:09:07.210830 containerd[2015]: time="2025-07-12T00:09:07.210751046Z" level=info msg="StartContainer for \"08f83fb1d3d7dccc5a9fc5c46591db476a1102aceb71396eda39643db71f73ae\" returns successfully" Jul 12 00:09:07.318855 containerd[2015]: time="2025-07-12T00:09:07.315851823Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:07.325608 containerd[2015]: time="2025-07-12T00:09:07.325543131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 12 00:09:07.350625 containerd[2015]: time="2025-07-12T00:09:07.350274435Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 375.482198ms" Jul 12 00:09:07.350625 containerd[2015]: time="2025-07-12T00:09:07.350347347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:09:07.358467 containerd[2015]: time="2025-07-12T00:09:07.357811335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 12 00:09:07.370065 containerd[2015]: time="2025-07-12T00:09:07.369844335Z" level=info msg="CreateContainer within sandbox \"12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:09:07.399002 containerd[2015]: time="2025-07-12T00:09:07.398922135Z" level=info msg="CreateContainer within sandbox \"12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0f90b6438a57a8b13f5d40ffa5cffdcb9518b5f9935b3bf1cc264192771c7742\"" Jul 12 00:09:07.404484 containerd[2015]: time="2025-07-12T00:09:07.401782839Z" level=info msg="StartContainer for \"0f90b6438a57a8b13f5d40ffa5cffdcb9518b5f9935b3bf1cc264192771c7742\"" Jul 12 00:09:07.480780 systemd[1]: Started cri-containerd-0f90b6438a57a8b13f5d40ffa5cffdcb9518b5f9935b3bf1cc264192771c7742.scope - libcontainer container 0f90b6438a57a8b13f5d40ffa5cffdcb9518b5f9935b3bf1cc264192771c7742. 
Jul 12 00:09:07.521797 kubelet[3344]: I0712 00:09:07.521681 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-66dc8f8c8d-fx6zl" podStartSLOduration=5.599004395 podStartE2EDuration="23.521656384s" podCreationTimestamp="2025-07-12 00:08:44 +0000 UTC" firstStartedPulling="2025-07-12 00:08:45.309501246 +0000 UTC m=+50.016342994" lastFinishedPulling="2025-07-12 00:09:03.232153211 +0000 UTC m=+67.938994983" observedRunningTime="2025-07-12 00:09:03.47359572 +0000 UTC m=+68.180437576" watchObservedRunningTime="2025-07-12 00:09:07.521656384 +0000 UTC m=+72.228498072" Jul 12 00:09:07.568076 kubelet[3344]: I0712 00:09:07.567367 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55ff68f59d-twxd6" podStartSLOduration=30.54257214 podStartE2EDuration="47.567338224s" podCreationTimestamp="2025-07-12 00:08:20 +0000 UTC" firstStartedPulling="2025-07-12 00:08:49.621901391 +0000 UTC m=+54.328743079" lastFinishedPulling="2025-07-12 00:09:06.646667475 +0000 UTC m=+71.353509163" observedRunningTime="2025-07-12 00:09:07.566699536 +0000 UTC m=+72.273541224" watchObservedRunningTime="2025-07-12 00:09:07.567338224 +0000 UTC m=+72.274180008" Jul 12 00:09:07.572658 kubelet[3344]: I0712 00:09:07.571286 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f6d8df55-scvhw" podStartSLOduration=37.532879487 podStartE2EDuration="54.570562276s" podCreationTimestamp="2025-07-12 00:08:13 +0000 UTC" firstStartedPulling="2025-07-12 00:08:49.934991424 +0000 UTC m=+54.641833112" lastFinishedPulling="2025-07-12 00:09:06.972674129 +0000 UTC m=+71.679515901" observedRunningTime="2025-07-12 00:09:07.525809968 +0000 UTC m=+72.232651656" watchObservedRunningTime="2025-07-12 00:09:07.570562276 +0000 UTC m=+72.277403964" Jul 12 00:09:07.747072 containerd[2015]: time="2025-07-12T00:09:07.746877689Z" level=info msg="StartContainer for \"0f90b6438a57a8b13f5d40ffa5cffdcb9518b5f9935b3bf1cc264192771c7742\" returns successfully" Jul 12 00:09:08.868134 containerd[2015]: time="2025-07-12T00:09:08.867562795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:08.870568 containerd[2015]: time="2025-07-12T00:09:08.870505123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 12 00:09:08.873047 containerd[2015]: time="2025-07-12T00:09:08.872938651Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:08.879693 containerd[2015]: time="2025-07-12T00:09:08.879629203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:08.884575 containerd[2015]: time="2025-07-12T00:09:08.883382227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.525482272s" Jul 12 00:09:08.884575 containerd[2015]: time="2025-07-12T00:09:08.883549063Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 12 00:09:08.891203 containerd[2015]: time="2025-07-12T00:09:08.890264707Z" level=info msg="CreateContainer within sandbox \"6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 12 00:09:08.947706 containerd[2015]: time="2025-07-12T00:09:08.945436843Z" level=info msg="CreateContainer within sandbox \"6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8c77784f72cb0bdba997c4078af3f0a47ba5cc979f9b5205fb8fccf3ec5fdce6\"" Jul 12 00:09:08.951023 containerd[2015]: time="2025-07-12T00:09:08.949718923Z" level=info msg="StartContainer for \"8c77784f72cb0bdba997c4078af3f0a47ba5cc979f9b5205fb8fccf3ec5fdce6\"" Jul 12 00:09:08.956613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1659504724.mount: Deactivated successfully. Jul 12 00:09:09.060777 systemd[1]: Started cri-containerd-8c77784f72cb0bdba997c4078af3f0a47ba5cc979f9b5205fb8fccf3ec5fdce6.scope - libcontainer container 8c77784f72cb0bdba997c4078af3f0a47ba5cc979f9b5205fb8fccf3ec5fdce6. Jul 12 00:09:09.243094 containerd[2015]: time="2025-07-12T00:09:09.242068120Z" level=info msg="StartContainer for \"8c77784f72cb0bdba997c4078af3f0a47ba5cc979f9b5205fb8fccf3ec5fdce6\" returns successfully" Jul 12 00:09:09.246486 containerd[2015]: time="2025-07-12T00:09:09.245687428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 12 00:09:09.528154 kubelet[3344]: I0712 00:09:09.527579 3344 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:09:09.528882 kubelet[3344]: I0712 00:09:09.528245 3344 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:09:10.564652 kubelet[3344]: I0712 00:09:10.562383 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f6d8df55-xspm9" podStartSLOduration=40.708050079 podStartE2EDuration="57.562355575s" podCreationTimestamp="2025-07-12 00:08:13 +0000 UTC" firstStartedPulling="2025-07-12 00:08:50.501813491 +0000 UTC m=+55.208655179" lastFinishedPulling="2025-07-12 00:09:07.356118999 +0000 UTC m=+72.062960675" observedRunningTime="2025-07-12 00:09:08.562005389 +0000 UTC m=+73.268847089" watchObservedRunningTime="2025-07-12 00:09:10.562355575 +0000 UTC m=+75.269197275" Jul 12 00:09:10.746680 kubelet[3344]: I0712 00:09:10.746612 3344 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:09:10.750657 containerd[2015]: time="2025-07-12T00:09:10.749491160Z" level=info msg="StopContainer for \"0f90b6438a57a8b13f5d40ffa5cffdcb9518b5f9935b3bf1cc264192771c7742\" with timeout 30 (s)" Jul 12 00:09:10.750657 containerd[2015]: time="2025-07-12T00:09:10.749992208Z" level=info msg="Stop container \"0f90b6438a57a8b13f5d40ffa5cffdcb9518b5f9935b3bf1cc264192771c7742\" with signal terminated" Jul 12 00:09:10.930799 systemd[1]: cri-containerd-0f90b6438a57a8b13f5d40ffa5cffdcb9518b5f9935b3bf1cc264192771c7742.scope: Deactivated successfully. Jul 12 00:09:10.931238 systemd[1]: cri-containerd-0f90b6438a57a8b13f5d40ffa5cffdcb9518b5f9935b3bf1cc264192771c7742.scope: Consumed 1.261s CPU time. 
Jul 12 00:09:11.075767 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f90b6438a57a8b13f5d40ffa5cffdcb9518b5f9935b3bf1cc264192771c7742-rootfs.mount: Deactivated successfully. Jul 12 00:09:11.082832 containerd[2015]: time="2025-07-12T00:09:11.082355454Z" level=info msg="shim disconnected" id=0f90b6438a57a8b13f5d40ffa5cffdcb9518b5f9935b3bf1cc264192771c7742 namespace=k8s.io Jul 12 00:09:11.083283 containerd[2015]: time="2025-07-12T00:09:11.083131206Z" level=warning msg="cleaning up after shim disconnected" id=0f90b6438a57a8b13f5d40ffa5cffdcb9518b5f9935b3bf1cc264192771c7742 namespace=k8s.io Jul 12 00:09:11.083815 containerd[2015]: time="2025-07-12T00:09:11.083167374Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:09:11.291727 containerd[2015]: time="2025-07-12T00:09:11.291564847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:11.297671 containerd[2015]: time="2025-07-12T00:09:11.297546319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 12 00:09:11.300198 containerd[2015]: time="2025-07-12T00:09:11.300103363Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:11.309694 containerd[2015]: time="2025-07-12T00:09:11.309616399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:09:11.313609 containerd[2015]: time="2025-07-12T00:09:11.313525159Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 2.067772907s" Jul 12 00:09:11.313609 containerd[2015]: time="2025-07-12T00:09:11.313600099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 12 00:09:11.319630 containerd[2015]: time="2025-07-12T00:09:11.319559431Z" level=info msg="CreateContainer within sandbox \"6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 12 00:09:11.357049 containerd[2015]: time="2025-07-12T00:09:11.356976679Z" level=info msg="StopContainer for \"0f90b6438a57a8b13f5d40ffa5cffdcb9518b5f9935b3bf1cc264192771c7742\" returns successfully" Jul 12 00:09:11.358480 containerd[2015]: time="2025-07-12T00:09:11.357752095Z" level=info msg="StopPodSandbox for \"12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e\"" Jul 12 00:09:11.358480 containerd[2015]: time="2025-07-12T00:09:11.357839035Z" level=info msg="Container to stop \"0f90b6438a57a8b13f5d40ffa5cffdcb9518b5f9935b3bf1cc264192771c7742\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:09:11.370124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1567244290.mount: Deactivated successfully. 
Jul 12 00:09:11.370771 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e-shm.mount: Deactivated successfully. Jul 12 00:09:11.400986 containerd[2015]: time="2025-07-12T00:09:11.400924507Z" level=info msg="CreateContainer within sandbox \"6d178240bbf7778a1b24de93c805fd6582918c366aeaba6fd67b116ab9cdcea5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b6a8a1b9a16f3829b3c12dfc45b0c16473b1268abb09fdca27f456e0c9bfd7cc\"" Jul 12 00:09:11.411532 containerd[2015]: time="2025-07-12T00:09:11.406427791Z" level=info msg="StartContainer for \"b6a8a1b9a16f3829b3c12dfc45b0c16473b1268abb09fdca27f456e0c9bfd7cc\"" Jul 12 00:09:11.437262 systemd[1]: Started sshd@17-172.31.18.25:22-139.178.89.65:45706.service - OpenSSH per-connection server daemon (139.178.89.65:45706). Jul 12 00:09:11.510986 systemd[1]: cri-containerd-12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e.scope: Deactivated successfully. Jul 12 00:09:11.548835 systemd[1]: Started cri-containerd-b6a8a1b9a16f3829b3c12dfc45b0c16473b1268abb09fdca27f456e0c9bfd7cc.scope - libcontainer container b6a8a1b9a16f3829b3c12dfc45b0c16473b1268abb09fdca27f456e0c9bfd7cc. Jul 12 00:09:11.688098 containerd[2015]: time="2025-07-12T00:09:11.687347961Z" level=info msg="shim disconnected" id=12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e namespace=k8s.io Jul 12 00:09:11.688098 containerd[2015]: time="2025-07-12T00:09:11.687437829Z" level=warning msg="cleaning up after shim disconnected" id=12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e namespace=k8s.io Jul 12 00:09:11.688098 containerd[2015]: time="2025-07-12T00:09:11.687480933Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:09:11.708264 sshd[6706]: Accepted publickey for core from 139.178.89.65 port 45706 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:11.718329 sshd[6706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:11.738130 systemd-logind[1993]: New session 15 of user core. Jul 12 00:09:11.776178 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 12 00:09:11.945177 systemd-networkd[1936]: calif45ae8c1942: Link DOWN Jul 12 00:09:11.945202 systemd-networkd[1936]: calif45ae8c1942: Lost carrier Jul 12 00:09:12.077713 systemd[1]: run-containerd-runc-k8s.io-b6a8a1b9a16f3829b3c12dfc45b0c16473b1268abb09fdca27f456e0c9bfd7cc-runc.7S0dYj.mount: Deactivated successfully. Jul 12 00:09:12.078005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e-rootfs.mount: Deactivated successfully. Jul 12 00:09:12.199931 sshd[6706]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:12.209804 systemd[1]: sshd@17-172.31.18.25:22-139.178.89.65:45706.service: Deactivated successfully. Jul 12 00:09:12.216015 systemd[1]: session-15.scope: Deactivated successfully. Jul 12 00:09:12.226420 systemd-logind[1993]: Session 15 logged out. Waiting for processes to exit. Jul 12 00:09:12.229161 systemd-logind[1993]: Removed session 15. Jul 12 00:09:12.282812 containerd[2015]: 2025-07-12 00:09:11.938 [INFO][6767] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Jul 12 00:09:12.282812 containerd[2015]: 2025-07-12 00:09:11.939 [INFO][6767] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" iface="eth0" netns="/var/run/netns/cni-65793675-e86d-fa85-00d7-3083ef1e17a9" Jul 12 00:09:12.282812 containerd[2015]: 2025-07-12 00:09:11.942 [INFO][6767] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" iface="eth0" netns="/var/run/netns/cni-65793675-e86d-fa85-00d7-3083ef1e17a9" Jul 12 00:09:12.282812 containerd[2015]: 2025-07-12 00:09:11.961 [INFO][6767] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" after=21.154536ms iface="eth0" netns="/var/run/netns/cni-65793675-e86d-fa85-00d7-3083ef1e17a9" Jul 12 00:09:12.282812 containerd[2015]: 2025-07-12 00:09:11.961 [INFO][6767] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Jul 12 00:09:12.282812 containerd[2015]: 2025-07-12 00:09:11.961 [INFO][6767] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Jul 12 00:09:12.282812 containerd[2015]: 2025-07-12 00:09:12.124 [INFO][6785] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" HandleID="k8s-pod-network.12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:09:12.282812 containerd[2015]: 2025-07-12 00:09:12.124 [INFO][6785] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:12.282812 containerd[2015]: 2025-07-12 00:09:12.125 [INFO][6785] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:09:12.282812 containerd[2015]: 2025-07-12 00:09:12.256 [INFO][6785] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" HandleID="k8s-pod-network.12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:09:12.282812 containerd[2015]: 2025-07-12 00:09:12.256 [INFO][6785] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" HandleID="k8s-pod-network.12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:09:12.282812 containerd[2015]: 2025-07-12 00:09:12.263 [INFO][6785] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:12.282812 containerd[2015]: 2025-07-12 00:09:12.269 [INFO][6767] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Jul 12 00:09:12.293026 containerd[2015]: time="2025-07-12T00:09:12.285848120Z" level=info msg="TearDown network for sandbox \"12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e\" successfully" Jul 12 00:09:12.293026 containerd[2015]: time="2025-07-12T00:09:12.285897932Z" level=info msg="StopPodSandbox for \"12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e\" returns successfully" Jul 12 00:09:12.291577 systemd[1]: run-netns-cni\x2d65793675\x2de86d\x2dfa85\x2d00d7\x2d3083ef1e17a9.mount: Deactivated successfully. 
Jul 12 00:09:12.370334 containerd[2015]: time="2025-07-12T00:09:12.370276400Z" level=info msg="StartContainer for \"b6a8a1b9a16f3829b3c12dfc45b0c16473b1268abb09fdca27f456e0c9bfd7cc\" returns successfully" Jul 12 00:09:12.383636 kubelet[3344]: I0712 00:09:12.382008 3344 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f1aedc09-28b8-4374-aef6-1a1d5f40a7ca-calico-apiserver-certs\") pod \"f1aedc09-28b8-4374-aef6-1a1d5f40a7ca\" (UID: \"f1aedc09-28b8-4374-aef6-1a1d5f40a7ca\") " Jul 12 00:09:12.383636 kubelet[3344]: I0712 00:09:12.382248 3344 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n46tv\" (UniqueName: \"kubernetes.io/projected/f1aedc09-28b8-4374-aef6-1a1d5f40a7ca-kube-api-access-n46tv\") pod \"f1aedc09-28b8-4374-aef6-1a1d5f40a7ca\" (UID: \"f1aedc09-28b8-4374-aef6-1a1d5f40a7ca\") " Jul 12 00:09:12.401979 systemd[1]: var-lib-kubelet-pods-f1aedc09\x2d28b8\x2d4374\x2daef6\x2d1a1d5f40a7ca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn46tv.mount: Deactivated successfully. Jul 12 00:09:12.413560 kubelet[3344]: I0712 00:09:12.412786 3344 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1aedc09-28b8-4374-aef6-1a1d5f40a7ca-kube-api-access-n46tv" (OuterVolumeSpecName: "kube-api-access-n46tv") pod "f1aedc09-28b8-4374-aef6-1a1d5f40a7ca" (UID: "f1aedc09-28b8-4374-aef6-1a1d5f40a7ca"). InnerVolumeSpecName "kube-api-access-n46tv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:09:12.417735 kubelet[3344]: I0712 00:09:12.416125 3344 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1aedc09-28b8-4374-aef6-1a1d5f40a7ca-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "f1aedc09-28b8-4374-aef6-1a1d5f40a7ca" (UID: "f1aedc09-28b8-4374-aef6-1a1d5f40a7ca"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:09:12.418505 systemd[1]: var-lib-kubelet-pods-f1aedc09\x2d28b8\x2d4374\x2daef6\x2d1a1d5f40a7ca-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Jul 12 00:09:12.483354 kubelet[3344]: I0712 00:09:12.483176 3344 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f1aedc09-28b8-4374-aef6-1a1d5f40a7ca-calico-apiserver-certs\") on node \"ip-172-31-18-25\" DevicePath \"\"" Jul 12 00:09:12.483354 kubelet[3344]: I0712 00:09:12.483240 3344 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n46tv\" (UniqueName: \"kubernetes.io/projected/f1aedc09-28b8-4374-aef6-1a1d5f40a7ca-kube-api-access-n46tv\") on node \"ip-172-31-18-25\" DevicePath \"\"" Jul 12 00:09:12.615639 kubelet[3344]: I0712 00:09:12.615561 3344 scope.go:117] "RemoveContainer" containerID="0f90b6438a57a8b13f5d40ffa5cffdcb9518b5f9935b3bf1cc264192771c7742" Jul 12 00:09:12.630357 containerd[2015]: time="2025-07-12T00:09:12.629905869Z" level=info msg="RemoveContainer for \"0f90b6438a57a8b13f5d40ffa5cffdcb9518b5f9935b3bf1cc264192771c7742\"" Jul 12 00:09:12.642083 containerd[2015]: time="2025-07-12T00:09:12.641903241Z" level=info msg="RemoveContainer for \"0f90b6438a57a8b13f5d40ffa5cffdcb9518b5f9935b3bf1cc264192771c7742\" returns successfully" Jul 12 00:09:12.648108 systemd[1]: Removed slice kubepods-besteffort-podf1aedc09_28b8_4374_aef6_1a1d5f40a7ca.slice - libcontainer container kubepods-besteffort-podf1aedc09_28b8_4374_aef6_1a1d5f40a7ca.slice. Jul 12 00:09:12.649433 systemd[1]: kubepods-besteffort-podf1aedc09_28b8_4374_aef6_1a1d5f40a7ca.slice: Consumed 1.301s CPU time. Jul 12 00:09:12.682632 kubelet[3344]: I0712 00:09:12.682523 3344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-748mg" podStartSLOduration=28.945597279 podStartE2EDuration="48.682497657s" podCreationTimestamp="2025-07-12 00:08:24 +0000 UTC" firstStartedPulling="2025-07-12 00:08:51.579286921 +0000 UTC m=+56.286128609" lastFinishedPulling="2025-07-12 00:09:11.316187299 +0000 UTC m=+76.023028987" observedRunningTime="2025-07-12 00:09:12.678080853 +0000 UTC m=+77.384922637" watchObservedRunningTime="2025-07-12 00:09:12.682497657 +0000 UTC m=+77.389339417" Jul 12 00:09:12.855725 kubelet[3344]: I0712 00:09:12.855673 3344 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 12 00:09:12.855725 kubelet[3344]: I0712 00:09:12.855732 3344 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 12 00:09:13.618728 kubelet[3344]: I0712 00:09:13.618607 3344 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1aedc09-28b8-4374-aef6-1a1d5f40a7ca" path="/var/lib/kubelet/pods/f1aedc09-28b8-4374-aef6-1a1d5f40a7ca/volumes" Jul 12 00:09:14.125628 systemd[1]: run-containerd-runc-k8s.io-bb42760c5439fb23f1def3b250a130200888e18f9856183fb37b2cd287038814-runc.LtCqEz.mount: Deactivated successfully. Jul 12 00:09:15.267042 ntpd[1986]: Deleting interface #16 calif45ae8c1942, fe80::ecee:eeff:feee:eeee%13#123, interface stats: received=0, sent=0, dropped=0, active_time=20 secs Jul 12 00:09:15.267658 ntpd[1986]: 12 Jul 00:09:15 ntpd[1986]: Deleting interface #16 calif45ae8c1942, fe80::ecee:eeff:feee:eeee%13#123, interface stats: received=0, sent=0, dropped=0, active_time=20 secs Jul 12 00:09:17.249180 systemd[1]: Started sshd@18-172.31.18.25:22-139.178.89.65:45708.service - OpenSSH per-connection server daemon (139.178.89.65:45708). 
Jul 12 00:09:17.430473 sshd[6852]: Accepted publickey for core from 139.178.89.65 port 45708 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:17.434422 sshd[6852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:17.448333 systemd-logind[1993]: New session 16 of user core. Jul 12 00:09:17.457956 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 12 00:09:17.799616 sshd[6852]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:17.812506 systemd[1]: sshd@18-172.31.18.25:22-139.178.89.65:45708.service: Deactivated successfully. Jul 12 00:09:17.819377 systemd[1]: session-16.scope: Deactivated successfully. Jul 12 00:09:17.824421 systemd-logind[1993]: Session 16 logged out. Waiting for processes to exit. Jul 12 00:09:17.828299 systemd-logind[1993]: Removed session 16. Jul 12 00:09:22.840005 systemd[1]: Started sshd@19-172.31.18.25:22-139.178.89.65:60716.service - OpenSSH per-connection server daemon (139.178.89.65:60716). Jul 12 00:09:23.019846 sshd[6870]: Accepted publickey for core from 139.178.89.65 port 60716 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:23.022816 sshd[6870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:23.032314 systemd-logind[1993]: New session 17 of user core. Jul 12 00:09:23.037721 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 12 00:09:23.373837 sshd[6870]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:23.380598 systemd[1]: sshd@19-172.31.18.25:22-139.178.89.65:60716.service: Deactivated successfully. Jul 12 00:09:23.388440 systemd[1]: session-17.scope: Deactivated successfully. Jul 12 00:09:23.390499 systemd-logind[1993]: Session 17 logged out. Waiting for processes to exit. Jul 12 00:09:23.393402 systemd-logind[1993]: Removed session 17. Jul 12 00:09:28.413057 systemd[1]: Started sshd@20-172.31.18.25:22-139.178.89.65:60732.service - OpenSSH per-connection server daemon (139.178.89.65:60732). Jul 12 00:09:28.606934 sshd[6907]: Accepted publickey for core from 139.178.89.65 port 60732 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:28.614169 sshd[6907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:28.630370 systemd-logind[1993]: New session 18 of user core. Jul 12 00:09:28.638806 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 12 00:09:28.921048 sshd[6907]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:28.929167 systemd[1]: sshd@20-172.31.18.25:22-139.178.89.65:60732.service: Deactivated successfully. Jul 12 00:09:28.933233 systemd[1]: session-18.scope: Deactivated successfully. Jul 12 00:09:28.935281 systemd-logind[1993]: Session 18 logged out. Waiting for processes to exit. Jul 12 00:09:28.938768 systemd-logind[1993]: Removed session 18. Jul 12 00:09:28.962056 systemd[1]: Started sshd@21-172.31.18.25:22-139.178.89.65:60736.service - OpenSSH per-connection server daemon (139.178.89.65:60736). Jul 12 00:09:29.143495 sshd[6920]: Accepted publickey for core from 139.178.89.65 port 60736 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:29.144794 sshd[6920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:29.155563 systemd-logind[1993]: New session 19 of user core. Jul 12 00:09:29.166808 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 12 00:09:29.839376 sshd[6920]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:29.851359 systemd[1]: sshd@21-172.31.18.25:22-139.178.89.65:60736.service: Deactivated successfully. Jul 12 00:09:29.860682 systemd[1]: session-19.scope: Deactivated successfully. Jul 12 00:09:29.864155 systemd-logind[1993]: Session 19 logged out. Waiting for processes to exit. Jul 12 00:09:29.895116 systemd[1]: Started sshd@22-172.31.18.25:22-139.178.89.65:49090.service - OpenSSH per-connection server daemon (139.178.89.65:49090). Jul 12 00:09:29.900111 systemd-logind[1993]: Removed session 19. Jul 12 00:09:30.092316 sshd[6956]: Accepted publickey for core from 139.178.89.65 port 49090 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:30.095542 sshd[6956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:30.105784 systemd-logind[1993]: New session 20 of user core. Jul 12 00:09:30.111731 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 12 00:09:31.304389 sshd[6956]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:31.315478 systemd[1]: sshd@22-172.31.18.25:22-139.178.89.65:49090.service: Deactivated successfully. Jul 12 00:09:31.324496 systemd[1]: session-20.scope: Deactivated successfully. Jul 12 00:09:31.332590 systemd-logind[1993]: Session 20 logged out. Waiting for processes to exit. Jul 12 00:09:31.359072 systemd[1]: Started sshd@23-172.31.18.25:22-139.178.89.65:49106.service - OpenSSH per-connection server daemon (139.178.89.65:49106). Jul 12 00:09:31.361730 systemd-logind[1993]: Removed session 20. Jul 12 00:09:31.546232 sshd[6980]: Accepted publickey for core from 139.178.89.65 port 49106 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:31.549105 sshd[6980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:31.557394 systemd-logind[1993]: New session 21 of user core. Jul 12 00:09:31.567764 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 12 00:09:32.116380 sshd[6980]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:32.125241 systemd[1]: sshd@23-172.31.18.25:22-139.178.89.65:49106.service: Deactivated successfully. Jul 12 00:09:32.130354 systemd[1]: session-21.scope: Deactivated successfully. Jul 12 00:09:32.132986 systemd-logind[1993]: Session 21 logged out. Waiting for processes to exit. Jul 12 00:09:32.135046 systemd-logind[1993]: Removed session 21. Jul 12 00:09:32.153047 systemd[1]: Started sshd@24-172.31.18.25:22-139.178.89.65:49116.service - OpenSSH per-connection server daemon (139.178.89.65:49116). Jul 12 00:09:32.326114 sshd[6991]: Accepted publickey for core from 139.178.89.65 port 49116 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:32.330028 sshd[6991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:32.343936 systemd-logind[1993]: New session 22 of user core. Jul 12 00:09:32.352798 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 12 00:09:32.612896 sshd[6991]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:32.620687 systemd-logind[1993]: Session 22 logged out. Waiting for processes to exit. Jul 12 00:09:32.621816 systemd[1]: sshd@24-172.31.18.25:22-139.178.89.65:49116.service: Deactivated successfully. Jul 12 00:09:32.625824 systemd[1]: session-22.scope: Deactivated successfully. Jul 12 00:09:32.629608 systemd-logind[1993]: Removed session 22. 
Jul 12 00:09:37.660013 systemd[1]: Started sshd@25-172.31.18.25:22-139.178.89.65:49132.service - OpenSSH per-connection server daemon (139.178.89.65:49132). Jul 12 00:09:37.838352 sshd[7006]: Accepted publickey for core from 139.178.89.65 port 49132 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:37.841281 sshd[7006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:37.849918 systemd-logind[1993]: New session 23 of user core. Jul 12 00:09:37.856794 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 12 00:09:38.113197 sshd[7006]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:38.119803 systemd[1]: sshd@25-172.31.18.25:22-139.178.89.65:49132.service: Deactivated successfully. Jul 12 00:09:38.124723 systemd[1]: session-23.scope: Deactivated successfully. Jul 12 00:09:38.126194 systemd-logind[1993]: Session 23 logged out. Waiting for processes to exit. Jul 12 00:09:38.128793 systemd-logind[1993]: Removed session 23. Jul 12 00:09:43.158007 systemd[1]: Started sshd@26-172.31.18.25:22-139.178.89.65:42944.service - OpenSSH per-connection server daemon (139.178.89.65:42944). Jul 12 00:09:43.337881 sshd[7021]: Accepted publickey for core from 139.178.89.65 port 42944 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:43.342382 sshd[7021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:43.351567 systemd-logind[1993]: New session 24 of user core. Jul 12 00:09:43.358800 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 12 00:09:43.642438 sshd[7021]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:43.653265 systemd[1]: sshd@26-172.31.18.25:22-139.178.89.65:42944.service: Deactivated successfully. Jul 12 00:09:43.657687 systemd[1]: session-24.scope: Deactivated successfully. Jul 12 00:09:43.659688 systemd-logind[1993]: Session 24 logged out. Waiting for processes to exit. Jul 12 00:09:43.664380 systemd-logind[1993]: Removed session 24. Jul 12 00:09:48.685966 systemd[1]: Started sshd@27-172.31.18.25:22-139.178.89.65:42960.service - OpenSSH per-connection server daemon (139.178.89.65:42960). Jul 12 00:09:48.885638 sshd[7058]: Accepted publickey for core from 139.178.89.65 port 42960 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:48.889109 sshd[7058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:48.901698 systemd-logind[1993]: New session 25 of user core. Jul 12 00:09:48.910040 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 12 00:09:49.177880 sshd[7058]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:49.188351 systemd[1]: sshd@27-172.31.18.25:22-139.178.89.65:42960.service: Deactivated successfully. Jul 12 00:09:49.196405 systemd[1]: session-25.scope: Deactivated successfully. Jul 12 00:09:49.203003 systemd-logind[1993]: Session 25 logged out. Waiting for processes to exit. Jul 12 00:09:49.207205 systemd-logind[1993]: Removed session 25. Jul 12 00:09:54.221187 systemd[1]: Started sshd@28-172.31.18.25:22-139.178.89.65:58966.service - OpenSSH per-connection server daemon (139.178.89.65:58966). 
Jul 12 00:09:54.414295 sshd[7073]: Accepted publickey for core from 139.178.89.65 port 58966 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:54.417994 sshd[7073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:54.428574 systemd-logind[1993]: New session 26 of user core. Jul 12 00:09:54.437223 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 12 00:09:54.734357 sshd[7073]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:54.747684 systemd[1]: sshd@28-172.31.18.25:22-139.178.89.65:58966.service: Deactivated successfully. Jul 12 00:09:54.748527 systemd-logind[1993]: Session 26 logged out. Waiting for processes to exit. Jul 12 00:09:54.757004 systemd[1]: session-26.scope: Deactivated successfully. Jul 12 00:09:54.760748 systemd-logind[1993]: Removed session 26. Jul 12 00:09:56.090944 containerd[2015]: time="2025-07-12T00:09:56.090663637Z" level=info msg="StopContainer for \"08f83fb1d3d7dccc5a9fc5c46591db476a1102aceb71396eda39643db71f73ae\" with timeout 30 (s)" Jul 12 00:09:56.094348 containerd[2015]: time="2025-07-12T00:09:56.093635053Z" level=info msg="Stop container \"08f83fb1d3d7dccc5a9fc5c46591db476a1102aceb71396eda39643db71f73ae\" with signal terminated" Jul 12 00:09:56.177128 systemd[1]: cri-containerd-08f83fb1d3d7dccc5a9fc5c46591db476a1102aceb71396eda39643db71f73ae.scope: Deactivated successfully. Jul 12 00:09:56.179733 systemd[1]: cri-containerd-08f83fb1d3d7dccc5a9fc5c46591db476a1102aceb71396eda39643db71f73ae.scope: Consumed 1.846s CPU time. Jul 12 00:09:56.252858 containerd[2015]: time="2025-07-12T00:09:56.252089678Z" level=info msg="shim disconnected" id=08f83fb1d3d7dccc5a9fc5c46591db476a1102aceb71396eda39643db71f73ae namespace=k8s.io Jul 12 00:09:56.252858 containerd[2015]: time="2025-07-12T00:09:56.252173042Z" level=warning msg="cleaning up after shim disconnected" id=08f83fb1d3d7dccc5a9fc5c46591db476a1102aceb71396eda39643db71f73ae namespace=k8s.io Jul 12 00:09:56.252858 containerd[2015]: time="2025-07-12T00:09:56.252194462Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:09:56.261357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08f83fb1d3d7dccc5a9fc5c46591db476a1102aceb71396eda39643db71f73ae-rootfs.mount: Deactivated successfully. Jul 12 00:09:56.337376 containerd[2015]: time="2025-07-12T00:09:56.337282574Z" level=info msg="StopContainer for \"08f83fb1d3d7dccc5a9fc5c46591db476a1102aceb71396eda39643db71f73ae\" returns successfully" Jul 12 00:09:56.339956 containerd[2015]: time="2025-07-12T00:09:56.339618530Z" level=info msg="StopPodSandbox for \"4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f\"" Jul 12 00:09:56.339956 containerd[2015]: time="2025-07-12T00:09:56.339705122Z" level=info msg="Container to stop \"08f83fb1d3d7dccc5a9fc5c46591db476a1102aceb71396eda39643db71f73ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:09:56.351066 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f-shm.mount: Deactivated successfully. Jul 12 00:09:56.388298 systemd[1]: cri-containerd-4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f.scope: Deactivated successfully. Jul 12 00:09:56.495620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f-rootfs.mount: Deactivated successfully. 
Jul 12 00:09:56.515641 containerd[2015]: time="2025-07-12T00:09:56.515533995Z" level=info msg="shim disconnected" id=4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f namespace=k8s.io Jul 12 00:09:56.515641 containerd[2015]: time="2025-07-12T00:09:56.515627703Z" level=warning msg="cleaning up after shim disconnected" id=4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f namespace=k8s.io Jul 12 00:09:56.515641 containerd[2015]: time="2025-07-12T00:09:56.515650371Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:09:56.751588 systemd-networkd[1936]: calidd3f6ed32e2: Link DOWN Jul 12 00:09:56.751610 systemd-networkd[1936]: calidd3f6ed32e2: Lost carrier Jul 12 00:09:56.827622 kubelet[3344]: I0712 00:09:56.827215 3344 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Jul 12 00:09:57.080802 containerd[2015]: 2025-07-12 00:09:56.726 [INFO][7182] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Jul 12 00:09:57.080802 containerd[2015]: 2025-07-12 00:09:56.726 [INFO][7182] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" iface="eth0" netns="/var/run/netns/cni-2ede6844-295c-7cf0-0929-6b786e34bb64" Jul 12 00:09:57.080802 containerd[2015]: 2025-07-12 00:09:56.729 [INFO][7182] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" iface="eth0" netns="/var/run/netns/cni-2ede6844-295c-7cf0-0929-6b786e34bb64" Jul 12 00:09:57.080802 containerd[2015]: 2025-07-12 00:09:56.765 [INFO][7182] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" after=39.203988ms iface="eth0" netns="/var/run/netns/cni-2ede6844-295c-7cf0-0929-6b786e34bb64" Jul 12 00:09:57.080802 containerd[2015]: 2025-07-12 00:09:56.765 [INFO][7182] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Jul 12 00:09:57.080802 containerd[2015]: 2025-07-12 00:09:56.766 [INFO][7182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Jul 12 00:09:57.080802 containerd[2015]: 2025-07-12 00:09:56.859 [INFO][7207] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" HandleID="k8s-pod-network.4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" Jul 12 00:09:57.080802 containerd[2015]: 2025-07-12 00:09:56.859 [INFO][7207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:09:57.080802 containerd[2015]: 2025-07-12 00:09:56.860 [INFO][7207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:09:57.080802 containerd[2015]: 2025-07-12 00:09:57.064 [INFO][7207] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" HandleID="k8s-pod-network.4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" Jul 12 00:09:57.080802 containerd[2015]: 2025-07-12 00:09:57.064 [INFO][7207] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" HandleID="k8s-pod-network.4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" Jul 12 00:09:57.080802 containerd[2015]: 2025-07-12 00:09:57.070 [INFO][7207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:09:57.080802 containerd[2015]: 2025-07-12 00:09:57.075 [INFO][7182] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Jul 12 00:09:57.081816 containerd[2015]: time="2025-07-12T00:09:57.081641102Z" level=info msg="TearDown network for sandbox \"4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f\" successfully" Jul 12 00:09:57.081816 containerd[2015]: time="2025-07-12T00:09:57.081688994Z" level=info msg="StopPodSandbox for \"4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f\" returns successfully" Jul 12 00:09:57.149121 kubelet[3344]: I0712 00:09:57.145882 3344 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzt84\" (UniqueName: \"kubernetes.io/projected/9395332f-6218-4a3b-9efb-4b6737b7fd9d-kube-api-access-wzt84\") pod \"9395332f-6218-4a3b-9efb-4b6737b7fd9d\" (UID: \"9395332f-6218-4a3b-9efb-4b6737b7fd9d\") " Jul 12 00:09:57.149121 kubelet[3344]: I0712 00:09:57.145967 3344 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9395332f-6218-4a3b-9efb-4b6737b7fd9d-calico-apiserver-certs\") pod \"9395332f-6218-4a3b-9efb-4b6737b7fd9d\" (UID: \"9395332f-6218-4a3b-9efb-4b6737b7fd9d\") " Jul 12 00:09:57.181431 kubelet[3344]: I0712 00:09:57.181374 3344 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9395332f-6218-4a3b-9efb-4b6737b7fd9d-kube-api-access-wzt84" (OuterVolumeSpecName: "kube-api-access-wzt84") pod "9395332f-6218-4a3b-9efb-4b6737b7fd9d" (UID: "9395332f-6218-4a3b-9efb-4b6737b7fd9d"). InnerVolumeSpecName "kube-api-access-wzt84". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:09:57.182000 kubelet[3344]: I0712 00:09:57.181961 3344 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9395332f-6218-4a3b-9efb-4b6737b7fd9d-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "9395332f-6218-4a3b-9efb-4b6737b7fd9d" (UID: "9395332f-6218-4a3b-9efb-4b6737b7fd9d"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:09:57.246395 kubelet[3344]: I0712 00:09:57.246342 3344 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wzt84\" (UniqueName: \"kubernetes.io/projected/9395332f-6218-4a3b-9efb-4b6737b7fd9d-kube-api-access-wzt84\") on node \"ip-172-31-18-25\" DevicePath \"\"" Jul 12 00:09:57.246665 kubelet[3344]: I0712 00:09:57.246640 3344 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9395332f-6218-4a3b-9efb-4b6737b7fd9d-calico-apiserver-certs\") on node \"ip-172-31-18-25\" DevicePath \"\"" Jul 12 00:09:57.257158 systemd[1]: run-containerd-runc-k8s.io-bb1ad9d6a4752f2f383afc5c73e4dbb8d81b97f16b9a1afb42d9882b54d40055-runc.AbfLXI.mount: Deactivated successfully. Jul 12 00:09:57.257369 systemd[1]: run-netns-cni\x2d2ede6844\x2d295c\x2d7cf0\x2d0929\x2d6b786e34bb64.mount: Deactivated successfully. Jul 12 00:09:57.258623 systemd[1]: var-lib-kubelet-pods-9395332f\x2d6218\x2d4a3b\x2d9efb\x2d4b6737b7fd9d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwzt84.mount: Deactivated successfully. Jul 12 00:09:57.258837 systemd[1]: var-lib-kubelet-pods-9395332f\x2d6218\x2d4a3b\x2d9efb\x2d4b6737b7fd9d-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 12 00:09:57.629613 systemd[1]: Removed slice kubepods-besteffort-pod9395332f_6218_4a3b_9efb_4b6737b7fd9d.slice - libcontainer container kubepods-besteffort-pod9395332f_6218_4a3b_9efb_4b6737b7fd9d.slice. Jul 12 00:09:57.630159 systemd[1]: kubepods-besteffort-pod9395332f_6218_4a3b_9efb_4b6737b7fd9d.slice: Consumed 1.899s CPU time. Jul 12 00:09:59.267030 ntpd[1986]: Deleting interface #15 calidd3f6ed32e2, fe80::ecee:eeff:feee:eeee%12#123, interface stats: received=0, sent=0, dropped=0, active_time=64 secs Jul 12 00:09:59.267998 ntpd[1986]: 12 Jul 00:09:59 ntpd[1986]: Deleting interface #15 calidd3f6ed32e2, fe80::ecee:eeff:feee:eeee%12#123, interface stats: received=0, sent=0, dropped=0, active_time=64 secs Jul 12 00:09:59.491958 systemd[1]: run-containerd-runc-k8s.io-bb1ad9d6a4752f2f383afc5c73e4dbb8d81b97f16b9a1afb42d9882b54d40055-runc.B5mq0g.mount: Deactivated successfully. Jul 12 00:09:59.620130 kubelet[3344]: I0712 00:09:59.620049 3344 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9395332f-6218-4a3b-9efb-4b6737b7fd9d" path="/var/lib/kubelet/pods/9395332f-6218-4a3b-9efb-4b6737b7fd9d/volumes" Jul 12 00:09:59.778991 systemd[1]: Started sshd@29-172.31.18.25:22-139.178.89.65:59492.service - OpenSSH per-connection server daemon (139.178.89.65:59492). Jul 12 00:09:59.961479 sshd[7249]: Accepted publickey for core from 139.178.89.65 port 59492 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:59.965404 sshd[7249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:59.982506 systemd-logind[1993]: New session 27 of user core. Jul 12 00:09:59.990195 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jul 12 00:10:00.201523 kubelet[3344]: I0712 00:10:00.201471 3344 scope.go:117] "RemoveContainer" containerID="08f83fb1d3d7dccc5a9fc5c46591db476a1102aceb71396eda39643db71f73ae" Jul 12 00:10:00.205344 containerd[2015]: time="2025-07-12T00:10:00.205279338Z" level=info msg="RemoveContainer for \"08f83fb1d3d7dccc5a9fc5c46591db476a1102aceb71396eda39643db71f73ae\"" Jul 12 00:10:00.215199 containerd[2015]: time="2025-07-12T00:10:00.215022594Z" level=info msg="RemoveContainer for \"08f83fb1d3d7dccc5a9fc5c46591db476a1102aceb71396eda39643db71f73ae\" returns successfully" Jul 12 00:10:00.219846 containerd[2015]: time="2025-07-12T00:10:00.219755022Z" level=info msg="StopPodSandbox for \"12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e\"" Jul 12 00:10:00.407800 sshd[7249]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:00.422730 systemd[1]: session-27.scope: Deactivated successfully. Jul 12 00:10:00.430112 systemd[1]: sshd@29-172.31.18.25:22-139.178.89.65:59492.service: Deactivated successfully. Jul 12 00:10:00.446825 systemd-logind[1993]: Session 27 logged out. Waiting for processes to exit. Jul 12 00:10:00.450614 systemd-logind[1993]: Removed session 27. Jul 12 00:10:00.547088 containerd[2015]: 2025-07-12 00:10:00.388 [WARNING][7267] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:10:00.547088 containerd[2015]: 2025-07-12 00:10:00.389 [INFO][7267] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Jul 12 00:10:00.547088 containerd[2015]: 2025-07-12 00:10:00.389 [INFO][7267] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" iface="eth0" netns="" Jul 12 00:10:00.547088 containerd[2015]: 2025-07-12 00:10:00.389 [INFO][7267] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Jul 12 00:10:00.547088 containerd[2015]: 2025-07-12 00:10:00.389 [INFO][7267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Jul 12 00:10:00.547088 containerd[2015]: 2025-07-12 00:10:00.480 [INFO][7274] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" HandleID="k8s-pod-network.12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:10:00.547088 containerd[2015]: 2025-07-12 00:10:00.482 [INFO][7274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:10:00.547088 containerd[2015]: 2025-07-12 00:10:00.482 [INFO][7274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:10:00.547088 containerd[2015]: 2025-07-12 00:10:00.525 [WARNING][7274] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" HandleID="k8s-pod-network.12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:10:00.547088 containerd[2015]: 2025-07-12 00:10:00.526 [INFO][7274] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" HandleID="k8s-pod-network.12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:10:00.547088 containerd[2015]: 2025-07-12 00:10:00.536 [INFO][7274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:10:00.547088 containerd[2015]: 2025-07-12 00:10:00.540 [INFO][7267] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Jul 12 00:10:00.547088 containerd[2015]: time="2025-07-12T00:10:00.546227815Z" level=info msg="TearDown network for sandbox \"12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e\" successfully" Jul 12 00:10:00.547088 containerd[2015]: time="2025-07-12T00:10:00.546265915Z" level=info msg="StopPodSandbox for \"12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e\" returns successfully" Jul 12 00:10:00.550112 containerd[2015]: time="2025-07-12T00:10:00.547432963Z" level=info msg="RemovePodSandbox for \"12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e\"" Jul 12 00:10:00.550112 containerd[2015]: time="2025-07-12T00:10:00.547505035Z" level=info msg="Forcibly stopping sandbox \"12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e\"" Jul 12 00:10:00.742948 containerd[2015]: 2025-07-12 00:10:00.632 [WARNING][7292] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:10:00.742948 containerd[2015]: 2025-07-12 00:10:00.632 [INFO][7292] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Jul 12 00:10:00.742948 containerd[2015]: 2025-07-12 00:10:00.632 [INFO][7292] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" iface="eth0" netns="" Jul 12 00:10:00.742948 containerd[2015]: 2025-07-12 00:10:00.632 [INFO][7292] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Jul 12 00:10:00.742948 containerd[2015]: 2025-07-12 00:10:00.632 [INFO][7292] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Jul 12 00:10:00.742948 containerd[2015]: 2025-07-12 00:10:00.703 [INFO][7299] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" HandleID="k8s-pod-network.12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:10:00.742948 containerd[2015]: 2025-07-12 00:10:00.706 [INFO][7299] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 12 00:10:00.742948 containerd[2015]: 2025-07-12 00:10:00.707 [INFO][7299] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:10:00.742948 containerd[2015]: 2025-07-12 00:10:00.728 [WARNING][7299] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" HandleID="k8s-pod-network.12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:10:00.742948 containerd[2015]: 2025-07-12 00:10:00.728 [INFO][7299] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" HandleID="k8s-pod-network.12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--xspm9-eth0" Jul 12 00:10:00.742948 containerd[2015]: 2025-07-12 00:10:00.733 [INFO][7299] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:10:00.742948 containerd[2015]: 2025-07-12 00:10:00.736 [INFO][7292] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e" Jul 12 00:10:00.743727 containerd[2015]: time="2025-07-12T00:10:00.742982864Z" level=info msg="TearDown network for sandbox \"12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e\" successfully" Jul 12 00:10:00.753205 containerd[2015]: time="2025-07-12T00:10:00.752803760Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:10:00.753205 containerd[2015]: time="2025-07-12T00:10:00.752920496Z" level=info msg="RemovePodSandbox \"12d5a27dd1aa247ebd7752f8327d7675a0f18674dcf9886b6cbfbf46a1d2127e\" returns successfully" Jul 12 00:10:00.754482 containerd[2015]: time="2025-07-12T00:10:00.754388996Z" level=info msg="StopPodSandbox for \"4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f\"" Jul 12 00:10:00.937953 containerd[2015]: 2025-07-12 00:10:00.857 [WARNING][7313] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" Jul 12 00:10:00.937953 containerd[2015]: 2025-07-12 00:10:00.858 [INFO][7313] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Jul 12 00:10:00.937953 containerd[2015]: 2025-07-12 00:10:00.858 [INFO][7313] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" iface="eth0" netns="" Jul 12 00:10:00.937953 containerd[2015]: 2025-07-12 00:10:00.858 [INFO][7313] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Jul 12 00:10:00.937953 containerd[2015]: 2025-07-12 00:10:00.858 [INFO][7313] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Jul 12 00:10:00.937953 containerd[2015]: 2025-07-12 00:10:00.911 [INFO][7320] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" HandleID="k8s-pod-network.4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" Jul 12 00:10:00.937953 containerd[2015]: 2025-07-12 00:10:00.911 [INFO][7320] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:10:00.937953 containerd[2015]: 2025-07-12 00:10:00.911 [INFO][7320] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:10:00.937953 containerd[2015]: 2025-07-12 00:10:00.925 [WARNING][7320] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" HandleID="k8s-pod-network.4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" Jul 12 00:10:00.937953 containerd[2015]: 2025-07-12 00:10:00.925 [INFO][7320] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" HandleID="k8s-pod-network.4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" Jul 12 00:10:00.937953 containerd[2015]: 2025-07-12 00:10:00.929 [INFO][7320] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:10:00.937953 containerd[2015]: 2025-07-12 00:10:00.934 [INFO][7313] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Jul 12 00:10:00.940161 containerd[2015]: time="2025-07-12T00:10:00.937994625Z" level=info msg="TearDown network for sandbox \"4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f\" successfully" Jul 12 00:10:00.940161 containerd[2015]: time="2025-07-12T00:10:00.938032857Z" level=info msg="StopPodSandbox for \"4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f\" returns successfully" Jul 12 00:10:00.940161 containerd[2015]: time="2025-07-12T00:10:00.938815101Z" level=info msg="RemovePodSandbox for \"4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f\"" Jul 12 00:10:00.940161 containerd[2015]: time="2025-07-12T00:10:00.938865753Z" level=info msg="Forcibly stopping sandbox \"4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f\"" Jul 12 00:10:01.077850 containerd[2015]: 2025-07-12 00:10:01.004 [WARNING][7334] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" WorkloadEndpoint="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" Jul 12 00:10:01.077850 containerd[2015]: 2025-07-12 00:10:01.004 [INFO][7334] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Jul 12 00:10:01.077850 containerd[2015]: 2025-07-12 00:10:01.004 [INFO][7334] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" iface="eth0" netns="" Jul 12 00:10:01.077850 containerd[2015]: 2025-07-12 00:10:01.004 [INFO][7334] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Jul 12 00:10:01.077850 containerd[2015]: 2025-07-12 00:10:01.004 [INFO][7334] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Jul 12 00:10:01.077850 containerd[2015]: 2025-07-12 00:10:01.052 [INFO][7341] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" HandleID="k8s-pod-network.4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" Jul 12 00:10:01.077850 containerd[2015]: 2025-07-12 00:10:01.053 [INFO][7341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:10:01.077850 containerd[2015]: 2025-07-12 00:10:01.053 [INFO][7341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:10:01.077850 containerd[2015]: 2025-07-12 00:10:01.067 [WARNING][7341] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" HandleID="k8s-pod-network.4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" Jul 12 00:10:01.077850 containerd[2015]: 2025-07-12 00:10:01.067 [INFO][7341] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" HandleID="k8s-pod-network.4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Workload="ip--172--31--18--25-k8s-calico--apiserver--f6d8df55--scvhw-eth0" Jul 12 00:10:01.077850 containerd[2015]: 2025-07-12 00:10:01.070 [INFO][7341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:10:01.077850 containerd[2015]: 2025-07-12 00:10:01.073 [INFO][7334] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f" Jul 12 00:10:01.077850 containerd[2015]: time="2025-07-12T00:10:01.077061714Z" level=info msg="TearDown network for sandbox \"4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f\" successfully" Jul 12 00:10:01.086813 containerd[2015]: time="2025-07-12T00:10:01.086730714Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:10:01.087076 containerd[2015]: time="2025-07-12T00:10:01.086836854Z" level=info msg="RemovePodSandbox \"4d423ad71fdea4cc569dda521f77d9ff396a7ea2c2e19e416cebe7d964663d2f\" returns successfully" Jul 12 00:10:05.454430 systemd[1]: Started sshd@30-172.31.18.25:22-139.178.89.65:59504.service - OpenSSH per-connection server daemon (139.178.89.65:59504). Jul 12 00:10:05.648312 sshd[7370]: Accepted publickey for core from 139.178.89.65 port 59504 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:10:05.653803 sshd[7370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:05.665084 systemd-logind[1993]: New session 28 of user core. Jul 12 00:10:05.673808 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 12 00:10:05.967157 sshd[7370]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:05.977565 systemd[1]: sshd@30-172.31.18.25:22-139.178.89.65:59504.service: Deactivated successfully. Jul 12 00:10:05.983098 systemd[1]: session-28.scope: Deactivated successfully. Jul 12 00:10:05.986093 systemd-logind[1993]: Session 28 logged out. Waiting for processes to exit. Jul 12 00:10:05.991411 systemd-logind[1993]: Removed session 28. Jul 12 00:10:44.121587 systemd[1]: run-containerd-runc-k8s.io-bb42760c5439fb23f1def3b250a130200888e18f9856183fb37b2cd287038814-runc.G1Y0eY.mount: Deactivated successfully. Jul 12 00:10:51.939825 systemd[1]: cri-containerd-8b331184649e5b24885401cca9ae64c1b9bdb9f711bfde7c50d4aa01c26d2c26.scope: Deactivated successfully. Jul 12 00:10:51.941421 systemd[1]: cri-containerd-8b331184649e5b24885401cca9ae64c1b9bdb9f711bfde7c50d4aa01c26d2c26.scope: Consumed 32.607s CPU time. 
Jul 12 00:10:51.984376 containerd[2015]: time="2025-07-12T00:10:51.984260483Z" level=info msg="shim disconnected" id=8b331184649e5b24885401cca9ae64c1b9bdb9f711bfde7c50d4aa01c26d2c26 namespace=k8s.io
Jul 12 00:10:51.984376 containerd[2015]: time="2025-07-12T00:10:51.984351395Z" level=warning msg="cleaning up after shim disconnected" id=8b331184649e5b24885401cca9ae64c1b9bdb9f711bfde7c50d4aa01c26d2c26 namespace=k8s.io
Jul 12 00:10:51.984376 containerd[2015]: time="2025-07-12T00:10:51.984379115Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:10:51.992057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b331184649e5b24885401cca9ae64c1b9bdb9f711bfde7c50d4aa01c26d2c26-rootfs.mount: Deactivated successfully.
Jul 12 00:10:52.699847 systemd[1]: cri-containerd-978c2be460bbd59ca4b8417a7f11a3502413100f40ae3614c28d8cc56cae6b53.scope: Deactivated successfully.
Jul 12 00:10:52.702650 systemd[1]: cri-containerd-978c2be460bbd59ca4b8417a7f11a3502413100f40ae3614c28d8cc56cae6b53.scope: Consumed 6.456s CPU time, 21.5M memory peak, 0B memory swap peak.
Jul 12 00:10:52.762528 containerd[2015]: time="2025-07-12T00:10:52.762057443Z" level=info msg="shim disconnected" id=978c2be460bbd59ca4b8417a7f11a3502413100f40ae3614c28d8cc56cae6b53 namespace=k8s.io
Jul 12 00:10:52.763798 containerd[2015]: time="2025-07-12T00:10:52.763234475Z" level=warning msg="cleaning up after shim disconnected" id=978c2be460bbd59ca4b8417a7f11a3502413100f40ae3614c28d8cc56cae6b53 namespace=k8s.io
Jul 12 00:10:52.763798 containerd[2015]: time="2025-07-12T00:10:52.763395851Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:10:52.764391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-978c2be460bbd59ca4b8417a7f11a3502413100f40ae3614c28d8cc56cae6b53-rootfs.mount: Deactivated successfully.
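Each "shim disconnected" / "cleaning up dead shim" pair marks a task exiting in containerd's k8s.io namespace, after which systemd releases the rootfs mount. A hedged sketch of inspecting the surviving tasks in that namespace with the containerd Go client; the socket path is the conventional default, assumed rather than taken from this host's configuration:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket; an assumption for this sketch.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the k8s.io namespace, as in the log.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		task, err := c.Task(ctx, nil)
		if err != nil {
			continue // no running task: its shim has already exited
		}
		status, err := task.Status(ctx)
		if err != nil {
			continue
		}
		log.Printf("%s: %s", c.ID(), status.Status)
	}
}
```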
Jul 12 00:10:53.026604 kubelet[3344]: I0712 00:10:53.025478 3344 scope.go:117] "RemoveContainer" containerID="978c2be460bbd59ca4b8417a7f11a3502413100f40ae3614c28d8cc56cae6b53"
Jul 12 00:10:53.031206 containerd[2015]: time="2025-07-12T00:10:53.030619844Z" level=info msg="CreateContainer within sandbox \"b1e5399f1eb14935f73887a819eec570cb06cdb060cd13a4bd11773f8a8f02b4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 12 00:10:53.031880 kubelet[3344]: I0712 00:10:53.031402 3344 scope.go:117] "RemoveContainer" containerID="8b331184649e5b24885401cca9ae64c1b9bdb9f711bfde7c50d4aa01c26d2c26"
Jul 12 00:10:53.035886 containerd[2015]: time="2025-07-12T00:10:53.035811368Z" level=info msg="CreateContainer within sandbox \"aea4347560666649834cb372eb9ca94fbdf8b79c1ad222ce0bc9c852f9c2fec8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 12 00:10:53.092710 containerd[2015]: time="2025-07-12T00:10:53.092637968Z" level=info msg="CreateContainer within sandbox \"aea4347560666649834cb372eb9ca94fbdf8b79c1ad222ce0bc9c852f9c2fec8\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"4acec971116f068cd25bec1e97a8b28c3dee63d531028c426363458b0da62806\""
Jul 12 00:10:53.094347 containerd[2015]: time="2025-07-12T00:10:53.093502712Z" level=info msg="StartContainer for \"4acec971116f068cd25bec1e97a8b28c3dee63d531028c426363458b0da62806\""
Jul 12 00:10:53.123566 containerd[2015]: time="2025-07-12T00:10:53.121587428Z" level=info msg="CreateContainer within sandbox \"b1e5399f1eb14935f73887a819eec570cb06cdb060cd13a4bd11773f8a8f02b4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"64986f2a1b3f3c518aa1356d547346fa3a27cfdd07417cd82b178f3f40671a8a\""
Jul 12 00:10:53.127917 containerd[2015]: time="2025-07-12T00:10:53.127609748Z" level=info msg="StartContainer for \"64986f2a1b3f3c518aa1356d547346fa3a27cfdd07417cd82b178f3f40671a8a\""
Jul 12 00:10:53.160043 systemd[1]: Started cri-containerd-4acec971116f068cd25bec1e97a8b28c3dee63d531028c426363458b0da62806.scope - libcontainer container 4acec971116f068cd25bec1e97a8b28c3dee63d531028c426363458b0da62806.
Jul 12 00:10:53.201978 systemd[1]: Started cri-containerd-64986f2a1b3f3c518aa1356d547346fa3a27cfdd07417cd82b178f3f40671a8a.scope - libcontainer container 64986f2a1b3f3c518aa1356d547346fa3a27cfdd07417cd82b178f3f40671a8a.
Jul 12 00:10:53.230973 containerd[2015]: time="2025-07-12T00:10:53.230897121Z" level=info msg="StartContainer for \"4acec971116f068cd25bec1e97a8b28c3dee63d531028c426363458b0da62806\" returns successfully"
Jul 12 00:10:53.292977 containerd[2015]: time="2025-07-12T00:10:53.292681209Z" level=info msg="StartContainer for \"64986f2a1b3f3c518aa1356d547346fa3a27cfdd07417cd82b178f3f40671a8a\" returns successfully"
Jul 12 00:10:57.621133 systemd[1]: cri-containerd-d95737530ed11b31399aa5568ad4f892e8dac78f660007eecb691daece3da67e.scope: Deactivated successfully.
Jul 12 00:10:57.622555 systemd[1]: cri-containerd-d95737530ed11b31399aa5568ad4f892e8dac78f660007eecb691daece3da67e.scope: Consumed 3.756s CPU time, 15.6M memory peak, 0B memory swap peak.
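The Attempt:1 metadata shows the kubelet recreating kube-controller-manager and tigera-operator inside their existing sandboxes rather than rebuilding the pods; the per-container attempt number surfaces in the Kubernetes API as RestartCount. A sketch, assuming in-cluster credentials (use clientcmd with a kubeconfig otherwise), of reading those restart counts with client-go:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumption: running inside the cluster with a service account.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := clientset.CoreV1().Pods("tigera-operator").
		List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			// RestartCount mirrors the Attempt field in the containerd log.
			fmt.Printf("%s/%s restarts=%d\n", pod.Name, cs.Name, cs.RestartCount)
		}
	}
}
```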
Jul 12 00:10:57.665115 containerd[2015]: time="2025-07-12T00:10:57.664995183Z" level=info msg="shim disconnected" id=d95737530ed11b31399aa5568ad4f892e8dac78f660007eecb691daece3da67e namespace=k8s.io
Jul 12 00:10:57.665115 containerd[2015]: time="2025-07-12T00:10:57.665100819Z" level=warning msg="cleaning up after shim disconnected" id=d95737530ed11b31399aa5568ad4f892e8dac78f660007eecb691daece3da67e namespace=k8s.io
Jul 12 00:10:57.666738 containerd[2015]: time="2025-07-12T00:10:57.665126403Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:10:57.668436 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d95737530ed11b31399aa5568ad4f892e8dac78f660007eecb691daece3da67e-rootfs.mount: Deactivated successfully.
Jul 12 00:10:58.056589 kubelet[3344]: I0712 00:10:58.055927 3344 scope.go:117] "RemoveContainer" containerID="d95737530ed11b31399aa5568ad4f892e8dac78f660007eecb691daece3da67e"
Jul 12 00:10:58.059946 containerd[2015]: time="2025-07-12T00:10:58.059747833Z" level=info msg="CreateContainer within sandbox \"a1e6dce25b5a417a9f60f1ea34e4c59034e7d5833077ab3a572e23d0469c3597\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 12 00:10:58.088161 containerd[2015]: time="2025-07-12T00:10:58.088082341Z" level=info msg="CreateContainer within sandbox \"a1e6dce25b5a417a9f60f1ea34e4c59034e7d5833077ab3a572e23d0469c3597\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"105c0694a7e667fbf764e0216a607b4d79c507d94056124c4266938d0b9dfa3b\""
Jul 12 00:10:58.089235 containerd[2015]: time="2025-07-12T00:10:58.089187985Z" level=info msg="StartContainer for \"105c0694a7e667fbf764e0216a607b4d79c507d94056124c4266938d0b9dfa3b\""
Jul 12 00:10:58.151168 systemd[1]: Started cri-containerd-105c0694a7e667fbf764e0216a607b4d79c507d94056124c4266938d0b9dfa3b.scope - libcontainer container 105c0694a7e667fbf764e0216a607b4d79c507d94056124c4266938d0b9dfa3b.
Jul 12 00:10:58.224237 containerd[2015]: time="2025-07-12T00:10:58.223873694Z" level=info msg="StartContainer for \"105c0694a7e667fbf764e0216a607b4d79c507d94056124c4266938d0b9dfa3b\" returns successfully"
Jul 12 00:10:59.245266 kubelet[3344]: E0712 00:10:59.243910 3344 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-25?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 12 00:11:04.788016 systemd[1]: cri-containerd-4acec971116f068cd25bec1e97a8b28c3dee63d531028c426363458b0da62806.scope: Deactivated successfully.
Jul 12 00:11:04.826932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4acec971116f068cd25bec1e97a8b28c3dee63d531028c426363458b0da62806-rootfs.mount: Deactivated successfully.
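The "Failed to update lease" error is the kubelet's node heartbeat timing out against the coordination API, plausibly while the control-plane containers above are restarting on the same node. A sketch of the renewal the kubelet performs: read the node's Lease in kube-node-lease, bump RenewTime, and write it back, bounded like the ?timeout=10s in the failing request (in-cluster credentials are an assumption here):

```go
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumption: in-cluster credentials
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	leases := clientset.CoordinationV1().Leases("kube-node-lease")

	// Bound the round trip like the kubelet's ?timeout=10s above.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	lease, err := leases.Get(ctx, "ip-172-31-18-25", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	now := metav1.NewMicroTime(time.Now())
	lease.Spec.RenewTime = &now
	if _, err := leases.Update(ctx, lease, metav1.UpdateOptions{}); err != nil {
		// This write is the path that logged "Failed to update lease".
		log.Fatal(err)
	}
}
```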
Jul 12 00:11:04.839703 containerd[2015]: time="2025-07-12T00:11:04.839620955Z" level=info msg="shim disconnected" id=4acec971116f068cd25bec1e97a8b28c3dee63d531028c426363458b0da62806 namespace=k8s.io
Jul 12 00:11:04.839703 containerd[2015]: time="2025-07-12T00:11:04.839701211Z" level=warning msg="cleaning up after shim disconnected" id=4acec971116f068cd25bec1e97a8b28c3dee63d531028c426363458b0da62806 namespace=k8s.io
Jul 12 00:11:04.840574 containerd[2015]: time="2025-07-12T00:11:04.839723903Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:11:05.081387 kubelet[3344]: I0712 00:11:05.080587 3344 scope.go:117] "RemoveContainer" containerID="8b331184649e5b24885401cca9ae64c1b9bdb9f711bfde7c50d4aa01c26d2c26"
Jul 12 00:11:05.082018 kubelet[3344]: I0712 00:11:05.081832 3344 scope.go:117] "RemoveContainer" containerID="4acec971116f068cd25bec1e97a8b28c3dee63d531028c426363458b0da62806"
Jul 12 00:11:05.082153 kubelet[3344]: E0712 00:11:05.082071 3344 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-vcjjv_tigera-operator(9ad41972-0ab9-4f16-9db0-2adc1223d608)\"" pod="tigera-operator/tigera-operator-747864d56d-vcjjv" podUID="9ad41972-0ab9-4f16-9db0-2adc1223d608"
Jul 12 00:11:05.083676 containerd[2015]: time="2025-07-12T00:11:05.083611376Z" level=info msg="RemoveContainer for \"8b331184649e5b24885401cca9ae64c1b9bdb9f711bfde7c50d4aa01c26d2c26\""
Jul 12 00:11:05.090544 containerd[2015]: time="2025-07-12T00:11:05.090478832Z" level=info msg="RemoveContainer for \"8b331184649e5b24885401cca9ae64c1b9bdb9f711bfde7c50d4aa01c26d2c26\" returns successfully"
Jul 12 00:11:09.244723 kubelet[3344]: E0712 00:11:09.244617 3344 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-25?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
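The "back-off 10s" in the CrashLoopBackOff error is the first step of the kubelet's exponential restart backoff, which by upstream defaults starts at 10s, doubles per failed restart, and caps at 5 minutes. A toy reimplementation of that schedule, for illustration only (the kubelet's actual backoff lives in its own code, not here):

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay approximates the kubelet's default container restart
// backoff: 10s initial, doubling per consecutive failure, capped at 5m.
// An illustrative assumption based on upstream defaults, not kubelet code.
func crashLoopDelay(failures int) time.Duration {
	const (
		initial  = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for f := 1; f <= 7; f++ {
		fmt.Printf("failure %d: back-off %v\n", f, crashLoopDelay(f))
	}
	// failure 1 prints "back-off 10s", matching the pod_workers error above;
	// by failure 6 the delay has saturated at the 5m cap.
}
```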