Aug 19 00:11:56.118560 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Aug 19 00:11:56.118607 kernel: Linux version 6.12.41-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Mon Aug 18 22:15:14 -00 2025 Aug 19 00:11:56.118633 kernel: KASLR disabled due to lack of seed Aug 19 00:11:56.118649 kernel: efi: EFI v2.7 by EDK II Aug 19 00:11:56.118666 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598 Aug 19 00:11:56.118683 kernel: secureboot: Secure boot disabled Aug 19 00:11:56.118701 kernel: ACPI: Early table checksum verification disabled Aug 19 00:11:56.118716 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Aug 19 00:11:56.118733 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Aug 19 00:11:56.118748 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Aug 19 00:11:56.118765 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Aug 19 00:11:56.118785 kernel: ACPI: FACS 0x0000000078630000 000040 Aug 19 00:11:56.118801 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Aug 19 00:11:56.118817 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Aug 19 00:11:56.118836 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Aug 19 00:11:56.118853 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Aug 19 00:11:56.118874 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Aug 19 00:11:56.118891 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Aug 19 00:11:56.118907 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Aug 19 00:11:56.118923 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Aug 19 00:11:56.118940 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Aug 19 00:11:56.118957 kernel: printk: legacy bootconsole [uart0] enabled Aug 19 00:11:56.119009 kernel: ACPI: Use ACPI SPCR as default console: Yes Aug 19 00:11:56.119028 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Aug 19 00:11:56.119045 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff] Aug 19 00:11:56.119061 kernel: Zone ranges: Aug 19 00:11:56.119078 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Aug 19 00:11:56.119100 kernel: DMA32 empty Aug 19 00:11:56.119116 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Aug 19 00:11:56.119132 kernel: Device empty Aug 19 00:11:56.119148 kernel: Movable zone start for each node Aug 19 00:11:56.119164 kernel: Early memory node ranges Aug 19 00:11:56.119180 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Aug 19 00:11:56.119196 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Aug 19 00:11:56.119212 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Aug 19 00:11:56.119228 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Aug 19 00:11:56.119245 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Aug 19 00:11:56.119261 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Aug 19 00:11:56.119277 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Aug 19 00:11:56.119297 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Aug 19 00:11:56.119320 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Aug 19 00:11:56.119337 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Aug 19 00:11:56.119355 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Aug 19 00:11:56.119371 kernel: psci: probing for conduit method from ACPI. Aug 19 00:11:56.119392 kernel: psci: PSCIv1.0 detected in firmware. Aug 19 00:11:56.119409 kernel: psci: Using standard PSCI v0.2 function IDs Aug 19 00:11:56.119426 kernel: psci: Trusted OS migration not required Aug 19 00:11:56.119443 kernel: psci: SMC Calling Convention v1.1 Aug 19 00:11:56.119460 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Aug 19 00:11:56.119477 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Aug 19 00:11:56.119494 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Aug 19 00:11:56.119511 kernel: pcpu-alloc: [0] 0 [0] 1 Aug 19 00:11:56.119528 kernel: Detected PIPT I-cache on CPU0 Aug 19 00:11:56.119545 kernel: CPU features: detected: GIC system register CPU interface Aug 19 00:11:56.119562 kernel: CPU features: detected: Spectre-v2 Aug 19 00:11:56.119582 kernel: CPU features: detected: Spectre-v3a Aug 19 00:11:56.119600 kernel: CPU features: detected: Spectre-BHB Aug 19 00:11:56.119616 kernel: CPU features: detected: ARM erratum 1742098 Aug 19 00:11:56.119633 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Aug 19 00:11:56.119650 kernel: alternatives: applying boot alternatives Aug 19 00:11:56.119669 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a868ccde263e96e0a18737fdbf04ca04bbf30dfe23963f1ae3994966e8fc9468 Aug 19 00:11:56.119688 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 19 00:11:56.119705 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 19 00:11:56.119722 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 19 00:11:56.119739 kernel: Fallback order for Node 0: 0 Aug 19 00:11:56.119759 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Aug 19 00:11:56.119776 kernel: Policy zone: Normal Aug 19 00:11:56.119793 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 19 00:11:56.119810 kernel: software IO TLB: area num 2. Aug 19 00:11:56.119827 kernel: software IO TLB: mapped [mem 0x000000006c600000-0x0000000070600000] (64MB) Aug 19 00:11:56.119844 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 19 00:11:56.119861 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 19 00:11:56.119879 kernel: rcu: RCU event tracing is enabled. Aug 19 00:11:56.119896 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 19 00:11:56.119914 kernel: Trampoline variant of Tasks RCU enabled. Aug 19 00:11:56.119931 kernel: Tracing variant of Tasks RCU enabled. Aug 19 00:11:56.119948 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 19 00:11:56.125057 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 19 00:11:56.125090 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 19 00:11:56.125108 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 19 00:11:56.125126 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Aug 19 00:11:56.125143 kernel: GICv3: 96 SPIs implemented Aug 19 00:11:56.125161 kernel: GICv3: 0 Extended SPIs implemented Aug 19 00:11:56.125178 kernel: Root IRQ handler: gic_handle_irq Aug 19 00:11:56.125195 kernel: GICv3: GICv3 features: 16 PPIs Aug 19 00:11:56.125212 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Aug 19 00:11:56.125229 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Aug 19 00:11:56.125246 kernel: ITS [mem 0x10080000-0x1009ffff] Aug 19 00:11:56.125263 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Aug 19 00:11:56.125291 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Aug 19 00:11:56.125308 kernel: GICv3: using LPI property table @0x0000000400110000 Aug 19 00:11:56.125325 kernel: ITS: Using hypervisor restricted LPI range [128] Aug 19 00:11:56.125341 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Aug 19 00:11:56.125358 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 19 00:11:56.125375 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Aug 19 00:11:56.125392 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Aug 19 00:11:56.125409 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Aug 19 00:11:56.125426 kernel: Console: colour dummy device 80x25 Aug 19 00:11:56.125444 kernel: printk: legacy console [tty1] enabled Aug 19 00:11:56.125461 kernel: ACPI: Core revision 20240827 Aug 19 00:11:56.125484 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Aug 19 00:11:56.125502 kernel: pid_max: default: 32768 minimum: 301 Aug 19 00:11:56.125519 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Aug 19 00:11:56.125536 kernel: landlock: Up and running. Aug 19 00:11:56.125554 kernel: SELinux: Initializing. Aug 19 00:11:56.125571 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 19 00:11:56.125589 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 19 00:11:56.125606 kernel: rcu: Hierarchical SRCU implementation. Aug 19 00:11:56.125624 kernel: rcu: Max phase no-delay instances is 400. Aug 19 00:11:56.125646 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Aug 19 00:11:56.125664 kernel: Remapping and enabling EFI services. Aug 19 00:11:56.125681 kernel: smp: Bringing up secondary CPUs ... Aug 19 00:11:56.125698 kernel: Detected PIPT I-cache on CPU1 Aug 19 00:11:56.125715 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Aug 19 00:11:56.125733 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Aug 19 00:11:56.125750 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Aug 19 00:11:56.125768 kernel: smp: Brought up 1 node, 2 CPUs Aug 19 00:11:56.125787 kernel: SMP: Total of 2 processors activated. 
Aug 19 00:11:56.125818 kernel: CPU: All CPU(s) started at EL1 Aug 19 00:11:56.125836 kernel: CPU features: detected: 32-bit EL0 Support Aug 19 00:11:56.125858 kernel: CPU features: detected: 32-bit EL1 Support Aug 19 00:11:56.125876 kernel: CPU features: detected: CRC32 instructions Aug 19 00:11:56.125894 kernel: alternatives: applying system-wide alternatives Aug 19 00:11:56.125913 kernel: Memory: 3797096K/4030464K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38912K init, 1038K bss, 212024K reserved, 16384K cma-reserved) Aug 19 00:11:56.125932 kernel: devtmpfs: initialized Aug 19 00:11:56.125954 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 19 00:11:56.126015 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 19 00:11:56.126037 kernel: 17056 pages in range for non-PLT usage Aug 19 00:11:56.126056 kernel: 508576 pages in range for PLT usage Aug 19 00:11:56.126074 kernel: pinctrl core: initialized pinctrl subsystem Aug 19 00:11:56.126092 kernel: SMBIOS 3.0.0 present. Aug 19 00:11:56.126110 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Aug 19 00:11:56.126128 kernel: DMI: Memory slots populated: 0/0 Aug 19 00:11:56.126146 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 19 00:11:56.126170 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Aug 19 00:11:56.126188 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Aug 19 00:11:56.126207 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Aug 19 00:11:56.126225 kernel: audit: initializing netlink subsys (disabled) Aug 19 00:11:56.126242 kernel: audit: type=2000 audit(0.270:1): state=initialized audit_enabled=0 res=1 Aug 19 00:11:56.126260 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 19 00:11:56.126278 kernel: cpuidle: using governor menu Aug 19 00:11:56.126296 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Aug 19 00:11:56.126314 kernel: ASID allocator initialised with 65536 entries Aug 19 00:11:56.126336 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 19 00:11:56.126354 kernel: Serial: AMBA PL011 UART driver Aug 19 00:11:56.126372 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 19 00:11:56.126391 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Aug 19 00:11:56.126409 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Aug 19 00:11:56.126427 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Aug 19 00:11:56.126445 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 19 00:11:56.126463 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Aug 19 00:11:56.126481 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Aug 19 00:11:56.126503 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Aug 19 00:11:56.126521 kernel: ACPI: Added _OSI(Module Device) Aug 19 00:11:56.126538 kernel: ACPI: Added _OSI(Processor Device) Aug 19 00:11:56.126556 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 19 00:11:56.126575 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 19 00:11:56.126592 kernel: ACPI: Interpreter enabled Aug 19 00:11:56.126611 kernel: ACPI: Using GIC for interrupt routing Aug 19 00:11:56.126628 kernel: ACPI: MCFG table detected, 1 entries Aug 19 00:11:56.126646 kernel: ACPI: CPU0 has been hot-added Aug 19 00:11:56.126667 kernel: ACPI: CPU1 has been hot-added Aug 19 00:11:56.126686 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Aug 19 00:11:56.127630 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 19 00:11:56.127863 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Aug 19 00:11:56.130154 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Aug 19 00:11:56.130363 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Aug 19 00:11:56.130544 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Aug 19 00:11:56.130577 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Aug 19 00:11:56.130597 kernel: acpiphp: Slot [1] registered Aug 19 00:11:56.130616 kernel: acpiphp: Slot [2] registered Aug 19 00:11:56.130634 kernel: acpiphp: Slot [3] registered Aug 19 00:11:56.130652 kernel: acpiphp: Slot [4] registered Aug 19 00:11:56.130670 kernel: acpiphp: Slot [5] registered Aug 19 00:11:56.130688 kernel: acpiphp: Slot [6] registered Aug 19 00:11:56.130708 kernel: acpiphp: Slot [7] registered Aug 19 00:11:56.130728 kernel: acpiphp: Slot [8] registered Aug 19 00:11:56.130747 kernel: acpiphp: Slot [9] registered Aug 19 00:11:56.130771 kernel: acpiphp: Slot [10] registered Aug 19 00:11:56.130790 kernel: acpiphp: Slot [11] registered Aug 19 00:11:56.130808 kernel: acpiphp: Slot [12] registered Aug 19 00:11:56.130826 kernel: acpiphp: Slot [13] registered Aug 19 00:11:56.130844 kernel: acpiphp: Slot [14] registered Aug 19 00:11:56.130862 kernel: acpiphp: Slot [15] registered Aug 19 00:11:56.130880 kernel: acpiphp: Slot [16] registered Aug 19 00:11:56.130898 kernel: acpiphp: Slot [17] registered Aug 19 00:11:56.130916 kernel: acpiphp: Slot [18] registered Aug 19 00:11:56.130938 kernel: acpiphp: Slot [19] registered Aug 19 00:11:56.130956 kernel: acpiphp: Slot [20] registered Aug 19 00:11:56.131029 kernel: acpiphp: Slot [21] registered
Aug 19 00:11:56.131051 kernel: acpiphp: Slot [22] registered Aug 19 00:11:56.131071 kernel: acpiphp: Slot [23] registered Aug 19 00:11:56.131090 kernel: acpiphp: Slot [24] registered Aug 19 00:11:56.131108 kernel: acpiphp: Slot [25] registered Aug 19 00:11:56.131127 kernel: acpiphp: Slot [26] registered Aug 19 00:11:56.131146 kernel: acpiphp: Slot [27] registered Aug 19 00:11:56.131165 kernel: acpiphp: Slot [28] registered Aug 19 00:11:56.131192 kernel: acpiphp: Slot [29] registered Aug 19 00:11:56.131210 kernel: acpiphp: Slot [30] registered Aug 19 00:11:56.131230 kernel: acpiphp: Slot [31] registered Aug 19 00:11:56.131249 kernel: PCI host bridge to bus 0000:00 Aug 19 00:11:56.131621 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Aug 19 00:11:56.131867 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Aug 19 00:11:56.132083 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Aug 19 00:11:56.132252 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Aug 19 00:11:56.132502 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Aug 19 00:11:56.132727 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Aug 19 00:11:56.132919 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Aug 19 00:11:56.133149 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Aug 19 00:11:56.133338 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Aug 19 00:11:56.133524 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 19 00:11:56.133728 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Aug 19 00:11:56.133917 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Aug 19 00:11:56.134349 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Aug 19 00:11:56.134539 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Aug 19 00:11:56.135602 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 19 00:11:56.135811 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned Aug 19 00:11:56.136078 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned Aug 19 00:11:56.136286 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned Aug 19 00:11:56.136501 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned Aug 19 00:11:56.136697 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned Aug 19 00:11:56.136870 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Aug 19 00:11:56.139114 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Aug 19 00:11:56.139302 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Aug 19 00:11:56.139328 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Aug 19 00:11:56.139357 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Aug 19 00:11:56.139410 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Aug 19 00:11:56.139431 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Aug 19 00:11:56.139450 kernel: iommu: Default domain type: Translated Aug 19 00:11:56.139469 kernel: iommu: DMA domain TLB invalidation policy: strict mode Aug 19 00:11:56.139488 kernel: efivars: Registered efivars operations Aug 19 00:11:56.139506 kernel: vgaarb: loaded Aug 19 00:11:56.139525 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 19 00:11:56.139544 kernel: VFS: Disk quotas dquot_6.6.0 Aug 19 00:11:56.139569 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 19 00:11:56.139588 kernel: pnp: PnP ACPI init Aug 19 00:11:56.139828 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Aug 19 00:11:56.139860 kernel: pnp: PnP ACPI: found 1 devices Aug 19 00:11:56.139879 kernel: NET: Registered PF_INET protocol family Aug 19 00:11:56.139897 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 19 00:11:56.139916 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 19 00:11:56.139935 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 19 00:11:56.139960 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 19 00:11:56.140015 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 19 00:11:56.140034 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 19 00:11:56.140053 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 19 00:11:56.140071 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 19 00:11:56.140089 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 19 00:11:56.140107 kernel: PCI: CLS 0 bytes, default 64 Aug 19 00:11:56.140125 kernel: kvm [1]: HYP mode not available Aug 19 00:11:56.140144 kernel: Initialise system trusted keyrings Aug 19 00:11:56.140169 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 19 00:11:56.140187 kernel: Key type asymmetric registered Aug 19 00:11:56.140205 kernel: Asymmetric key parser 'x509' registered Aug 19 00:11:56.140224 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Aug 19 00:11:56.140242 kernel: io scheduler mq-deadline registered Aug 19 00:11:56.140261 kernel: io scheduler kyber registered Aug 19 00:11:56.140280 kernel: io scheduler bfq registered Aug 19 00:11:56.140530 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Aug 19 00:11:56.140568 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Aug 19 00:11:56.140587 kernel: ACPI: button: Power Button [PWRB] Aug 19 00:11:56.140606 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Aug 19 00:11:56.140624 kernel: ACPI: button: Sleep Button [SLPB] Aug 19 00:11:56.140642 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 19 00:11:56.140662 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Aug 19 00:11:56.140882 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Aug 19 00:11:56.140912 kernel: printk: legacy console [ttyS0] disabled Aug 19 00:11:56.140931 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Aug 19 00:11:56.140956 kernel: printk: legacy console [ttyS0] enabled Aug 19 00:11:56.141000 kernel: printk: legacy bootconsole [uart0] disabled Aug 19 00:11:56.141022 kernel: thunder_xcv, ver 1.0 Aug 19 00:11:56.141043 kernel: thunder_bgx, ver 1.0 Aug 19 00:11:56.141061 kernel: nicpf, ver 1.0 Aug 19 00:11:56.141079 kernel: nicvf, ver 1.0 Aug 19 00:11:56.141291 kernel: rtc-efi rtc-efi.0: registered as rtc0 Aug 19 00:11:56.141468 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-19T00:11:55 UTC (1755562315) Aug 19 00:11:56.141500 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 19 00:11:56.141519 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Aug 19 00:11:56.141537 kernel: NET: Registered PF_INET6 protocol family Aug 19 00:11:56.141555 kernel: watchdog: NMI not fully supported Aug 19 00:11:56.141573 kernel: watchdog: Hard watchdog permanently disabled Aug 19 00:11:56.141591 kernel: Segment Routing with IPv6 Aug 19 00:11:56.141609 kernel: In-situ OAM (IOAM) with IPv6 Aug 19 00:11:56.141627 kernel: NET: Registered PF_PACKET protocol family Aug 19 00:11:56.141645 kernel: Key type dns_resolver registered Aug 19 00:11:56.141666 kernel: registered taskstats version 1 Aug 19 00:11:56.141684 kernel: Loading compiled-in X.509 certificates Aug 19 00:11:56.141703 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.41-flatcar: becc5a61d1c5dcbcd174f4649c64b863031dbaa8' Aug 19 00:11:56.141721 kernel: Demotion targets for Node 0: null Aug 19 00:11:56.141739 kernel: Key type .fscrypt registered Aug 19 00:11:56.141756 kernel: Key type fscrypt-provisioning registered Aug 19 00:11:56.141774 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 19 00:11:56.141792 kernel: ima: Allocated hash algorithm: sha1 Aug 19 00:11:56.141810 kernel: ima: No architecture policies found Aug 19 00:11:56.141832 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Aug 19 00:11:56.141850 kernel: clk: Disabling unused clocks Aug 19 00:11:56.141868 kernel: PM: genpd: Disabling unused power domains Aug 19 00:11:56.141886 kernel: Warning: unable to open an initial console. Aug 19 00:11:56.141904 kernel: Freeing unused kernel memory: 38912K Aug 19 00:11:56.141922 kernel: Run /init as init process Aug 19 00:11:56.141940 kernel: with arguments: Aug 19 00:11:56.141958 kernel: /init Aug 19 00:11:56.142024 kernel: with environment: Aug 19 00:11:56.142046 kernel: HOME=/ Aug 19 00:11:56.142071 kernel: TERM=linux Aug 19 00:11:56.142089 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 19 00:11:56.142109 systemd[1]: Successfully made /usr/ read-only. Aug 19 00:11:56.142133 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 19 00:11:56.142153 systemd[1]: Detected virtualization amazon. Aug 19 00:11:56.142173 systemd[1]: Detected architecture arm64. Aug 19 00:11:56.142195 systemd[1]: Running in initrd. Aug 19 00:11:56.142219 systemd[1]: No hostname configured, using default hostname. Aug 19 00:11:56.142243 systemd[1]: Hostname set to . Aug 19 00:11:56.142264 systemd[1]: Initializing machine ID from VM UUID. Aug 19 00:11:56.142283 systemd[1]: Queued start job for default target initrd.target. Aug 19 00:11:56.142302 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 00:11:56.142322 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 00:11:56.142342 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 19 00:11:56.142362 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 19 00:11:56.142386 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 19 00:11:56.142407 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 19 00:11:56.142428 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 19 00:11:56.142448 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 19 00:11:56.142467 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 00:11:56.142487 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 19 00:11:56.142506 systemd[1]: Reached target paths.target - Path Units. Aug 19 00:11:56.142529 systemd[1]: Reached target slices.target - Slice Units. Aug 19 00:11:56.142549 systemd[1]: Reached target swap.target - Swaps. Aug 19 00:11:56.142568 systemd[1]: Reached target timers.target - Timer Units. Aug 19 00:11:56.142588 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 19 00:11:56.142607 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 19 00:11:56.142627 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 19 00:11:56.142646 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 19 00:11:56.142665 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 19 00:11:56.142688 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 19 00:11:56.142708 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 00:11:56.142727 systemd[1]: Reached target sockets.target - Socket Units. Aug 19 00:11:56.142747 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 19 00:11:56.142767 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 19 00:11:56.142786 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 19 00:11:56.142806 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 19 00:11:56.142825 systemd[1]: Starting systemd-fsck-usr.service... Aug 19 00:11:56.142844 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 19 00:11:56.142867 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 19 00:11:56.142887 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 00:11:56.142907 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 19 00:11:56.142927 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 00:11:56.142951 systemd[1]: Finished systemd-fsck-usr.service. Aug 19 00:11:56.143989 systemd-journald[257]: Collecting audit messages is disabled. Aug 19 00:11:56.144043 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 19 00:11:56.144063 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 19 00:11:56.144091 kernel: Bridge firewalling registered Aug 19 00:11:56.144126 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 19 00:11:56.144147 systemd-journald[257]: Journal started Aug 19 00:11:56.144185 systemd-journald[257]: Runtime Journal (/run/log/journal/ec2116da00e41d421672ce4cbc97259e) is 8M, max 75.3M, 67.3M free. 
Aug 19 00:11:56.091045 systemd-modules-load[259]: Inserted module 'overlay' Aug 19 00:11:56.139444 systemd-modules-load[259]: Inserted module 'br_netfilter' Aug 19 00:11:56.164000 systemd[1]: Started systemd-journald.service - Journal Service. Aug 19 00:11:56.164948 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 19 00:11:56.176246 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 00:11:56.182239 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 19 00:11:56.198362 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 19 00:11:56.212110 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 00:11:56.226954 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 19 00:11:56.230238 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 00:11:56.233161 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 00:11:56.256440 systemd-tmpfiles[278]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 19 00:11:56.268646 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 00:11:56.275926 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 19 00:11:56.297405 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 19 00:11:56.304860 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 19 00:11:56.346009 dracut-cmdline[303]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a868ccde263e96e0a18737fdbf04ca04bbf30dfe23963f1ae3994966e8fc9468 Aug 19 00:11:56.389583 systemd-resolved[295]: Positive Trust Anchors: Aug 19 00:11:56.391890 systemd-resolved[295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 19 00:11:56.391960 systemd-resolved[295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 19 00:11:56.524008 kernel: SCSI subsystem initialized Aug 19 00:11:56.532004 kernel: Loading iSCSI transport class v2.0-870. Aug 19 00:11:56.545031 kernel: iscsi: registered transport (tcp) Aug 19 00:11:56.567106 kernel: iscsi: registered transport (qla4xxx) Aug 19 00:11:56.567191 kernel: QLogic iSCSI HBA Driver Aug 19 00:11:56.602161 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Aug 19 00:11:56.635591 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 00:11:56.645852 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 19 00:11:56.653086 kernel: random: crng init done Aug 19 00:11:56.650653 systemd-resolved[295]: Defaulting to hostname 'linux'. Aug 19 00:11:56.655709 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 19 00:11:56.662296 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 19 00:11:56.745082 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 19 00:11:56.753238 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 19 00:11:56.843015 kernel: raid6: neonx8 gen() 6549 MB/s Aug 19 00:11:56.860016 kernel: raid6: neonx4 gen() 6580 MB/s Aug 19 00:11:56.877014 kernel: raid6: neonx2 gen() 5452 MB/s Aug 19 00:11:56.894019 kernel: raid6: neonx1 gen() 3943 MB/s Aug 19 00:11:56.911015 kernel: raid6: int64x8 gen() 3666 MB/s Aug 19 00:11:56.928017 kernel: raid6: int64x4 gen() 3719 MB/s Aug 19 00:11:56.945017 kernel: raid6: int64x2 gen() 3587 MB/s Aug 19 00:11:56.963016 kernel: raid6: int64x1 gen() 2730 MB/s Aug 19 00:11:56.963073 kernel: raid6: using algorithm neonx4 gen() 6580 MB/s Aug 19 00:11:56.981008 kernel: raid6: .... xor() 4648 MB/s, rmw enabled Aug 19 00:11:56.981069 kernel: raid6: using neon recovery algorithm Aug 19 00:11:56.990819 kernel: xor: measuring software checksum speed Aug 19 00:11:56.990887 kernel: 8regs : 12972 MB/sec Aug 19 00:11:56.992002 kernel: 32regs : 12045 MB/sec Aug 19 00:11:56.994279 kernel: arm64_neon : 8389 MB/sec Aug 19 00:11:56.994322 kernel: xor: using function: 8regs (12972 MB/sec) Aug 19 00:11:57.087023 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 19 00:11:57.098912 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 19 00:11:57.105941 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 00:11:57.152504 systemd-udevd[510]: Using default interface naming scheme 'v255'. Aug 19 00:11:57.162662 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 00:11:57.178700 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 19 00:11:57.220785 dracut-pre-trigger[521]: rd.md=0: removing MD RAID activation Aug 19 00:11:57.266572 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 00:11:57.274931 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 19 00:11:57.408747 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 00:11:57.418333 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 19 00:11:57.584917 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Aug 19 00:11:57.585018 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Aug 19 00:11:57.593004 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Aug 19 00:11:57.597332 kernel: ena 0000:00:05.0: ENA device version: 0.10 Aug 19 00:11:57.597682 kernel: nvme nvme0: pci function 0000:00:04.0 Aug 19 00:11:57.597917 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Aug 19 00:11:57.601593 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 00:11:57.601727 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Aug 19 00:11:57.606663 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 00:11:57.614169 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 00:11:57.626470 kernel: nvme nvme0: 2/0/0 default/read/poll queues Aug 19 00:11:57.626754 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:1d:5c:e1:e9:59 Aug 19 00:11:57.617740 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 19 00:11:57.638120 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 19 00:11:57.638174 kernel: GPT:9289727 != 16777215 Aug 19 00:11:57.638198 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 19 00:11:57.638222 kernel: GPT:9289727 != 16777215 Aug 19 00:11:57.640040 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 19 00:11:57.640079 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 19 00:11:57.646266 (udev-worker)[557]: Network interface NamePolicy= disabled on kernel command line. Aug 19 00:11:57.675085 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 00:11:57.704013 kernel: nvme nvme0: using unchecked data buffer Aug 19 00:11:57.878016 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Aug 19 00:11:57.906375 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Aug 19 00:11:57.917025 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 19 00:11:57.941715 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Aug 19 00:11:57.959544 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Aug 19 00:11:57.959727 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Aug 19 00:11:57.960657 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 00:11:57.961424 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 00:11:57.962243 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 19 00:11:57.970185 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 19 00:11:58.008071 disk-uuid[690]: Primary Header is updated. Aug 19 00:11:58.008071 disk-uuid[690]: Secondary Entries is updated. Aug 19 00:11:58.008071 disk-uuid[690]: Secondary Header is updated. Aug 19 00:11:57.977066 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 19 00:11:58.029033 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 19 00:11:58.032352 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 19 00:11:58.044004 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 19 00:11:59.064193 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 19 00:11:59.064261 disk-uuid[691]: The operation has completed successfully. Aug 19 00:11:59.245846 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 19 00:11:59.246109 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 19 00:11:59.331244 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 19 00:11:59.363725 sh[956]: Success Aug 19 00:11:59.390312 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Aug 19 00:11:59.390384 kernel: device-mapper: uevent: version 1.0.3 Aug 19 00:11:59.394027 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 19 00:11:59.406009 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Aug 19 00:11:59.514346 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 19 00:11:59.526152 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 19 00:11:59.534079 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 19 00:11:59.581769 kernel: BTRFS: device fsid 1e492084-d287-4a43-8dc6-ad086a072625 devid 1 transid 45 /dev/mapper/usr (254:0) scanned by mount (991) Aug 19 00:11:59.581840 kernel: BTRFS info (device dm-0): first mount of filesystem 1e492084-d287-4a43-8dc6-ad086a072625 Aug 19 00:11:59.581867 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 19 00:11:59.584647 kernel: BTRFS info (device dm-0): using free-space-tree Aug 19 00:11:59.739543 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 19 00:11:59.746538 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 19 00:11:59.751484 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 19 00:11:59.756900 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 19 00:11:59.765215 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 19 00:11:59.816034 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1024) Aug 19 00:11:59.820503 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33 Aug 19 00:11:59.820577 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Aug 19 00:11:59.822749 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Aug 19 00:11:59.847043 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33 Aug 19 00:11:59.848366 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 19 00:11:59.855510 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 19 00:11:59.955515 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 19 00:11:59.964715 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 19 00:12:00.040417 systemd-networkd[1160]: lo: Link UP Aug 19 00:12:00.040901 systemd-networkd[1160]: lo: Gained carrier Aug 19 00:12:00.045474 systemd-networkd[1160]: Enumeration completed Aug 19 00:12:00.046337 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 19 00:12:00.047112 systemd-networkd[1160]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 00:12:00.047120 systemd-networkd[1160]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 19 00:12:00.053349 systemd[1]: Reached target network.target - Network. Aug 19 00:12:00.075041 systemd-networkd[1160]: eth0: Link UP Aug 19 00:12:00.075054 systemd-networkd[1160]: eth0: Gained carrier Aug 19 00:12:00.075077 systemd-networkd[1160]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 19 00:12:00.091061 systemd-networkd[1160]: eth0: DHCPv4 address 172.31.30.10/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 19 00:12:00.493594 ignition[1085]: Ignition 2.21.0 Aug 19 00:12:00.493619 ignition[1085]: Stage: fetch-offline Aug 19 00:12:00.495794 ignition[1085]: no configs at "/usr/lib/ignition/base.d" Aug 19 00:12:00.495821 ignition[1085]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 19 00:12:00.502010 ignition[1085]: Ignition finished successfully Aug 19 00:12:00.506274 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 00:12:00.512069 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 19 00:12:00.557780 ignition[1172]: Ignition 2.21.0 Aug 19 00:12:00.557819 ignition[1172]: Stage: fetch Aug 19 00:12:00.559631 ignition[1172]: no configs at "/usr/lib/ignition/base.d" Aug 19 00:12:00.559672 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 19 00:12:00.561121 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 19 00:12:00.579343 ignition[1172]: PUT result: OK Aug 19 00:12:00.582877 ignition[1172]: parsed url from cmdline: "" Aug 19 00:12:00.582902 ignition[1172]: no config URL provided Aug 19 00:12:00.582921 ignition[1172]: reading system config file "/usr/lib/ignition/user.ign" Aug 19 00:12:00.582952 ignition[1172]: no config at "/usr/lib/ignition/user.ign" Aug 19 00:12:00.583033 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 19 00:12:00.588083 ignition[1172]: PUT result: OK Aug 19 00:12:00.588321 ignition[1172]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Aug 19 00:12:00.593293 ignition[1172]: GET result: OK Aug 19 00:12:00.593546 ignition[1172]: parsing config with SHA512: 45ad1aae6e4b49027a3222c476346612754bfece83c849d5d9b6f43f9e78641c4647f930ec63253124a6c4e5b768c4329c6ce69468e5fba0d63a363885643287 Aug 19 00:12:00.605884 unknown[1172]: fetched base config from "system" Aug 19 00:12:00.605914 unknown[1172]: fetched base config from "system" Aug 19 00:12:00.605928 unknown[1172]: fetched user config from "aws" Aug 19 00:12:00.608294 ignition[1172]: fetch: fetch complete Aug 19 00:12:00.608310 ignition[1172]: fetch: fetch passed Aug 19 00:12:00.616793 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 19 00:12:00.608420 ignition[1172]: Ignition finished successfully Aug 19 00:12:00.627673 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 19 00:12:00.676080 ignition[1179]: Ignition 2.21.0 Aug 19 00:12:00.676114 ignition[1179]: Stage: kargs Aug 19 00:12:00.678038 ignition[1179]: no configs at "/usr/lib/ignition/base.d" Aug 19 00:12:00.678070 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 19 00:12:00.678373 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 19 00:12:00.685835 ignition[1179]: PUT result: OK Aug 19 00:12:00.697198 ignition[1179]: kargs: kargs passed Aug 19 00:12:00.697344 ignition[1179]: Ignition finished successfully Aug 19 00:12:00.702957 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 19 00:12:00.709548 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Aug 19 00:12:00.769894 ignition[1186]: Ignition 2.21.0 Aug 19 00:12:00.770460 ignition[1186]: Stage: disks Aug 19 00:12:00.771453 ignition[1186]: no configs at "/usr/lib/ignition/base.d" Aug 19 00:12:00.771479 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 19 00:12:00.771639 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 19 00:12:00.782146 ignition[1186]: PUT result: OK Aug 19 00:12:00.789680 ignition[1186]: disks: disks passed Aug 19 00:12:00.790098 ignition[1186]: Ignition finished successfully Aug 19 00:12:00.801149 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 19 00:12:00.806072 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 19 00:12:00.809284 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 19 00:12:00.828506 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 19 00:12:00.831076 systemd[1]: Reached target sysinit.target - System Initialization. Aug 19 00:12:00.834558 systemd[1]: Reached target basic.target - Basic System. Aug 19 00:12:00.841245 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 19 00:12:00.907341 systemd-fsck[1194]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 19 00:12:00.914883 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 19 00:12:00.924655 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 19 00:12:01.065054 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 593a9299-85f8-44ab-a00f-cf95b7233713 r/w with ordered data mode. Quota mode: none. Aug 19 00:12:01.065508 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 19 00:12:01.071739 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 19 00:12:01.078943 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 19 00:12:01.086179 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 19 00:12:01.088818 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 19 00:12:01.088920 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 19 00:12:01.089001 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 00:12:01.122946 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 19 00:12:01.130056 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 19 00:12:01.149998 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1213) Aug 19 00:12:01.155758 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33 Aug 19 00:12:01.155838 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Aug 19 00:12:01.155867 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Aug 19 00:12:01.168412 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 19 00:12:01.462718 initrd-setup-root[1237]: cut: /sysroot/etc/passwd: No such file or directory Aug 19 00:12:01.485023 initrd-setup-root[1244]: cut: /sysroot/etc/group: No such file or directory Aug 19 00:12:01.495459 initrd-setup-root[1251]: cut: /sysroot/etc/shadow: No such file or directory Aug 19 00:12:01.504137 initrd-setup-root[1258]: cut: /sysroot/etc/gshadow: No such file or directory Aug 19 00:12:01.579180 systemd-networkd[1160]: eth0: Gained IPv6LL Aug 19 00:12:01.846065 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 19 00:12:01.852485 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 19 00:12:01.862764 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 19 00:12:01.891520 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 19 00:12:01.894555 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33 Aug 19 00:12:01.930350 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 19 00:12:01.945811 ignition[1326]: INFO : Ignition 2.21.0 Aug 19 00:12:01.947957 ignition[1326]: INFO : Stage: mount Aug 19 00:12:01.950360 ignition[1326]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 00:12:01.954151 ignition[1326]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 19 00:12:01.954151 ignition[1326]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 19 00:12:01.962590 ignition[1326]: INFO : PUT result: OK Aug 19 00:12:01.968532 ignition[1326]: INFO : mount: mount passed Aug 19 00:12:01.970482 ignition[1326]: INFO : Ignition finished successfully Aug 19 00:12:01.973482 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 19 00:12:01.981171 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 19 00:12:02.070449 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 19 00:12:02.116998 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1337) Aug 19 00:12:02.122513 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33 Aug 19 00:12:02.122581 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Aug 19 00:12:02.122608 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Aug 19 00:12:02.135141 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 19 00:12:02.203600 ignition[1354]: INFO : Ignition 2.21.0 Aug 19 00:12:02.203600 ignition[1354]: INFO : Stage: files Aug 19 00:12:02.208815 ignition[1354]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 00:12:02.208815 ignition[1354]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 19 00:12:02.208815 ignition[1354]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 19 00:12:02.216815 ignition[1354]: INFO : PUT result: OK Aug 19 00:12:02.226840 ignition[1354]: DEBUG : files: compiled without relabeling support, skipping Aug 19 00:12:02.230840 ignition[1354]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 19 00:12:02.230840 ignition[1354]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 19 00:12:02.271745 ignition[1354]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 19 00:12:02.275112 ignition[1354]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 19 00:12:02.275112 ignition[1354]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 19 00:12:02.273191 unknown[1354]: wrote ssh authorized keys file for user: core Aug 19 00:12:02.285333 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 19 00:12:02.285333 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Aug 19 00:12:02.465668 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 19 00:12:03.058595 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 19 00:12:03.063051 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 19 00:12:03.063051 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 19 00:12:03.070734 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 19 00:12:03.070734 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 19 00:12:03.070734 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 00:12:03.082375 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 00:12:03.082375 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 00:12:03.090068 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 00:12:03.098757 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 19 00:12:03.102673 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 19 00:12:03.102673 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 19 00:12:03.113274 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 19 00:12:03.113274 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 19 00:12:03.113274 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Aug 19 00:12:03.905711 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 19 00:12:06.169595 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 19 00:12:06.174760 ignition[1354]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 19 00:12:06.178519 ignition[1354]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 00:12:06.187344 ignition[1354]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 00:12:06.187344 ignition[1354]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 19 00:12:06.187344 ignition[1354]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 19 00:12:06.198022 ignition[1354]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 19 00:12:06.198022 ignition[1354]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 19 00:12:06.198022 ignition[1354]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 19 00:12:06.198022 ignition[1354]: INFO : files: files passed Aug 19 00:12:06.198022 ignition[1354]: INFO : Ignition finished successfully Aug 19 00:12:06.204203 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 19 00:12:06.216073 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 19 00:12:06.223910 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 19 00:12:06.250845 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 19 00:12:06.257042 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 19 00:12:06.274779 initrd-setup-root-after-ignition[1384]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 00:12:06.278745 initrd-setup-root-after-ignition[1388]: grep: Aug 19 00:12:06.281167 initrd-setup-root-after-ignition[1388]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 00:12:06.284460 initrd-setup-root-after-ignition[1384]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 19 00:12:06.290806 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 00:12:06.295401 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 19 00:12:06.306411 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 19 00:12:06.400665 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 19 00:12:06.400860 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 19 00:12:06.407308 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 19 00:12:06.410810 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 19 00:12:06.413315 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 19 00:12:06.420326 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 19 00:12:06.468031 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 00:12:06.474411 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 19 00:12:06.510838 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 19 00:12:06.513422 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 00:12:06.518579 systemd[1]: Stopped target timers.target - Timer Units. Aug 19 00:12:06.521353 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 19 00:12:06.521928 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 00:12:06.534037 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 19 00:12:06.536795 systemd[1]: Stopped target basic.target - Basic System. Aug 19 00:12:06.543381 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 19 00:12:06.549077 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 00:12:06.552313 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 19 00:12:06.562989 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 19 00:12:06.568136 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 19 00:12:06.571695 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 00:12:06.577095 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 19 00:12:06.584640 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 19 00:12:06.587600 systemd[1]: Stopped target swap.target - Swaps. Aug 19 00:12:06.593504 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 19 00:12:06.593784 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 19 00:12:06.601095 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 19 00:12:06.604197 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 00:12:06.612399 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 19 00:12:06.614671 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 00:12:06.618081 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 19 00:12:06.618484 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 19 00:12:06.628573 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 19 00:12:06.629178 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 00:12:06.638127 systemd[1]: ignition-files.service: Deactivated successfully. Aug 19 00:12:06.638699 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Aug 19 00:12:06.648403 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 19 00:12:06.651079 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 19 00:12:06.651383 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 00:12:06.685178 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 19 00:12:06.694201 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 19 00:12:06.700880 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 00:12:06.706712 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 19 00:12:06.707494 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 00:12:06.728643 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 19 00:12:06.729328 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 19 00:12:06.749715 ignition[1408]: INFO : Ignition 2.21.0 Aug 19 00:12:06.749715 ignition[1408]: INFO : Stage: umount Aug 19 00:12:06.753834 ignition[1408]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 00:12:06.753834 ignition[1408]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 19 00:12:06.753834 ignition[1408]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 19 00:12:06.763731 ignition[1408]: INFO : PUT result: OK Aug 19 00:12:06.770617 ignition[1408]: INFO : umount: umount passed Aug 19 00:12:06.772786 ignition[1408]: INFO : Ignition finished successfully Aug 19 00:12:06.777026 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 19 00:12:06.780524 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 19 00:12:06.783156 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 19 00:12:06.790479 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 19 00:12:06.790850 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 19 00:12:06.797787 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 19 00:12:06.797913 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 19 00:12:06.798271 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 19 00:12:06.798362 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 19 00:12:06.805198 systemd[1]: Stopped target network.target - Network. Aug 19 00:12:06.808844 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 19 00:12:06.809028 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 00:12:06.809283 systemd[1]: Stopped target paths.target - Path Units. Aug 19 00:12:06.809562 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 19 00:12:06.813531 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 00:12:06.818374 systemd[1]: Stopped target slices.target - Slice Units. Aug 19 00:12:06.820762 systemd[1]: Stopped target sockets.target - Socket Units. Aug 19 00:12:06.827670 systemd[1]: iscsid.socket: Deactivated successfully. Aug 19 00:12:06.827759 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 19 00:12:06.832284 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 19 00:12:06.832374 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 19 00:12:06.835075 systemd[1]: ignition-setup.service: Deactivated successfully. 
Aug 19 00:12:06.835187 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 19 00:12:06.843709 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 19 00:12:06.843814 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 19 00:12:06.849043 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 19 00:12:06.857348 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 19 00:12:06.861586 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 19 00:12:06.861770 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 19 00:12:06.866213 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 19 00:12:06.866415 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 19 00:12:06.882479 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 19 00:12:06.884817 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 19 00:12:06.917743 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 19 00:12:06.920699 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 19 00:12:06.921528 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 19 00:12:06.935567 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 19 00:12:06.938909 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 19 00:12:06.942897 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 19 00:12:06.943200 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 19 00:12:06.957237 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 19 00:12:06.966604 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 19 00:12:06.966732 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 19 00:12:06.969862 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 19 00:12:06.969993 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 19 00:12:06.979145 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 19 00:12:06.979249 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 19 00:12:06.982901 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 19 00:12:06.988367 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 00:12:06.994134 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 00:12:07.003862 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 19 00:12:07.004171 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 19 00:12:07.016940 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 19 00:12:07.017535 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 00:12:07.030471 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 19 00:12:07.030621 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 19 00:12:07.041131 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 19 00:12:07.041213 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 00:12:07.044897 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Aug 19 00:12:07.045034 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 19 00:12:07.057472 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 19 00:12:07.057587 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 19 00:12:07.068218 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 19 00:12:07.068347 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 19 00:12:07.080296 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 19 00:12:07.084123 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 19 00:12:07.084284 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 00:12:07.094221 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 19 00:12:07.100820 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 00:12:07.107304 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 00:12:07.107416 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 00:12:07.121785 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Aug 19 00:12:07.122327 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 19 00:12:07.122425 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 19 00:12:07.123413 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 19 00:12:07.123685 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 19 00:12:07.144119 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 19 00:12:07.145440 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 19 00:12:07.149908 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 19 00:12:07.155907 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 19 00:12:07.207605 systemd[1]: Switching root. Aug 19 00:12:07.268072 systemd-journald[257]: Journal stopped Aug 19 00:12:09.882609 systemd-journald[257]: Received SIGTERM from PID 1 (systemd). Aug 19 00:12:09.882767 kernel: SELinux: policy capability network_peer_controls=1 Aug 19 00:12:09.882820 kernel: SELinux: policy capability open_perms=1 Aug 19 00:12:09.882851 kernel: SELinux: policy capability extended_socket_class=1 Aug 19 00:12:09.882880 kernel: SELinux: policy capability always_check_network=0 Aug 19 00:12:09.882911 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 19 00:12:09.882939 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 19 00:12:09.883000 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 19 00:12:09.883036 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 19 00:12:09.883077 kernel: SELinux: policy capability userspace_initial_context=0 Aug 19 00:12:09.883112 kernel: audit: type=1403 audit(1755562327.691:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 19 00:12:09.883156 systemd[1]: Successfully loaded SELinux policy in 96.188ms. Aug 19 00:12:09.883214 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.685ms. 
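Annotation: the kernel lines above enumerate the SELinux policy capabilities compiled into the policy that systemd loads after switching root. On a running SELinux-enabled system the same flags are exposed as files under /sys/fs/selinux/policy_capabilities (path assumed here; it requires selinuxfs to be mounted), so a short sketch can reproduce that list.

# Sketch: read back the SELinux policy capabilities the kernel printed at policy load.
# Assumes selinuxfs is mounted at /sys/fs/selinux (true on SELinux-enabled systems).
import os

capdir = "/sys/fs/selinux/policy_capabilities"
if os.path.isdir(capdir):
    for name in sorted(os.listdir(capdir)):
        with open(os.path.join(capdir, name)) as f:
            print(f"SELinux: policy capability {name}={f.read().strip()}")
else:
    print("selinuxfs not mounted or SELinux disabled")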
Aug 19 00:12:09.883261 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 19 00:12:09.883292 systemd[1]: Detected virtualization amazon. Aug 19 00:12:09.883324 systemd[1]: Detected architecture arm64. Aug 19 00:12:09.883356 systemd[1]: Detected first boot. Aug 19 00:12:09.883388 systemd[1]: Initializing machine ID from VM UUID. Aug 19 00:12:09.883421 zram_generator::config[1452]: No configuration found. Aug 19 00:12:09.883462 kernel: NET: Registered PF_VSOCK protocol family Aug 19 00:12:09.883493 systemd[1]: Populated /etc with preset unit settings. Aug 19 00:12:09.883524 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 19 00:12:09.883555 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 19 00:12:09.883588 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 19 00:12:09.883621 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 19 00:12:09.883654 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 19 00:12:09.883686 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 19 00:12:09.883728 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 19 00:12:09.883759 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 19 00:12:09.883788 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 19 00:12:09.883821 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 19 00:12:09.883852 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 19 00:12:09.883881 systemd[1]: Created slice user.slice - User and Session Slice. Aug 19 00:12:09.883911 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 00:12:09.883940 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 00:12:09.888063 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 19 00:12:09.888152 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 19 00:12:09.888212 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 19 00:12:09.888252 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 19 00:12:09.888286 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 19 00:12:09.888319 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 00:12:09.888353 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 19 00:12:09.888382 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 19 00:12:09.888415 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 19 00:12:09.888451 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 19 00:12:09.888486 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
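Annotation: systemd reports "Detected virtualization amazon", "Detected architecture arm64" and "Initializing machine ID from VM UUID" before applying presets. The same facts can be queried on the running system; a small sketch, assuming systemd's command-line tools are on PATH (systemd-detect-virt may report "amazon" or another backend depending on the instance type).

# Sketch: query the detection results systemd logs at boot.
import platform
import subprocess

print(platform.machine())                             # e.g. aarch64 on this instance
subprocess.run(["systemd-detect-virt"], check=False)  # prints the detected hypervisor
with open("/etc/machine-id") as f:                    # the machine ID committed later in the log
    print(f.read().strip())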
Aug 19 00:12:09.888517 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 00:12:09.888552 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 19 00:12:09.888589 systemd[1]: Reached target slices.target - Slice Units. Aug 19 00:12:09.888620 systemd[1]: Reached target swap.target - Swaps. Aug 19 00:12:09.888662 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 19 00:12:09.888692 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 19 00:12:09.888725 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 19 00:12:09.888762 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 19 00:12:09.888791 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 19 00:12:09.888822 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 00:12:09.888851 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 19 00:12:09.888880 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 19 00:12:09.888908 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 19 00:12:09.888936 systemd[1]: Mounting media.mount - External Media Directory... Aug 19 00:12:09.889018 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 19 00:12:09.889092 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 19 00:12:09.889130 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 19 00:12:09.889161 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 19 00:12:09.889191 systemd[1]: Reached target machines.target - Containers. Aug 19 00:12:09.889224 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 19 00:12:09.889253 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 00:12:09.889284 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 19 00:12:09.889313 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 19 00:12:09.889345 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 00:12:09.889382 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 19 00:12:09.889413 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 00:12:09.889442 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 19 00:12:09.889470 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 00:12:09.889499 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 19 00:12:09.889530 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 19 00:12:09.889561 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 19 00:12:09.889594 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 19 00:12:09.889625 systemd[1]: Stopped systemd-fsck-usr.service. 
Aug 19 00:12:09.889659 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 00:12:09.889690 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 19 00:12:09.889717 kernel: loop: module loaded Aug 19 00:12:09.889750 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 19 00:12:09.889787 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 19 00:12:09.889823 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 19 00:12:09.889854 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 19 00:12:09.889887 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 19 00:12:09.889932 systemd[1]: verity-setup.service: Deactivated successfully. Aug 19 00:12:09.896014 systemd[1]: Stopped verity-setup.service. Aug 19 00:12:09.896092 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 19 00:12:09.896131 kernel: fuse: init (API version 7.41) Aug 19 00:12:09.896194 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 19 00:12:09.896231 systemd[1]: Mounted media.mount - External Media Directory. Aug 19 00:12:09.896263 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 19 00:12:09.896293 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 19 00:12:09.896323 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 19 00:12:09.896354 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 00:12:09.896385 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 19 00:12:09.896425 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 19 00:12:09.896460 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 00:12:09.896495 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 00:12:09.896530 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 00:12:09.896563 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 00:12:09.896594 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 19 00:12:09.896629 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 19 00:12:09.896659 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 00:12:09.896689 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 00:12:09.896726 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 19 00:12:09.896760 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 19 00:12:09.896791 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 19 00:12:09.896821 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 19 00:12:09.896851 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 00:12:09.896884 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 00:12:09.896914 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Aug 19 00:12:09.896945 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 19 00:12:09.897022 kernel: ACPI: bus type drm_connector registered Aug 19 00:12:09.897057 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 19 00:12:09.897088 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 19 00:12:09.897117 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 19 00:12:09.897147 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 19 00:12:09.897182 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 19 00:12:09.897215 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 19 00:12:09.897244 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 19 00:12:09.897274 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 19 00:12:09.897305 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 19 00:12:09.897339 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 00:12:09.897369 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 19 00:12:09.897403 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 19 00:12:09.897432 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 19 00:12:09.897527 systemd-journald[1531]: Collecting audit messages is disabled. Aug 19 00:12:09.897586 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 19 00:12:09.897618 systemd-journald[1531]: Journal started Aug 19 00:12:09.897667 systemd-journald[1531]: Runtime Journal (/run/log/journal/ec2116da00e41d421672ce4cbc97259e) is 8M, max 75.3M, 67.3M free. Aug 19 00:12:09.093883 systemd[1]: Queued start job for default target multi-user.target. Aug 19 00:12:09.109177 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Aug 19 00:12:09.110094 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 19 00:12:09.903646 systemd[1]: Started systemd-journald.service - Journal Service. Aug 19 00:12:09.969483 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 19 00:12:09.974342 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 19 00:12:09.983358 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 19 00:12:09.996120 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 19 00:12:10.012415 kernel: loop0: detected capacity change from 0 to 203944 Aug 19 00:12:10.006261 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 00:12:10.052081 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 19 00:12:10.066436 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 19 00:12:10.090183 systemd-journald[1531]: Time spent on flushing to /var/log/journal/ec2116da00e41d421672ce4cbc97259e is 88.673ms for 933 entries. Aug 19 00:12:10.090183 systemd-journald[1531]: System Journal (/var/log/journal/ec2116da00e41d421672ce4cbc97259e) is 8M, max 195.6M, 187.6M free. 
Aug 19 00:12:10.201187 systemd-journald[1531]: Received client request to flush runtime journal. Aug 19 00:12:10.096673 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 19 00:12:10.115066 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 19 00:12:10.208110 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 19 00:12:10.223181 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 00:12:10.256055 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 19 00:12:10.267081 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 19 00:12:10.273281 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 19 00:12:10.297032 kernel: loop1: detected capacity change from 0 to 61256 Aug 19 00:12:10.337335 systemd-tmpfiles[1603]: ACLs are not supported, ignoring. Aug 19 00:12:10.337381 systemd-tmpfiles[1603]: ACLs are not supported, ignoring. Aug 19 00:12:10.356161 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 00:12:10.430105 kernel: loop2: detected capacity change from 0 to 100608 Aug 19 00:12:10.562024 kernel: loop3: detected capacity change from 0 to 119320 Aug 19 00:12:10.674017 kernel: loop4: detected capacity change from 0 to 203944 Aug 19 00:12:10.710025 kernel: loop5: detected capacity change from 0 to 61256 Aug 19 00:12:10.738017 kernel: loop6: detected capacity change from 0 to 100608 Aug 19 00:12:10.755018 kernel: loop7: detected capacity change from 0 to 119320 Aug 19 00:12:10.772370 (sd-merge)[1609]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Aug 19 00:12:10.774482 (sd-merge)[1609]: Merged extensions into '/usr'. Aug 19 00:12:10.784139 systemd[1]: Reload requested from client PID 1561 ('systemd-sysext') (unit systemd-sysext.service)... Aug 19 00:12:10.784185 systemd[1]: Reloading... Aug 19 00:12:10.956012 zram_generator::config[1633]: No configuration found. Aug 19 00:12:11.460016 ldconfig[1554]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 19 00:12:11.515815 systemd[1]: Reloading finished in 730 ms. Aug 19 00:12:11.541497 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 19 00:12:11.544934 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 19 00:12:11.549084 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 19 00:12:11.564595 systemd[1]: Starting ensure-sysext.service... Aug 19 00:12:11.574245 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 19 00:12:11.580596 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 00:12:11.617775 systemd[1]: Reload requested from client PID 1688 ('systemctl') (unit ensure-sysext.service)... Aug 19 00:12:11.617813 systemd[1]: Reloading... Aug 19 00:12:11.640025 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 19 00:12:11.641369 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 19 00:12:11.642171 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
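Annotation: the (sd-merge) lines above report merging the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-ami' extension images into /usr. A short sketch of how such images are discovered, assuming the usual systemd-sysext search directories (/etc/extensions, /run/extensions, /var/lib/extensions); the kubernetes.raw symlink written by Ignition earlier in this log would show up here with its /opt/extensions target.

# Sketch: enumerate sysext images the way the (sd-merge) step discovers them,
# resolving symlinked .raw images to their real locations.
import os

for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
    if not os.path.isdir(d):
        continue
    for name in sorted(os.listdir(d)):
        path = os.path.join(d, name)
        print(f"{path} -> {os.path.realpath(path)}")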
Aug 19 00:12:11.642713 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 19 00:12:11.646750 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 19 00:12:11.650727 systemd-tmpfiles[1689]: ACLs are not supported, ignoring. Aug 19 00:12:11.651087 systemd-tmpfiles[1689]: ACLs are not supported, ignoring. Aug 19 00:12:11.661827 systemd-tmpfiles[1689]: Detected autofs mount point /boot during canonicalization of boot. Aug 19 00:12:11.662071 systemd-tmpfiles[1689]: Skipping /boot Aug 19 00:12:11.695915 systemd-tmpfiles[1689]: Detected autofs mount point /boot during canonicalization of boot. Aug 19 00:12:11.697054 systemd-tmpfiles[1689]: Skipping /boot Aug 19 00:12:11.708696 systemd-udevd[1690]: Using default interface naming scheme 'v255'. Aug 19 00:12:11.869060 zram_generator::config[1723]: No configuration found. Aug 19 00:12:12.274504 (udev-worker)[1745]: Network interface NamePolicy= disabled on kernel command line. Aug 19 00:12:12.549542 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 19 00:12:12.550099 systemd[1]: Reloading finished in 931 ms. Aug 19 00:12:12.577747 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 00:12:12.604537 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 00:12:12.675477 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 00:12:12.683682 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 19 00:12:12.691369 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 19 00:12:12.701299 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 19 00:12:12.709575 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 19 00:12:12.756911 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 19 00:12:12.770665 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 19 00:12:12.777762 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 00:12:12.781739 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 00:12:12.788217 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 00:12:12.797154 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 00:12:12.799788 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 00:12:12.800068 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 00:12:12.808014 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 00:12:12.808481 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Aug 19 00:12:12.808707 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 00:12:12.821870 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 00:12:12.827125 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 19 00:12:12.830092 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 00:12:12.830340 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 00:12:12.830689 systemd[1]: Reached target time-set.target - System Time Set. Aug 19 00:12:12.852517 systemd[1]: Finished ensure-sysext.service. Aug 19 00:12:12.881101 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 19 00:12:12.891104 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 19 00:12:12.899629 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 19 00:12:12.932937 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 00:12:12.933782 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 00:12:12.970481 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 00:12:12.973033 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 00:12:12.976661 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 19 00:12:12.978807 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 19 00:12:12.982735 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 19 00:12:12.994050 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 00:12:12.994769 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 00:12:12.998362 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 19 00:12:13.005088 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 19 00:12:13.028781 augenrules[1941]: No rules Aug 19 00:12:13.033255 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 00:12:13.034618 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 00:12:13.121357 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 19 00:12:13.126109 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 19 00:12:13.159487 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Aug 19 00:12:13.165447 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 19 00:12:13.233882 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
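Annotation: the unit name dev-disk-by\x2dlabel-OEM.device above is the escaped form of the path /dev/disk/by-label/OEM: '/' becomes '-', and characters outside systemd's allowed set (such as the literal '-') become \xNN. A simplified sketch of that escaping rule, equivalent in spirit to `systemd-escape --path` for simple paths (edge cases such as empty paths are ignored).

# Sketch: reproduce systemd's path-to-unit-name escaping for the OEM device seen above.
def escape_path(path: str) -> str:
    allowed = set(
        "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_."
    )
    out = []
    for i, ch in enumerate(path.strip("/")):
        if ch == "/":
            out.append("-")                      # path separators become dashes
        elif ch in allowed and not (i == 0 and ch == "."):
            out.append(ch)                       # safe characters pass through
        else:
            out.append(f"\\x{ord(ch):02x}")      # everything else is hex-escaped
    return "".join(out)

print(escape_path("/dev/disk/by-label/OEM") + ".device")  # dev-disk-by\x2dlabel-OEM.device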
Aug 19 00:12:13.261253 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 19 00:12:13.358167 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 19 00:12:13.465187 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 00:12:13.550493 systemd-networkd[1892]: lo: Link UP Aug 19 00:12:13.551163 systemd-networkd[1892]: lo: Gained carrier Aug 19 00:12:13.554809 systemd-networkd[1892]: Enumeration completed Aug 19 00:12:13.555226 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 19 00:12:13.560879 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 19 00:12:13.564317 systemd-networkd[1892]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 00:12:13.564437 systemd-networkd[1892]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 19 00:12:13.569449 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 19 00:12:13.572838 systemd-resolved[1894]: Positive Trust Anchors: Aug 19 00:12:13.572887 systemd-resolved[1894]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 19 00:12:13.572951 systemd-resolved[1894]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 19 00:12:13.575071 systemd-networkd[1892]: eth0: Link UP Aug 19 00:12:13.575480 systemd-networkd[1892]: eth0: Gained carrier Aug 19 00:12:13.575522 systemd-networkd[1892]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 00:12:13.587150 systemd-networkd[1892]: eth0: DHCPv4 address 172.31.30.10/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 19 00:12:13.596024 systemd-resolved[1894]: Defaulting to hostname 'linux'. Aug 19 00:12:13.599523 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 19 00:12:13.602416 systemd[1]: Reached target network.target - Network. Aug 19 00:12:13.604554 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 19 00:12:13.607270 systemd[1]: Reached target sysinit.target - System Initialization. Aug 19 00:12:13.609995 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 19 00:12:13.613128 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 19 00:12:13.616485 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 19 00:12:13.619459 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 19 00:12:13.622490 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
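Annotation: systemd-networkd reports a DHCPv4 lease of 172.31.30.10/20 with gateway 172.31.16.1 on eth0. The /20 prefix is easy to sanity-check with Python's standard ipaddress module; the addresses below are the ones from the log.

# Sketch: verify the DHCPv4 lease reported by systemd-networkd above.
import ipaddress

iface = ipaddress.ip_interface("172.31.30.10/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                # 172.31.16.0/20
print(iface.network.num_addresses)  # 4096 addresses in that block
print(gateway in iface.network)     # True: the gateway sits in the same /20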
Aug 19 00:12:13.625494 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 19 00:12:13.625556 systemd[1]: Reached target paths.target - Path Units. Aug 19 00:12:13.627850 systemd[1]: Reached target timers.target - Timer Units. Aug 19 00:12:13.634065 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 19 00:12:13.639598 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 19 00:12:13.649204 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 19 00:12:13.652563 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 19 00:12:13.655822 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 19 00:12:13.668366 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 19 00:12:13.672221 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 19 00:12:13.677061 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 19 00:12:13.680564 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 19 00:12:13.684228 systemd[1]: Reached target sockets.target - Socket Units. Aug 19 00:12:13.689477 systemd[1]: Reached target basic.target - Basic System. Aug 19 00:12:13.692247 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 19 00:12:13.692494 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 19 00:12:13.694753 systemd[1]: Starting containerd.service - containerd container runtime... Aug 19 00:12:13.704338 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 19 00:12:13.717576 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 19 00:12:13.728472 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 19 00:12:13.737444 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 19 00:12:13.746456 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 19 00:12:13.749127 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 19 00:12:13.753001 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 19 00:12:13.767319 systemd[1]: Started ntpd.service - Network Time Service. Aug 19 00:12:13.775392 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 19 00:12:13.785322 systemd[1]: Starting setup-oem.service - Setup OEM... Aug 19 00:12:13.792282 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 19 00:12:13.802443 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 19 00:12:13.816231 jq[1976]: false Aug 19 00:12:13.817098 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 19 00:12:13.823501 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 19 00:12:13.828555 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Aug 19 00:12:13.835562 systemd[1]: Starting update-engine.service - Update Engine... Aug 19 00:12:13.849321 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 19 00:12:13.871073 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 19 00:12:13.874636 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 19 00:12:13.876264 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 19 00:12:13.921128 extend-filesystems[1977]: Found /dev/nvme0n1p6 Aug 19 00:12:13.957649 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 19 00:12:13.959115 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 19 00:12:13.974843 extend-filesystems[1977]: Found /dev/nvme0n1p9 Aug 19 00:12:14.009379 extend-filesystems[1977]: Checking size of /dev/nvme0n1p9 Aug 19 00:12:14.009491 systemd[1]: motdgen.service: Deactivated successfully. Aug 19 00:12:14.012862 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 19 00:12:14.021578 jq[1988]: true Aug 19 00:12:14.028161 coreos-metadata[1973]: Aug 19 00:12:14.026 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 19 00:12:14.033433 coreos-metadata[1973]: Aug 19 00:12:14.033 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Aug 19 00:12:14.036492 coreos-metadata[1973]: Aug 19 00:12:14.036 INFO Fetch successful Aug 19 00:12:14.036492 coreos-metadata[1973]: Aug 19 00:12:14.036 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Aug 19 00:12:14.037647 coreos-metadata[1973]: Aug 19 00:12:14.037 INFO Fetch successful Aug 19 00:12:14.037647 coreos-metadata[1973]: Aug 19 00:12:14.037 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Aug 19 00:12:14.039813 coreos-metadata[1973]: Aug 19 00:12:14.038 INFO Fetch successful Aug 19 00:12:14.039813 coreos-metadata[1973]: Aug 19 00:12:14.039 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Aug 19 00:12:14.041225 (ntainerd)[2014]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 19 00:12:14.047495 coreos-metadata[1973]: Aug 19 00:12:14.040 INFO Fetch successful Aug 19 00:12:14.047495 coreos-metadata[1973]: Aug 19 00:12:14.041 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Aug 19 00:12:14.049119 coreos-metadata[1973]: Aug 19 00:12:14.048 INFO Fetch failed with 404: resource not found Aug 19 00:12:14.049119 coreos-metadata[1973]: Aug 19 00:12:14.048 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Aug 19 00:12:14.059158 coreos-metadata[1973]: Aug 19 00:12:14.056 INFO Fetch successful Aug 19 00:12:14.059158 coreos-metadata[1973]: Aug 19 00:12:14.056 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Aug 19 00:12:14.066924 coreos-metadata[1973]: Aug 19 00:12:14.065 INFO Fetch successful Aug 19 00:12:14.066924 coreos-metadata[1973]: Aug 19 00:12:14.065 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Aug 19 00:12:14.068400 coreos-metadata[1973]: Aug 19 00:12:14.068 INFO Fetch successful Aug 19 00:12:14.068400 coreos-metadata[1973]: Aug 19 00:12:14.068 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: 
Attempt #1 Aug 19 00:12:14.079645 coreos-metadata[1973]: Aug 19 00:12:14.074 INFO Fetch successful Aug 19 00:12:14.079645 coreos-metadata[1973]: Aug 19 00:12:14.074 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Aug 19 00:12:14.075674 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 19 00:12:14.075322 dbus-daemon[1974]: [system] SELinux support is enabled Aug 19 00:12:14.085295 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 19 00:12:14.086090 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 19 00:12:14.095265 coreos-metadata[1973]: Aug 19 00:12:14.092 INFO Fetch successful Aug 19 00:12:14.089528 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 19 00:12:14.089568 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 19 00:12:14.102983 extend-filesystems[1977]: Resized partition /dev/nvme0n1p9 Aug 19 00:12:14.109830 tar[2006]: linux-arm64/helm Aug 19 00:12:14.118770 extend-filesystems[2026]: resize2fs 1.47.2 (1-Jan-2025) Aug 19 00:12:14.140264 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Aug 19 00:12:14.150224 dbus-daemon[1974]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1892 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 19 00:12:14.168461 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 19 00:12:14.204007 jq[2019]: true Aug 19 00:12:14.243159 ntpd[1979]: ntpd 4.2.8p17@1.4004-o Mon Aug 18 21:29:50 UTC 2025 (1): Starting Aug 19 00:12:14.246554 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: ntpd 4.2.8p17@1.4004-o Mon Aug 18 21:29:50 UTC 2025 (1): Starting Aug 19 00:12:14.246554 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 19 00:12:14.246554 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: ---------------------------------------------------- Aug 19 00:12:14.246554 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: ntp-4 is maintained by Network Time Foundation, Aug 19 00:12:14.246554 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 19 00:12:14.246554 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: corporation. Support and training for ntp-4 are Aug 19 00:12:14.246554 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: available at https://www.nwtime.org/support Aug 19 00:12:14.246554 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: ---------------------------------------------------- Aug 19 00:12:14.243224 ntpd[1979]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 19 00:12:14.243272 ntpd[1979]: ---------------------------------------------------- Aug 19 00:12:14.243298 ntpd[1979]: ntp-4 is maintained by Network Time Foundation, Aug 19 00:12:14.243315 ntpd[1979]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 19 00:12:14.243331 ntpd[1979]: corporation. 
Support and training for ntp-4 are Aug 19 00:12:14.243349 ntpd[1979]: available at https://www.nwtime.org/support Aug 19 00:12:14.243366 ntpd[1979]: ---------------------------------------------------- Aug 19 00:12:14.264654 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Aug 19 00:12:14.264758 update_engine[1987]: I20250819 00:12:14.253799 1987 main.cc:92] Flatcar Update Engine starting Aug 19 00:12:14.253523 ntpd[1979]: proto: precision = 0.108 usec (-23) Aug 19 00:12:14.265275 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: proto: precision = 0.108 usec (-23) Aug 19 00:12:14.265275 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: basedate set to 2025-08-06 Aug 19 00:12:14.265275 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: gps base set to 2025-08-10 (week 2379) Aug 19 00:12:14.256409 ntpd[1979]: basedate set to 2025-08-06 Aug 19 00:12:14.270204 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: Listen and drop on 0 v6wildcard [::]:123 Aug 19 00:12:14.256446 ntpd[1979]: gps base set to 2025-08-10 (week 2379) Aug 19 00:12:14.268919 ntpd[1979]: Listen and drop on 0 v6wildcard [::]:123 Aug 19 00:12:14.285373 extend-filesystems[2026]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Aug 19 00:12:14.285373 extend-filesystems[2026]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 19 00:12:14.285373 extend-filesystems[2026]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Aug 19 00:12:14.272860 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 19 00:12:14.300656 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 19 00:12:14.300656 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: Listen normally on 2 lo 127.0.0.1:123 Aug 19 00:12:14.300656 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: Listen normally on 3 eth0 172.31.30.10:123 Aug 19 00:12:14.300656 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: Listen normally on 4 lo [::1]:123 Aug 19 00:12:14.300656 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: bind(21) AF_INET6 fe80::41d:5cff:fee1:e959%2#123 flags 0x11 failed: Cannot assign requested address Aug 19 00:12:14.300656 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: unable to create socket on eth0 (5) for fe80::41d:5cff:fee1:e959%2#123 Aug 19 00:12:14.300656 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: failed to init interface for address fe80::41d:5cff:fee1:e959%2 Aug 19 00:12:14.300656 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: Listening on routing socket on fd #21 for interface updates Aug 19 00:12:14.273297 ntpd[1979]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 19 00:12:14.302740 extend-filesystems[1977]: Resized filesystem in /dev/nvme0n1p9 Aug 19 00:12:14.276113 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 19 00:12:14.274034 ntpd[1979]: Listen normally on 2 lo 127.0.0.1:123 Aug 19 00:12:14.297420 systemd[1]: Finished setup-oem.service - Setup OEM. Aug 19 00:12:14.274115 ntpd[1979]: Listen normally on 3 eth0 172.31.30.10:123 Aug 19 00:12:14.327306 update_engine[1987]: I20250819 00:12:14.323647 1987 update_check_scheduler.cc:74] Next update check in 5m49s Aug 19 00:12:14.313371 systemd[1]: Started update-engine.service - Update Engine. 
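Annotation: extend-filesystems/resize2fs grow the root filesystem on /dev/nvme0n1p9 online from 553472 to 1489915 4k blocks, matching the kernel's EXT4-fs messages above. The arithmetic, using the block counts from the log, works out to roughly 2.1 GiB before and 5.7 GiB after the resize.

# Sketch: turn the resize2fs/EXT4 block counts above into human-readable sizes.
BLOCK = 4096  # 4k blocks, per the kernel message

old_blocks, new_blocks = 553472, 1489915
for label, blocks in (("before", old_blocks), ("after", new_blocks)):
    print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
# before: ~2.11 GiB, after: ~5.68 GiB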
Aug 19 00:12:14.274177 ntpd[1979]: Listen normally on 4 lo [::1]:123 Aug 19 00:12:14.274256 ntpd[1979]: bind(21) AF_INET6 fe80::41d:5cff:fee1:e959%2#123 flags 0x11 failed: Cannot assign requested address Aug 19 00:12:14.274301 ntpd[1979]: unable to create socket on eth0 (5) for fe80::41d:5cff:fee1:e959%2#123 Aug 19 00:12:14.274327 ntpd[1979]: failed to init interface for address fe80::41d:5cff:fee1:e959%2 Aug 19 00:12:14.274381 ntpd[1979]: Listening on routing socket on fd #21 for interface updates Aug 19 00:12:14.346124 ntpd[1979]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 19 00:12:14.348907 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 19 00:12:14.348907 ntpd[1979]: 19 Aug 00:12:14 ntpd[1979]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 19 00:12:14.346194 ntpd[1979]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 19 00:12:14.391428 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 19 00:12:14.395955 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 19 00:12:14.402064 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 19 00:12:14.494311 bash[2062]: Updated "/home/core/.ssh/authorized_keys" Aug 19 00:12:14.499316 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 19 00:12:14.510285 systemd[1]: Starting sshkeys.service... Aug 19 00:12:14.611135 systemd-logind[1986]: Watching system buttons on /dev/input/event0 (Power Button) Aug 19 00:12:14.611197 systemd-logind[1986]: Watching system buttons on /dev/input/event1 (Sleep Button) Aug 19 00:12:14.615309 systemd-logind[1986]: New seat seat0. Aug 19 00:12:14.620933 systemd[1]: Started systemd-logind.service - User Login Management. Aug 19 00:12:14.636585 systemd-networkd[1892]: eth0: Gained IPv6LL Aug 19 00:12:14.667117 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 19 00:12:14.672835 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 19 00:12:14.679537 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 19 00:12:14.749632 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 19 00:12:14.753700 systemd[1]: Reached target network-online.target - Network is Online. Aug 19 00:12:14.761735 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Aug 19 00:12:14.771125 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:12:14.779146 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 19 00:12:14.914142 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 19 00:12:14.935718 dbus-daemon[1974]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 19 00:12:14.942205 dbus-daemon[1974]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2030 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 19 00:12:14.960678 systemd[1]: Starting polkit.service - Authorization Manager... Aug 19 00:12:14.993456 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Aug 19 00:12:15.161783 coreos-metadata[2083]: Aug 19 00:12:15.161 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 19 00:12:15.171200 coreos-metadata[2083]: Aug 19 00:12:15.171 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Aug 19 00:12:15.175434 coreos-metadata[2083]: Aug 19 00:12:15.175 INFO Fetch successful Aug 19 00:12:15.175434 coreos-metadata[2083]: Aug 19 00:12:15.175 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Aug 19 00:12:15.176549 coreos-metadata[2083]: Aug 19 00:12:15.176 INFO Fetch successful Aug 19 00:12:15.186322 unknown[2083]: wrote ssh authorized keys file for user: core Aug 19 00:12:15.300665 update-ssh-keys[2155]: Updated "/home/core/.ssh/authorized_keys" Aug 19 00:12:15.309660 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 19 00:12:15.324660 systemd[1]: Finished sshkeys.service. Aug 19 00:12:15.338721 amazon-ssm-agent[2104]: Initializing new seelog logger Aug 19 00:12:15.343996 amazon-ssm-agent[2104]: New Seelog Logger Creation Complete Aug 19 00:12:15.344599 amazon-ssm-agent[2104]: 2025/08/19 00:12:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 19 00:12:15.344599 amazon-ssm-agent[2104]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 19 00:12:15.345007 amazon-ssm-agent[2104]: 2025/08/19 00:12:15 processing appconfig overrides Aug 19 00:12:15.349998 amazon-ssm-agent[2104]: 2025/08/19 00:12:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 19 00:12:15.349998 amazon-ssm-agent[2104]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 19 00:12:15.349998 amazon-ssm-agent[2104]: 2025/08/19 00:12:15 processing appconfig overrides Aug 19 00:12:15.349998 amazon-ssm-agent[2104]: 2025/08/19 00:12:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 19 00:12:15.349998 amazon-ssm-agent[2104]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 19 00:12:15.349998 amazon-ssm-agent[2104]: 2025/08/19 00:12:15 processing appconfig overrides Aug 19 00:12:15.349998 amazon-ssm-agent[2104]: 2025-08-19 00:12:15.3479 INFO Proxy environment variables: Aug 19 00:12:15.363504 amazon-ssm-agent[2104]: 2025/08/19 00:12:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 19 00:12:15.363504 amazon-ssm-agent[2104]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
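The coreos-metadata entries above are the IMDSv2 exchange: PUT a session token at /latest/api/token, then fetch metadata paths (here the 2021-01-03 public-keys path) with that token before the key is written into /home/core/.ssh/authorized_keys. A rough equivalent, only meaningful when run inside an EC2 instance:

```python
import urllib.request

IMDS = "http://169.254.169.254"

def fetch_core_ssh_key() -> str:
    """Fetch the instance's OpenSSH public key the way coreos-metadata does."""
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"},
    )
    token = urllib.request.urlopen(token_req, timeout=2).read().decode()

    key_req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/public-keys/0/openssh-key",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(key_req, timeout=2).read().decode().strip()
```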
Aug 19 00:12:15.363709 amazon-ssm-agent[2104]: 2025/08/19 00:12:15 processing appconfig overrides Aug 19 00:12:15.453049 amazon-ssm-agent[2104]: 2025-08-19 00:12:15.3479 INFO https_proxy: Aug 19 00:12:15.570519 amazon-ssm-agent[2104]: 2025-08-19 00:12:15.3479 INFO http_proxy: Aug 19 00:12:15.591900 containerd[2014]: time="2025-08-19T00:12:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 19 00:12:15.596478 containerd[2014]: time="2025-08-19T00:12:15.596415171Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Aug 19 00:12:15.633993 containerd[2014]: time="2025-08-19T00:12:15.633700828Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.772µs" Aug 19 00:12:15.634265 containerd[2014]: time="2025-08-19T00:12:15.634215604Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 19 00:12:15.634401 containerd[2014]: time="2025-08-19T00:12:15.634370608Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 19 00:12:15.636020 containerd[2014]: time="2025-08-19T00:12:15.634784908Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 19 00:12:15.636020 containerd[2014]: time="2025-08-19T00:12:15.634835596Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 19 00:12:15.636020 containerd[2014]: time="2025-08-19T00:12:15.634900324Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 00:12:15.636020 containerd[2014]: time="2025-08-19T00:12:15.635065552Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 00:12:15.636020 containerd[2014]: time="2025-08-19T00:12:15.635095684Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 00:12:15.636020 containerd[2014]: time="2025-08-19T00:12:15.635509708Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 00:12:15.636020 containerd[2014]: time="2025-08-19T00:12:15.635554708Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 00:12:15.636020 containerd[2014]: time="2025-08-19T00:12:15.635600788Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 00:12:15.636020 containerd[2014]: time="2025-08-19T00:12:15.635623444Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 19 00:12:15.636020 containerd[2014]: time="2025-08-19T00:12:15.635834380Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 19 00:12:15.637155 containerd[2014]: time="2025-08-19T00:12:15.637090852Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Aug 19 00:12:15.637416 containerd[2014]: time="2025-08-19T00:12:15.637372132Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 00:12:15.637548 containerd[2014]: time="2025-08-19T00:12:15.637512472Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 19 00:12:15.637727 containerd[2014]: time="2025-08-19T00:12:15.637687732Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 19 00:12:15.638505 containerd[2014]: time="2025-08-19T00:12:15.638436292Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 19 00:12:15.638866 containerd[2014]: time="2025-08-19T00:12:15.638814520Z" level=info msg="metadata content store policy set" policy=shared Aug 19 00:12:15.645644 containerd[2014]: time="2025-08-19T00:12:15.645578284Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 19 00:12:15.646167 containerd[2014]: time="2025-08-19T00:12:15.646119016Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 19 00:12:15.649563 containerd[2014]: time="2025-08-19T00:12:15.648082648Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 19 00:12:15.649563 containerd[2014]: time="2025-08-19T00:12:15.648162712Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 19 00:12:15.649563 containerd[2014]: time="2025-08-19T00:12:15.648193792Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 19 00:12:15.649563 containerd[2014]: time="2025-08-19T00:12:15.648226648Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 19 00:12:15.649563 containerd[2014]: time="2025-08-19T00:12:15.648256612Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 19 00:12:15.649563 containerd[2014]: time="2025-08-19T00:12:15.648286276Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 19 00:12:15.649563 containerd[2014]: time="2025-08-19T00:12:15.648315508Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 19 00:12:15.649563 containerd[2014]: time="2025-08-19T00:12:15.648351868Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 19 00:12:15.649563 containerd[2014]: time="2025-08-19T00:12:15.648376924Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 19 00:12:15.649563 containerd[2014]: time="2025-08-19T00:12:15.648406996Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 19 00:12:15.649563 containerd[2014]: time="2025-08-19T00:12:15.648671332Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 19 00:12:15.649563 containerd[2014]: time="2025-08-19T00:12:15.648726832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 19 00:12:15.649563 
containerd[2014]: time="2025-08-19T00:12:15.648778768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 19 00:12:15.649563 containerd[2014]: time="2025-08-19T00:12:15.648808768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 19 00:12:15.650319 containerd[2014]: time="2025-08-19T00:12:15.648836500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 19 00:12:15.650319 containerd[2014]: time="2025-08-19T00:12:15.648862480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 19 00:12:15.650319 containerd[2014]: time="2025-08-19T00:12:15.648889732Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 19 00:12:15.650319 containerd[2014]: time="2025-08-19T00:12:15.648917740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 19 00:12:15.650319 containerd[2014]: time="2025-08-19T00:12:15.648945388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 19 00:12:15.650319 containerd[2014]: time="2025-08-19T00:12:15.649033384Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 19 00:12:15.650319 containerd[2014]: time="2025-08-19T00:12:15.649065136Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 19 00:12:15.650319 containerd[2014]: time="2025-08-19T00:12:15.649455988Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 19 00:12:15.650319 containerd[2014]: time="2025-08-19T00:12:15.649496584Z" level=info msg="Start snapshots syncer" Aug 19 00:12:15.651410 containerd[2014]: time="2025-08-19T00:12:15.650816740Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 19 00:12:15.652176 containerd[2014]: time="2025-08-19T00:12:15.652071376Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 19 00:12:15.653995 containerd[2014]: time="2025-08-19T00:12:15.653058892Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 19 00:12:15.653995 containerd[2014]: time="2025-08-19T00:12:15.653236768Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 19 00:12:15.653995 containerd[2014]: time="2025-08-19T00:12:15.653477908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 19 00:12:15.653995 containerd[2014]: time="2025-08-19T00:12:15.653520484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 19 00:12:15.653995 containerd[2014]: time="2025-08-19T00:12:15.653551996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 19 00:12:15.653995 containerd[2014]: time="2025-08-19T00:12:15.653581720Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 19 00:12:15.653995 containerd[2014]: time="2025-08-19T00:12:15.653610136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 19 00:12:15.653995 containerd[2014]: time="2025-08-19T00:12:15.653638600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 19 00:12:15.653995 containerd[2014]: time="2025-08-19T00:12:15.653671360Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 19 00:12:15.653995 containerd[2014]: time="2025-08-19T00:12:15.653729884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 19 00:12:15.653995 containerd[2014]: 
time="2025-08-19T00:12:15.653758864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 19 00:12:15.653995 containerd[2014]: time="2025-08-19T00:12:15.653788600Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 19 00:12:15.653995 containerd[2014]: time="2025-08-19T00:12:15.653849128Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 00:12:15.653995 containerd[2014]: time="2025-08-19T00:12:15.653881720Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 00:12:15.654803 containerd[2014]: time="2025-08-19T00:12:15.653903248Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 00:12:15.654803 containerd[2014]: time="2025-08-19T00:12:15.653927464Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 00:12:15.654803 containerd[2014]: time="2025-08-19T00:12:15.653947672Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 19 00:12:15.656830 containerd[2014]: time="2025-08-19T00:12:15.656280052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 19 00:12:15.656830 containerd[2014]: time="2025-08-19T00:12:15.656434780Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 19 00:12:15.656830 containerd[2014]: time="2025-08-19T00:12:15.656609500Z" level=info msg="runtime interface created" Aug 19 00:12:15.656830 containerd[2014]: time="2025-08-19T00:12:15.656629420Z" level=info msg="created NRI interface" Aug 19 00:12:15.656830 containerd[2014]: time="2025-08-19T00:12:15.656653300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 19 00:12:15.656830 containerd[2014]: time="2025-08-19T00:12:15.656684620Z" level=info msg="Connect containerd service" Aug 19 00:12:15.656830 containerd[2014]: time="2025-08-19T00:12:15.656752888Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 19 00:12:15.662428 containerd[2014]: time="2025-08-19T00:12:15.662362948Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 19 00:12:15.685006 amazon-ssm-agent[2104]: 2025-08-19 00:12:15.3479 INFO no_proxy: Aug 19 00:12:15.787331 amazon-ssm-agent[2104]: 2025-08-19 00:12:15.3482 INFO Checking if agent identity type OnPrem can be assumed Aug 19 00:12:15.817356 locksmithd[2048]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 19 00:12:15.894428 amazon-ssm-agent[2104]: 2025-08-19 00:12:15.3483 INFO Checking if agent identity type EC2 can be assumed Aug 19 00:12:15.971488 polkitd[2135]: Started polkitd version 126 Aug 19 00:12:15.991441 tar[2006]: linux-arm64/LICENSE Aug 19 00:12:15.991441 tar[2006]: linux-arm64/README.md Aug 19 00:12:15.995515 amazon-ssm-agent[2104]: 2025-08-19 00:12:15.6065 INFO Agent will take identity from EC2 Aug 19 00:12:16.010654 polkitd[2135]: Loading rules from directory 
/etc/polkit-1/rules.d Aug 19 00:12:16.011334 polkitd[2135]: Loading rules from directory /run/polkit-1/rules.d Aug 19 00:12:16.011416 polkitd[2135]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 19 00:12:16.014917 polkitd[2135]: Loading rules from directory /usr/local/share/polkit-1/rules.d Aug 19 00:12:16.015788 polkitd[2135]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 19 00:12:16.016162 polkitd[2135]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 19 00:12:16.025080 polkitd[2135]: Finished loading, compiling and executing 2 rules Aug 19 00:12:16.025397 containerd[2014]: time="2025-08-19T00:12:16.025140661Z" level=info msg="Start subscribing containerd event" Aug 19 00:12:16.025397 containerd[2014]: time="2025-08-19T00:12:16.025295197Z" level=info msg="Start recovering state" Aug 19 00:12:16.025869 systemd[1]: Started polkit.service - Authorization Manager. Aug 19 00:12:16.032514 containerd[2014]: time="2025-08-19T00:12:16.030045361Z" level=info msg="Start event monitor" Aug 19 00:12:16.032514 containerd[2014]: time="2025-08-19T00:12:16.030123709Z" level=info msg="Start cni network conf syncer for default" Aug 19 00:12:16.032514 containerd[2014]: time="2025-08-19T00:12:16.030146233Z" level=info msg="Start streaming server" Aug 19 00:12:16.032514 containerd[2014]: time="2025-08-19T00:12:16.030202057Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 19 00:12:16.032514 containerd[2014]: time="2025-08-19T00:12:16.030222157Z" level=info msg="runtime interface starting up..." Aug 19 00:12:16.032514 containerd[2014]: time="2025-08-19T00:12:16.030237409Z" level=info msg="starting plugins..." Aug 19 00:12:16.032514 containerd[2014]: time="2025-08-19T00:12:16.030301897Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 19 00:12:16.033657 containerd[2014]: time="2025-08-19T00:12:16.031956829Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 19 00:12:16.034093 dbus-daemon[1974]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 19 00:12:16.035703 sshd_keygen[2001]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 19 00:12:16.040277 containerd[2014]: time="2025-08-19T00:12:16.036218498Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 19 00:12:16.040277 containerd[2014]: time="2025-08-19T00:12:16.036390878Z" level=info msg="containerd successfully booted in 0.445195s" Aug 19 00:12:16.036554 systemd[1]: Started containerd.service - containerd container runtime. Aug 19 00:12:16.041497 polkitd[2135]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 19 00:12:16.067623 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 19 00:12:16.094233 amazon-ssm-agent[2104]: 2025-08-19 00:12:15.6209 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Aug 19 00:12:16.100326 systemd-hostnamed[2030]: Hostname set to (transient) Aug 19 00:12:16.102101 systemd-resolved[1894]: System hostname changed to 'ip-172-31-30-10'. Aug 19 00:12:16.130501 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 19 00:12:16.139392 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 19 00:12:16.144613 systemd[1]: Started sshd@0-172.31.30.10:22-147.75.109.163:42468.service - OpenSSH per-connection server daemon (147.75.109.163:42468). 
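With containerd now serving on /run/containerd/containerd.sock ("containerd successfully booted"), clients such as the kubelet talk gRPC over that Unix socket. A minimal readiness probe is sketched below; it only checks that the socket exists and accepts a connection (not a real gRPC health check) and normally needs root because of the socket's permissions:

```python
import os
import socket
import stat

def containerd_socket_ready(path: str = "/run/containerd/containerd.sock") -> bool:
    """True if the containerd socket exists and accepts a connection."""
    try:
        if not stat.S_ISSOCK(os.stat(path).st_mode):
            return False
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            s.connect(path)
        return True
    except OSError:
        return False
```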
Aug 19 00:12:16.189753 systemd[1]: issuegen.service: Deactivated successfully. Aug 19 00:12:16.191086 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 19 00:12:16.195208 amazon-ssm-agent[2104]: 2025-08-19 00:12:15.6210 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Aug 19 00:12:16.201556 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 19 00:12:16.253368 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 19 00:12:16.259455 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 19 00:12:16.269830 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 19 00:12:16.273083 systemd[1]: Reached target getty.target - Login Prompts. Aug 19 00:12:16.298015 amazon-ssm-agent[2104]: 2025-08-19 00:12:15.6210 INFO [amazon-ssm-agent] Starting Core Agent Aug 19 00:12:16.395125 amazon-ssm-agent[2104]: 2025-08-19 00:12:15.6210 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Aug 19 00:12:16.484825 sshd[2229]: Accepted publickey for core from 147.75.109.163 port 42468 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:12:16.490071 sshd-session[2229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:12:16.495096 amazon-ssm-agent[2104]: 2025-08-19 00:12:15.6210 INFO [Registrar] Starting registrar module Aug 19 00:12:16.511840 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 19 00:12:16.516920 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 19 00:12:16.539305 amazon-ssm-agent[2104]: 2025/08/19 00:12:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 19 00:12:16.539305 amazon-ssm-agent[2104]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 19 00:12:16.541956 amazon-ssm-agent[2104]: 2025/08/19 00:12:16 processing appconfig overrides Aug 19 00:12:16.541643 systemd-logind[1986]: New session 1 of user core. Aug 19 00:12:16.567212 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 19 00:12:16.576004 amazon-ssm-agent[2104]: 2025-08-19 00:12:15.6382 INFO [EC2Identity] Checking disk for registration info Aug 19 00:12:16.576925 amazon-ssm-agent[2104]: 2025-08-19 00:12:15.6383 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Aug 19 00:12:16.577918 amazon-ssm-agent[2104]: 2025-08-19 00:12:15.6383 INFO [EC2Identity] Generating registration keypair Aug 19 00:12:16.577918 amazon-ssm-agent[2104]: 2025-08-19 00:12:16.4966 INFO [EC2Identity] Checking write access before registering Aug 19 00:12:16.577918 amazon-ssm-agent[2104]: 2025-08-19 00:12:16.4987 INFO [EC2Identity] Registering EC2 instance with Systems Manager Aug 19 00:12:16.577918 amazon-ssm-agent[2104]: 2025-08-19 00:12:16.5388 INFO [EC2Identity] EC2 registration was successful. Aug 19 00:12:16.577918 amazon-ssm-agent[2104]: 2025-08-19 00:12:16.5388 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
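The "Accepted publickey ... SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4" entries use OpenSSH's standard fingerprint format: the SHA-256 digest of the decoded key blob, base64-encoded with the trailing padding stripped. It can be recomputed from an authorized_keys line such as the one update-ssh-keys wrote earlier:

```python
import base64
import hashlib

def openssh_fingerprint(authorized_keys_line: str) -> str:
    """Compute the SHA256:... fingerprint sshd logs for a public key.

    Input is a single authorized_keys entry, e.g. "ssh-rsa AAAAB3... comment";
    the fingerprint is base64(sha256(key blob)) without '=' padding.
    """
    key_blob = base64.b64decode(authorized_keys_line.split()[1])
    digest = hashlib.sha256(key_blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")
```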
Aug 19 00:12:16.578784 amazon-ssm-agent[2104]: 2025-08-19 00:12:16.5390 INFO [CredentialRefresher] credentialRefresher has started Aug 19 00:12:16.578784 amazon-ssm-agent[2104]: 2025-08-19 00:12:16.5390 INFO [CredentialRefresher] Starting credentials refresher loop Aug 19 00:12:16.578784 amazon-ssm-agent[2104]: 2025-08-19 00:12:16.5756 INFO EC2RoleProvider Successfully connected with instance profile role credentials Aug 19 00:12:16.578784 amazon-ssm-agent[2104]: 2025-08-19 00:12:16.5759 INFO [CredentialRefresher] Credentials ready Aug 19 00:12:16.582520 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 19 00:12:16.594856 amazon-ssm-agent[2104]: 2025-08-19 00:12:16.5785 INFO [CredentialRefresher] Next credential rotation will be in 29.9999506105 minutes Aug 19 00:12:16.607886 (systemd)[2242]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 19 00:12:16.613589 systemd-logind[1986]: New session c1 of user core. Aug 19 00:12:16.916469 systemd[2242]: Queued start job for default target default.target. Aug 19 00:12:16.926351 systemd[2242]: Created slice app.slice - User Application Slice. Aug 19 00:12:16.926601 systemd[2242]: Reached target paths.target - Paths. Aug 19 00:12:16.926790 systemd[2242]: Reached target timers.target - Timers. Aug 19 00:12:16.929825 systemd[2242]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 19 00:12:16.962805 systemd[2242]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 19 00:12:16.963097 systemd[2242]: Reached target sockets.target - Sockets. Aug 19 00:12:16.963195 systemd[2242]: Reached target basic.target - Basic System. Aug 19 00:12:16.963278 systemd[2242]: Reached target default.target - Main User Target. Aug 19 00:12:16.963337 systemd[2242]: Startup finished in 335ms. Aug 19 00:12:16.963586 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 19 00:12:16.975286 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 19 00:12:17.142432 systemd[1]: Started sshd@1-172.31.30.10:22-147.75.109.163:42472.service - OpenSSH per-connection server daemon (147.75.109.163:42472). Aug 19 00:12:17.244290 ntpd[1979]: Listen normally on 6 eth0 [fe80::41d:5cff:fee1:e959%2]:123 Aug 19 00:12:17.244921 ntpd[1979]: 19 Aug 00:12:17 ntpd[1979]: Listen normally on 6 eth0 [fe80::41d:5cff:fee1:e959%2]:123 Aug 19 00:12:17.343739 sshd[2253]: Accepted publickey for core from 147.75.109.163 port 42472 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:12:17.347242 sshd-session[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:12:17.357082 systemd-logind[1986]: New session 2 of user core. Aug 19 00:12:17.367292 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 19 00:12:17.499960 sshd[2256]: Connection closed by 147.75.109.163 port 42472 Aug 19 00:12:17.501434 sshd-session[2253]: pam_unix(sshd:session): session closed for user core Aug 19 00:12:17.508385 systemd-logind[1986]: Session 2 logged out. Waiting for processes to exit. Aug 19 00:12:17.509771 systemd[1]: sshd@1-172.31.30.10:22-147.75.109.163:42472.service: Deactivated successfully. Aug 19 00:12:17.514652 systemd[1]: session-2.scope: Deactivated successfully. Aug 19 00:12:17.518550 systemd-logind[1986]: Removed session 2. Aug 19 00:12:17.538491 systemd[1]: Started sshd@2-172.31.30.10:22-147.75.109.163:42478.service - OpenSSH per-connection server daemon (147.75.109.163:42478). 
Aug 19 00:12:17.612439 amazon-ssm-agent[2104]: 2025-08-19 00:12:17.6122 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Aug 19 00:12:17.714502 amazon-ssm-agent[2104]: 2025-08-19 00:12:17.6174 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2267) started Aug 19 00:12:17.749087 sshd[2262]: Accepted publickey for core from 147.75.109.163 port 42478 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:12:17.751132 sshd-session[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:12:17.770987 systemd-logind[1986]: New session 3 of user core. Aug 19 00:12:17.777294 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 19 00:12:17.814797 amazon-ssm-agent[2104]: 2025-08-19 00:12:17.6174 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Aug 19 00:12:17.918889 sshd[2272]: Connection closed by 147.75.109.163 port 42478 Aug 19 00:12:17.918679 sshd-session[2262]: pam_unix(sshd:session): session closed for user core Aug 19 00:12:17.926161 systemd-logind[1986]: Session 3 logged out. Waiting for processes to exit. Aug 19 00:12:17.926434 systemd[1]: sshd@2-172.31.30.10:22-147.75.109.163:42478.service: Deactivated successfully. Aug 19 00:12:17.930484 systemd[1]: session-3.scope: Deactivated successfully. Aug 19 00:12:17.939684 systemd-logind[1986]: Removed session 3. Aug 19 00:12:18.177469 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:12:18.183285 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 19 00:12:18.189233 systemd[1]: Startup finished in 3.680s (kernel) + 11.967s (initrd) + 10.593s (userspace) = 26.241s. Aug 19 00:12:18.194354 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 00:12:19.721397 kubelet[2288]: E0819 00:12:19.721301 2288 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 00:12:19.726156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 00:12:19.726464 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 00:12:19.727148 systemd[1]: kubelet.service: Consumed 1.540s CPU time, 256.4M memory peak. Aug 19 00:12:21.412808 systemd-resolved[1894]: Clock change detected. Flushing caches. Aug 19 00:12:28.123504 systemd[1]: Started sshd@3-172.31.30.10:22-147.75.109.163:32984.service - OpenSSH per-connection server daemon (147.75.109.163:32984). Aug 19 00:12:28.322240 sshd[2301]: Accepted publickey for core from 147.75.109.163 port 32984 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:12:28.324621 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:12:28.332623 systemd-logind[1986]: New session 4 of user core. Aug 19 00:12:28.342642 systemd[1]: Started session-4.scope - Session 4 of User core. 
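The kubelet failure above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") repeats throughout this log: the unit is enabled before anything has written that file (it is typically created by kubeadm init/join on a node like this), so each start exits with status 1 and systemd schedules another restart. A trivial preflight mirroring the failing check:

```python
import os
import sys

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

def kubelet_config_present() -> bool:
    """Mirror the check the kubelet fails above: is its config file there yet?

    Until something writes this file, every kubelet start exits and systemd
    schedules another restart, which is why the restart counter keeps
    climbing later in the log.
    """
    if os.path.isfile(KUBELET_CONFIG):
        return True
    print(f"{KUBELET_CONFIG} not found; kubelet will keep crash-looping",
          file=sys.stderr)
    return False
```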
Aug 19 00:12:28.469532 sshd[2304]: Connection closed by 147.75.109.163 port 32984 Aug 19 00:12:28.470761 sshd-session[2301]: pam_unix(sshd:session): session closed for user core Aug 19 00:12:28.477611 systemd[1]: sshd@3-172.31.30.10:22-147.75.109.163:32984.service: Deactivated successfully. Aug 19 00:12:28.482160 systemd[1]: session-4.scope: Deactivated successfully. Aug 19 00:12:28.484283 systemd-logind[1986]: Session 4 logged out. Waiting for processes to exit. Aug 19 00:12:28.487016 systemd-logind[1986]: Removed session 4. Aug 19 00:12:28.503733 systemd[1]: Started sshd@4-172.31.30.10:22-147.75.109.163:32988.service - OpenSSH per-connection server daemon (147.75.109.163:32988). Aug 19 00:12:28.695579 sshd[2310]: Accepted publickey for core from 147.75.109.163 port 32988 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:12:28.697862 sshd-session[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:12:28.707135 systemd-logind[1986]: New session 5 of user core. Aug 19 00:12:28.713628 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 19 00:12:28.828246 sshd[2313]: Connection closed by 147.75.109.163 port 32988 Aug 19 00:12:28.828045 sshd-session[2310]: pam_unix(sshd:session): session closed for user core Aug 19 00:12:28.835148 systemd-logind[1986]: Session 5 logged out. Waiting for processes to exit. Aug 19 00:12:28.835681 systemd[1]: sshd@4-172.31.30.10:22-147.75.109.163:32988.service: Deactivated successfully. Aug 19 00:12:28.838361 systemd[1]: session-5.scope: Deactivated successfully. Aug 19 00:12:28.843914 systemd-logind[1986]: Removed session 5. Aug 19 00:12:28.861759 systemd[1]: Started sshd@5-172.31.30.10:22-147.75.109.163:32992.service - OpenSSH per-connection server daemon (147.75.109.163:32992). Aug 19 00:12:29.058295 sshd[2319]: Accepted publickey for core from 147.75.109.163 port 32992 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:12:29.060601 sshd-session[2319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:12:29.069486 systemd-logind[1986]: New session 6 of user core. Aug 19 00:12:29.079607 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 19 00:12:29.203115 sshd[2322]: Connection closed by 147.75.109.163 port 32992 Aug 19 00:12:29.203752 sshd-session[2319]: pam_unix(sshd:session): session closed for user core Aug 19 00:12:29.209843 systemd-logind[1986]: Session 6 logged out. Waiting for processes to exit. Aug 19 00:12:29.212084 systemd[1]: sshd@5-172.31.30.10:22-147.75.109.163:32992.service: Deactivated successfully. Aug 19 00:12:29.216498 systemd[1]: session-6.scope: Deactivated successfully. Aug 19 00:12:29.220785 systemd-logind[1986]: Removed session 6. Aug 19 00:12:29.242255 systemd[1]: Started sshd@6-172.31.30.10:22-147.75.109.163:33002.service - OpenSSH per-connection server daemon (147.75.109.163:33002). Aug 19 00:12:29.447796 sshd[2328]: Accepted publickey for core from 147.75.109.163 port 33002 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:12:29.449784 sshd-session[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:12:29.459491 systemd-logind[1986]: New session 7 of user core. Aug 19 00:12:29.467649 systemd[1]: Started session-7.scope - Session 7 of User core. 
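Each connection from 147.75.109.163 above gets its own per-connection sshd service and a numbered logind session that opens and closes within a second or two. A throwaway helper for reading excerpts like this, pairing "New session N" with "Removed session N" by their journal timestamps (timestamp format assumed from the prefixes used here, one entry per line):

```python
import re
from datetime import datetime

TIMESTAMP = "%b %d %H:%M:%S.%f"                       # e.g. "Aug 19 00:12:29.459491"
SESSION = re.compile(r"^(\w{3} \d+ [\d:.]+) .*?(New|Removed) session (\w+)")

def session_durations(lines):
    """Map session id -> lifetime, from journal lines like the ones above."""
    opened, durations = {}, {}
    for line in lines:
        m = SESSION.match(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), TIMESTAMP)
        if m.group(2) == "New":
            opened[m.group(3)] = ts
        elif m.group(3) in opened:
            durations[m.group(3)] = ts - opened.pop(m.group(3))
    return durations
```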
Aug 19 00:12:29.606950 sudo[2332]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 19 00:12:29.608150 sudo[2332]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 00:12:29.628478 sudo[2332]: pam_unix(sudo:session): session closed for user root Aug 19 00:12:29.654425 sshd[2331]: Connection closed by 147.75.109.163 port 33002 Aug 19 00:12:29.653461 sshd-session[2328]: pam_unix(sshd:session): session closed for user core Aug 19 00:12:29.660890 systemd[1]: sshd@6-172.31.30.10:22-147.75.109.163:33002.service: Deactivated successfully. Aug 19 00:12:29.665471 systemd[1]: session-7.scope: Deactivated successfully. Aug 19 00:12:29.667560 systemd-logind[1986]: Session 7 logged out. Waiting for processes to exit. Aug 19 00:12:29.671662 systemd-logind[1986]: Removed session 7. Aug 19 00:12:29.692346 systemd[1]: Started sshd@7-172.31.30.10:22-147.75.109.163:33004.service - OpenSSH per-connection server daemon (147.75.109.163:33004). Aug 19 00:12:29.888056 sshd[2338]: Accepted publickey for core from 147.75.109.163 port 33004 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:12:29.891014 sshd-session[2338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:12:29.895471 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 19 00:12:29.899404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:12:29.905872 systemd-logind[1986]: New session 8 of user core. Aug 19 00:12:29.913016 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 19 00:12:30.024829 sudo[2346]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 19 00:12:30.025618 sudo[2346]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 00:12:30.037697 sudo[2346]: pam_unix(sudo:session): session closed for user root Aug 19 00:12:30.047808 sudo[2345]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 19 00:12:30.048529 sudo[2345]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 00:12:30.069938 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 00:12:30.150801 augenrules[2368]: No rules Aug 19 00:12:30.153566 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 00:12:30.154074 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 00:12:30.155813 sudo[2345]: pam_unix(sudo:session): session closed for user root Aug 19 00:12:30.181367 sshd[2344]: Connection closed by 147.75.109.163 port 33004 Aug 19 00:12:30.182171 sshd-session[2338]: pam_unix(sshd:session): session closed for user core Aug 19 00:12:30.189273 systemd[1]: sshd@7-172.31.30.10:22-147.75.109.163:33004.service: Deactivated successfully. Aug 19 00:12:30.195902 systemd[1]: session-8.scope: Deactivated successfully. Aug 19 00:12:30.198563 systemd-logind[1986]: Session 8 logged out. Waiting for processes to exit. Aug 19 00:12:30.201567 systemd-logind[1986]: Removed session 8. Aug 19 00:12:30.219880 systemd[1]: Started sshd@8-172.31.30.10:22-147.75.109.163:33014.service - OpenSSH per-connection server daemon (147.75.109.163:33014). Aug 19 00:12:30.267654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
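The sudo entries above record the privileged commands run during provisioning (setenforce 1, deleting the default audit rules, restarting audit-rules) in sudo's fixed "user : PWD=... ; USER=... ; COMMAND=..." form. A small parser for pulling those fields out of journal lines (assuming one entry per line):

```python
import re

SUDO = re.compile(
    r"sudo\[\d+\]:\s+(?P<user>\S+) : "
    r"PWD=(?P<pwd>\S+) ; USER=(?P<runas>\S+) ; COMMAND=(?P<command>.+)"
)

def sudo_commands(lines):
    """Yield (invoking user, run-as user, command) from sudo journal lines."""
    for line in lines:
        m = SUDO.search(line)
        if m:
            yield m.group("user"), m.group("runas"), m.group("command")
```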
Aug 19 00:12:30.284050 (kubelet)[2384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 00:12:30.368453 kubelet[2384]: E0819 00:12:30.368260 2384 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 00:12:30.376828 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 00:12:30.377113 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 00:12:30.378074 systemd[1]: kubelet.service: Consumed 323ms CPU time, 107.5M memory peak. Aug 19 00:12:30.419222 sshd[2377]: Accepted publickey for core from 147.75.109.163 port 33014 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:12:30.422258 sshd-session[2377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:12:30.429877 systemd-logind[1986]: New session 9 of user core. Aug 19 00:12:30.450620 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 19 00:12:30.553596 sudo[2392]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 19 00:12:30.554204 sudo[2392]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 00:12:31.210452 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 19 00:12:31.239124 (dockerd)[2409]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 19 00:12:31.765666 dockerd[2409]: time="2025-08-19T00:12:31.764440695Z" level=info msg="Starting up" Aug 19 00:12:31.769767 dockerd[2409]: time="2025-08-19T00:12:31.769697619Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 19 00:12:31.789249 dockerd[2409]: time="2025-08-19T00:12:31.789185259Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Aug 19 00:12:31.813537 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4227404320-merged.mount: Deactivated successfully. Aug 19 00:12:31.848415 dockerd[2409]: time="2025-08-19T00:12:31.848300943Z" level=info msg="Loading containers: start." Aug 19 00:12:31.874485 kernel: Initializing XFRM netlink socket Aug 19 00:12:32.204295 (udev-worker)[2431]: Network interface NamePolicy= disabled on kernel command line. Aug 19 00:12:32.280845 systemd-networkd[1892]: docker0: Link UP Aug 19 00:12:32.286319 dockerd[2409]: time="2025-08-19T00:12:32.286205666Z" level=info msg="Loading containers: done." 
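dockerd starts here by creating its own containerd client, initializing the XFRM netlink socket and bringing up the docker0 bridge; the entries that follow show it listening on /run/docker.sock. Anything that can speak HTTP over that Unix socket can use the Engine API; a minimal version query, assuming the default socket path and sufficient permissions:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client connection that dials a Unix socket instead of TCP."""

    def __init__(self, socket_path: str):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self) -> None:
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

def docker_version(socket_path: str = "/run/docker.sock") -> dict:
    """GET /version from the Docker Engine API over its Unix socket."""
    conn = UnixHTTPConnection(socket_path)
    conn.request("GET", "/version")
    return json.loads(conn.getresponse().read())
```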
Aug 19 00:12:32.314391 dockerd[2409]: time="2025-08-19T00:12:32.314294234Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 19 00:12:32.314588 dockerd[2409]: time="2025-08-19T00:12:32.314434634Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Aug 19 00:12:32.314648 dockerd[2409]: time="2025-08-19T00:12:32.314588738Z" level=info msg="Initializing buildkit" Aug 19 00:12:32.352768 dockerd[2409]: time="2025-08-19T00:12:32.352692542Z" level=info msg="Completed buildkit initialization" Aug 19 00:12:32.367353 dockerd[2409]: time="2025-08-19T00:12:32.367245698Z" level=info msg="Daemon has completed initialization" Aug 19 00:12:32.367832 dockerd[2409]: time="2025-08-19T00:12:32.367535558Z" level=info msg="API listen on /run/docker.sock" Aug 19 00:12:32.368535 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 19 00:12:32.809700 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck982474179-merged.mount: Deactivated successfully. Aug 19 00:12:33.676300 containerd[2014]: time="2025-08-19T00:12:33.676041460Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Aug 19 00:12:34.211313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount682439456.mount: Deactivated successfully. Aug 19 00:12:35.516984 containerd[2014]: time="2025-08-19T00:12:35.516892710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:35.519717 containerd[2014]: time="2025-08-19T00:12:35.519649398Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652441" Aug 19 00:12:35.521461 containerd[2014]: time="2025-08-19T00:12:35.521396262Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:35.525673 containerd[2014]: time="2025-08-19T00:12:35.525594354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:35.528088 containerd[2014]: time="2025-08-19T00:12:35.527433522Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.850459278s" Aug 19 00:12:35.528088 containerd[2014]: time="2025-08-19T00:12:35.527495310Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Aug 19 00:12:35.529658 containerd[2014]: time="2025-08-19T00:12:35.529597302Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Aug 19 00:12:37.381816 containerd[2014]: time="2025-08-19T00:12:37.381728731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 
00:12:37.383886 containerd[2014]: time="2025-08-19T00:12:37.383827063Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460309" Aug 19 00:12:37.385212 containerd[2014]: time="2025-08-19T00:12:37.385159963Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:37.389744 containerd[2014]: time="2025-08-19T00:12:37.389691979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:37.391747 containerd[2014]: time="2025-08-19T00:12:37.391687459Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.861873245s" Aug 19 00:12:37.391860 containerd[2014]: time="2025-08-19T00:12:37.391744075Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Aug 19 00:12:37.392690 containerd[2014]: time="2025-08-19T00:12:37.392640151Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Aug 19 00:12:38.950415 containerd[2014]: time="2025-08-19T00:12:38.949872983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:38.952439 containerd[2014]: time="2025-08-19T00:12:38.952368935Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125903" Aug 19 00:12:38.954992 containerd[2014]: time="2025-08-19T00:12:38.954915095Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:38.960234 containerd[2014]: time="2025-08-19T00:12:38.960135191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:38.962220 containerd[2014]: time="2025-08-19T00:12:38.962016575Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.5693191s" Aug 19 00:12:38.962220 containerd[2014]: time="2025-08-19T00:12:38.962074655Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Aug 19 00:12:38.963262 containerd[2014]: time="2025-08-19T00:12:38.962906243Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Aug 19 00:12:40.290704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount410699604.mount: Deactivated 
successfully. Aug 19 00:12:40.628639 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 19 00:12:40.633735 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:12:41.003678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:12:41.010456 containerd[2014]: time="2025-08-19T00:12:41.010358889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:41.013146 containerd[2014]: time="2025-08-19T00:12:41.013079145Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916095" Aug 19 00:12:41.014801 containerd[2014]: time="2025-08-19T00:12:41.014528505Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:41.017344 containerd[2014]: time="2025-08-19T00:12:41.017279217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:41.018785 containerd[2014]: time="2025-08-19T00:12:41.018743421Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 2.055790882s" Aug 19 00:12:41.018931 containerd[2014]: time="2025-08-19T00:12:41.018903417Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Aug 19 00:12:41.020212 containerd[2014]: time="2025-08-19T00:12:41.019834773Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 19 00:12:41.020578 (kubelet)[2697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 00:12:41.094938 kubelet[2697]: E0819 00:12:41.094850 2697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 00:12:41.099131 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 00:12:41.099475 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 00:12:41.101514 systemd[1]: kubelet.service: Consumed 300ms CPU time, 105.1M memory peak. Aug 19 00:12:41.528469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1746383183.mount: Deactivated successfully. 
Aug 19 00:12:42.693912 containerd[2014]: time="2025-08-19T00:12:42.693855169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:42.695747 containerd[2014]: time="2025-08-19T00:12:42.695701729Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Aug 19 00:12:42.696958 containerd[2014]: time="2025-08-19T00:12:42.696914737Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:42.702750 containerd[2014]: time="2025-08-19T00:12:42.702699829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:42.704830 containerd[2014]: time="2025-08-19T00:12:42.704767009Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.684880144s" Aug 19 00:12:42.704948 containerd[2014]: time="2025-08-19T00:12:42.704828869Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 19 00:12:42.705642 containerd[2014]: time="2025-08-19T00:12:42.705589441Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 19 00:12:43.187055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1442815486.mount: Deactivated successfully. 
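Each "Pulled image ... in Ns" entry above pairs an image, its unpacked size and the pull duration (coredns: 16948420 bytes in 1.684880144s). A rough parser for turning such entries into pull rates; it assumes one containerd entry per line, as in a normal journal, and drops the backslash-escaping of quotes inside the msg field before matching:

```python
import re

PULLED = re.compile(
    r'Pulled image "(?P<image>[^"]+)".*?size "(?P<size>\d+)"'
    r' in (?P<dur>[\d.]+)(?P<unit>ms|s)'
)

def pull_rates(lines):
    """Yield (image, MiB/s) from containerd 'Pulled image ... in ...' entries."""
    for line in lines:
        m = PULLED.search(line.replace('\\"', '"'))
        if not m:
            continue
        secs = float(m.group("dur")) / (1000 if m.group("unit") == "ms" else 1)
        mib = int(m.group("size")) / (1024 * 1024)
        yield m.group("image"), mib / secs
```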
Aug 19 00:12:43.202441 containerd[2014]: time="2025-08-19T00:12:43.202040100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 00:12:43.203923 containerd[2014]: time="2025-08-19T00:12:43.203848260Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Aug 19 00:12:43.208401 containerd[2014]: time="2025-08-19T00:12:43.206723712Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 00:12:43.217014 containerd[2014]: time="2025-08-19T00:12:43.216954864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 00:12:43.218191 containerd[2014]: time="2025-08-19T00:12:43.218148876Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 512.500815ms" Aug 19 00:12:43.218390 containerd[2014]: time="2025-08-19T00:12:43.218343288Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 19 00:12:43.219970 containerd[2014]: time="2025-08-19T00:12:43.219868236Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 19 00:12:43.823012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1222548707.mount: Deactivated successfully. 
Aug 19 00:12:45.806411 containerd[2014]: time="2025-08-19T00:12:45.806072309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:45.808984 containerd[2014]: time="2025-08-19T00:12:45.808907129Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537161" Aug 19 00:12:45.811555 containerd[2014]: time="2025-08-19T00:12:45.811484057Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:45.816224 containerd[2014]: time="2025-08-19T00:12:45.816131741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:45.820242 containerd[2014]: time="2025-08-19T00:12:45.820011509Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.599799641s" Aug 19 00:12:45.820242 containerd[2014]: time="2025-08-19T00:12:45.820075673Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Aug 19 00:12:46.278763 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 19 00:12:51.117924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 19 00:12:51.122512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:12:51.461613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:12:51.476248 (kubelet)[2843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 00:12:51.554967 kubelet[2843]: E0819 00:12:51.554906 2843 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 00:12:51.559734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 00:12:51.560210 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 00:12:51.561242 systemd[1]: kubelet.service: Consumed 285ms CPU time, 105.2M memory peak. Aug 19 00:12:53.188453 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:12:53.188774 systemd[1]: kubelet.service: Consumed 285ms CPU time, 105.2M memory peak. Aug 19 00:12:53.192799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:12:53.245010 systemd[1]: Reload requested from client PID 2857 ('systemctl') (unit session-9.scope)... Aug 19 00:12:53.245217 systemd[1]: Reloading... Aug 19 00:12:53.488428 zram_generator::config[2905]: No configuration found. Aug 19 00:12:53.942032 systemd[1]: Reloading finished in 696 ms. 
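The etcd pull completes the control-plane images, and after the systemd reload the kubelet that starts next comes up with a real configuration; but as the entries below show, it still cannot reach https://172.31.30.10:6443 ("connection refused") because the API server is not serving yet. A minimal wait-for-apiserver probe over plain TCP (a fuller check would be an HTTPS request to /healthz; host and port are taken from the log):

```python
import socket
import time

def wait_for_apiserver(host: str = "172.31.30.10", port: int = 6443,
                       timeout: float = 120.0) -> bool:
    """Poll until something accepts TCP connections on the API server port.

    Mirrors the kubelet's situation below: its requests fail with
    "connect: connection refused" until the kube-apiserver comes up.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(2)
    return False
```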
Aug 19 00:12:54.049693 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 19 00:12:54.050057 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 19 00:12:54.050826 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:12:54.050901 systemd[1]: kubelet.service: Consumed 214ms CPU time, 95M memory peak. Aug 19 00:12:54.055709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:12:54.384142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:12:54.398892 (kubelet)[2966]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 00:12:54.470691 kubelet[2966]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 00:12:54.471157 kubelet[2966]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 19 00:12:54.471239 kubelet[2966]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 00:12:54.471521 kubelet[2966]: I0819 00:12:54.471467 2966 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 00:12:55.151407 kubelet[2966]: I0819 00:12:55.151166 2966 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 19 00:12:55.151407 kubelet[2966]: I0819 00:12:55.151221 2966 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 00:12:55.152516 kubelet[2966]: I0819 00:12:55.152471 2966 server.go:934] "Client rotation is on, will bootstrap in background" Aug 19 00:12:55.228875 kubelet[2966]: E0819 00:12:55.228825 2966 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.30.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.10:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:12:55.230696 kubelet[2966]: I0819 00:12:55.230450 2966 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 00:12:55.246305 kubelet[2966]: I0819 00:12:55.246264 2966 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 00:12:55.253717 kubelet[2966]: I0819 00:12:55.253680 2966 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 19 00:12:55.254467 kubelet[2966]: I0819 00:12:55.254443 2966 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 19 00:12:55.254856 kubelet[2966]: I0819 00:12:55.254813 2966 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 00:12:55.255219 kubelet[2966]: I0819 00:12:55.254940 2966 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-10","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 00:12:55.255610 kubelet[2966]: I0819 00:12:55.255588 2966 topology_manager.go:138] "Creating topology manager with none policy" Aug 19 00:12:55.255701 kubelet[2966]: I0819 00:12:55.255683 2966 container_manager_linux.go:300] "Creating device plugin manager" Aug 19 00:12:55.256093 kubelet[2966]: I0819 00:12:55.256074 2966 state_mem.go:36] "Initialized new in-memory state store" Aug 19 00:12:55.261182 kubelet[2966]: I0819 00:12:55.261149 2966 kubelet.go:408] "Attempting to sync node with API server" Aug 19 00:12:55.261348 kubelet[2966]: I0819 00:12:55.261329 2966 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 00:12:55.261501 kubelet[2966]: I0819 00:12:55.261483 2966 kubelet.go:314] "Adding apiserver pod source" Aug 19 00:12:55.261618 kubelet[2966]: I0819 00:12:55.261599 2966 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 00:12:55.274390 kubelet[2966]: W0819 00:12:55.274271 2966 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-10&limit=500&resourceVersion=0": dial tcp 172.31.30.10:6443: connect: connection refused Aug 19 00:12:55.275154 kubelet[2966]: E0819 00:12:55.274406 2966 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.30.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-10&limit=500&resourceVersion=0\": dial tcp 172.31.30.10:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:12:55.277061 kubelet[2966]: W0819 00:12:55.276973 2966 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.10:6443: connect: connection refused Aug 19 00:12:55.278561 kubelet[2966]: I0819 00:12:55.278127 2966 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 00:12:55.279834 kubelet[2966]: E0819 00:12:55.277249 2966 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.10:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:12:55.280175 kubelet[2966]: I0819 00:12:55.280006 2966 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 19 00:12:55.280433 kubelet[2966]: W0819 00:12:55.280397 2966 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 19 00:12:55.283561 kubelet[2966]: I0819 00:12:55.283514 2966 server.go:1274] "Started kubelet" Aug 19 00:12:55.285902 kubelet[2966]: I0819 00:12:55.285840 2966 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 00:12:55.289214 kubelet[2966]: I0819 00:12:55.289177 2966 server.go:449] "Adding debug handlers to kubelet server" Aug 19 00:12:55.292600 kubelet[2966]: I0819 00:12:55.292507 2966 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 19 00:12:55.293009 kubelet[2966]: I0819 00:12:55.292962 2966 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 00:12:55.296298 kubelet[2966]: E0819 00:12:55.293815 2966 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.10:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-10.185d02adcfcffd74 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-10,UID:ip-172-31-30-10,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-10,},FirstTimestamp:2025-08-19 00:12:55.28347378 +0000 UTC m=+0.878233518,LastTimestamp:2025-08-19 00:12:55.28347378 +0000 UTC m=+0.878233518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-10,}" Aug 19 00:12:55.296688 kubelet[2966]: I0819 00:12:55.296664 2966 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 00:12:55.298060 kubelet[2966]: I0819 00:12:55.297530 2966 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 00:12:55.303265 kubelet[2966]: I0819 00:12:55.303048 2966 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 19 
00:12:55.303265 kubelet[2966]: I0819 00:12:55.303264 2966 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 19 00:12:55.303494 kubelet[2966]: I0819 00:12:55.303359 2966 reconciler.go:26] "Reconciler: start to sync state" Aug 19 00:12:55.304477 kubelet[2966]: W0819 00:12:55.304361 2966 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.10:6443: connect: connection refused Aug 19 00:12:55.304610 kubelet[2966]: E0819 00:12:55.304474 2966 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.10:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:12:55.304610 kubelet[2966]: E0819 00:12:55.304591 2966 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-30-10\" not found" Aug 19 00:12:55.305231 kubelet[2966]: E0819 00:12:55.305178 2966 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 00:12:55.307148 kubelet[2966]: I0819 00:12:55.305811 2966 factory.go:221] Registration of the systemd container factory successfully Aug 19 00:12:55.307148 kubelet[2966]: I0819 00:12:55.305966 2966 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 00:12:55.307148 kubelet[2966]: E0819 00:12:55.307024 2966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-10?timeout=10s\": dial tcp 172.31.30.10:6443: connect: connection refused" interval="200ms" Aug 19 00:12:55.309056 kubelet[2966]: I0819 00:12:55.309023 2966 factory.go:221] Registration of the containerd container factory successfully Aug 19 00:12:55.330327 kubelet[2966]: I0819 00:12:55.330282 2966 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 19 00:12:55.330518 kubelet[2966]: I0819 00:12:55.330495 2966 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 19 00:12:55.330671 kubelet[2966]: I0819 00:12:55.330653 2966 state_mem.go:36] "Initialized new in-memory state store" Aug 19 00:12:55.333840 kubelet[2966]: I0819 00:12:55.333799 2966 policy_none.go:49] "None policy: Start" Aug 19 00:12:55.340404 kubelet[2966]: I0819 00:12:55.339495 2966 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 19 00:12:55.340404 kubelet[2966]: I0819 00:12:55.339541 2966 state_mem.go:35] "Initializing new in-memory state store" Aug 19 00:12:55.345321 kubelet[2966]: I0819 00:12:55.345251 2966 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 19 00:12:55.348652 kubelet[2966]: I0819 00:12:55.348617 2966 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 19 00:12:55.348969 kubelet[2966]: I0819 00:12:55.348936 2966 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 19 00:12:55.349041 kubelet[2966]: I0819 00:12:55.348979 2966 kubelet.go:2321] "Starting kubelet main sync loop" Aug 19 00:12:55.349096 kubelet[2966]: E0819 00:12:55.349047 2966 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 00:12:55.350364 kubelet[2966]: W0819 00:12:55.350298 2966 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.10:6443: connect: connection refused Aug 19 00:12:55.351935 kubelet[2966]: E0819 00:12:55.351896 2966 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.10:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:12:55.361057 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 19 00:12:55.376916 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 19 00:12:55.384317 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 19 00:12:55.406434 kubelet[2966]: E0819 00:12:55.404735 2966 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-30-10\" not found" Aug 19 00:12:55.406434 kubelet[2966]: I0819 00:12:55.405048 2966 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 19 00:12:55.406434 kubelet[2966]: I0819 00:12:55.405310 2966 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 00:12:55.406434 kubelet[2966]: I0819 00:12:55.405328 2966 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 00:12:55.409423 kubelet[2966]: I0819 00:12:55.409202 2966 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 00:12:55.412078 kubelet[2966]: E0819 00:12:55.412010 2966 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-10\" not found" Aug 19 00:12:55.470826 systemd[1]: Created slice kubepods-burstable-pod5b5a1789e85ec8bf7e45a361860db75f.slice - libcontainer container kubepods-burstable-pod5b5a1789e85ec8bf7e45a361860db75f.slice. Aug 19 00:12:55.498984 systemd[1]: Created slice kubepods-burstable-pod8e28acbfdbde39a9b6fcb734fccac03e.slice - libcontainer container kubepods-burstable-pod8e28acbfdbde39a9b6fcb734fccac03e.slice. 
Aug 19 00:12:55.504957 kubelet[2966]: I0819 00:12:55.504894 2966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b5a1789e85ec8bf7e45a361860db75f-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-10\" (UID: \"5b5a1789e85ec8bf7e45a361860db75f\") " pod="kube-system/kube-apiserver-ip-172-31-30-10" Aug 19 00:12:55.505485 kubelet[2966]: I0819 00:12:55.504966 2966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b5a1789e85ec8bf7e45a361860db75f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-10\" (UID: \"5b5a1789e85ec8bf7e45a361860db75f\") " pod="kube-system/kube-apiserver-ip-172-31-30-10" Aug 19 00:12:55.505485 kubelet[2966]: I0819 00:12:55.505009 2966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8e28acbfdbde39a9b6fcb734fccac03e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-10\" (UID: \"8e28acbfdbde39a9b6fcb734fccac03e\") " pod="kube-system/kube-controller-manager-ip-172-31-30-10" Aug 19 00:12:55.505485 kubelet[2966]: I0819 00:12:55.505045 2966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e28acbfdbde39a9b6fcb734fccac03e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-10\" (UID: \"8e28acbfdbde39a9b6fcb734fccac03e\") " pod="kube-system/kube-controller-manager-ip-172-31-30-10" Aug 19 00:12:55.505485 kubelet[2966]: I0819 00:12:55.505080 2966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b5a1789e85ec8bf7e45a361860db75f-ca-certs\") pod \"kube-apiserver-ip-172-31-30-10\" (UID: \"5b5a1789e85ec8bf7e45a361860db75f\") " pod="kube-system/kube-apiserver-ip-172-31-30-10" Aug 19 00:12:55.505485 kubelet[2966]: I0819 00:12:55.505111 2966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e28acbfdbde39a9b6fcb734fccac03e-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-10\" (UID: \"8e28acbfdbde39a9b6fcb734fccac03e\") " pod="kube-system/kube-controller-manager-ip-172-31-30-10" Aug 19 00:12:55.505742 kubelet[2966]: I0819 00:12:55.505144 2966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8e28acbfdbde39a9b6fcb734fccac03e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-10\" (UID: \"8e28acbfdbde39a9b6fcb734fccac03e\") " pod="kube-system/kube-controller-manager-ip-172-31-30-10" Aug 19 00:12:55.505742 kubelet[2966]: I0819 00:12:55.505178 2966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e28acbfdbde39a9b6fcb734fccac03e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-10\" (UID: \"8e28acbfdbde39a9b6fcb734fccac03e\") " pod="kube-system/kube-controller-manager-ip-172-31-30-10" Aug 19 00:12:55.505742 kubelet[2966]: I0819 00:12:55.505225 2966 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/a6fd3013d43bff7c03fd2525661f9215-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-10\" (UID: \"a6fd3013d43bff7c03fd2525661f9215\") " pod="kube-system/kube-scheduler-ip-172-31-30-10" Aug 19 00:12:55.509037 systemd[1]: Created slice kubepods-burstable-poda6fd3013d43bff7c03fd2525661f9215.slice - libcontainer container kubepods-burstable-poda6fd3013d43bff7c03fd2525661f9215.slice. Aug 19 00:12:55.510281 kubelet[2966]: E0819 00:12:55.507956 2966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-10?timeout=10s\": dial tcp 172.31.30.10:6443: connect: connection refused" interval="400ms" Aug 19 00:12:55.511776 kubelet[2966]: I0819 00:12:55.511720 2966 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-10" Aug 19 00:12:55.513643 kubelet[2966]: E0819 00:12:55.513577 2966 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.10:6443/api/v1/nodes\": dial tcp 172.31.30.10:6443: connect: connection refused" node="ip-172-31-30-10" Aug 19 00:12:55.717913 kubelet[2966]: I0819 00:12:55.717791 2966 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-10" Aug 19 00:12:55.718432 kubelet[2966]: E0819 00:12:55.718243 2966 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.10:6443/api/v1/nodes\": dial tcp 172.31.30.10:6443: connect: connection refused" node="ip-172-31-30-10" Aug 19 00:12:55.803569 containerd[2014]: time="2025-08-19T00:12:55.801343634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-10,Uid:5b5a1789e85ec8bf7e45a361860db75f,Namespace:kube-system,Attempt:0,}" Aug 19 00:12:55.810736 containerd[2014]: time="2025-08-19T00:12:55.810671558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-10,Uid:8e28acbfdbde39a9b6fcb734fccac03e,Namespace:kube-system,Attempt:0,}" Aug 19 00:12:55.815114 containerd[2014]: time="2025-08-19T00:12:55.815043782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-10,Uid:a6fd3013d43bff7c03fd2525661f9215,Namespace:kube-system,Attempt:0,}" Aug 19 00:12:55.911169 kubelet[2966]: E0819 00:12:55.911072 2966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-10?timeout=10s\": dial tcp 172.31.30.10:6443: connect: connection refused" interval="800ms" Aug 19 00:12:56.136770 kubelet[2966]: I0819 00:12:56.136349 2966 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-10" Aug 19 00:12:56.136944 kubelet[2966]: E0819 00:12:56.136845 2966 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.10:6443/api/v1/nodes\": dial tcp 172.31.30.10:6443: connect: connection refused" node="ip-172-31-30-10" Aug 19 00:12:56.157695 containerd[2014]: time="2025-08-19T00:12:56.157619364Z" level=info msg="connecting to shim 6454cfb6bb1fea6978d46cf19ab86c89a37578b15647c56101b16e0b6faf0d9f" address="unix:///run/containerd/s/1c97eb2a32f82cb40b7c0b5d3745392b332c7d2e4ca590fa24d592b78d858db4" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:12:56.160818 containerd[2014]: time="2025-08-19T00:12:56.160733244Z" level=info msg="connecting to shim ed5200e82150213986030627ec8ef2364cdce9fe828686ee83acacaeeaa20610" 
address="unix:///run/containerd/s/fc054724fbc7c85d50374411f603703f443ae7bdfa9195c77fd661565703ed3b" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:12:56.161676 containerd[2014]: time="2025-08-19T00:12:56.161617692Z" level=info msg="connecting to shim cf108861798fde2183245d08446ee467308e2dc532a2b7321bdfdfa6fa5432f4" address="unix:///run/containerd/s/1f9043574ed62f612ace760f655f26be70c5d9a2fb7bce2a3116b519192e27e6" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:12:56.242987 systemd[1]: Started cri-containerd-ed5200e82150213986030627ec8ef2364cdce9fe828686ee83acacaeeaa20610.scope - libcontainer container ed5200e82150213986030627ec8ef2364cdce9fe828686ee83acacaeeaa20610. Aug 19 00:12:56.258220 systemd[1]: Started cri-containerd-6454cfb6bb1fea6978d46cf19ab86c89a37578b15647c56101b16e0b6faf0d9f.scope - libcontainer container 6454cfb6bb1fea6978d46cf19ab86c89a37578b15647c56101b16e0b6faf0d9f. Aug 19 00:12:56.270595 systemd[1]: Started cri-containerd-cf108861798fde2183245d08446ee467308e2dc532a2b7321bdfdfa6fa5432f4.scope - libcontainer container cf108861798fde2183245d08446ee467308e2dc532a2b7321bdfdfa6fa5432f4. Aug 19 00:12:56.363213 containerd[2014]: time="2025-08-19T00:12:56.363162769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-10,Uid:5b5a1789e85ec8bf7e45a361860db75f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed5200e82150213986030627ec8ef2364cdce9fe828686ee83acacaeeaa20610\"" Aug 19 00:12:56.378261 containerd[2014]: time="2025-08-19T00:12:56.376964713Z" level=info msg="CreateContainer within sandbox \"ed5200e82150213986030627ec8ef2364cdce9fe828686ee83acacaeeaa20610\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 19 00:12:56.410076 containerd[2014]: time="2025-08-19T00:12:56.409931605Z" level=info msg="Container 11f9b9d2d498cdaa65f8d9a73bd8de078f80430cc1e8ed97b4303d101fed8ef1: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:12:56.424589 kubelet[2966]: W0819 00:12:56.424445 2966 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.10:6443: connect: connection refused Aug 19 00:12:56.424589 kubelet[2966]: E0819 00:12:56.424543 2966 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.10:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:12:56.432162 containerd[2014]: time="2025-08-19T00:12:56.432106093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-10,Uid:a6fd3013d43bff7c03fd2525661f9215,Namespace:kube-system,Attempt:0,} returns sandbox id \"6454cfb6bb1fea6978d46cf19ab86c89a37578b15647c56101b16e0b6faf0d9f\"" Aug 19 00:12:56.436546 containerd[2014]: time="2025-08-19T00:12:56.436456153Z" level=info msg="CreateContainer within sandbox \"ed5200e82150213986030627ec8ef2364cdce9fe828686ee83acacaeeaa20610\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"11f9b9d2d498cdaa65f8d9a73bd8de078f80430cc1e8ed97b4303d101fed8ef1\"" Aug 19 00:12:56.437904 containerd[2014]: time="2025-08-19T00:12:56.437357221Z" level=info msg="StartContainer for \"11f9b9d2d498cdaa65f8d9a73bd8de078f80430cc1e8ed97b4303d101fed8ef1\"" Aug 19 00:12:56.438597 containerd[2014]: 
time="2025-08-19T00:12:56.437698669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-10,Uid:8e28acbfdbde39a9b6fcb734fccac03e,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf108861798fde2183245d08446ee467308e2dc532a2b7321bdfdfa6fa5432f4\"" Aug 19 00:12:56.441468 containerd[2014]: time="2025-08-19T00:12:56.441416581Z" level=info msg="connecting to shim 11f9b9d2d498cdaa65f8d9a73bd8de078f80430cc1e8ed97b4303d101fed8ef1" address="unix:///run/containerd/s/fc054724fbc7c85d50374411f603703f443ae7bdfa9195c77fd661565703ed3b" protocol=ttrpc version=3 Aug 19 00:12:56.443626 containerd[2014]: time="2025-08-19T00:12:56.443536201Z" level=info msg="CreateContainer within sandbox \"cf108861798fde2183245d08446ee467308e2dc532a2b7321bdfdfa6fa5432f4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 19 00:12:56.445505 containerd[2014]: time="2025-08-19T00:12:56.445368074Z" level=info msg="CreateContainer within sandbox \"6454cfb6bb1fea6978d46cf19ab86c89a37578b15647c56101b16e0b6faf0d9f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 19 00:12:56.449064 kubelet[2966]: W0819 00:12:56.448935 2966 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.10:6443: connect: connection refused Aug 19 00:12:56.449064 kubelet[2966]: E0819 00:12:56.449045 2966 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.10:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:12:56.459661 containerd[2014]: time="2025-08-19T00:12:56.459435338Z" level=info msg="Container f4fe70e4d86024feba9d5c3595e56e995f4b14732d50fec051c58c69d553270d: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:12:56.480097 containerd[2014]: time="2025-08-19T00:12:56.480019010Z" level=info msg="CreateContainer within sandbox \"6454cfb6bb1fea6978d46cf19ab86c89a37578b15647c56101b16e0b6faf0d9f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f4fe70e4d86024feba9d5c3595e56e995f4b14732d50fec051c58c69d553270d\"" Aug 19 00:12:56.481091 containerd[2014]: time="2025-08-19T00:12:56.480985070Z" level=info msg="StartContainer for \"f4fe70e4d86024feba9d5c3595e56e995f4b14732d50fec051c58c69d553270d\"" Aug 19 00:12:56.483306 containerd[2014]: time="2025-08-19T00:12:56.483194990Z" level=info msg="connecting to shim f4fe70e4d86024feba9d5c3595e56e995f4b14732d50fec051c58c69d553270d" address="unix:///run/containerd/s/1c97eb2a32f82cb40b7c0b5d3745392b332c7d2e4ca590fa24d592b78d858db4" protocol=ttrpc version=3 Aug 19 00:12:56.483912 systemd[1]: Started cri-containerd-11f9b9d2d498cdaa65f8d9a73bd8de078f80430cc1e8ed97b4303d101fed8ef1.scope - libcontainer container 11f9b9d2d498cdaa65f8d9a73bd8de078f80430cc1e8ed97b4303d101fed8ef1. 
Aug 19 00:12:56.490436 containerd[2014]: time="2025-08-19T00:12:56.489646610Z" level=info msg="Container 5a126ed5bf61e3e33b42f371bcf7ea040b9b9eb17eb324861cf877093d850620: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:12:56.507745 containerd[2014]: time="2025-08-19T00:12:56.507686822Z" level=info msg="CreateContainer within sandbox \"cf108861798fde2183245d08446ee467308e2dc532a2b7321bdfdfa6fa5432f4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5a126ed5bf61e3e33b42f371bcf7ea040b9b9eb17eb324861cf877093d850620\"" Aug 19 00:12:56.509507 containerd[2014]: time="2025-08-19T00:12:56.509457482Z" level=info msg="StartContainer for \"5a126ed5bf61e3e33b42f371bcf7ea040b9b9eb17eb324861cf877093d850620\"" Aug 19 00:12:56.512939 containerd[2014]: time="2025-08-19T00:12:56.512863142Z" level=info msg="connecting to shim 5a126ed5bf61e3e33b42f371bcf7ea040b9b9eb17eb324861cf877093d850620" address="unix:///run/containerd/s/1f9043574ed62f612ace760f655f26be70c5d9a2fb7bce2a3116b519192e27e6" protocol=ttrpc version=3 Aug 19 00:12:56.549716 systemd[1]: Started cri-containerd-f4fe70e4d86024feba9d5c3595e56e995f4b14732d50fec051c58c69d553270d.scope - libcontainer container f4fe70e4d86024feba9d5c3595e56e995f4b14732d50fec051c58c69d553270d. Aug 19 00:12:56.581773 systemd[1]: Started cri-containerd-5a126ed5bf61e3e33b42f371bcf7ea040b9b9eb17eb324861cf877093d850620.scope - libcontainer container 5a126ed5bf61e3e33b42f371bcf7ea040b9b9eb17eb324861cf877093d850620. Aug 19 00:12:56.626167 containerd[2014]: time="2025-08-19T00:12:56.626118230Z" level=info msg="StartContainer for \"11f9b9d2d498cdaa65f8d9a73bd8de078f80430cc1e8ed97b4303d101fed8ef1\" returns successfully" Aug 19 00:12:56.677468 kubelet[2966]: W0819 00:12:56.676108 2966 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.10:6443: connect: connection refused Aug 19 00:12:56.677468 kubelet[2966]: E0819 00:12:56.676214 2966 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.10:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:12:56.710401 containerd[2014]: time="2025-08-19T00:12:56.709968699Z" level=info msg="StartContainer for \"f4fe70e4d86024feba9d5c3595e56e995f4b14732d50fec051c58c69d553270d\" returns successfully" Aug 19 00:12:56.716684 kubelet[2966]: E0819 00:12:56.716601 2966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-10?timeout=10s\": dial tcp 172.31.30.10:6443: connect: connection refused" interval="1.6s" Aug 19 00:12:56.758129 containerd[2014]: time="2025-08-19T00:12:56.758054007Z" level=info msg="StartContainer for \"5a126ed5bf61e3e33b42f371bcf7ea040b9b9eb17eb324861cf877093d850620\" returns successfully" Aug 19 00:12:56.824513 kubelet[2966]: W0819 00:12:56.824413 2966 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-10&limit=500&resourceVersion=0": dial tcp 172.31.30.10:6443: connect: connection refused Aug 19 00:12:56.824672 kubelet[2966]: E0819 
00:12:56.824542 2966 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.30.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-10&limit=500&resourceVersion=0\": dial tcp 172.31.30.10:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:12:56.942366 kubelet[2966]: I0819 00:12:56.941443 2966 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-10" Aug 19 00:13:00.121622 update_engine[1987]: I20250819 00:13:00.119503 1987 update_attempter.cc:509] Updating boot flags... Aug 19 00:13:01.280780 kubelet[2966]: I0819 00:13:01.280724 2966 apiserver.go:52] "Watching apiserver" Aug 19 00:13:01.324153 kubelet[2966]: E0819 00:13:01.324094 2966 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-10\" not found" node="ip-172-31-30-10" Aug 19 00:13:01.327590 kubelet[2966]: I0819 00:13:01.327527 2966 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-30-10" Aug 19 00:13:01.327590 kubelet[2966]: E0819 00:13:01.327587 2966 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-30-10\": node \"ip-172-31-30-10\" not found" Aug 19 00:13:01.397999 kubelet[2966]: E0819 00:13:01.397831 2966 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-30-10.185d02adcfcffd74 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-10,UID:ip-172-31-30-10,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-10,},FirstTimestamp:2025-08-19 00:12:55.28347378 +0000 UTC m=+0.878233518,LastTimestamp:2025-08-19 00:12:55.28347378 +0000 UTC m=+0.878233518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-10,}" Aug 19 00:13:01.404050 kubelet[2966]: I0819 00:13:01.403965 2966 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 19 00:13:01.827499 kubelet[2966]: E0819 00:13:01.827436 2966 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-10\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-30-10" Aug 19 00:13:03.389482 systemd[1]: Reload requested from client PID 3334 ('systemctl') (unit session-9.scope)... Aug 19 00:13:03.389512 systemd[1]: Reloading... Aug 19 00:13:03.570663 zram_generator::config[3378]: No configuration found. Aug 19 00:13:04.093602 systemd[1]: Reloading finished in 703 ms. Aug 19 00:13:04.155881 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:13:04.165988 systemd[1]: kubelet.service: Deactivated successfully. Aug 19 00:13:04.166432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:13:04.166501 systemd[1]: kubelet.service: Consumed 1.623s CPU time, 127.1M memory peak. Aug 19 00:13:04.171807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:13:04.508105 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
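The node-lease errors above also show the kubelet's retry backoff: the reported interval grows from 200ms to 400ms, 800ms and then 1.6s while the API server stays unreachable. A small sketch (hypothetical helper, same assumption of the journal saved to a text file) that pulls those intervals out:

    import re

    LEASE_RE = re.compile(r'Failed to ensure lease exists.*?interval="([^"]+)"')

    def lease_retry_intervals(journal_text: str) -> list[str]:
        """Collect the retry intervals the kubelet reports for the node lease."""
        return LEASE_RE.findall(journal_text)

    # For the portion of the journal shown above this returns
    # ["200ms", "400ms", "800ms", "1.6s"]: the interval doubles on each failure.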
Aug 19 00:13:04.520920 (kubelet)[3438]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 00:13:04.602412 kubelet[3438]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 00:13:04.602412 kubelet[3438]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 19 00:13:04.602412 kubelet[3438]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 00:13:04.602412 kubelet[3438]: I0819 00:13:04.601946 3438 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 00:13:04.618040 kubelet[3438]: I0819 00:13:04.617996 3438 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 19 00:13:04.618236 kubelet[3438]: I0819 00:13:04.618214 3438 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 00:13:04.619222 kubelet[3438]: I0819 00:13:04.618919 3438 server.go:934] "Client rotation is on, will bootstrap in background" Aug 19 00:13:04.622114 kubelet[3438]: I0819 00:13:04.621955 3438 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 19 00:13:04.625877 kubelet[3438]: I0819 00:13:04.625834 3438 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 00:13:04.639144 kubelet[3438]: I0819 00:13:04.639058 3438 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 00:13:04.648288 kubelet[3438]: I0819 00:13:04.648205 3438 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 19 00:13:04.651483 kubelet[3438]: I0819 00:13:04.651408 3438 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 19 00:13:04.651807 kubelet[3438]: I0819 00:13:04.651704 3438 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 00:13:04.652740 kubelet[3438]: I0819 00:13:04.651767 3438 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-10","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 00:13:04.652740 kubelet[3438]: I0819 00:13:04.652737 3438 topology_manager.go:138] "Creating topology manager with none policy" Aug 19 00:13:04.653012 kubelet[3438]: I0819 00:13:04.652762 3438 container_manager_linux.go:300] "Creating device plugin manager" Aug 19 00:13:04.653012 kubelet[3438]: I0819 00:13:04.652834 3438 state_mem.go:36] "Initialized new in-memory state store" Aug 19 00:13:04.653113 kubelet[3438]: I0819 00:13:04.653022 3438 kubelet.go:408] "Attempting to sync node with API server" Aug 19 00:13:04.653113 kubelet[3438]: I0819 00:13:04.653046 3438 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 00:13:04.653113 kubelet[3438]: I0819 00:13:04.653081 3438 kubelet.go:314] "Adding apiserver pod source" Aug 19 00:13:04.654417 kubelet[3438]: I0819 00:13:04.654337 3438 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 00:13:04.658976 kubelet[3438]: I0819 00:13:04.658834 3438 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 00:13:04.659820 kubelet[3438]: I0819 00:13:04.659795 3438 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 19 00:13:04.661008 kubelet[3438]: I0819 00:13:04.660621 3438 server.go:1274] "Started kubelet" Aug 19 00:13:04.664975 kubelet[3438]: I0819 00:13:04.664913 3438 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 00:13:04.675355 kubelet[3438]: I0819 
00:13:04.675293 3438 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 00:13:04.678259 kubelet[3438]: I0819 00:13:04.677113 3438 server.go:449] "Adding debug handlers to kubelet server" Aug 19 00:13:04.681301 kubelet[3438]: I0819 00:13:04.681206 3438 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 19 00:13:04.681974 kubelet[3438]: I0819 00:13:04.681951 3438 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 00:13:04.682637 kubelet[3438]: I0819 00:13:04.682612 3438 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 00:13:04.685852 kubelet[3438]: I0819 00:13:04.685823 3438 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 19 00:13:04.686304 kubelet[3438]: E0819 00:13:04.686270 3438 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-30-10\" not found" Aug 19 00:13:04.691338 kubelet[3438]: I0819 00:13:04.691304 3438 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 19 00:13:04.691766 kubelet[3438]: I0819 00:13:04.691744 3438 reconciler.go:26] "Reconciler: start to sync state" Aug 19 00:13:04.695158 kubelet[3438]: I0819 00:13:04.695100 3438 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 19 00:13:04.698470 kubelet[3438]: I0819 00:13:04.697827 3438 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 19 00:13:04.698470 kubelet[3438]: I0819 00:13:04.697869 3438 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 19 00:13:04.698470 kubelet[3438]: I0819 00:13:04.697899 3438 kubelet.go:2321] "Starting kubelet main sync loop" Aug 19 00:13:04.698470 kubelet[3438]: E0819 00:13:04.697979 3438 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 00:13:04.698470 kubelet[3438]: I0819 00:13:04.698290 3438 factory.go:221] Registration of the systemd container factory successfully Aug 19 00:13:04.701800 kubelet[3438]: I0819 00:13:04.701732 3438 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 00:13:04.764972 kubelet[3438]: I0819 00:13:04.763662 3438 factory.go:221] Registration of the containerd container factory successfully Aug 19 00:13:04.776292 kubelet[3438]: E0819 00:13:04.776107 3438 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 00:13:04.798078 kubelet[3438]: E0819 00:13:04.798033 3438 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 19 00:13:04.943174 kubelet[3438]: I0819 00:13:04.941683 3438 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 19 00:13:04.943174 kubelet[3438]: I0819 00:13:04.941717 3438 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 19 00:13:04.943174 kubelet[3438]: I0819 00:13:04.941754 3438 state_mem.go:36] "Initialized new in-memory state store" Aug 19 00:13:04.943174 kubelet[3438]: I0819 00:13:04.942003 3438 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 19 00:13:04.943174 kubelet[3438]: I0819 00:13:04.942022 3438 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 19 00:13:04.943174 kubelet[3438]: I0819 00:13:04.942055 3438 policy_none.go:49] "None policy: Start" Aug 19 00:13:04.944398 kubelet[3438]: I0819 00:13:04.944165 3438 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 19 00:13:04.944398 kubelet[3438]: I0819 00:13:04.944216 3438 state_mem.go:35] "Initializing new in-memory state store" Aug 19 00:13:04.945991 kubelet[3438]: I0819 00:13:04.944650 3438 state_mem.go:75] "Updated machine memory state" Aug 19 00:13:04.965008 kubelet[3438]: I0819 00:13:04.964340 3438 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 19 00:13:04.971275 kubelet[3438]: I0819 00:13:04.970687 3438 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 00:13:04.973115 kubelet[3438]: I0819 00:13:04.970728 3438 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 00:13:04.976026 kubelet[3438]: I0819 00:13:04.974777 3438 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 00:13:05.045415 kubelet[3438]: E0819 00:13:05.044405 3438 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-30-10\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-30-10" Aug 19 00:13:05.055769 kubelet[3438]: E0819 00:13:05.055726 3438 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-30-10\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-10" Aug 19 00:13:05.095892 kubelet[3438]: I0819 00:13:05.095817 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e28acbfdbde39a9b6fcb734fccac03e-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-10\" (UID: \"8e28acbfdbde39a9b6fcb734fccac03e\") " pod="kube-system/kube-controller-manager-ip-172-31-30-10" Aug 19 00:13:05.096530 kubelet[3438]: I0819 00:13:05.096331 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e28acbfdbde39a9b6fcb734fccac03e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-10\" (UID: \"8e28acbfdbde39a9b6fcb734fccac03e\") " pod="kube-system/kube-controller-manager-ip-172-31-30-10" Aug 19 00:13:05.096897 kubelet[3438]: I0819 00:13:05.096756 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b5a1789e85ec8bf7e45a361860db75f-ca-certs\") pod 
\"kube-apiserver-ip-172-31-30-10\" (UID: \"5b5a1789e85ec8bf7e45a361860db75f\") " pod="kube-system/kube-apiserver-ip-172-31-30-10" Aug 19 00:13:05.097104 kubelet[3438]: I0819 00:13:05.097081 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b5a1789e85ec8bf7e45a361860db75f-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-10\" (UID: \"5b5a1789e85ec8bf7e45a361860db75f\") " pod="kube-system/kube-apiserver-ip-172-31-30-10" Aug 19 00:13:05.097490 kubelet[3438]: I0819 00:13:05.097427 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b5a1789e85ec8bf7e45a361860db75f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-10\" (UID: \"5b5a1789e85ec8bf7e45a361860db75f\") " pod="kube-system/kube-apiserver-ip-172-31-30-10" Aug 19 00:13:05.097984 kubelet[3438]: I0819 00:13:05.097863 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8e28acbfdbde39a9b6fcb734fccac03e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-10\" (UID: \"8e28acbfdbde39a9b6fcb734fccac03e\") " pod="kube-system/kube-controller-manager-ip-172-31-30-10" Aug 19 00:13:05.098361 kubelet[3438]: I0819 00:13:05.098247 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e28acbfdbde39a9b6fcb734fccac03e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-10\" (UID: \"8e28acbfdbde39a9b6fcb734fccac03e\") " pod="kube-system/kube-controller-manager-ip-172-31-30-10" Aug 19 00:13:05.098970 kubelet[3438]: I0819 00:13:05.098668 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8e28acbfdbde39a9b6fcb734fccac03e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-10\" (UID: \"8e28acbfdbde39a9b6fcb734fccac03e\") " pod="kube-system/kube-controller-manager-ip-172-31-30-10" Aug 19 00:13:05.099262 kubelet[3438]: I0819 00:13:05.099110 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6fd3013d43bff7c03fd2525661f9215-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-10\" (UID: \"a6fd3013d43bff7c03fd2525661f9215\") " pod="kube-system/kube-scheduler-ip-172-31-30-10" Aug 19 00:13:05.105976 kubelet[3438]: I0819 00:13:05.105494 3438 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-10" Aug 19 00:13:05.127711 kubelet[3438]: I0819 00:13:05.127673 3438 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-30-10" Aug 19 00:13:05.128202 kubelet[3438]: I0819 00:13:05.128177 3438 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-30-10" Aug 19 00:13:05.659057 kubelet[3438]: I0819 00:13:05.658976 3438 apiserver.go:52] "Watching apiserver" Aug 19 00:13:05.691699 kubelet[3438]: I0819 00:13:05.691648 3438 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 19 00:13:05.886492 kubelet[3438]: E0819 00:13:05.886298 3438 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-30-10\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-30-10" Aug 19 
00:13:05.891797 kubelet[3438]: E0819 00:13:05.891742 3438 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-10\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-10" Aug 19 00:13:05.951928 kubelet[3438]: I0819 00:13:05.951737 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-10" podStartSLOduration=0.951697273 podStartE2EDuration="951.697273ms" podCreationTimestamp="2025-08-19 00:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:13:05.931394605 +0000 UTC m=+1.403523260" watchObservedRunningTime="2025-08-19 00:13:05.951697273 +0000 UTC m=+1.423825904" Aug 19 00:13:05.953490 kubelet[3438]: I0819 00:13:05.953407 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-10" podStartSLOduration=2.953358877 podStartE2EDuration="2.953358877s" podCreationTimestamp="2025-08-19 00:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:13:05.952312525 +0000 UTC m=+1.424441144" watchObservedRunningTime="2025-08-19 00:13:05.953358877 +0000 UTC m=+1.425487520" Aug 19 00:13:05.990979 kubelet[3438]: I0819 00:13:05.990785 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-10" podStartSLOduration=3.990763477 podStartE2EDuration="3.990763477s" podCreationTimestamp="2025-08-19 00:13:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:13:05.968128957 +0000 UTC m=+1.440257588" watchObservedRunningTime="2025-08-19 00:13:05.990763477 +0000 UTC m=+1.462892108" Aug 19 00:13:09.454238 kubelet[3438]: I0819 00:13:09.454190 3438 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 19 00:13:09.454856 containerd[2014]: time="2025-08-19T00:13:09.454704674Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 19 00:13:09.455288 kubelet[3438]: I0819 00:13:09.455015 3438 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 19 00:13:10.278180 systemd[1]: Created slice kubepods-besteffort-pod1a684b21_4f32_4363_bfae_c95ab00ae958.slice - libcontainer container kubepods-besteffort-pod1a684b21_4f32_4363_bfae_c95ab00ae958.slice. 
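Once the API server is reachable, the runtime is handed the node's pod CIDR (192.168.0.0/24 in the entry above). A quick stdlib check, purely illustrative, of the size of that range and that it is disjoint from the node's own 172.31.30.10 address:

    import ipaddress

    pod_cidr = ipaddress.ip_network("192.168.0.0/24")   # from the "Updating Pod CIDR" entry above
    node_ip = ipaddress.ip_address("172.31.30.10")      # this node's address

    print(pod_cidr.num_addresses)   # 256 addresses available for pods on this node
    print(node_ip in pod_cidr)      # False: the pod range does not contain the node address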
Aug 19 00:13:10.335406 kubelet[3438]: I0819 00:13:10.334061 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1a684b21-4f32-4363-bfae-c95ab00ae958-kube-proxy\") pod \"kube-proxy-bknh2\" (UID: \"1a684b21-4f32-4363-bfae-c95ab00ae958\") " pod="kube-system/kube-proxy-bknh2" Aug 19 00:13:10.335406 kubelet[3438]: I0819 00:13:10.334126 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp5t7\" (UniqueName: \"kubernetes.io/projected/1a684b21-4f32-4363-bfae-c95ab00ae958-kube-api-access-sp5t7\") pod \"kube-proxy-bknh2\" (UID: \"1a684b21-4f32-4363-bfae-c95ab00ae958\") " pod="kube-system/kube-proxy-bknh2" Aug 19 00:13:10.335406 kubelet[3438]: I0819 00:13:10.334168 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a684b21-4f32-4363-bfae-c95ab00ae958-lib-modules\") pod \"kube-proxy-bknh2\" (UID: \"1a684b21-4f32-4363-bfae-c95ab00ae958\") " pod="kube-system/kube-proxy-bknh2" Aug 19 00:13:10.335406 kubelet[3438]: I0819 00:13:10.334205 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a684b21-4f32-4363-bfae-c95ab00ae958-xtables-lock\") pod \"kube-proxy-bknh2\" (UID: \"1a684b21-4f32-4363-bfae-c95ab00ae958\") " pod="kube-system/kube-proxy-bknh2" Aug 19 00:13:10.593193 containerd[2014]: time="2025-08-19T00:13:10.592781992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bknh2,Uid:1a684b21-4f32-4363-bfae-c95ab00ae958,Namespace:kube-system,Attempt:0,}" Aug 19 00:13:10.634764 containerd[2014]: time="2025-08-19T00:13:10.634682716Z" level=info msg="connecting to shim 4ebdfd0912517038676497670b8fad776afee29992ddc59550660d22975463e0" address="unix:///run/containerd/s/e6368357bf592f6563d499bc6e6d601e6892374b7ed4bafdeb22db24b6c40a72" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:10.700894 systemd[1]: Started cri-containerd-4ebdfd0912517038676497670b8fad776afee29992ddc59550660d22975463e0.scope - libcontainer container 4ebdfd0912517038676497670b8fad776afee29992ddc59550660d22975463e0. Aug 19 00:13:10.737261 kubelet[3438]: I0819 00:13:10.737095 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9jtb\" (UniqueName: \"kubernetes.io/projected/9aa23eb4-974b-4bc2-a216-648d1fcd05d2-kube-api-access-g9jtb\") pod \"tigera-operator-5bf8dfcb4-5vrvm\" (UID: \"9aa23eb4-974b-4bc2-a216-648d1fcd05d2\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-5vrvm" Aug 19 00:13:10.737261 kubelet[3438]: I0819 00:13:10.737161 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9aa23eb4-974b-4bc2-a216-648d1fcd05d2-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-5vrvm\" (UID: \"9aa23eb4-974b-4bc2-a216-648d1fcd05d2\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-5vrvm" Aug 19 00:13:10.748423 systemd[1]: Created slice kubepods-besteffort-pod9aa23eb4_974b_4bc2_a216_648d1fcd05d2.slice - libcontainer container kubepods-besteffort-pod9aa23eb4_974b_4bc2_a216_648d1fcd05d2.slice. 
Aug 19 00:13:10.795470 containerd[2014]: time="2025-08-19T00:13:10.795320957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bknh2,Uid:1a684b21-4f32-4363-bfae-c95ab00ae958,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ebdfd0912517038676497670b8fad776afee29992ddc59550660d22975463e0\"" Aug 19 00:13:10.803347 containerd[2014]: time="2025-08-19T00:13:10.803238389Z" level=info msg="CreateContainer within sandbox \"4ebdfd0912517038676497670b8fad776afee29992ddc59550660d22975463e0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 19 00:13:10.825535 containerd[2014]: time="2025-08-19T00:13:10.824575145Z" level=info msg="Container 3bac3635fb1dba29cbc2f5fed0d17ff02189a188edce5b0f584f64cf38063855: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:13:10.827118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount421363152.mount: Deactivated successfully. Aug 19 00:13:10.840329 containerd[2014]: time="2025-08-19T00:13:10.840272105Z" level=info msg="CreateContainer within sandbox \"4ebdfd0912517038676497670b8fad776afee29992ddc59550660d22975463e0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3bac3635fb1dba29cbc2f5fed0d17ff02189a188edce5b0f584f64cf38063855\"" Aug 19 00:13:10.842614 containerd[2014]: time="2025-08-19T00:13:10.842534321Z" level=info msg="StartContainer for \"3bac3635fb1dba29cbc2f5fed0d17ff02189a188edce5b0f584f64cf38063855\"" Aug 19 00:13:10.848928 containerd[2014]: time="2025-08-19T00:13:10.847721465Z" level=info msg="connecting to shim 3bac3635fb1dba29cbc2f5fed0d17ff02189a188edce5b0f584f64cf38063855" address="unix:///run/containerd/s/e6368357bf592f6563d499bc6e6d601e6892374b7ed4bafdeb22db24b6c40a72" protocol=ttrpc version=3 Aug 19 00:13:10.888701 systemd[1]: Started cri-containerd-3bac3635fb1dba29cbc2f5fed0d17ff02189a188edce5b0f584f64cf38063855.scope - libcontainer container 3bac3635fb1dba29cbc2f5fed0d17ff02189a188edce5b0f584f64cf38063855. Aug 19 00:13:10.992472 containerd[2014]: time="2025-08-19T00:13:10.992398950Z" level=info msg="StartContainer for \"3bac3635fb1dba29cbc2f5fed0d17ff02189a188edce5b0f584f64cf38063855\" returns successfully" Aug 19 00:13:11.056780 containerd[2014]: time="2025-08-19T00:13:11.056727302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-5vrvm,Uid:9aa23eb4-974b-4bc2-a216-648d1fcd05d2,Namespace:tigera-operator,Attempt:0,}" Aug 19 00:13:11.092555 containerd[2014]: time="2025-08-19T00:13:11.092043926Z" level=info msg="connecting to shim 51e38c3cf5fccbfff48f2fdf162a6727a133f5afc4cf619cede7c55282176058" address="unix:///run/containerd/s/dab534132c5b739293446850815fe905c73606a641caa5351f18434e552b6ac3" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:11.148018 systemd[1]: Started cri-containerd-51e38c3cf5fccbfff48f2fdf162a6727a133f5afc4cf619cede7c55282176058.scope - libcontainer container 51e38c3cf5fccbfff48f2fdf162a6727a133f5afc4cf619cede7c55282176058. 
Aug 19 00:13:11.272282 containerd[2014]: time="2025-08-19T00:13:11.272135619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-5vrvm,Uid:9aa23eb4-974b-4bc2-a216-648d1fcd05d2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"51e38c3cf5fccbfff48f2fdf162a6727a133f5afc4cf619cede7c55282176058\"" Aug 19 00:13:11.280473 containerd[2014]: time="2025-08-19T00:13:11.279716211Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 19 00:13:11.921404 kubelet[3438]: I0819 00:13:11.920740 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bknh2" podStartSLOduration=1.920715402 podStartE2EDuration="1.920715402s" podCreationTimestamp="2025-08-19 00:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:13:11.919649982 +0000 UTC m=+7.391778613" watchObservedRunningTime="2025-08-19 00:13:11.920715402 +0000 UTC m=+7.392844045" Aug 19 00:13:12.695469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2683525736.mount: Deactivated successfully. Aug 19 00:13:13.416852 containerd[2014]: time="2025-08-19T00:13:13.416775354Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:13.419673 containerd[2014]: time="2025-08-19T00:13:13.419578302Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Aug 19 00:13:13.422403 containerd[2014]: time="2025-08-19T00:13:13.422191194Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:13.429392 containerd[2014]: time="2025-08-19T00:13:13.429319182Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:13.432520 containerd[2014]: time="2025-08-19T00:13:13.432410334Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.152593983s" Aug 19 00:13:13.432520 containerd[2014]: time="2025-08-19T00:13:13.432463062Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Aug 19 00:13:13.437653 containerd[2014]: time="2025-08-19T00:13:13.437588646Z" level=info msg="CreateContainer within sandbox \"51e38c3cf5fccbfff48f2fdf162a6727a133f5afc4cf619cede7c55282176058\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 19 00:13:13.451411 containerd[2014]: time="2025-08-19T00:13:13.451271514Z" level=info msg="Container 3ea1fea89cb2af2d19161059bfc52877063e7e19bba59163780e684b82496e9d: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:13:13.464054 containerd[2014]: time="2025-08-19T00:13:13.463980534Z" level=info msg="CreateContainer within sandbox \"51e38c3cf5fccbfff48f2fdf162a6727a133f5afc4cf619cede7c55282176058\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"3ea1fea89cb2af2d19161059bfc52877063e7e19bba59163780e684b82496e9d\"" Aug 19 00:13:13.466683 containerd[2014]: time="2025-08-19T00:13:13.466075278Z" level=info msg="StartContainer for \"3ea1fea89cb2af2d19161059bfc52877063e7e19bba59163780e684b82496e9d\"" Aug 19 00:13:13.467812 containerd[2014]: time="2025-08-19T00:13:13.467724930Z" level=info msg="connecting to shim 3ea1fea89cb2af2d19161059bfc52877063e7e19bba59163780e684b82496e9d" address="unix:///run/containerd/s/dab534132c5b739293446850815fe905c73606a641caa5351f18434e552b6ac3" protocol=ttrpc version=3 Aug 19 00:13:13.498841 systemd[1]: Started cri-containerd-3ea1fea89cb2af2d19161059bfc52877063e7e19bba59163780e684b82496e9d.scope - libcontainer container 3ea1fea89cb2af2d19161059bfc52877063e7e19bba59163780e684b82496e9d. Aug 19 00:13:13.551923 containerd[2014]: time="2025-08-19T00:13:13.551839734Z" level=info msg="StartContainer for \"3ea1fea89cb2af2d19161059bfc52877063e7e19bba59163780e684b82496e9d\" returns successfully" Aug 19 00:13:14.681747 kubelet[3438]: I0819 00:13:14.681656 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-5vrvm" podStartSLOduration=2.525227417 podStartE2EDuration="4.681633464s" podCreationTimestamp="2025-08-19 00:13:10 +0000 UTC" firstStartedPulling="2025-08-19 00:13:11.277597023 +0000 UTC m=+6.749725630" lastFinishedPulling="2025-08-19 00:13:13.434003058 +0000 UTC m=+8.906131677" observedRunningTime="2025-08-19 00:13:13.932034212 +0000 UTC m=+9.404162843" watchObservedRunningTime="2025-08-19 00:13:14.681633464 +0000 UTC m=+10.153762083" Aug 19 00:13:20.246911 sudo[2392]: pam_unix(sudo:session): session closed for user root Aug 19 00:13:20.272714 sshd[2391]: Connection closed by 147.75.109.163 port 33014 Aug 19 00:13:20.273680 sshd-session[2377]: pam_unix(sshd:session): session closed for user core Aug 19 00:13:20.286845 systemd-logind[1986]: Session 9 logged out. Waiting for processes to exit. Aug 19 00:13:20.288330 systemd[1]: sshd@8-172.31.30.10:22-147.75.109.163:33014.service: Deactivated successfully. Aug 19 00:13:20.298409 systemd[1]: session-9.scope: Deactivated successfully. Aug 19 00:13:20.299040 systemd[1]: session-9.scope: Consumed 10.881s CPU time, 217.2M memory peak. Aug 19 00:13:20.304217 systemd-logind[1986]: Removed session 9. Aug 19 00:13:30.756143 systemd[1]: Created slice kubepods-besteffort-pod53a84473_3eda_4a83_a2c2_fec0d5bf827f.slice - libcontainer container kubepods-besteffort-pod53a84473_3eda_4a83_a2c2_fec0d5bf827f.slice. 
Aug 19 00:13:30.869853 kubelet[3438]: I0819 00:13:30.869676 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/53a84473-3eda-4a83-a2c2-fec0d5bf827f-typha-certs\") pod \"calico-typha-6fff9d6d87-x5npr\" (UID: \"53a84473-3eda-4a83-a2c2-fec0d5bf827f\") " pod="calico-system/calico-typha-6fff9d6d87-x5npr" Aug 19 00:13:30.869853 kubelet[3438]: I0819 00:13:30.869751 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53a84473-3eda-4a83-a2c2-fec0d5bf827f-tigera-ca-bundle\") pod \"calico-typha-6fff9d6d87-x5npr\" (UID: \"53a84473-3eda-4a83-a2c2-fec0d5bf827f\") " pod="calico-system/calico-typha-6fff9d6d87-x5npr" Aug 19 00:13:30.869853 kubelet[3438]: I0819 00:13:30.869794 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6txp\" (UniqueName: \"kubernetes.io/projected/53a84473-3eda-4a83-a2c2-fec0d5bf827f-kube-api-access-n6txp\") pod \"calico-typha-6fff9d6d87-x5npr\" (UID: \"53a84473-3eda-4a83-a2c2-fec0d5bf827f\") " pod="calico-system/calico-typha-6fff9d6d87-x5npr" Aug 19 00:13:31.066685 containerd[2014]: time="2025-08-19T00:13:31.065860173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fff9d6d87-x5npr,Uid:53a84473-3eda-4a83-a2c2-fec0d5bf827f,Namespace:calico-system,Attempt:0,}" Aug 19 00:13:31.092512 systemd[1]: Created slice kubepods-besteffort-pod6a4719e8_d6c5_4892_a98f_97acd6969284.slice - libcontainer container kubepods-besteffort-pod6a4719e8_d6c5_4892_a98f_97acd6969284.slice. Aug 19 00:13:31.149755 containerd[2014]: time="2025-08-19T00:13:31.149608306Z" level=info msg="connecting to shim 2896da6ad0700f9b0bd2fa8064431653c4e73bbc5a201fedaeeea7a0cdedf5c0" address="unix:///run/containerd/s/0c36d0f49b32c8c6bd2a2c8439eaa911ef9fea258dc69ed5b2edd3e8c21ebe78" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:31.171619 kubelet[3438]: I0819 00:13:31.171567 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6a4719e8-d6c5-4892-a98f-97acd6969284-cni-net-dir\") pod \"calico-node-dm2ns\" (UID: \"6a4719e8-d6c5-4892-a98f-97acd6969284\") " pod="calico-system/calico-node-dm2ns" Aug 19 00:13:31.175410 kubelet[3438]: I0819 00:13:31.173451 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a4719e8-d6c5-4892-a98f-97acd6969284-xtables-lock\") pod \"calico-node-dm2ns\" (UID: \"6a4719e8-d6c5-4892-a98f-97acd6969284\") " pod="calico-system/calico-node-dm2ns" Aug 19 00:13:31.175410 kubelet[3438]: I0819 00:13:31.173522 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a4719e8-d6c5-4892-a98f-97acd6969284-lib-modules\") pod \"calico-node-dm2ns\" (UID: \"6a4719e8-d6c5-4892-a98f-97acd6969284\") " pod="calico-system/calico-node-dm2ns" Aug 19 00:13:31.175410 kubelet[3438]: I0819 00:13:31.173559 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6a4719e8-d6c5-4892-a98f-97acd6969284-cni-log-dir\") pod \"calico-node-dm2ns\" (UID: \"6a4719e8-d6c5-4892-a98f-97acd6969284\") " pod="calico-system/calico-node-dm2ns" Aug 19 
00:13:31.175410 kubelet[3438]: I0819 00:13:31.173594 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6a4719e8-d6c5-4892-a98f-97acd6969284-var-lib-calico\") pod \"calico-node-dm2ns\" (UID: \"6a4719e8-d6c5-4892-a98f-97acd6969284\") " pod="calico-system/calico-node-dm2ns" Aug 19 00:13:31.175410 kubelet[3438]: I0819 00:13:31.173639 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6a4719e8-d6c5-4892-a98f-97acd6969284-var-run-calico\") pod \"calico-node-dm2ns\" (UID: \"6a4719e8-d6c5-4892-a98f-97acd6969284\") " pod="calico-system/calico-node-dm2ns" Aug 19 00:13:31.175785 kubelet[3438]: I0819 00:13:31.173675 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a4719e8-d6c5-4892-a98f-97acd6969284-tigera-ca-bundle\") pod \"calico-node-dm2ns\" (UID: \"6a4719e8-d6c5-4892-a98f-97acd6969284\") " pod="calico-system/calico-node-dm2ns" Aug 19 00:13:31.175785 kubelet[3438]: I0819 00:13:31.173716 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6a4719e8-d6c5-4892-a98f-97acd6969284-cni-bin-dir\") pod \"calico-node-dm2ns\" (UID: \"6a4719e8-d6c5-4892-a98f-97acd6969284\") " pod="calico-system/calico-node-dm2ns" Aug 19 00:13:31.175785 kubelet[3438]: I0819 00:13:31.173753 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6a4719e8-d6c5-4892-a98f-97acd6969284-flexvol-driver-host\") pod \"calico-node-dm2ns\" (UID: \"6a4719e8-d6c5-4892-a98f-97acd6969284\") " pod="calico-system/calico-node-dm2ns" Aug 19 00:13:31.175785 kubelet[3438]: I0819 00:13:31.173789 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6a4719e8-d6c5-4892-a98f-97acd6969284-policysync\") pod \"calico-node-dm2ns\" (UID: \"6a4719e8-d6c5-4892-a98f-97acd6969284\") " pod="calico-system/calico-node-dm2ns" Aug 19 00:13:31.175785 kubelet[3438]: I0819 00:13:31.173835 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6a4719e8-d6c5-4892-a98f-97acd6969284-node-certs\") pod \"calico-node-dm2ns\" (UID: \"6a4719e8-d6c5-4892-a98f-97acd6969284\") " pod="calico-system/calico-node-dm2ns" Aug 19 00:13:31.176037 kubelet[3438]: I0819 00:13:31.173870 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrqql\" (UniqueName: \"kubernetes.io/projected/6a4719e8-d6c5-4892-a98f-97acd6969284-kube-api-access-qrqql\") pod \"calico-node-dm2ns\" (UID: \"6a4719e8-d6c5-4892-a98f-97acd6969284\") " pod="calico-system/calico-node-dm2ns" Aug 19 00:13:31.223844 systemd[1]: Started cri-containerd-2896da6ad0700f9b0bd2fa8064431653c4e73bbc5a201fedaeeea7a0cdedf5c0.scope - libcontainer container 2896da6ad0700f9b0bd2fa8064431653c4e73bbc5a201fedaeeea7a0cdedf5c0. 
Aug 19 00:13:31.282412 kubelet[3438]: E0819 00:13:31.281965 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.282923 kubelet[3438]: W0819 00:13:31.282618 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.282923 kubelet[3438]: E0819 00:13:31.282671 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.288386 kubelet[3438]: E0819 00:13:31.287777 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.291411 kubelet[3438]: W0819 00:13:31.287813 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.291411 kubelet[3438]: E0819 00:13:31.289578 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.321829 kubelet[3438]: E0819 00:13:31.320296 3438 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2xnqj" podUID="b50a5f61-1810-4b90-b687-04d75015225f" Aug 19 00:13:31.343658 kubelet[3438]: E0819 00:13:31.342756 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.343658 kubelet[3438]: W0819 00:13:31.342803 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.343658 kubelet[3438]: E0819 00:13:31.342839 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.377636 kubelet[3438]: E0819 00:13:31.377579 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.377783 kubelet[3438]: W0819 00:13:31.377643 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.377783 kubelet[3438]: E0819 00:13:31.377681 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:31.379580 kubelet[3438]: E0819 00:13:31.379504 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.379580 kubelet[3438]: W0819 00:13:31.379568 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.379900 kubelet[3438]: E0819 00:13:31.379603 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.381198 kubelet[3438]: E0819 00:13:31.381146 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.381198 kubelet[3438]: W0819 00:13:31.381197 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.381442 kubelet[3438]: E0819 00:13:31.381231 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.381811 kubelet[3438]: E0819 00:13:31.381770 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.381900 kubelet[3438]: W0819 00:13:31.381802 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.381900 kubelet[3438]: E0819 00:13:31.381853 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.382976 kubelet[3438]: E0819 00:13:31.382926 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.382976 kubelet[3438]: W0819 00:13:31.382965 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.383166 kubelet[3438]: E0819 00:13:31.382997 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.383665 kubelet[3438]: E0819 00:13:31.383619 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.383665 kubelet[3438]: W0819 00:13:31.383653 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.383851 kubelet[3438]: E0819 00:13:31.383681 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:31.385795 kubelet[3438]: E0819 00:13:31.385736 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.385795 kubelet[3438]: W0819 00:13:31.385777 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.385961 kubelet[3438]: E0819 00:13:31.385812 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.386880 kubelet[3438]: E0819 00:13:31.386825 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.386880 kubelet[3438]: W0819 00:13:31.386865 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.387073 kubelet[3438]: E0819 00:13:31.386897 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.389552 kubelet[3438]: E0819 00:13:31.389499 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.389552 kubelet[3438]: W0819 00:13:31.389539 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.389748 kubelet[3438]: E0819 00:13:31.389574 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.390001 kubelet[3438]: E0819 00:13:31.389963 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.390001 kubelet[3438]: W0819 00:13:31.389991 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.390131 kubelet[3438]: E0819 00:13:31.390016 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.390606 kubelet[3438]: E0819 00:13:31.390562 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.390606 kubelet[3438]: W0819 00:13:31.390596 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.390762 kubelet[3438]: E0819 00:13:31.390623 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:31.391439 kubelet[3438]: E0819 00:13:31.391137 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.391439 kubelet[3438]: W0819 00:13:31.391429 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.391616 kubelet[3438]: E0819 00:13:31.391461 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.392214 kubelet[3438]: E0819 00:13:31.392164 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.392214 kubelet[3438]: W0819 00:13:31.392202 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.392424 kubelet[3438]: E0819 00:13:31.392232 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.393178 kubelet[3438]: E0819 00:13:31.393133 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.393178 kubelet[3438]: W0819 00:13:31.393169 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.393360 kubelet[3438]: E0819 00:13:31.393201 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.393999 kubelet[3438]: E0819 00:13:31.393944 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.394204 kubelet[3438]: W0819 00:13:31.394109 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.394204 kubelet[3438]: E0819 00:13:31.394143 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.395766 kubelet[3438]: E0819 00:13:31.395696 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.395766 kubelet[3438]: W0819 00:13:31.395752 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.395978 kubelet[3438]: E0819 00:13:31.395786 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:31.396193 kubelet[3438]: E0819 00:13:31.396159 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.396193 kubelet[3438]: W0819 00:13:31.396188 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.396330 kubelet[3438]: E0819 00:13:31.396214 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.396888 kubelet[3438]: E0819 00:13:31.396838 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.396888 kubelet[3438]: W0819 00:13:31.396876 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.397083 kubelet[3438]: E0819 00:13:31.396906 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.397727 kubelet[3438]: E0819 00:13:31.397670 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.397727 kubelet[3438]: W0819 00:13:31.397707 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.397906 kubelet[3438]: E0819 00:13:31.397739 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.399348 kubelet[3438]: E0819 00:13:31.399287 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.399348 kubelet[3438]: W0819 00:13:31.399328 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.399560 kubelet[3438]: E0819 00:13:31.399362 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:31.413640 containerd[2014]: time="2025-08-19T00:13:31.413224607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dm2ns,Uid:6a4719e8-d6c5-4892-a98f-97acd6969284,Namespace:calico-system,Attempt:0,}" Aug 19 00:13:31.455819 containerd[2014]: time="2025-08-19T00:13:31.455759447Z" level=info msg="connecting to shim 7e2d0ba602d4d742f83fb4aedca2feb032f6077166dbb5c466b7948414be7e6e" address="unix:///run/containerd/s/6e800f24df024e335b17ca684d9bac2a169f4c0e98494381edd704c7b6934c14" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:31.478885 kubelet[3438]: E0819 00:13:31.478848 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.479394 kubelet[3438]: W0819 00:13:31.479079 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.479394 kubelet[3438]: E0819 00:13:31.479123 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.479394 kubelet[3438]: I0819 00:13:31.479181 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b50a5f61-1810-4b90-b687-04d75015225f-registration-dir\") pod \"csi-node-driver-2xnqj\" (UID: \"b50a5f61-1810-4b90-b687-04d75015225f\") " pod="calico-system/csi-node-driver-2xnqj" Aug 19 00:13:31.479944 kubelet[3438]: E0819 00:13:31.479701 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.479944 kubelet[3438]: W0819 00:13:31.479781 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.479944 kubelet[3438]: E0819 00:13:31.479825 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.479944 kubelet[3438]: I0819 00:13:31.479864 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k922m\" (UniqueName: \"kubernetes.io/projected/b50a5f61-1810-4b90-b687-04d75015225f-kube-api-access-k922m\") pod \"csi-node-driver-2xnqj\" (UID: \"b50a5f61-1810-4b90-b687-04d75015225f\") " pod="calico-system/csi-node-driver-2xnqj" Aug 19 00:13:31.482366 kubelet[3438]: E0819 00:13:31.480638 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.482366 kubelet[3438]: W0819 00:13:31.480691 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.482366 kubelet[3438]: E0819 00:13:31.480741 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:31.482366 kubelet[3438]: E0819 00:13:31.481665 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.482366 kubelet[3438]: W0819 00:13:31.481692 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.482366 kubelet[3438]: E0819 00:13:31.481727 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.482886 kubelet[3438]: E0819 00:13:31.482364 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.482886 kubelet[3438]: W0819 00:13:31.482536 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.482886 kubelet[3438]: E0819 00:13:31.482568 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.482886 kubelet[3438]: I0819 00:13:31.482609 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b50a5f61-1810-4b90-b687-04d75015225f-varrun\") pod \"csi-node-driver-2xnqj\" (UID: \"b50a5f61-1810-4b90-b687-04d75015225f\") " pod="calico-system/csi-node-driver-2xnqj" Aug 19 00:13:31.483945 kubelet[3438]: E0819 00:13:31.483884 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.483945 kubelet[3438]: W0819 00:13:31.483931 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.484323 kubelet[3438]: E0819 00:13:31.484176 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.484959 kubelet[3438]: E0819 00:13:31.484233 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.484959 kubelet[3438]: I0819 00:13:31.484457 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b50a5f61-1810-4b90-b687-04d75015225f-kubelet-dir\") pod \"csi-node-driver-2xnqj\" (UID: \"b50a5f61-1810-4b90-b687-04d75015225f\") " pod="calico-system/csi-node-driver-2xnqj" Aug 19 00:13:31.484959 kubelet[3438]: W0819 00:13:31.484460 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.484959 kubelet[3438]: E0819 00:13:31.484524 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:31.486563 kubelet[3438]: E0819 00:13:31.485539 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.486563 kubelet[3438]: W0819 00:13:31.485571 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.486563 kubelet[3438]: E0819 00:13:31.485614 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.487962 kubelet[3438]: E0819 00:13:31.487834 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.487962 kubelet[3438]: W0819 00:13:31.487869 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.487962 kubelet[3438]: E0819 00:13:31.487916 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.489338 kubelet[3438]: E0819 00:13:31.488636 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.489338 kubelet[3438]: W0819 00:13:31.488681 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.489338 kubelet[3438]: E0819 00:13:31.488727 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.489667 kubelet[3438]: E0819 00:13:31.489637 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.489667 kubelet[3438]: W0819 00:13:31.489662 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.489776 kubelet[3438]: E0819 00:13:31.489693 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:31.489776 kubelet[3438]: I0819 00:13:31.489734 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b50a5f61-1810-4b90-b687-04d75015225f-socket-dir\") pod \"csi-node-driver-2xnqj\" (UID: \"b50a5f61-1810-4b90-b687-04d75015225f\") " pod="calico-system/csi-node-driver-2xnqj" Aug 19 00:13:31.490908 kubelet[3438]: E0819 00:13:31.490702 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.490908 kubelet[3438]: W0819 00:13:31.490756 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.490908 kubelet[3438]: E0819 00:13:31.490790 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.492021 kubelet[3438]: E0819 00:13:31.491888 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.492021 kubelet[3438]: W0819 00:13:31.491925 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.492021 kubelet[3438]: E0819 00:13:31.491965 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.493597 kubelet[3438]: E0819 00:13:31.493460 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.493597 kubelet[3438]: W0819 00:13:31.493497 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.493597 kubelet[3438]: E0819 00:13:31.493531 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.495665 kubelet[3438]: E0819 00:13:31.495588 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.495665 kubelet[3438]: W0819 00:13:31.495631 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.495665 kubelet[3438]: E0819 00:13:31.495665 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.539228 systemd[1]: Started cri-containerd-7e2d0ba602d4d742f83fb4aedca2feb032f6077166dbb5c466b7948414be7e6e.scope - libcontainer container 7e2d0ba602d4d742f83fb4aedca2feb032f6077166dbb5c466b7948414be7e6e. 
Aug 19 00:13:31.596648 kubelet[3438]: E0819 00:13:31.595320 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.596648 kubelet[3438]: W0819 00:13:31.595352 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.596648 kubelet[3438]: E0819 00:13:31.595408 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.598769 kubelet[3438]: E0819 00:13:31.598601 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.598769 kubelet[3438]: W0819 00:13:31.598638 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.598769 kubelet[3438]: E0819 00:13:31.598685 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.603273 kubelet[3438]: E0819 00:13:31.601352 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.603273 kubelet[3438]: W0819 00:13:31.601518 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.603273 kubelet[3438]: E0819 00:13:31.601779 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.603273 kubelet[3438]: E0819 00:13:31.602774 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.603273 kubelet[3438]: W0819 00:13:31.602798 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.603273 kubelet[3438]: E0819 00:13:31.602825 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.605627 kubelet[3438]: E0819 00:13:31.603871 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.605627 kubelet[3438]: W0819 00:13:31.603908 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.605627 kubelet[3438]: E0819 00:13:31.604602 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:31.605870 kubelet[3438]: E0819 00:13:31.605759 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.605870 kubelet[3438]: W0819 00:13:31.605785 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.607915 kubelet[3438]: E0819 00:13:31.606203 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.607915 kubelet[3438]: E0819 00:13:31.607042 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.607915 kubelet[3438]: W0819 00:13:31.607068 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.607915 kubelet[3438]: E0819 00:13:31.607415 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.609434 kubelet[3438]: E0819 00:13:31.608709 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.609434 kubelet[3438]: W0819 00:13:31.608747 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.609434 kubelet[3438]: E0819 00:13:31.608994 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.609434 kubelet[3438]: E0819 00:13:31.609126 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.609434 kubelet[3438]: W0819 00:13:31.609142 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.609434 kubelet[3438]: E0819 00:13:31.609355 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.610302 kubelet[3438]: E0819 00:13:31.610025 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.610302 kubelet[3438]: W0819 00:13:31.610059 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.611513 kubelet[3438]: E0819 00:13:31.610740 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:31.611513 kubelet[3438]: E0819 00:13:31.611548 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.611513 kubelet[3438]: W0819 00:13:31.611571 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.612506 kubelet[3438]: E0819 00:13:31.612406 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.613316 kubelet[3438]: E0819 00:13:31.612776 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.613316 kubelet[3438]: W0819 00:13:31.612798 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.613316 kubelet[3438]: E0819 00:13:31.613010 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.613553 kubelet[3438]: E0819 00:13:31.613508 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.613553 kubelet[3438]: W0819 00:13:31.613528 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.614717 kubelet[3438]: E0819 00:13:31.613835 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.614717 kubelet[3438]: E0819 00:13:31.614559 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.614717 kubelet[3438]: W0819 00:13:31.614584 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.615282 kubelet[3438]: E0819 00:13:31.615198 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.615454 kubelet[3438]: E0819 00:13:31.615365 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.615454 kubelet[3438]: W0819 00:13:31.615404 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.615775 kubelet[3438]: E0819 00:13:31.615736 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:31.616542 kubelet[3438]: E0819 00:13:31.616500 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.616542 kubelet[3438]: W0819 00:13:31.616532 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.617863 kubelet[3438]: E0819 00:13:31.617752 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.618823 kubelet[3438]: E0819 00:13:31.618027 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.618823 kubelet[3438]: W0819 00:13:31.618057 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.618823 kubelet[3438]: E0819 00:13:31.618587 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.619098 kubelet[3438]: E0819 00:13:31.618866 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.619098 kubelet[3438]: W0819 00:13:31.618884 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.619853 kubelet[3438]: E0819 00:13:31.619422 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.621698 kubelet[3438]: E0819 00:13:31.621643 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.621698 kubelet[3438]: W0819 00:13:31.621680 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.621977 kubelet[3438]: E0819 00:13:31.621813 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.622153 kubelet[3438]: E0819 00:13:31.622121 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.622153 kubelet[3438]: W0819 00:13:31.622147 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.622458 kubelet[3438]: E0819 00:13:31.622366 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:31.622588 kubelet[3438]: E0819 00:13:31.622541 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.622588 kubelet[3438]: W0819 00:13:31.622557 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.622712 kubelet[3438]: E0819 00:13:31.622663 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.623698 kubelet[3438]: E0819 00:13:31.623649 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.623698 kubelet[3438]: W0819 00:13:31.623691 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.625183 kubelet[3438]: E0819 00:13:31.625110 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.625366 kubelet[3438]: E0819 00:13:31.625327 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.625366 kubelet[3438]: W0819 00:13:31.625358 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.625939 kubelet[3438]: E0819 00:13:31.625453 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.625939 kubelet[3438]: E0819 00:13:31.625749 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.625939 kubelet[3438]: W0819 00:13:31.625768 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.625939 kubelet[3438]: E0819 00:13:31.625789 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.628874 kubelet[3438]: E0819 00:13:31.628745 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.629082 kubelet[3438]: W0819 00:13:31.629053 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.629289 kubelet[3438]: E0819 00:13:31.629262 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:31.669745 containerd[2014]: time="2025-08-19T00:13:31.669626376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dm2ns,Uid:6a4719e8-d6c5-4892-a98f-97acd6969284,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e2d0ba602d4d742f83fb4aedca2feb032f6077166dbb5c466b7948414be7e6e\"" Aug 19 00:13:31.672923 containerd[2014]: time="2025-08-19T00:13:31.672609216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 19 00:13:31.696340 kubelet[3438]: E0819 00:13:31.696280 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.697656 kubelet[3438]: W0819 00:13:31.696315 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.697656 kubelet[3438]: E0819 00:13:31.697482 3438 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.762526 containerd[2014]: time="2025-08-19T00:13:31.762456181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fff9d6d87-x5npr,Uid:53a84473-3eda-4a83-a2c2-fec0d5bf827f,Namespace:calico-system,Attempt:0,} returns sandbox id \"2896da6ad0700f9b0bd2fa8064431653c4e73bbc5a201fedaeeea7a0cdedf5c0\"" Aug 19 00:13:32.701773 kubelet[3438]: E0819 00:13:32.701688 3438 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2xnqj" podUID="b50a5f61-1810-4b90-b687-04d75015225f" Aug 19 00:13:32.822826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1671938336.mount: Deactivated successfully. 
Aug 19 00:13:33.007110 containerd[2014]: time="2025-08-19T00:13:33.006893651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:33.008239 containerd[2014]: time="2025-08-19T00:13:33.008194127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5636360" Aug 19 00:13:33.009316 containerd[2014]: time="2025-08-19T00:13:33.009259535Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:33.015682 containerd[2014]: time="2025-08-19T00:13:33.015597455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:33.019402 containerd[2014]: time="2025-08-19T00:13:33.019305263Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.346447407s" Aug 19 00:13:33.019402 containerd[2014]: time="2025-08-19T00:13:33.019368947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 19 00:13:33.022278 containerd[2014]: time="2025-08-19T00:13:33.022075967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 19 00:13:33.024743 containerd[2014]: time="2025-08-19T00:13:33.024646247Z" level=info msg="CreateContainer within sandbox \"7e2d0ba602d4d742f83fb4aedca2feb032f6077166dbb5c466b7948414be7e6e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 19 00:13:33.043405 containerd[2014]: time="2025-08-19T00:13:33.042657659Z" level=info msg="Container 6047851ef1c829cb976841dc0dd302347fdc0a33986dd8908a6b739ba28461ba: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:13:33.063084 containerd[2014]: time="2025-08-19T00:13:33.063006323Z" level=info msg="CreateContainer within sandbox \"7e2d0ba602d4d742f83fb4aedca2feb032f6077166dbb5c466b7948414be7e6e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6047851ef1c829cb976841dc0dd302347fdc0a33986dd8908a6b739ba28461ba\"" Aug 19 00:13:33.064294 containerd[2014]: time="2025-08-19T00:13:33.064235675Z" level=info msg="StartContainer for \"6047851ef1c829cb976841dc0dd302347fdc0a33986dd8908a6b739ba28461ba\"" Aug 19 00:13:33.068842 containerd[2014]: time="2025-08-19T00:13:33.068741519Z" level=info msg="connecting to shim 6047851ef1c829cb976841dc0dd302347fdc0a33986dd8908a6b739ba28461ba" address="unix:///run/containerd/s/6e800f24df024e335b17ca684d9bac2a169f4c0e98494381edd704c7b6934c14" protocol=ttrpc version=3 Aug 19 00:13:33.120779 systemd[1]: Started cri-containerd-6047851ef1c829cb976841dc0dd302347fdc0a33986dd8908a6b739ba28461ba.scope - libcontainer container 6047851ef1c829cb976841dc0dd302347fdc0a33986dd8908a6b739ba28461ba. 
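The PullImage and ImageCreate entries above are CRI-driven image pulls handled by containerd. For illustration only, a rough standalone sketch of the same pull using the containerd Go client follows; the socket path and the k8s.io namespace are assumptions for a stock kubelet/containerd node, and the kubelet itself goes through the CRI API rather than this client.

    // pull_sketch.go: assumed standalone sketch of pulling the same image with
    // the containerd Go client; socket path and namespace are assumptions, not
    // values read from this host's configuration.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // The kubelet's images typically live in the "k8s.io" containerd namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx,
            "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        size, _ := img.Size(ctx)
        fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
    }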
Aug 19 00:13:33.228403 containerd[2014]: time="2025-08-19T00:13:33.228030780Z" level=info msg="StartContainer for \"6047851ef1c829cb976841dc0dd302347fdc0a33986dd8908a6b739ba28461ba\" returns successfully" Aug 19 00:13:33.247334 systemd[1]: cri-containerd-6047851ef1c829cb976841dc0dd302347fdc0a33986dd8908a6b739ba28461ba.scope: Deactivated successfully. Aug 19 00:13:33.258885 containerd[2014]: time="2025-08-19T00:13:33.258670260Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6047851ef1c829cb976841dc0dd302347fdc0a33986dd8908a6b739ba28461ba\" id:\"6047851ef1c829cb976841dc0dd302347fdc0a33986dd8908a6b739ba28461ba\" pid:4026 exited_at:{seconds:1755562413 nanos:257814036}" Aug 19 00:13:33.258885 containerd[2014]: time="2025-08-19T00:13:33.258720180Z" level=info msg="received exit event container_id:\"6047851ef1c829cb976841dc0dd302347fdc0a33986dd8908a6b739ba28461ba\" id:\"6047851ef1c829cb976841dc0dd302347fdc0a33986dd8908a6b739ba28461ba\" pid:4026 exited_at:{seconds:1755562413 nanos:257814036}" Aug 19 00:13:33.308302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6047851ef1c829cb976841dc0dd302347fdc0a33986dd8908a6b739ba28461ba-rootfs.mount: Deactivated successfully. Aug 19 00:13:34.701570 kubelet[3438]: E0819 00:13:34.701353 3438 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2xnqj" podUID="b50a5f61-1810-4b90-b687-04d75015225f" Aug 19 00:13:35.324314 containerd[2014]: time="2025-08-19T00:13:35.324217971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:35.326840 containerd[2014]: time="2025-08-19T00:13:35.326786235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=31717828" Aug 19 00:13:35.328367 containerd[2014]: time="2025-08-19T00:13:35.328253235Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:35.333470 containerd[2014]: time="2025-08-19T00:13:35.332863107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:35.334367 containerd[2014]: time="2025-08-19T00:13:35.334067259Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.311637232s" Aug 19 00:13:35.334367 containerd[2014]: time="2025-08-19T00:13:35.334121331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Aug 19 00:13:35.336655 containerd[2014]: time="2025-08-19T00:13:35.336589719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 19 00:13:35.375760 containerd[2014]: time="2025-08-19T00:13:35.375710223Z" level=info msg="CreateContainer within sandbox 
\"2896da6ad0700f9b0bd2fa8064431653c4e73bbc5a201fedaeeea7a0cdedf5c0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 19 00:13:35.389942 containerd[2014]: time="2025-08-19T00:13:35.389675463Z" level=info msg="Container 4725afd0d155e5a3149107e47261a14cf4f831df35578f43e4f686fdbd6e7bfe: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:13:35.398779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount513870564.mount: Deactivated successfully. Aug 19 00:13:35.409928 containerd[2014]: time="2025-08-19T00:13:35.409868715Z" level=info msg="CreateContainer within sandbox \"2896da6ad0700f9b0bd2fa8064431653c4e73bbc5a201fedaeeea7a0cdedf5c0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4725afd0d155e5a3149107e47261a14cf4f831df35578f43e4f686fdbd6e7bfe\"" Aug 19 00:13:35.411330 containerd[2014]: time="2025-08-19T00:13:35.411204675Z" level=info msg="StartContainer for \"4725afd0d155e5a3149107e47261a14cf4f831df35578f43e4f686fdbd6e7bfe\"" Aug 19 00:13:35.413795 containerd[2014]: time="2025-08-19T00:13:35.413672835Z" level=info msg="connecting to shim 4725afd0d155e5a3149107e47261a14cf4f831df35578f43e4f686fdbd6e7bfe" address="unix:///run/containerd/s/0c36d0f49b32c8c6bd2a2c8439eaa911ef9fea258dc69ed5b2edd3e8c21ebe78" protocol=ttrpc version=3 Aug 19 00:13:35.459358 systemd[1]: Started cri-containerd-4725afd0d155e5a3149107e47261a14cf4f831df35578f43e4f686fdbd6e7bfe.scope - libcontainer container 4725afd0d155e5a3149107e47261a14cf4f831df35578f43e4f686fdbd6e7bfe. Aug 19 00:13:35.540794 containerd[2014]: time="2025-08-19T00:13:35.540665128Z" level=info msg="StartContainer for \"4725afd0d155e5a3149107e47261a14cf4f831df35578f43e4f686fdbd6e7bfe\" returns successfully" Aug 19 00:13:36.699175 kubelet[3438]: E0819 00:13:36.699107 3438 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2xnqj" podUID="b50a5f61-1810-4b90-b687-04d75015225f" Aug 19 00:13:37.025651 kubelet[3438]: I0819 00:13:37.024973 3438 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:13:38.698990 kubelet[3438]: E0819 00:13:38.698892 3438 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2xnqj" podUID="b50a5f61-1810-4b90-b687-04d75015225f" Aug 19 00:13:39.010835 containerd[2014]: time="2025-08-19T00:13:39.010672397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:39.012716 containerd[2014]: time="2025-08-19T00:13:39.012637445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Aug 19 00:13:39.013875 containerd[2014]: time="2025-08-19T00:13:39.013797569Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:39.017213 containerd[2014]: time="2025-08-19T00:13:39.017128073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Aug 19 00:13:39.018751 containerd[2014]: time="2025-08-19T00:13:39.018554177Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.681768066s" Aug 19 00:13:39.018751 containerd[2014]: time="2025-08-19T00:13:39.018611345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 19 00:13:39.025235 containerd[2014]: time="2025-08-19T00:13:39.024620501Z" level=info msg="CreateContainer within sandbox \"7e2d0ba602d4d742f83fb4aedca2feb032f6077166dbb5c466b7948414be7e6e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 19 00:13:39.045701 containerd[2014]: time="2025-08-19T00:13:39.041505485Z" level=info msg="Container 81efcac83e02448dbf3a56ce9022dddf13f82c3e3208009fd90c0911bd5c2ddd: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:13:39.058945 containerd[2014]: time="2025-08-19T00:13:39.058869389Z" level=info msg="CreateContainer within sandbox \"7e2d0ba602d4d742f83fb4aedca2feb032f6077166dbb5c466b7948414be7e6e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"81efcac83e02448dbf3a56ce9022dddf13f82c3e3208009fd90c0911bd5c2ddd\"" Aug 19 00:13:39.060872 containerd[2014]: time="2025-08-19T00:13:39.060731837Z" level=info msg="StartContainer for \"81efcac83e02448dbf3a56ce9022dddf13f82c3e3208009fd90c0911bd5c2ddd\"" Aug 19 00:13:39.065923 containerd[2014]: time="2025-08-19T00:13:39.065817113Z" level=info msg="connecting to shim 81efcac83e02448dbf3a56ce9022dddf13f82c3e3208009fd90c0911bd5c2ddd" address="unix:///run/containerd/s/6e800f24df024e335b17ca684d9bac2a169f4c0e98494381edd704c7b6934c14" protocol=ttrpc version=3 Aug 19 00:13:39.107698 systemd[1]: Started cri-containerd-81efcac83e02448dbf3a56ce9022dddf13f82c3e3208009fd90c0911bd5c2ddd.scope - libcontainer container 81efcac83e02448dbf3a56ce9022dddf13f82c3e3208009fd90c0911bd5c2ddd. Aug 19 00:13:39.186484 containerd[2014]: time="2025-08-19T00:13:39.186289914Z" level=info msg="StartContainer for \"81efcac83e02448dbf3a56ce9022dddf13f82c3e3208009fd90c0911bd5c2ddd\" returns successfully" Aug 19 00:13:40.080123 kubelet[3438]: I0819 00:13:40.079010 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6fff9d6d87-x5npr" podStartSLOduration=6.509038996 podStartE2EDuration="10.078984738s" podCreationTimestamp="2025-08-19 00:13:30 +0000 UTC" firstStartedPulling="2025-08-19 00:13:31.766093609 +0000 UTC m=+27.238222228" lastFinishedPulling="2025-08-19 00:13:35.336039351 +0000 UTC m=+30.808167970" observedRunningTime="2025-08-19 00:13:36.159336927 +0000 UTC m=+31.631465546" watchObservedRunningTime="2025-08-19 00:13:40.078984738 +0000 UTC m=+35.551113345" Aug 19 00:13:40.124984 systemd[1]: cri-containerd-81efcac83e02448dbf3a56ce9022dddf13f82c3e3208009fd90c0911bd5c2ddd.scope: Deactivated successfully. Aug 19 00:13:40.127557 systemd[1]: cri-containerd-81efcac83e02448dbf3a56ce9022dddf13f82c3e3208009fd90c0911bd5c2ddd.scope: Consumed 906ms CPU time, 188.1M memory peak, 165.8M written to disk. 
Aug 19 00:13:40.131883 containerd[2014]: time="2025-08-19T00:13:40.131831647Z" level=info msg="received exit event container_id:\"81efcac83e02448dbf3a56ce9022dddf13f82c3e3208009fd90c0911bd5c2ddd\" id:\"81efcac83e02448dbf3a56ce9022dddf13f82c3e3208009fd90c0911bd5c2ddd\" pid:4130 exited_at:{seconds:1755562420 nanos:131448463}" Aug 19 00:13:40.132738 containerd[2014]: time="2025-08-19T00:13:40.132449743Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81efcac83e02448dbf3a56ce9022dddf13f82c3e3208009fd90c0911bd5c2ddd\" id:\"81efcac83e02448dbf3a56ce9022dddf13f82c3e3208009fd90c0911bd5c2ddd\" pid:4130 exited_at:{seconds:1755562420 nanos:131448463}" Aug 19 00:13:40.202388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81efcac83e02448dbf3a56ce9022dddf13f82c3e3208009fd90c0911bd5c2ddd-rootfs.mount: Deactivated successfully. Aug 19 00:13:40.207978 kubelet[3438]: I0819 00:13:40.204748 3438 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 19 00:13:40.358987 systemd[1]: Created slice kubepods-burstable-pod0457175a_30ff_4c6f_8d8a_6f85a6adf32c.slice - libcontainer container kubepods-burstable-pod0457175a_30ff_4c6f_8d8a_6f85a6adf32c.slice. Aug 19 00:13:40.385330 systemd[1]: Created slice kubepods-burstable-podeb9c15ce_3378_4741_a684_0a8392a9106d.slice - libcontainer container kubepods-burstable-podeb9c15ce_3378_4741_a684_0a8392a9106d.slice. Aug 19 00:13:40.387239 kubelet[3438]: I0819 00:13:40.386063 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2jjq\" (UniqueName: \"kubernetes.io/projected/799803ff-968b-4bdb-a045-65b70869eec0-kube-api-access-l2jjq\") pod \"calico-apiserver-6554fc4f99-fnmld\" (UID: \"799803ff-968b-4bdb-a045-65b70869eec0\") " pod="calico-apiserver/calico-apiserver-6554fc4f99-fnmld" Aug 19 00:13:40.387239 kubelet[3438]: I0819 00:13:40.386137 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q52hc\" (UniqueName: \"kubernetes.io/projected/eb9c15ce-3378-4741-a684-0a8392a9106d-kube-api-access-q52hc\") pod \"coredns-7c65d6cfc9-4kcg8\" (UID: \"eb9c15ce-3378-4741-a684-0a8392a9106d\") " pod="kube-system/coredns-7c65d6cfc9-4kcg8" Aug 19 00:13:40.387239 kubelet[3438]: I0819 00:13:40.386206 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/da07a4b3-8566-48fa-b7a2-02a956bdba39-whisker-backend-key-pair\") pod \"whisker-5c789884f5-9dxs5\" (UID: \"da07a4b3-8566-48fa-b7a2-02a956bdba39\") " pod="calico-system/whisker-5c789884f5-9dxs5" Aug 19 00:13:40.387239 kubelet[3438]: I0819 00:13:40.386252 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb9c15ce-3378-4741-a684-0a8392a9106d-config-volume\") pod \"coredns-7c65d6cfc9-4kcg8\" (UID: \"eb9c15ce-3378-4741-a684-0a8392a9106d\") " pod="kube-system/coredns-7c65d6cfc9-4kcg8" Aug 19 00:13:40.387239 kubelet[3438]: I0819 00:13:40.386315 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzdpp\" (UniqueName: \"kubernetes.io/projected/da07a4b3-8566-48fa-b7a2-02a956bdba39-kube-api-access-rzdpp\") pod \"whisker-5c789884f5-9dxs5\" (UID: \"da07a4b3-8566-48fa-b7a2-02a956bdba39\") " pod="calico-system/whisker-5c789884f5-9dxs5" Aug 19 00:13:40.388778 kubelet[3438]: I0819 00:13:40.386549 
3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0457175a-30ff-4c6f-8d8a-6f85a6adf32c-config-volume\") pod \"coredns-7c65d6cfc9-2j9hm\" (UID: \"0457175a-30ff-4c6f-8d8a-6f85a6adf32c\") " pod="kube-system/coredns-7c65d6cfc9-2j9hm" Aug 19 00:13:40.388778 kubelet[3438]: I0819 00:13:40.386821 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da07a4b3-8566-48fa-b7a2-02a956bdba39-whisker-ca-bundle\") pod \"whisker-5c789884f5-9dxs5\" (UID: \"da07a4b3-8566-48fa-b7a2-02a956bdba39\") " pod="calico-system/whisker-5c789884f5-9dxs5" Aug 19 00:13:40.388778 kubelet[3438]: I0819 00:13:40.386906 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msdm5\" (UniqueName: \"kubernetes.io/projected/1ee75311-cda6-4d17-b2c7-d27304b8d82e-kube-api-access-msdm5\") pod \"calico-apiserver-6554fc4f99-j2skv\" (UID: \"1ee75311-cda6-4d17-b2c7-d27304b8d82e\") " pod="calico-apiserver/calico-apiserver-6554fc4f99-j2skv" Aug 19 00:13:40.388778 kubelet[3438]: I0819 00:13:40.386953 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/799803ff-968b-4bdb-a045-65b70869eec0-calico-apiserver-certs\") pod \"calico-apiserver-6554fc4f99-fnmld\" (UID: \"799803ff-968b-4bdb-a045-65b70869eec0\") " pod="calico-apiserver/calico-apiserver-6554fc4f99-fnmld" Aug 19 00:13:40.388778 kubelet[3438]: I0819 00:13:40.387405 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdvl5\" (UniqueName: \"kubernetes.io/projected/0457175a-30ff-4c6f-8d8a-6f85a6adf32c-kube-api-access-gdvl5\") pod \"coredns-7c65d6cfc9-2j9hm\" (UID: \"0457175a-30ff-4c6f-8d8a-6f85a6adf32c\") " pod="kube-system/coredns-7c65d6cfc9-2j9hm" Aug 19 00:13:40.389035 kubelet[3438]: I0819 00:13:40.387451 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1ee75311-cda6-4d17-b2c7-d27304b8d82e-calico-apiserver-certs\") pod \"calico-apiserver-6554fc4f99-j2skv\" (UID: \"1ee75311-cda6-4d17-b2c7-d27304b8d82e\") " pod="calico-apiserver/calico-apiserver-6554fc4f99-j2skv" Aug 19 00:13:40.405156 kubelet[3438]: W0819 00:13:40.404988 3438 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ip-172-31-30-10" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-30-10' and this object Aug 19 00:13:40.405156 kubelet[3438]: E0819 00:13:40.405055 3438 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ip-172-31-30-10\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-30-10' and this object" logger="UnhandledError" Aug 19 00:13:40.408474 systemd[1]: Created slice kubepods-besteffort-podda07a4b3_8566_48fa_b7a2_02a956bdba39.slice - libcontainer container kubepods-besteffort-podda07a4b3_8566_48fa_b7a2_02a956bdba39.slice. 
Aug 19 00:13:40.426716 systemd[1]: Created slice kubepods-besteffort-pod1ee75311_cda6_4d17_b2c7_d27304b8d82e.slice - libcontainer container kubepods-besteffort-pod1ee75311_cda6_4d17_b2c7_d27304b8d82e.slice. Aug 19 00:13:40.447809 systemd[1]: Created slice kubepods-besteffort-pod799803ff_968b_4bdb_a045_65b70869eec0.slice - libcontainer container kubepods-besteffort-pod799803ff_968b_4bdb_a045_65b70869eec0.slice. Aug 19 00:13:40.470186 systemd[1]: Created slice kubepods-besteffort-pod8766ae7b_04e2_43bb_8e63_39b0bb1900db.slice - libcontainer container kubepods-besteffort-pod8766ae7b_04e2_43bb_8e63_39b0bb1900db.slice. Aug 19 00:13:40.489222 systemd[1]: Created slice kubepods-besteffort-pod43ff735c_411d_42fc_b611_fed61c27fec3.slice - libcontainer container kubepods-besteffort-pod43ff735c_411d_42fc_b611_fed61c27fec3.slice. Aug 19 00:13:40.491326 kubelet[3438]: I0819 00:13:40.491256 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43ff735c-411d-42fc-b611-fed61c27fec3-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-8jbzv\" (UID: \"43ff735c-411d-42fc-b611-fed61c27fec3\") " pod="calico-system/goldmane-58fd7646b9-8jbzv" Aug 19 00:13:40.493205 kubelet[3438]: I0819 00:13:40.493144 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/43ff735c-411d-42fc-b611-fed61c27fec3-goldmane-key-pair\") pod \"goldmane-58fd7646b9-8jbzv\" (UID: \"43ff735c-411d-42fc-b611-fed61c27fec3\") " pod="calico-system/goldmane-58fd7646b9-8jbzv" Aug 19 00:13:40.503346 kubelet[3438]: I0819 00:13:40.502323 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzwfz\" (UniqueName: \"kubernetes.io/projected/8766ae7b-04e2-43bb-8e63-39b0bb1900db-kube-api-access-lzwfz\") pod \"calico-kube-controllers-85bbd4d66-rjr5q\" (UID: \"8766ae7b-04e2-43bb-8e63-39b0bb1900db\") " pod="calico-system/calico-kube-controllers-85bbd4d66-rjr5q" Aug 19 00:13:40.511480 kubelet[3438]: I0819 00:13:40.510482 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhcwd\" (UniqueName: \"kubernetes.io/projected/43ff735c-411d-42fc-b611-fed61c27fec3-kube-api-access-nhcwd\") pod \"goldmane-58fd7646b9-8jbzv\" (UID: \"43ff735c-411d-42fc-b611-fed61c27fec3\") " pod="calico-system/goldmane-58fd7646b9-8jbzv" Aug 19 00:13:40.511480 kubelet[3438]: I0819 00:13:40.510647 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8766ae7b-04e2-43bb-8e63-39b0bb1900db-tigera-ca-bundle\") pod \"calico-kube-controllers-85bbd4d66-rjr5q\" (UID: \"8766ae7b-04e2-43bb-8e63-39b0bb1900db\") " pod="calico-system/calico-kube-controllers-85bbd4d66-rjr5q" Aug 19 00:13:40.511480 kubelet[3438]: I0819 00:13:40.510722 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ff735c-411d-42fc-b611-fed61c27fec3-config\") pod \"goldmane-58fd7646b9-8jbzv\" (UID: \"43ff735c-411d-42fc-b611-fed61c27fec3\") " pod="calico-system/goldmane-58fd7646b9-8jbzv" Aug 19 00:13:40.673412 containerd[2014]: time="2025-08-19T00:13:40.672344409Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-2j9hm,Uid:0457175a-30ff-4c6f-8d8a-6f85a6adf32c,Namespace:kube-system,Attempt:0,}" Aug 19 00:13:40.702412 containerd[2014]: time="2025-08-19T00:13:40.700849953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4kcg8,Uid:eb9c15ce-3378-4741-a684-0a8392a9106d,Namespace:kube-system,Attempt:0,}" Aug 19 00:13:40.714629 systemd[1]: Created slice kubepods-besteffort-podb50a5f61_1810_4b90_b687_04d75015225f.slice - libcontainer container kubepods-besteffort-podb50a5f61_1810_4b90_b687_04d75015225f.slice. Aug 19 00:13:40.720054 containerd[2014]: time="2025-08-19T00:13:40.719994057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2xnqj,Uid:b50a5f61-1810-4b90-b687-04d75015225f,Namespace:calico-system,Attempt:0,}" Aug 19 00:13:40.736368 containerd[2014]: time="2025-08-19T00:13:40.736306114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6554fc4f99-j2skv,Uid:1ee75311-cda6-4d17-b2c7-d27304b8d82e,Namespace:calico-apiserver,Attempt:0,}" Aug 19 00:13:40.762091 containerd[2014]: time="2025-08-19T00:13:40.762028282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6554fc4f99-fnmld,Uid:799803ff-968b-4bdb-a045-65b70869eec0,Namespace:calico-apiserver,Attempt:0,}" Aug 19 00:13:40.783072 containerd[2014]: time="2025-08-19T00:13:40.783005062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85bbd4d66-rjr5q,Uid:8766ae7b-04e2-43bb-8e63-39b0bb1900db,Namespace:calico-system,Attempt:0,}" Aug 19 00:13:40.824475 containerd[2014]: time="2025-08-19T00:13:40.824417554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-8jbzv,Uid:43ff735c-411d-42fc-b611-fed61c27fec3,Namespace:calico-system,Attempt:0,}" Aug 19 00:13:41.045306 containerd[2014]: time="2025-08-19T00:13:41.045058123Z" level=error msg="Failed to destroy network for sandbox \"519416ba2c7f4e003a075dbac513f0bf7091788b7aadf792e0778fe4c23ed40e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.338478 containerd[2014]: time="2025-08-19T00:13:41.338059952Z" level=error msg="Failed to destroy network for sandbox \"ad2e6b37d0e9f3c6a4251d855d62b1503aa2c154473627a30f67e51be1e8a647\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.343788 systemd[1]: run-netns-cni\x2de2b494eb\x2d1c3d\x2dd256\x2d48c6\x2d03389597f472.mount: Deactivated successfully. 
Aug 19 00:13:41.367245 containerd[2014]: time="2025-08-19T00:13:41.366856917Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2j9hm,Uid:0457175a-30ff-4c6f-8d8a-6f85a6adf32c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"519416ba2c7f4e003a075dbac513f0bf7091788b7aadf792e0778fe4c23ed40e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.367842 kubelet[3438]: E0819 00:13:41.367759 3438 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"519416ba2c7f4e003a075dbac513f0bf7091788b7aadf792e0778fe4c23ed40e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.370797 kubelet[3438]: E0819 00:13:41.367875 3438 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"519416ba2c7f4e003a075dbac513f0bf7091788b7aadf792e0778fe4c23ed40e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2j9hm" Aug 19 00:13:41.370797 kubelet[3438]: E0819 00:13:41.367910 3438 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"519416ba2c7f4e003a075dbac513f0bf7091788b7aadf792e0778fe4c23ed40e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2j9hm" Aug 19 00:13:41.370797 kubelet[3438]: E0819 00:13:41.367983 3438 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-2j9hm_kube-system(0457175a-30ff-4c6f-8d8a-6f85a6adf32c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-2j9hm_kube-system(0457175a-30ff-4c6f-8d8a-6f85a6adf32c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"519416ba2c7f4e003a075dbac513f0bf7091788b7aadf792e0778fe4c23ed40e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2j9hm" podUID="0457175a-30ff-4c6f-8d8a-6f85a6adf32c" Aug 19 00:13:41.387668 containerd[2014]: time="2025-08-19T00:13:41.387570153Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4kcg8,Uid:eb9c15ce-3378-4741-a684-0a8392a9106d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad2e6b37d0e9f3c6a4251d855d62b1503aa2c154473627a30f67e51be1e8a647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.388988 kubelet[3438]: E0819 00:13:41.388588 3438 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"ad2e6b37d0e9f3c6a4251d855d62b1503aa2c154473627a30f67e51be1e8a647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.388988 kubelet[3438]: E0819 00:13:41.388678 3438 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad2e6b37d0e9f3c6a4251d855d62b1503aa2c154473627a30f67e51be1e8a647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4kcg8" Aug 19 00:13:41.388988 kubelet[3438]: E0819 00:13:41.388772 3438 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad2e6b37d0e9f3c6a4251d855d62b1503aa2c154473627a30f67e51be1e8a647\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4kcg8" Aug 19 00:13:41.391146 kubelet[3438]: E0819 00:13:41.388895 3438 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-4kcg8_kube-system(eb9c15ce-3378-4741-a684-0a8392a9106d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-4kcg8_kube-system(eb9c15ce-3378-4741-a684-0a8392a9106d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad2e6b37d0e9f3c6a4251d855d62b1503aa2c154473627a30f67e51be1e8a647\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4kcg8" podUID="eb9c15ce-3378-4741-a684-0a8392a9106d" Aug 19 00:13:41.619659 containerd[2014]: time="2025-08-19T00:13:41.619592254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c789884f5-9dxs5,Uid:da07a4b3-8566-48fa-b7a2-02a956bdba39,Namespace:calico-system,Attempt:0,}" Aug 19 00:13:41.622939 containerd[2014]: time="2025-08-19T00:13:41.622876222Z" level=error msg="Failed to destroy network for sandbox \"7a59a491b3b025d27f181d917b675ad2e9e909b8d467687803e487d8b44f44c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.629179 systemd[1]: run-netns-cni\x2d6a208c5d\x2dcac4\x2d29b0\x2d9933\x2dfe40a629012e.mount: Deactivated successfully. 
Aug 19 00:13:41.631884 containerd[2014]: time="2025-08-19T00:13:41.631817698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85bbd4d66-rjr5q,Uid:8766ae7b-04e2-43bb-8e63-39b0bb1900db,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a59a491b3b025d27f181d917b675ad2e9e909b8d467687803e487d8b44f44c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.633658 kubelet[3438]: E0819 00:13:41.633593 3438 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a59a491b3b025d27f181d917b675ad2e9e909b8d467687803e487d8b44f44c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.633797 kubelet[3438]: E0819 00:13:41.633678 3438 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a59a491b3b025d27f181d917b675ad2e9e909b8d467687803e487d8b44f44c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85bbd4d66-rjr5q" Aug 19 00:13:41.633797 kubelet[3438]: E0819 00:13:41.633715 3438 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a59a491b3b025d27f181d917b675ad2e9e909b8d467687803e487d8b44f44c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85bbd4d66-rjr5q" Aug 19 00:13:41.633898 kubelet[3438]: E0819 00:13:41.633794 3438 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85bbd4d66-rjr5q_calico-system(8766ae7b-04e2-43bb-8e63-39b0bb1900db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85bbd4d66-rjr5q_calico-system(8766ae7b-04e2-43bb-8e63-39b0bb1900db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a59a491b3b025d27f181d917b675ad2e9e909b8d467687803e487d8b44f44c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85bbd4d66-rjr5q" podUID="8766ae7b-04e2-43bb-8e63-39b0bb1900db" Aug 19 00:13:41.655321 containerd[2014]: time="2025-08-19T00:13:41.655162174Z" level=error msg="Failed to destroy network for sandbox \"22d5d9a9faa819bfc44e284e4d0a2f18d903bd3c36337a59fe87f428b8ec4535\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.658266 containerd[2014]: time="2025-08-19T00:13:41.657824950Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6554fc4f99-fnmld,Uid:799803ff-968b-4bdb-a045-65b70869eec0,Namespace:calico-apiserver,Attempt:0,} failed, 
error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"22d5d9a9faa819bfc44e284e4d0a2f18d903bd3c36337a59fe87f428b8ec4535\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.659309 kubelet[3438]: E0819 00:13:41.659242 3438 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22d5d9a9faa819bfc44e284e4d0a2f18d903bd3c36337a59fe87f428b8ec4535\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.659718 kubelet[3438]: E0819 00:13:41.659445 3438 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22d5d9a9faa819bfc44e284e4d0a2f18d903bd3c36337a59fe87f428b8ec4535\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6554fc4f99-fnmld" Aug 19 00:13:41.659718 kubelet[3438]: E0819 00:13:41.659497 3438 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22d5d9a9faa819bfc44e284e4d0a2f18d903bd3c36337a59fe87f428b8ec4535\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6554fc4f99-fnmld" Aug 19 00:13:41.661609 kubelet[3438]: E0819 00:13:41.659638 3438 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6554fc4f99-fnmld_calico-apiserver(799803ff-968b-4bdb-a045-65b70869eec0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6554fc4f99-fnmld_calico-apiserver(799803ff-968b-4bdb-a045-65b70869eec0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22d5d9a9faa819bfc44e284e4d0a2f18d903bd3c36337a59fe87f428b8ec4535\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6554fc4f99-fnmld" podUID="799803ff-968b-4bdb-a045-65b70869eec0" Aug 19 00:13:41.666403 containerd[2014]: time="2025-08-19T00:13:41.666104398Z" level=error msg="Failed to destroy network for sandbox \"99c8f2e1dc2063239eb59736b57d64a767482c7a63d285091c041104711e9d40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.671716 containerd[2014]: time="2025-08-19T00:13:41.671640994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6554fc4f99-j2skv,Uid:1ee75311-cda6-4d17-b2c7-d27304b8d82e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"99c8f2e1dc2063239eb59736b57d64a767482c7a63d285091c041104711e9d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.672287 kubelet[3438]: E0819 00:13:41.672225 3438 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99c8f2e1dc2063239eb59736b57d64a767482c7a63d285091c041104711e9d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.672418 kubelet[3438]: E0819 00:13:41.672309 3438 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99c8f2e1dc2063239eb59736b57d64a767482c7a63d285091c041104711e9d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6554fc4f99-j2skv" Aug 19 00:13:41.672418 kubelet[3438]: E0819 00:13:41.672344 3438 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99c8f2e1dc2063239eb59736b57d64a767482c7a63d285091c041104711e9d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6554fc4f99-j2skv" Aug 19 00:13:41.672567 kubelet[3438]: E0819 00:13:41.672439 3438 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6554fc4f99-j2skv_calico-apiserver(1ee75311-cda6-4d17-b2c7-d27304b8d82e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6554fc4f99-j2skv_calico-apiserver(1ee75311-cda6-4d17-b2c7-d27304b8d82e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99c8f2e1dc2063239eb59736b57d64a767482c7a63d285091c041104711e9d40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6554fc4f99-j2skv" podUID="1ee75311-cda6-4d17-b2c7-d27304b8d82e" Aug 19 00:13:41.678243 containerd[2014]: time="2025-08-19T00:13:41.677658778Z" level=error msg="Failed to destroy network for sandbox \"34e0c7dd3386492d7d6ad54f5c7cc55e667083c7469d27f0a9699c59ad92caca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.681451 containerd[2014]: time="2025-08-19T00:13:41.681328258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-8jbzv,Uid:43ff735c-411d-42fc-b611-fed61c27fec3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"34e0c7dd3386492d7d6ad54f5c7cc55e667083c7469d27f0a9699c59ad92caca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.683710 kubelet[3438]: E0819 00:13:41.681714 3438 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"34e0c7dd3386492d7d6ad54f5c7cc55e667083c7469d27f0a9699c59ad92caca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.683710 kubelet[3438]: E0819 00:13:41.681786 3438 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34e0c7dd3386492d7d6ad54f5c7cc55e667083c7469d27f0a9699c59ad92caca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-8jbzv" Aug 19 00:13:41.683710 kubelet[3438]: E0819 00:13:41.681819 3438 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34e0c7dd3386492d7d6ad54f5c7cc55e667083c7469d27f0a9699c59ad92caca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-8jbzv" Aug 19 00:13:41.683909 kubelet[3438]: E0819 00:13:41.681906 3438 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-8jbzv_calico-system(43ff735c-411d-42fc-b611-fed61c27fec3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-8jbzv_calico-system(43ff735c-411d-42fc-b611-fed61c27fec3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34e0c7dd3386492d7d6ad54f5c7cc55e667083c7469d27f0a9699c59ad92caca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-8jbzv" podUID="43ff735c-411d-42fc-b611-fed61c27fec3" Aug 19 00:13:41.685197 containerd[2014]: time="2025-08-19T00:13:41.684892474Z" level=error msg="Failed to destroy network for sandbox \"ccf5ae060c92cbdba99a48f7b5fd5edc64fcb1aa7aa0ad9be14a03fdcb3315b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.687288 containerd[2014]: time="2025-08-19T00:13:41.687183634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2xnqj,Uid:b50a5f61-1810-4b90-b687-04d75015225f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccf5ae060c92cbdba99a48f7b5fd5edc64fcb1aa7aa0ad9be14a03fdcb3315b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.689308 kubelet[3438]: E0819 00:13:41.689243 3438 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccf5ae060c92cbdba99a48f7b5fd5edc64fcb1aa7aa0ad9be14a03fdcb3315b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.689637 kubelet[3438]: E0819 00:13:41.689327 3438 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccf5ae060c92cbdba99a48f7b5fd5edc64fcb1aa7aa0ad9be14a03fdcb3315b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2xnqj" Aug 19 00:13:41.690111 kubelet[3438]: E0819 00:13:41.689366 3438 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccf5ae060c92cbdba99a48f7b5fd5edc64fcb1aa7aa0ad9be14a03fdcb3315b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2xnqj" Aug 19 00:13:41.690547 kubelet[3438]: E0819 00:13:41.690154 3438 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2xnqj_calico-system(b50a5f61-1810-4b90-b687-04d75015225f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2xnqj_calico-system(b50a5f61-1810-4b90-b687-04d75015225f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccf5ae060c92cbdba99a48f7b5fd5edc64fcb1aa7aa0ad9be14a03fdcb3315b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2xnqj" podUID="b50a5f61-1810-4b90-b687-04d75015225f" Aug 19 00:13:41.754339 containerd[2014]: time="2025-08-19T00:13:41.754256063Z" level=error msg="Failed to destroy network for sandbox \"1d52ba16ae8f188e1949bb721f19851c737ce9e4766ad282d266906cf98b1b45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.756197 containerd[2014]: time="2025-08-19T00:13:41.756137447Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c789884f5-9dxs5,Uid:da07a4b3-8566-48fa-b7a2-02a956bdba39,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d52ba16ae8f188e1949bb721f19851c737ce9e4766ad282d266906cf98b1b45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.756609 kubelet[3438]: E0819 00:13:41.756542 3438 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d52ba16ae8f188e1949bb721f19851c737ce9e4766ad282d266906cf98b1b45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:41.756734 kubelet[3438]: E0819 00:13:41.756641 3438 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d52ba16ae8f188e1949bb721f19851c737ce9e4766ad282d266906cf98b1b45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-5c789884f5-9dxs5" Aug 19 00:13:41.756734 kubelet[3438]: E0819 00:13:41.756702 3438 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d52ba16ae8f188e1949bb721f19851c737ce9e4766ad282d266906cf98b1b45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c789884f5-9dxs5" Aug 19 00:13:41.757312 kubelet[3438]: E0819 00:13:41.756807 3438 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c789884f5-9dxs5_calico-system(da07a4b3-8566-48fa-b7a2-02a956bdba39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c789884f5-9dxs5_calico-system(da07a4b3-8566-48fa-b7a2-02a956bdba39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d52ba16ae8f188e1949bb721f19851c737ce9e4766ad282d266906cf98b1b45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c789884f5-9dxs5" podUID="da07a4b3-8566-48fa-b7a2-02a956bdba39" Aug 19 00:13:42.064819 containerd[2014]: time="2025-08-19T00:13:42.061917800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 19 00:13:42.189332 systemd[1]: run-netns-cni\x2dafa8fa28\x2dd341\x2d4dfb\x2dca91\x2d9f072bb706e9.mount: Deactivated successfully. Aug 19 00:13:42.189529 systemd[1]: run-netns-cni\x2dc728bc5a\x2daf03\x2d865d\x2d9fc4\x2dd0b5d791e1b8.mount: Deactivated successfully. Aug 19 00:13:42.189653 systemd[1]: run-netns-cni\x2dc45b9289\x2dfc3b\x2dbd33\x2def5f\x2d09603f01ad0c.mount: Deactivated successfully. Aug 19 00:13:42.189772 systemd[1]: run-netns-cni\x2d699ce7e2\x2dfe1f\x2d246f\x2d5bf1\x2d74d0b451d263.mount: Deactivated successfully. Aug 19 00:13:49.784219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2266429027.mount: Deactivated successfully. 
Aug 19 00:13:49.857914 containerd[2014]: time="2025-08-19T00:13:49.857795395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:49.859078 containerd[2014]: time="2025-08-19T00:13:49.858817195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Aug 19 00:13:49.860253 containerd[2014]: time="2025-08-19T00:13:49.860131867Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:49.864749 containerd[2014]: time="2025-08-19T00:13:49.864667375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:49.865918 containerd[2014]: time="2025-08-19T00:13:49.865731487Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 7.803752331s" Aug 19 00:13:49.865918 containerd[2014]: time="2025-08-19T00:13:49.865788967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 19 00:13:49.891745 containerd[2014]: time="2025-08-19T00:13:49.891667303Z" level=info msg="CreateContainer within sandbox \"7e2d0ba602d4d742f83fb4aedca2feb032f6077166dbb5c466b7948414be7e6e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 19 00:13:49.908495 containerd[2014]: time="2025-08-19T00:13:49.905433391Z" level=info msg="Container e31d3495aedbecf436aecc51a613907be93946c1e4bd9eb9eafefb720888f66f: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:13:49.924803 containerd[2014]: time="2025-08-19T00:13:49.924723763Z" level=info msg="CreateContainer within sandbox \"7e2d0ba602d4d742f83fb4aedca2feb032f6077166dbb5c466b7948414be7e6e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e31d3495aedbecf436aecc51a613907be93946c1e4bd9eb9eafefb720888f66f\"" Aug 19 00:13:49.926407 containerd[2014]: time="2025-08-19T00:13:49.926336167Z" level=info msg="StartContainer for \"e31d3495aedbecf436aecc51a613907be93946c1e4bd9eb9eafefb720888f66f\"" Aug 19 00:13:49.930937 containerd[2014]: time="2025-08-19T00:13:49.930872359Z" level=info msg="connecting to shim e31d3495aedbecf436aecc51a613907be93946c1e4bd9eb9eafefb720888f66f" address="unix:///run/containerd/s/6e800f24df024e335b17ca684d9bac2a169f4c0e98494381edd704c7b6934c14" protocol=ttrpc version=3 Aug 19 00:13:49.970718 systemd[1]: Started cri-containerd-e31d3495aedbecf436aecc51a613907be93946c1e4bd9eb9eafefb720888f66f.scope - libcontainer container e31d3495aedbecf436aecc51a613907be93946c1e4bd9eb9eafefb720888f66f. 
Aug 19 00:13:50.058193 containerd[2014]: time="2025-08-19T00:13:50.057878200Z" level=info msg="StartContainer for \"e31d3495aedbecf436aecc51a613907be93946c1e4bd9eb9eafefb720888f66f\" returns successfully" Aug 19 00:13:50.133735 kubelet[3438]: I0819 00:13:50.133428 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dm2ns" podStartSLOduration=0.937587541 podStartE2EDuration="19.132814684s" podCreationTimestamp="2025-08-19 00:13:31 +0000 UTC" firstStartedPulling="2025-08-19 00:13:31.6721743 +0000 UTC m=+27.144302919" lastFinishedPulling="2025-08-19 00:13:49.867401431 +0000 UTC m=+45.339530062" observedRunningTime="2025-08-19 00:13:50.130877512 +0000 UTC m=+45.603006155" watchObservedRunningTime="2025-08-19 00:13:50.132814684 +0000 UTC m=+45.604943315" Aug 19 00:13:50.381202 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 19 00:13:50.381339 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 19 00:13:50.392449 containerd[2014]: time="2025-08-19T00:13:50.392351861Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e31d3495aedbecf436aecc51a613907be93946c1e4bd9eb9eafefb720888f66f\" id:\"565798468722a63db428108d299ab1f330ac1b3c4cf214c08e1cd5e610c3772b\" pid:4429 exit_status:1 exited_at:{seconds:1755562430 nanos:391952177}" Aug 19 00:13:50.684557 kubelet[3438]: I0819 00:13:50.683659 3438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/da07a4b3-8566-48fa-b7a2-02a956bdba39-whisker-backend-key-pair\") pod \"da07a4b3-8566-48fa-b7a2-02a956bdba39\" (UID: \"da07a4b3-8566-48fa-b7a2-02a956bdba39\") " Aug 19 00:13:50.684557 kubelet[3438]: I0819 00:13:50.683736 3438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzdpp\" (UniqueName: \"kubernetes.io/projected/da07a4b3-8566-48fa-b7a2-02a956bdba39-kube-api-access-rzdpp\") pod \"da07a4b3-8566-48fa-b7a2-02a956bdba39\" (UID: \"da07a4b3-8566-48fa-b7a2-02a956bdba39\") " Aug 19 00:13:50.684557 kubelet[3438]: I0819 00:13:50.683788 3438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da07a4b3-8566-48fa-b7a2-02a956bdba39-whisker-ca-bundle\") pod \"da07a4b3-8566-48fa-b7a2-02a956bdba39\" (UID: \"da07a4b3-8566-48fa-b7a2-02a956bdba39\") " Aug 19 00:13:50.688522 kubelet[3438]: I0819 00:13:50.688463 3438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da07a4b3-8566-48fa-b7a2-02a956bdba39-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "da07a4b3-8566-48fa-b7a2-02a956bdba39" (UID: "da07a4b3-8566-48fa-b7a2-02a956bdba39"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 19 00:13:50.691797 kubelet[3438]: I0819 00:13:50.691672 3438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da07a4b3-8566-48fa-b7a2-02a956bdba39-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "da07a4b3-8566-48fa-b7a2-02a956bdba39" (UID: "da07a4b3-8566-48fa-b7a2-02a956bdba39"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 19 00:13:50.697184 kubelet[3438]: I0819 00:13:50.697115 3438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da07a4b3-8566-48fa-b7a2-02a956bdba39-kube-api-access-rzdpp" (OuterVolumeSpecName: "kube-api-access-rzdpp") pod "da07a4b3-8566-48fa-b7a2-02a956bdba39" (UID: "da07a4b3-8566-48fa-b7a2-02a956bdba39"). InnerVolumeSpecName "kube-api-access-rzdpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 19 00:13:50.722455 systemd[1]: Removed slice kubepods-besteffort-podda07a4b3_8566_48fa_b7a2_02a956bdba39.slice - libcontainer container kubepods-besteffort-podda07a4b3_8566_48fa_b7a2_02a956bdba39.slice. Aug 19 00:13:50.787821 kubelet[3438]: I0819 00:13:50.787762 3438 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/da07a4b3-8566-48fa-b7a2-02a956bdba39-whisker-backend-key-pair\") on node \"ip-172-31-30-10\" DevicePath \"\"" Aug 19 00:13:50.788104 kubelet[3438]: I0819 00:13:50.788013 3438 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzdpp\" (UniqueName: \"kubernetes.io/projected/da07a4b3-8566-48fa-b7a2-02a956bdba39-kube-api-access-rzdpp\") on node \"ip-172-31-30-10\" DevicePath \"\"" Aug 19 00:13:50.788104 kubelet[3438]: I0819 00:13:50.788048 3438 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da07a4b3-8566-48fa-b7a2-02a956bdba39-whisker-ca-bundle\") on node \"ip-172-31-30-10\" DevicePath \"\"" Aug 19 00:13:50.788423 systemd[1]: var-lib-kubelet-pods-da07a4b3\x2d8566\x2d48fa\x2db7a2\x2d02a956bdba39-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drzdpp.mount: Deactivated successfully. Aug 19 00:13:50.788631 systemd[1]: var-lib-kubelet-pods-da07a4b3\x2d8566\x2d48fa\x2db7a2\x2d02a956bdba39-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 19 00:13:51.220333 systemd[1]: Created slice kubepods-besteffort-pod130abcb0_c774_4128_8b53_e23ef5250b76.slice - libcontainer container kubepods-besteffort-pod130abcb0_c774_4128_8b53_e23ef5250b76.slice. 
Aug 19 00:13:51.293884 kubelet[3438]: I0819 00:13:51.293779 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6wx6\" (UniqueName: \"kubernetes.io/projected/130abcb0-c774-4128-8b53-e23ef5250b76-kube-api-access-z6wx6\") pod \"whisker-57d577bcbf-lq2p8\" (UID: \"130abcb0-c774-4128-8b53-e23ef5250b76\") " pod="calico-system/whisker-57d577bcbf-lq2p8" Aug 19 00:13:51.294737 kubelet[3438]: I0819 00:13:51.294686 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/130abcb0-c774-4128-8b53-e23ef5250b76-whisker-ca-bundle\") pod \"whisker-57d577bcbf-lq2p8\" (UID: \"130abcb0-c774-4128-8b53-e23ef5250b76\") " pod="calico-system/whisker-57d577bcbf-lq2p8" Aug 19 00:13:51.295003 kubelet[3438]: I0819 00:13:51.294949 3438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/130abcb0-c774-4128-8b53-e23ef5250b76-whisker-backend-key-pair\") pod \"whisker-57d577bcbf-lq2p8\" (UID: \"130abcb0-c774-4128-8b53-e23ef5250b76\") " pod="calico-system/whisker-57d577bcbf-lq2p8" Aug 19 00:13:51.343812 containerd[2014]: time="2025-08-19T00:13:51.343705470Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e31d3495aedbecf436aecc51a613907be93946c1e4bd9eb9eafefb720888f66f\" id:\"f26ee7cb21747864a6bacc01e8c8312b9d5df2e57b1f1dae1d3350c7bbbdda37\" pid:4490 exit_status:1 exited_at:{seconds:1755562431 nanos:342963150}" Aug 19 00:13:51.530454 containerd[2014]: time="2025-08-19T00:13:51.529777651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d577bcbf-lq2p8,Uid:130abcb0-c774-4128-8b53-e23ef5250b76,Namespace:calico-system,Attempt:0,}" Aug 19 00:13:51.826719 systemd-networkd[1892]: cali0dd7e9dcabd: Link UP Aug 19 00:13:51.827075 systemd-networkd[1892]: cali0dd7e9dcabd: Gained carrier Aug 19 00:13:51.828805 (udev-worker)[4450]: Network interface NamePolicy= disabled on kernel command line. 
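The run-netns and kubelet volume mount units deactivated in the preceding entries use systemd's unit-name escaping, where \x2d stands for "-" and \x7e for "~", while the remaining plain dashes are the original path separators. A small decoder sketch (a rough stand-in for systemd-escape --unescape; decodeUnitName is a hypothetical helper, not a systemd API):

    package main

    import (
    	"fmt"
    	"regexp"
    	"strconv"
    )

    // decodeUnitName undoes systemd's \xNN escaping in unit names.
    func decodeUnitName(s string) string {
    	re := regexp.MustCompile(`\\x([0-9a-fA-F]{2})`)
    	return re.ReplaceAllStringFunc(s, func(m string) string {
    		b, _ := strconv.ParseUint(m[2:], 16, 8)
    		return string(rune(b))
    	})
    }

    func main() {
    	// One of the mount units deactivated above.
    	unit := `var-lib-kubelet-pods-da07a4b3\x2d8566\x2d48fa\x2db7a2\x2d02a956bdba39-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drzdpp.mount`
    	fmt.Println(decodeUnitName(unit))
    	// Prints the name with "-" and "~" restored; reading the remaining plain
    	// dashes as "/" gives the underlying path
    	// /var/lib/kubelet/pods/da07a4b3-8566-48fa-b7a2-02a956bdba39/volumes/kubernetes.io~projected/kube-api-access-rzdpp
    }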
Aug 19 00:13:51.858265 containerd[2014]: 2025-08-19 00:13:51.578 [INFO][4505] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 19 00:13:51.858265 containerd[2014]: 2025-08-19 00:13:51.662 [INFO][4505] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--10-k8s-whisker--57d577bcbf--lq2p8-eth0 whisker-57d577bcbf- calico-system 130abcb0-c774-4128-8b53-e23ef5250b76 914 0 2025-08-19 00:13:51 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:57d577bcbf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-30-10 whisker-57d577bcbf-lq2p8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0dd7e9dcabd [] [] }} ContainerID="0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa" Namespace="calico-system" Pod="whisker-57d577bcbf-lq2p8" WorkloadEndpoint="ip--172--31--30--10-k8s-whisker--57d577bcbf--lq2p8-" Aug 19 00:13:51.858265 containerd[2014]: 2025-08-19 00:13:51.662 [INFO][4505] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa" Namespace="calico-system" Pod="whisker-57d577bcbf-lq2p8" WorkloadEndpoint="ip--172--31--30--10-k8s-whisker--57d577bcbf--lq2p8-eth0" Aug 19 00:13:51.858265 containerd[2014]: 2025-08-19 00:13:51.747 [INFO][4516] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa" HandleID="k8s-pod-network.0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa" Workload="ip--172--31--30--10-k8s-whisker--57d577bcbf--lq2p8-eth0" Aug 19 00:13:51.858670 containerd[2014]: 2025-08-19 00:13:51.748 [INFO][4516] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa" HandleID="k8s-pod-network.0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa" Workload="ip--172--31--30--10-k8s-whisker--57d577bcbf--lq2p8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003241a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-10", "pod":"whisker-57d577bcbf-lq2p8", "timestamp":"2025-08-19 00:13:51.747802772 +0000 UTC"}, Hostname:"ip-172-31-30-10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:13:51.858670 containerd[2014]: 2025-08-19 00:13:51.748 [INFO][4516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:13:51.858670 containerd[2014]: 2025-08-19 00:13:51.748 [INFO][4516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 00:13:51.858670 containerd[2014]: 2025-08-19 00:13:51.748 [INFO][4516] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-10' Aug 19 00:13:51.858670 containerd[2014]: 2025-08-19 00:13:51.764 [INFO][4516] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa" host="ip-172-31-30-10" Aug 19 00:13:51.858670 containerd[2014]: 2025-08-19 00:13:51.772 [INFO][4516] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-10" Aug 19 00:13:51.858670 containerd[2014]: 2025-08-19 00:13:51.779 [INFO][4516] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:51.858670 containerd[2014]: 2025-08-19 00:13:51.782 [INFO][4516] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:51.858670 containerd[2014]: 2025-08-19 00:13:51.786 [INFO][4516] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:51.859111 containerd[2014]: 2025-08-19 00:13:51.786 [INFO][4516] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa" host="ip-172-31-30-10" Aug 19 00:13:51.859111 containerd[2014]: 2025-08-19 00:13:51.789 [INFO][4516] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa Aug 19 00:13:51.859111 containerd[2014]: 2025-08-19 00:13:51.798 [INFO][4516] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa" host="ip-172-31-30-10" Aug 19 00:13:51.859111 containerd[2014]: 2025-08-19 00:13:51.806 [INFO][4516] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.193/26] block=192.168.82.192/26 handle="k8s-pod-network.0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa" host="ip-172-31-30-10" Aug 19 00:13:51.859111 containerd[2014]: 2025-08-19 00:13:51.807 [INFO][4516] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.193/26] handle="k8s-pod-network.0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa" host="ip-172-31-30-10" Aug 19 00:13:51.859111 containerd[2014]: 2025-08-19 00:13:51.807 [INFO][4516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:13:51.859111 containerd[2014]: 2025-08-19 00:13:51.807 [INFO][4516] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.193/26] IPv6=[] ContainerID="0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa" HandleID="k8s-pod-network.0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa" Workload="ip--172--31--30--10-k8s-whisker--57d577bcbf--lq2p8-eth0" Aug 19 00:13:51.859614 containerd[2014]: 2025-08-19 00:13:51.815 [INFO][4505] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa" Namespace="calico-system" Pod="whisker-57d577bcbf-lq2p8" WorkloadEndpoint="ip--172--31--30--10-k8s-whisker--57d577bcbf--lq2p8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--10-k8s-whisker--57d577bcbf--lq2p8-eth0", GenerateName:"whisker-57d577bcbf-", Namespace:"calico-system", SelfLink:"", UID:"130abcb0-c774-4128-8b53-e23ef5250b76", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57d577bcbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-10", ContainerID:"", Pod:"whisker-57d577bcbf-lq2p8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0dd7e9dcabd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:51.859614 containerd[2014]: 2025-08-19 00:13:51.815 [INFO][4505] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.193/32] ContainerID="0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa" Namespace="calico-system" Pod="whisker-57d577bcbf-lq2p8" WorkloadEndpoint="ip--172--31--30--10-k8s-whisker--57d577bcbf--lq2p8-eth0" Aug 19 00:13:51.859791 containerd[2014]: 2025-08-19 00:13:51.815 [INFO][4505] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0dd7e9dcabd ContainerID="0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa" Namespace="calico-system" Pod="whisker-57d577bcbf-lq2p8" WorkloadEndpoint="ip--172--31--30--10-k8s-whisker--57d577bcbf--lq2p8-eth0" Aug 19 00:13:51.859791 containerd[2014]: 2025-08-19 00:13:51.828 [INFO][4505] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa" Namespace="calico-system" Pod="whisker-57d577bcbf-lq2p8" WorkloadEndpoint="ip--172--31--30--10-k8s-whisker--57d577bcbf--lq2p8-eth0" Aug 19 00:13:51.859899 containerd[2014]: 2025-08-19 00:13:51.829 [INFO][4505] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa" Namespace="calico-system" Pod="whisker-57d577bcbf-lq2p8" 
WorkloadEndpoint="ip--172--31--30--10-k8s-whisker--57d577bcbf--lq2p8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--10-k8s-whisker--57d577bcbf--lq2p8-eth0", GenerateName:"whisker-57d577bcbf-", Namespace:"calico-system", SelfLink:"", UID:"130abcb0-c774-4128-8b53-e23ef5250b76", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57d577bcbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-10", ContainerID:"0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa", Pod:"whisker-57d577bcbf-lq2p8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0dd7e9dcabd", MAC:"3a:9d:39:12:78:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:51.860006 containerd[2014]: 2025-08-19 00:13:51.850 [INFO][4505] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa" Namespace="calico-system" Pod="whisker-57d577bcbf-lq2p8" WorkloadEndpoint="ip--172--31--30--10-k8s-whisker--57d577bcbf--lq2p8-eth0" Aug 19 00:13:51.893455 containerd[2014]: time="2025-08-19T00:13:51.893243289Z" level=info msg="connecting to shim 0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa" address="unix:///run/containerd/s/344acfef63e52f64b454cb406db9b6baf1ae476b9359f4aab68875e7d4b5d812" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:51.947687 systemd[1]: Started cri-containerd-0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa.scope - libcontainer container 0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa. 
Aug 19 00:13:52.072956 containerd[2014]: time="2025-08-19T00:13:52.072887850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d577bcbf-lq2p8,Uid:130abcb0-c774-4128-8b53-e23ef5250b76,Namespace:calico-system,Attempt:0,} returns sandbox id \"0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa\"" Aug 19 00:13:52.078612 containerd[2014]: time="2025-08-19T00:13:52.078460926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 19 00:13:52.126301 kubelet[3438]: I0819 00:13:52.125714 3438 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:13:52.702729 containerd[2014]: time="2025-08-19T00:13:52.702648969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6554fc4f99-fnmld,Uid:799803ff-968b-4bdb-a045-65b70869eec0,Namespace:calico-apiserver,Attempt:0,}" Aug 19 00:13:52.707628 containerd[2014]: time="2025-08-19T00:13:52.705024453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-8jbzv,Uid:43ff735c-411d-42fc-b611-fed61c27fec3,Namespace:calico-system,Attempt:0,}" Aug 19 00:13:52.721079 kubelet[3438]: I0819 00:13:52.721030 3438 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da07a4b3-8566-48fa-b7a2-02a956bdba39" path="/var/lib/kubelet/pods/da07a4b3-8566-48fa-b7a2-02a956bdba39/volumes" Aug 19 00:13:52.916753 systemd-networkd[1892]: cali0dd7e9dcabd: Gained IPv6LL Aug 19 00:13:53.035810 systemd-networkd[1892]: cali0eefa38b5c0: Link UP Aug 19 00:13:53.038755 systemd-networkd[1892]: cali0eefa38b5c0: Gained carrier Aug 19 00:13:53.071527 containerd[2014]: 2025-08-19 00:13:52.816 [INFO][4668] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 19 00:13:53.071527 containerd[2014]: 2025-08-19 00:13:52.859 [INFO][4668] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--fnmld-eth0 calico-apiserver-6554fc4f99- calico-apiserver 799803ff-968b-4bdb-a045-65b70869eec0 841 0 2025-08-19 00:13:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6554fc4f99 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-10 calico-apiserver-6554fc4f99-fnmld eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0eefa38b5c0 [] [] }} ContainerID="fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf" Namespace="calico-apiserver" Pod="calico-apiserver-6554fc4f99-fnmld" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--fnmld-" Aug 19 00:13:53.071527 containerd[2014]: 2025-08-19 00:13:52.860 [INFO][4668] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf" Namespace="calico-apiserver" Pod="calico-apiserver-6554fc4f99-fnmld" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--fnmld-eth0" Aug 19 00:13:53.071527 containerd[2014]: 2025-08-19 00:13:52.956 [INFO][4698] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf" HandleID="k8s-pod-network.fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf" Workload="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--fnmld-eth0" Aug 19 00:13:53.072850 
containerd[2014]: 2025-08-19 00:13:52.957 [INFO][4698] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf" HandleID="k8s-pod-network.fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf" Workload="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--fnmld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000120b00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-10", "pod":"calico-apiserver-6554fc4f99-fnmld", "timestamp":"2025-08-19 00:13:52.955864714 +0000 UTC"}, Hostname:"ip-172-31-30-10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:13:53.072850 containerd[2014]: 2025-08-19 00:13:52.957 [INFO][4698] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:13:53.072850 containerd[2014]: 2025-08-19 00:13:52.957 [INFO][4698] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 19 00:13:53.072850 containerd[2014]: 2025-08-19 00:13:52.957 [INFO][4698] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-10' Aug 19 00:13:53.072850 containerd[2014]: 2025-08-19 00:13:52.978 [INFO][4698] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf" host="ip-172-31-30-10" Aug 19 00:13:53.072850 containerd[2014]: 2025-08-19 00:13:52.985 [INFO][4698] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-10" Aug 19 00:13:53.072850 containerd[2014]: 2025-08-19 00:13:52.995 [INFO][4698] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:53.072850 containerd[2014]: 2025-08-19 00:13:52.998 [INFO][4698] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:53.072850 containerd[2014]: 2025-08-19 00:13:53.003 [INFO][4698] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:53.073281 containerd[2014]: 2025-08-19 00:13:53.003 [INFO][4698] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf" host="ip-172-31-30-10" Aug 19 00:13:53.073281 containerd[2014]: 2025-08-19 00:13:53.006 [INFO][4698] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf Aug 19 00:13:53.073281 containerd[2014]: 2025-08-19 00:13:53.013 [INFO][4698] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf" host="ip-172-31-30-10" Aug 19 00:13:53.073281 containerd[2014]: 2025-08-19 00:13:53.022 [INFO][4698] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.194/26] block=192.168.82.192/26 handle="k8s-pod-network.fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf" host="ip-172-31-30-10" Aug 19 00:13:53.073281 containerd[2014]: 2025-08-19 00:13:53.023 [INFO][4698] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.194/26] handle="k8s-pod-network.fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf" host="ip-172-31-30-10" Aug 19 00:13:53.073281 containerd[2014]: 2025-08-19 00:13:53.023 [INFO][4698] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 19 00:13:53.073281 containerd[2014]: 2025-08-19 00:13:53.023 [INFO][4698] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.194/26] IPv6=[] ContainerID="fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf" HandleID="k8s-pod-network.fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf" Workload="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--fnmld-eth0" Aug 19 00:13:53.076980 containerd[2014]: 2025-08-19 00:13:53.028 [INFO][4668] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf" Namespace="calico-apiserver" Pod="calico-apiserver-6554fc4f99-fnmld" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--fnmld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--fnmld-eth0", GenerateName:"calico-apiserver-6554fc4f99-", Namespace:"calico-apiserver", SelfLink:"", UID:"799803ff-968b-4bdb-a045-65b70869eec0", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6554fc4f99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-10", ContainerID:"", Pod:"calico-apiserver-6554fc4f99-fnmld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0eefa38b5c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:53.077670 containerd[2014]: 2025-08-19 00:13:53.029 [INFO][4668] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.194/32] ContainerID="fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf" Namespace="calico-apiserver" Pod="calico-apiserver-6554fc4f99-fnmld" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--fnmld-eth0" Aug 19 00:13:53.077670 containerd[2014]: 2025-08-19 00:13:53.029 [INFO][4668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0eefa38b5c0 ContainerID="fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf" Namespace="calico-apiserver" Pod="calico-apiserver-6554fc4f99-fnmld" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--fnmld-eth0" Aug 19 00:13:53.077670 containerd[2014]: 2025-08-19 00:13:53.038 [INFO][4668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf" Namespace="calico-apiserver" Pod="calico-apiserver-6554fc4f99-fnmld" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--fnmld-eth0" Aug 19 00:13:53.078299 containerd[2014]: 2025-08-19 00:13:53.040 
[INFO][4668] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf" Namespace="calico-apiserver" Pod="calico-apiserver-6554fc4f99-fnmld" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--fnmld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--fnmld-eth0", GenerateName:"calico-apiserver-6554fc4f99-", Namespace:"calico-apiserver", SelfLink:"", UID:"799803ff-968b-4bdb-a045-65b70869eec0", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6554fc4f99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-10", ContainerID:"fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf", Pod:"calico-apiserver-6554fc4f99-fnmld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0eefa38b5c0", MAC:"42:32:9a:4c:e8:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:53.079216 containerd[2014]: 2025-08-19 00:13:53.056 [INFO][4668] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf" Namespace="calico-apiserver" Pod="calico-apiserver-6554fc4f99-fnmld" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--fnmld-eth0" Aug 19 00:13:53.155352 containerd[2014]: time="2025-08-19T00:13:53.155281651Z" level=info msg="connecting to shim fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf" address="unix:///run/containerd/s/47778ee38f698cd7a934b9f38501dfbfd682a23842f241aa28cb87bd1bedd28a" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:53.247914 systemd[1]: Started cri-containerd-fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf.scope - libcontainer container fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf. 
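Each "connecting to shim" entry names a unix socket under /run/containerd/s/ and notes that the protocol on top of it is ttrpc. If a sandbox start ever hangs at this step, one low-level check is whether the shim socket accepts a connection at all. A hedged sketch (the dial exercises only the socket, not the ttrpc layer; the path is the one from the entry above, substitute the socket of the sandbox you are inspecting):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	const sock = "/run/containerd/s/47778ee38f698cd7a934b9f38501dfbfd682a23842f241aa28cb87bd1bedd28a"

    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		fmt.Println("shim socket not reachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("shim socket accepts connections")
    }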
Aug 19 00:13:53.421337 systemd-networkd[1892]: cali3d450a980e3: Link UP Aug 19 00:13:53.422768 systemd-networkd[1892]: cali3d450a980e3: Gained carrier Aug 19 00:13:53.491612 containerd[2014]: 2025-08-19 00:13:53.205 [INFO][4722] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--10-k8s-goldmane--58fd7646b9--8jbzv-eth0 goldmane-58fd7646b9- calico-system 43ff735c-411d-42fc-b611-fed61c27fec3 842 0 2025-08-19 00:13:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-30-10 goldmane-58fd7646b9-8jbzv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3d450a980e3 [] [] }} ContainerID="a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60" Namespace="calico-system" Pod="goldmane-58fd7646b9-8jbzv" WorkloadEndpoint="ip--172--31--30--10-k8s-goldmane--58fd7646b9--8jbzv-" Aug 19 00:13:53.491612 containerd[2014]: 2025-08-19 00:13:53.206 [INFO][4722] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60" Namespace="calico-system" Pod="goldmane-58fd7646b9-8jbzv" WorkloadEndpoint="ip--172--31--30--10-k8s-goldmane--58fd7646b9--8jbzv-eth0" Aug 19 00:13:53.491612 containerd[2014]: 2025-08-19 00:13:53.292 [INFO][4767] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60" HandleID="k8s-pod-network.a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60" Workload="ip--172--31--30--10-k8s-goldmane--58fd7646b9--8jbzv-eth0" Aug 19 00:13:53.492969 containerd[2014]: 2025-08-19 00:13:53.293 [INFO][4767] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60" HandleID="k8s-pod-network.a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60" Workload="ip--172--31--30--10-k8s-goldmane--58fd7646b9--8jbzv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d990), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-10", "pod":"goldmane-58fd7646b9-8jbzv", "timestamp":"2025-08-19 00:13:53.292634264 +0000 UTC"}, Hostname:"ip-172-31-30-10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:13:53.492969 containerd[2014]: 2025-08-19 00:13:53.293 [INFO][4767] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:13:53.492969 containerd[2014]: 2025-08-19 00:13:53.294 [INFO][4767] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 00:13:53.492969 containerd[2014]: 2025-08-19 00:13:53.294 [INFO][4767] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-10' Aug 19 00:13:53.492969 containerd[2014]: 2025-08-19 00:13:53.311 [INFO][4767] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60" host="ip-172-31-30-10" Aug 19 00:13:53.492969 containerd[2014]: 2025-08-19 00:13:53.327 [INFO][4767] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-10" Aug 19 00:13:53.492969 containerd[2014]: 2025-08-19 00:13:53.352 [INFO][4767] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:53.492969 containerd[2014]: 2025-08-19 00:13:53.360 [INFO][4767] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:53.492969 containerd[2014]: 2025-08-19 00:13:53.368 [INFO][4767] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:53.493520 containerd[2014]: 2025-08-19 00:13:53.369 [INFO][4767] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60" host="ip-172-31-30-10" Aug 19 00:13:53.493520 containerd[2014]: 2025-08-19 00:13:53.373 [INFO][4767] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60 Aug 19 00:13:53.493520 containerd[2014]: 2025-08-19 00:13:53.384 [INFO][4767] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60" host="ip-172-31-30-10" Aug 19 00:13:53.493520 containerd[2014]: 2025-08-19 00:13:53.405 [INFO][4767] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.195/26] block=192.168.82.192/26 handle="k8s-pod-network.a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60" host="ip-172-31-30-10" Aug 19 00:13:53.493520 containerd[2014]: 2025-08-19 00:13:53.405 [INFO][4767] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.195/26] handle="k8s-pod-network.a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60" host="ip-172-31-30-10" Aug 19 00:13:53.493520 containerd[2014]: 2025-08-19 00:13:53.406 [INFO][4767] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:13:53.493520 containerd[2014]: 2025-08-19 00:13:53.406 [INFO][4767] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.195/26] IPv6=[] ContainerID="a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60" HandleID="k8s-pod-network.a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60" Workload="ip--172--31--30--10-k8s-goldmane--58fd7646b9--8jbzv-eth0" Aug 19 00:13:53.493941 containerd[2014]: 2025-08-19 00:13:53.414 [INFO][4722] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60" Namespace="calico-system" Pod="goldmane-58fd7646b9-8jbzv" WorkloadEndpoint="ip--172--31--30--10-k8s-goldmane--58fd7646b9--8jbzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--10-k8s-goldmane--58fd7646b9--8jbzv-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"43ff735c-411d-42fc-b611-fed61c27fec3", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-10", ContainerID:"", Pod:"goldmane-58fd7646b9-8jbzv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3d450a980e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:53.493941 containerd[2014]: 2025-08-19 00:13:53.414 [INFO][4722] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.195/32] ContainerID="a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60" Namespace="calico-system" Pod="goldmane-58fd7646b9-8jbzv" WorkloadEndpoint="ip--172--31--30--10-k8s-goldmane--58fd7646b9--8jbzv-eth0" Aug 19 00:13:53.494128 containerd[2014]: 2025-08-19 00:13:53.414 [INFO][4722] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d450a980e3 ContainerID="a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60" Namespace="calico-system" Pod="goldmane-58fd7646b9-8jbzv" WorkloadEndpoint="ip--172--31--30--10-k8s-goldmane--58fd7646b9--8jbzv-eth0" Aug 19 00:13:53.494128 containerd[2014]: 2025-08-19 00:13:53.423 [INFO][4722] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60" Namespace="calico-system" Pod="goldmane-58fd7646b9-8jbzv" WorkloadEndpoint="ip--172--31--30--10-k8s-goldmane--58fd7646b9--8jbzv-eth0" Aug 19 00:13:53.494231 containerd[2014]: 2025-08-19 00:13:53.431 [INFO][4722] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60" Namespace="calico-system" Pod="goldmane-58fd7646b9-8jbzv" 
WorkloadEndpoint="ip--172--31--30--10-k8s-goldmane--58fd7646b9--8jbzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--10-k8s-goldmane--58fd7646b9--8jbzv-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"43ff735c-411d-42fc-b611-fed61c27fec3", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-10", ContainerID:"a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60", Pod:"goldmane-58fd7646b9-8jbzv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3d450a980e3", MAC:"fa:6d:50:b7:e9:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:53.494352 containerd[2014]: 2025-08-19 00:13:53.470 [INFO][4722] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60" Namespace="calico-system" Pod="goldmane-58fd7646b9-8jbzv" WorkloadEndpoint="ip--172--31--30--10-k8s-goldmane--58fd7646b9--8jbzv-eth0" Aug 19 00:13:53.565080 containerd[2014]: time="2025-08-19T00:13:53.564928785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6554fc4f99-fnmld,Uid:799803ff-968b-4bdb-a045-65b70869eec0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf\"" Aug 19 00:13:53.583399 containerd[2014]: time="2025-08-19T00:13:53.583301925Z" level=info msg="connecting to shim a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60" address="unix:///run/containerd/s/f92ea6a45d92d10ca987af9d80e752e2b7923000d9cd11a0230a87bf51828269" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:53.687886 systemd[1]: Started cri-containerd-a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60.scope - libcontainer container a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60. Aug 19 00:13:53.706719 containerd[2014]: time="2025-08-19T00:13:53.706489486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2xnqj,Uid:b50a5f61-1810-4b90-b687-04d75015225f,Namespace:calico-system,Attempt:0,}" Aug 19 00:13:53.807082 (udev-worker)[4451]: Network interface NamePolicy= disabled on kernel command line. 
Aug 19 00:13:53.826768 systemd-networkd[1892]: vxlan.calico: Link UP Aug 19 00:13:53.826790 systemd-networkd[1892]: vxlan.calico: Gained carrier Aug 19 00:13:53.959708 containerd[2014]: time="2025-08-19T00:13:53.959532515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-8jbzv,Uid:43ff735c-411d-42fc-b611-fed61c27fec3,Namespace:calico-system,Attempt:0,} returns sandbox id \"a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60\"" Aug 19 00:13:54.028917 containerd[2014]: time="2025-08-19T00:13:54.028859156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:54.030892 containerd[2014]: time="2025-08-19T00:13:54.030311996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 19 00:13:54.031557 containerd[2014]: time="2025-08-19T00:13:54.031488032Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:54.044278 containerd[2014]: time="2025-08-19T00:13:54.044209820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:54.049032 containerd[2014]: time="2025-08-19T00:13:54.047526332Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.968757678s" Aug 19 00:13:54.049032 containerd[2014]: time="2025-08-19T00:13:54.048026972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 19 00:13:54.054150 containerd[2014]: time="2025-08-19T00:13:54.054105440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 19 00:13:54.059887 containerd[2014]: time="2025-08-19T00:13:54.058059692Z" level=info msg="CreateContainer within sandbox \"0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 19 00:13:54.074153 containerd[2014]: time="2025-08-19T00:13:54.074095892Z" level=info msg="Container 6a06fd7c5f992548d6153881409251215e7fd00d61adf53a13612c2eb3d4e538: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:13:54.106122 containerd[2014]: time="2025-08-19T00:13:54.104976236Z" level=info msg="CreateContainer within sandbox \"0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6a06fd7c5f992548d6153881409251215e7fd00d61adf53a13612c2eb3d4e538\"" Aug 19 00:13:54.114063 containerd[2014]: time="2025-08-19T00:13:54.114014732Z" level=info msg="StartContainer for \"6a06fd7c5f992548d6153881409251215e7fd00d61adf53a13612c2eb3d4e538\"" Aug 19 00:13:54.127561 containerd[2014]: time="2025-08-19T00:13:54.126280364Z" level=info msg="connecting to shim 6a06fd7c5f992548d6153881409251215e7fd00d61adf53a13612c2eb3d4e538" address="unix:///run/containerd/s/344acfef63e52f64b454cb406db9b6baf1ae476b9359f4aab68875e7d4b5d812" 
protocol=ttrpc version=3 Aug 19 00:13:54.195761 systemd[1]: Started cri-containerd-6a06fd7c5f992548d6153881409251215e7fd00d61adf53a13612c2eb3d4e538.scope - libcontainer container 6a06fd7c5f992548d6153881409251215e7fd00d61adf53a13612c2eb3d4e538. Aug 19 00:13:54.244395 systemd-networkd[1892]: calie6fd9d1279f: Link UP Aug 19 00:13:54.250975 systemd-networkd[1892]: calie6fd9d1279f: Gained carrier Aug 19 00:13:54.284234 containerd[2014]: 2025-08-19 00:13:54.003 [INFO][4849] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--10-k8s-csi--node--driver--2xnqj-eth0 csi-node-driver- calico-system b50a5f61-1810-4b90-b687-04d75015225f 728 0 2025-08-19 00:13:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-30-10 csi-node-driver-2xnqj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie6fd9d1279f [] [] }} ContainerID="2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886" Namespace="calico-system" Pod="csi-node-driver-2xnqj" WorkloadEndpoint="ip--172--31--30--10-k8s-csi--node--driver--2xnqj-" Aug 19 00:13:54.284234 containerd[2014]: 2025-08-19 00:13:54.003 [INFO][4849] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886" Namespace="calico-system" Pod="csi-node-driver-2xnqj" WorkloadEndpoint="ip--172--31--30--10-k8s-csi--node--driver--2xnqj-eth0" Aug 19 00:13:54.284234 containerd[2014]: 2025-08-19 00:13:54.100 [INFO][4891] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886" HandleID="k8s-pod-network.2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886" Workload="ip--172--31--30--10-k8s-csi--node--driver--2xnqj-eth0" Aug 19 00:13:54.284579 containerd[2014]: 2025-08-19 00:13:54.100 [INFO][4891] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886" HandleID="k8s-pod-network.2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886" Workload="ip--172--31--30--10-k8s-csi--node--driver--2xnqj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003303c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-10", "pod":"csi-node-driver-2xnqj", "timestamp":"2025-08-19 00:13:54.100067072 +0000 UTC"}, Hostname:"ip-172-31-30-10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:13:54.284579 containerd[2014]: 2025-08-19 00:13:54.101 [INFO][4891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:13:54.284579 containerd[2014]: 2025-08-19 00:13:54.101 [INFO][4891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 00:13:54.284579 containerd[2014]: 2025-08-19 00:13:54.102 [INFO][4891] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-10' Aug 19 00:13:54.284579 containerd[2014]: 2025-08-19 00:13:54.131 [INFO][4891] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886" host="ip-172-31-30-10" Aug 19 00:13:54.284579 containerd[2014]: 2025-08-19 00:13:54.152 [INFO][4891] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-10" Aug 19 00:13:54.284579 containerd[2014]: 2025-08-19 00:13:54.165 [INFO][4891] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:54.284579 containerd[2014]: 2025-08-19 00:13:54.170 [INFO][4891] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:54.284579 containerd[2014]: 2025-08-19 00:13:54.176 [INFO][4891] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:54.284579 containerd[2014]: 2025-08-19 00:13:54.176 [INFO][4891] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886" host="ip-172-31-30-10" Aug 19 00:13:54.285030 containerd[2014]: 2025-08-19 00:13:54.180 [INFO][4891] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886 Aug 19 00:13:54.285030 containerd[2014]: 2025-08-19 00:13:54.194 [INFO][4891] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886" host="ip-172-31-30-10" Aug 19 00:13:54.285030 containerd[2014]: 2025-08-19 00:13:54.213 [INFO][4891] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.196/26] block=192.168.82.192/26 handle="k8s-pod-network.2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886" host="ip-172-31-30-10" Aug 19 00:13:54.285030 containerd[2014]: 2025-08-19 00:13:54.213 [INFO][4891] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.196/26] handle="k8s-pod-network.2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886" host="ip-172-31-30-10" Aug 19 00:13:54.285030 containerd[2014]: 2025-08-19 00:13:54.214 [INFO][4891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:13:54.285030 containerd[2014]: 2025-08-19 00:13:54.214 [INFO][4891] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.196/26] IPv6=[] ContainerID="2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886" HandleID="k8s-pod-network.2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886" Workload="ip--172--31--30--10-k8s-csi--node--driver--2xnqj-eth0" Aug 19 00:13:54.285286 containerd[2014]: 2025-08-19 00:13:54.225 [INFO][4849] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886" Namespace="calico-system" Pod="csi-node-driver-2xnqj" WorkloadEndpoint="ip--172--31--30--10-k8s-csi--node--driver--2xnqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--10-k8s-csi--node--driver--2xnqj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b50a5f61-1810-4b90-b687-04d75015225f", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-10", ContainerID:"", Pod:"csi-node-driver-2xnqj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie6fd9d1279f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:54.285420 containerd[2014]: 2025-08-19 00:13:54.226 [INFO][4849] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.196/32] ContainerID="2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886" Namespace="calico-system" Pod="csi-node-driver-2xnqj" WorkloadEndpoint="ip--172--31--30--10-k8s-csi--node--driver--2xnqj-eth0" Aug 19 00:13:54.285420 containerd[2014]: 2025-08-19 00:13:54.226 [INFO][4849] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie6fd9d1279f ContainerID="2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886" Namespace="calico-system" Pod="csi-node-driver-2xnqj" WorkloadEndpoint="ip--172--31--30--10-k8s-csi--node--driver--2xnqj-eth0" Aug 19 00:13:54.285420 containerd[2014]: 2025-08-19 00:13:54.254 [INFO][4849] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886" Namespace="calico-system" Pod="csi-node-driver-2xnqj" WorkloadEndpoint="ip--172--31--30--10-k8s-csi--node--driver--2xnqj-eth0" Aug 19 00:13:54.285576 containerd[2014]: 2025-08-19 00:13:54.258 [INFO][4849] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886" 
Namespace="calico-system" Pod="csi-node-driver-2xnqj" WorkloadEndpoint="ip--172--31--30--10-k8s-csi--node--driver--2xnqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--10-k8s-csi--node--driver--2xnqj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b50a5f61-1810-4b90-b687-04d75015225f", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-10", ContainerID:"2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886", Pod:"csi-node-driver-2xnqj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie6fd9d1279f", MAC:"7e:e3:bc:60:e6:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:54.285697 containerd[2014]: 2025-08-19 00:13:54.278 [INFO][4849] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886" Namespace="calico-system" Pod="csi-node-driver-2xnqj" WorkloadEndpoint="ip--172--31--30--10-k8s-csi--node--driver--2xnqj-eth0" Aug 19 00:13:54.330876 containerd[2014]: time="2025-08-19T00:13:54.330800109Z" level=info msg="connecting to shim 2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886" address="unix:///run/containerd/s/cd9a301fd05ce66d8acd26354b192da6311d796ff54b9cafc74ad7e3cda2b746" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:54.389279 systemd-networkd[1892]: cali0eefa38b5c0: Gained IPv6LL Aug 19 00:13:54.393813 systemd[1]: Started cri-containerd-2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886.scope - libcontainer container 2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886. 
Aug 19 00:13:54.427508 containerd[2014]: time="2025-08-19T00:13:54.426733882Z" level=info msg="StartContainer for \"6a06fd7c5f992548d6153881409251215e7fd00d61adf53a13612c2eb3d4e538\" returns successfully" Aug 19 00:13:54.488772 containerd[2014]: time="2025-08-19T00:13:54.488679094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2xnqj,Uid:b50a5f61-1810-4b90-b687-04d75015225f,Namespace:calico-system,Attempt:0,} returns sandbox id \"2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886\"" Aug 19 00:13:54.515630 systemd-networkd[1892]: cali3d450a980e3: Gained IPv6LL Aug 19 00:13:54.701826 containerd[2014]: time="2025-08-19T00:13:54.701758235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6554fc4f99-j2skv,Uid:1ee75311-cda6-4d17-b2c7-d27304b8d82e,Namespace:calico-apiserver,Attempt:0,}" Aug 19 00:13:54.702880 containerd[2014]: time="2025-08-19T00:13:54.702670247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4kcg8,Uid:eb9c15ce-3378-4741-a684-0a8392a9106d,Namespace:kube-system,Attempt:0,}" Aug 19 00:13:54.703036 containerd[2014]: time="2025-08-19T00:13:54.702920075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85bbd4d66-rjr5q,Uid:8766ae7b-04e2-43bb-8e63-39b0bb1900db,Namespace:calico-system,Attempt:0,}" Aug 19 00:13:54.703713 containerd[2014]: time="2025-08-19T00:13:54.703632743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2j9hm,Uid:0457175a-30ff-4c6f-8d8a-6f85a6adf32c,Namespace:kube-system,Attempt:0,}" Aug 19 00:13:55.156626 systemd-networkd[1892]: vxlan.calico: Gained IPv6LL Aug 19 00:13:55.194155 systemd-networkd[1892]: cali0a4721b07dc: Link UP Aug 19 00:13:55.195877 systemd-networkd[1892]: cali0a4721b07dc: Gained carrier Aug 19 00:13:55.244516 containerd[2014]: 2025-08-19 00:13:54.898 [INFO][5023] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--10-k8s-coredns--7c65d6cfc9--4kcg8-eth0 coredns-7c65d6cfc9- kube-system eb9c15ce-3378-4741-a684-0a8392a9106d 843 0 2025-08-19 00:13:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-10 coredns-7c65d6cfc9-4kcg8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0a4721b07dc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4kcg8" WorkloadEndpoint="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--4kcg8-" Aug 19 00:13:55.244516 containerd[2014]: 2025-08-19 00:13:54.902 [INFO][5023] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4kcg8" WorkloadEndpoint="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--4kcg8-eth0" Aug 19 00:13:55.244516 containerd[2014]: 2025-08-19 00:13:55.092 [INFO][5079] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e" HandleID="k8s-pod-network.66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e" Workload="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--4kcg8-eth0" Aug 19 00:13:55.246599 containerd[2014]: 2025-08-19 00:13:55.093 [INFO][5079] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e" HandleID="k8s-pod-network.66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e" Workload="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--4kcg8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b45f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-10", "pod":"coredns-7c65d6cfc9-4kcg8", "timestamp":"2025-08-19 00:13:55.092108757 +0000 UTC"}, Hostname:"ip-172-31-30-10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:13:55.246599 containerd[2014]: 2025-08-19 00:13:55.093 [INFO][5079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:13:55.246599 containerd[2014]: 2025-08-19 00:13:55.093 [INFO][5079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 19 00:13:55.246599 containerd[2014]: 2025-08-19 00:13:55.093 [INFO][5079] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-10' Aug 19 00:13:55.246599 containerd[2014]: 2025-08-19 00:13:55.111 [INFO][5079] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e" host="ip-172-31-30-10" Aug 19 00:13:55.246599 containerd[2014]: 2025-08-19 00:13:55.127 [INFO][5079] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-10" Aug 19 00:13:55.246599 containerd[2014]: 2025-08-19 00:13:55.137 [INFO][5079] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:55.246599 containerd[2014]: 2025-08-19 00:13:55.141 [INFO][5079] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:55.246599 containerd[2014]: 2025-08-19 00:13:55.146 [INFO][5079] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:55.246599 containerd[2014]: 2025-08-19 00:13:55.146 [INFO][5079] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e" host="ip-172-31-30-10" Aug 19 00:13:55.247802 containerd[2014]: 2025-08-19 00:13:55.149 [INFO][5079] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e Aug 19 00:13:55.247802 containerd[2014]: 2025-08-19 00:13:55.159 [INFO][5079] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e" host="ip-172-31-30-10" Aug 19 00:13:55.247802 containerd[2014]: 2025-08-19 00:13:55.176 [INFO][5079] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.197/26] block=192.168.82.192/26 handle="k8s-pod-network.66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e" host="ip-172-31-30-10" Aug 19 00:13:55.247802 containerd[2014]: 2025-08-19 00:13:55.177 [INFO][5079] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.197/26] handle="k8s-pod-network.66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e" host="ip-172-31-30-10" Aug 19 00:13:55.247802 containerd[2014]: 2025-08-19 00:13:55.177 [INFO][5079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:13:55.247802 containerd[2014]: 2025-08-19 00:13:55.179 [INFO][5079] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.197/26] IPv6=[] ContainerID="66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e" HandleID="k8s-pod-network.66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e" Workload="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--4kcg8-eth0" Aug 19 00:13:55.248103 containerd[2014]: 2025-08-19 00:13:55.188 [INFO][5023] cni-plugin/k8s.go 418: Populated endpoint ContainerID="66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4kcg8" WorkloadEndpoint="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--4kcg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--10-k8s-coredns--7c65d6cfc9--4kcg8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eb9c15ce-3378-4741-a684-0a8392a9106d", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-10", ContainerID:"", Pod:"coredns-7c65d6cfc9-4kcg8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0a4721b07dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:55.248103 containerd[2014]: 2025-08-19 00:13:55.188 [INFO][5023] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.197/32] ContainerID="66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4kcg8" WorkloadEndpoint="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--4kcg8-eth0" Aug 19 00:13:55.248103 containerd[2014]: 2025-08-19 00:13:55.188 [INFO][5023] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a4721b07dc ContainerID="66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4kcg8" WorkloadEndpoint="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--4kcg8-eth0" Aug 19 00:13:55.248103 containerd[2014]: 2025-08-19 00:13:55.198 [INFO][5023] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4kcg8" 
WorkloadEndpoint="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--4kcg8-eth0" Aug 19 00:13:55.248103 containerd[2014]: 2025-08-19 00:13:55.200 [INFO][5023] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4kcg8" WorkloadEndpoint="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--4kcg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--10-k8s-coredns--7c65d6cfc9--4kcg8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eb9c15ce-3378-4741-a684-0a8392a9106d", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-10", ContainerID:"66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e", Pod:"coredns-7c65d6cfc9-4kcg8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0a4721b07dc", MAC:"0e:09:a2:f7:66:fc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:55.248103 containerd[2014]: 2025-08-19 00:13:55.233 [INFO][5023] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4kcg8" WorkloadEndpoint="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--4kcg8-eth0" Aug 19 00:13:55.309580 containerd[2014]: time="2025-08-19T00:13:55.309142042Z" level=info msg="connecting to shim 66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e" address="unix:///run/containerd/s/78ceb198ed865b0acee549ea139e0b5f58223b49572d81a025699c48e35d61f9" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:55.381562 systemd-networkd[1892]: cali5fbf945e879: Link UP Aug 19 00:13:55.397179 systemd-networkd[1892]: cali5fbf945e879: Gained carrier Aug 19 00:13:55.462728 systemd[1]: Started cri-containerd-66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e.scope - libcontainer container 66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e. 
Aug 19 00:13:55.483851 containerd[2014]: 2025-08-19 00:13:54.944 [INFO][5039] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--10-k8s-coredns--7c65d6cfc9--2j9hm-eth0 coredns-7c65d6cfc9- kube-system 0457175a-30ff-4c6f-8d8a-6f85a6adf32c 833 0 2025-08-19 00:13:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-10 coredns-7c65d6cfc9-2j9hm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5fbf945e879 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2j9hm" WorkloadEndpoint="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--2j9hm-" Aug 19 00:13:55.483851 containerd[2014]: 2025-08-19 00:13:54.944 [INFO][5039] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2j9hm" WorkloadEndpoint="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--2j9hm-eth0" Aug 19 00:13:55.483851 containerd[2014]: 2025-08-19 00:13:55.092 [INFO][5087] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b" HandleID="k8s-pod-network.db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b" Workload="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--2j9hm-eth0" Aug 19 00:13:55.483851 containerd[2014]: 2025-08-19 00:13:55.093 [INFO][5087] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b" HandleID="k8s-pod-network.db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b" Workload="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--2j9hm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d3e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-10", "pod":"coredns-7c65d6cfc9-2j9hm", "timestamp":"2025-08-19 00:13:55.092340717 +0000 UTC"}, Hostname:"ip-172-31-30-10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:13:55.483851 containerd[2014]: 2025-08-19 00:13:55.094 [INFO][5087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:13:55.483851 containerd[2014]: 2025-08-19 00:13:55.178 [INFO][5087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 00:13:55.483851 containerd[2014]: 2025-08-19 00:13:55.179 [INFO][5087] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-10' Aug 19 00:13:55.483851 containerd[2014]: 2025-08-19 00:13:55.220 [INFO][5087] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b" host="ip-172-31-30-10" Aug 19 00:13:55.483851 containerd[2014]: 2025-08-19 00:13:55.240 [INFO][5087] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-10" Aug 19 00:13:55.483851 containerd[2014]: 2025-08-19 00:13:55.257 [INFO][5087] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:55.483851 containerd[2014]: 2025-08-19 00:13:55.262 [INFO][5087] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:55.483851 containerd[2014]: 2025-08-19 00:13:55.269 [INFO][5087] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:55.483851 containerd[2014]: 2025-08-19 00:13:55.271 [INFO][5087] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b" host="ip-172-31-30-10" Aug 19 00:13:55.483851 containerd[2014]: 2025-08-19 00:13:55.280 [INFO][5087] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b Aug 19 00:13:55.483851 containerd[2014]: 2025-08-19 00:13:55.305 [INFO][5087] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b" host="ip-172-31-30-10" Aug 19 00:13:55.483851 containerd[2014]: 2025-08-19 00:13:55.338 [INFO][5087] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.198/26] block=192.168.82.192/26 handle="k8s-pod-network.db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b" host="ip-172-31-30-10" Aug 19 00:13:55.483851 containerd[2014]: 2025-08-19 00:13:55.339 [INFO][5087] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.198/26] handle="k8s-pod-network.db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b" host="ip-172-31-30-10" Aug 19 00:13:55.483851 containerd[2014]: 2025-08-19 00:13:55.339 [INFO][5087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:13:55.483851 containerd[2014]: 2025-08-19 00:13:55.340 [INFO][5087] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.198/26] IPv6=[] ContainerID="db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b" HandleID="k8s-pod-network.db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b" Workload="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--2j9hm-eth0" Aug 19 00:13:55.488159 containerd[2014]: 2025-08-19 00:13:55.353 [INFO][5039] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2j9hm" WorkloadEndpoint="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--2j9hm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--10-k8s-coredns--7c65d6cfc9--2j9hm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0457175a-30ff-4c6f-8d8a-6f85a6adf32c", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-10", ContainerID:"", Pod:"coredns-7c65d6cfc9-2j9hm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5fbf945e879", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:55.488159 containerd[2014]: 2025-08-19 00:13:55.353 [INFO][5039] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.198/32] ContainerID="db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2j9hm" WorkloadEndpoint="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--2j9hm-eth0" Aug 19 00:13:55.488159 containerd[2014]: 2025-08-19 00:13:55.353 [INFO][5039] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5fbf945e879 ContainerID="db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2j9hm" WorkloadEndpoint="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--2j9hm-eth0" Aug 19 00:13:55.488159 containerd[2014]: 2025-08-19 00:13:55.415 [INFO][5039] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2j9hm" 
WorkloadEndpoint="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--2j9hm-eth0" Aug 19 00:13:55.488159 containerd[2014]: 2025-08-19 00:13:55.420 [INFO][5039] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2j9hm" WorkloadEndpoint="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--2j9hm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--10-k8s-coredns--7c65d6cfc9--2j9hm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0457175a-30ff-4c6f-8d8a-6f85a6adf32c", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-10", ContainerID:"db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b", Pod:"coredns-7c65d6cfc9-2j9hm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5fbf945e879", MAC:"22:5d:13:3b:72:c8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:55.488159 containerd[2014]: 2025-08-19 00:13:55.472 [INFO][5039] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2j9hm" WorkloadEndpoint="ip--172--31--30--10-k8s-coredns--7c65d6cfc9--2j9hm-eth0" Aug 19 00:13:55.539646 systemd-networkd[1892]: calie6fd9d1279f: Gained IPv6LL Aug 19 00:13:55.590668 containerd[2014]: time="2025-08-19T00:13:55.589648907Z" level=info msg="connecting to shim db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b" address="unix:///run/containerd/s/77bd0b943bb89c11fa5b8f6f0884f055e457350de33bc5d8d28a54c4af50e3e4" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:55.651931 systemd-networkd[1892]: calid3b6bf4cc9b: Link UP Aug 19 00:13:55.663057 systemd-networkd[1892]: calid3b6bf4cc9b: Gained carrier Aug 19 00:13:55.703038 systemd[1]: Started cri-containerd-db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b.scope - libcontainer container db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b. 
Aug 19 00:13:55.798091 containerd[2014]: 2025-08-19 00:13:54.973 [INFO][5035] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--10-k8s-calico--kube--controllers--85bbd4d66--rjr5q-eth0 calico-kube-controllers-85bbd4d66- calico-system 8766ae7b-04e2-43bb-8e63-39b0bb1900db 845 0 2025-08-19 00:13:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85bbd4d66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-30-10 calico-kube-controllers-85bbd4d66-rjr5q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid3b6bf4cc9b [] [] }} ContainerID="c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2" Namespace="calico-system" Pod="calico-kube-controllers-85bbd4d66-rjr5q" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--kube--controllers--85bbd4d66--rjr5q-" Aug 19 00:13:55.798091 containerd[2014]: 2025-08-19 00:13:54.974 [INFO][5035] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2" Namespace="calico-system" Pod="calico-kube-controllers-85bbd4d66-rjr5q" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--kube--controllers--85bbd4d66--rjr5q-eth0" Aug 19 00:13:55.798091 containerd[2014]: 2025-08-19 00:13:55.099 [INFO][5095] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2" HandleID="k8s-pod-network.c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2" Workload="ip--172--31--30--10-k8s-calico--kube--controllers--85bbd4d66--rjr5q-eth0" Aug 19 00:13:55.798091 containerd[2014]: 2025-08-19 00:13:55.099 [INFO][5095] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2" HandleID="k8s-pod-network.c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2" Workload="ip--172--31--30--10-k8s-calico--kube--controllers--85bbd4d66--rjr5q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400062e280), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-10", "pod":"calico-kube-controllers-85bbd4d66-rjr5q", "timestamp":"2025-08-19 00:13:55.099370401 +0000 UTC"}, Hostname:"ip-172-31-30-10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:13:55.798091 containerd[2014]: 2025-08-19 00:13:55.099 [INFO][5095] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:13:55.798091 containerd[2014]: 2025-08-19 00:13:55.343 [INFO][5095] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 00:13:55.798091 containerd[2014]: 2025-08-19 00:13:55.344 [INFO][5095] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-10' Aug 19 00:13:55.798091 containerd[2014]: 2025-08-19 00:13:55.442 [INFO][5095] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2" host="ip-172-31-30-10" Aug 19 00:13:55.798091 containerd[2014]: 2025-08-19 00:13:55.477 [INFO][5095] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-10" Aug 19 00:13:55.798091 containerd[2014]: 2025-08-19 00:13:55.502 [INFO][5095] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:55.798091 containerd[2014]: 2025-08-19 00:13:55.511 [INFO][5095] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:55.798091 containerd[2014]: 2025-08-19 00:13:55.525 [INFO][5095] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:55.798091 containerd[2014]: 2025-08-19 00:13:55.525 [INFO][5095] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2" host="ip-172-31-30-10" Aug 19 00:13:55.798091 containerd[2014]: 2025-08-19 00:13:55.535 [INFO][5095] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2 Aug 19 00:13:55.798091 containerd[2014]: 2025-08-19 00:13:55.558 [INFO][5095] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2" host="ip-172-31-30-10" Aug 19 00:13:55.798091 containerd[2014]: 2025-08-19 00:13:55.598 [INFO][5095] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.199/26] block=192.168.82.192/26 handle="k8s-pod-network.c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2" host="ip-172-31-30-10" Aug 19 00:13:55.798091 containerd[2014]: 2025-08-19 00:13:55.599 [INFO][5095] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.199/26] handle="k8s-pod-network.c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2" host="ip-172-31-30-10" Aug 19 00:13:55.798091 containerd[2014]: 2025-08-19 00:13:55.601 [INFO][5095] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:13:55.798091 containerd[2014]: 2025-08-19 00:13:55.603 [INFO][5095] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.199/26] IPv6=[] ContainerID="c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2" HandleID="k8s-pod-network.c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2" Workload="ip--172--31--30--10-k8s-calico--kube--controllers--85bbd4d66--rjr5q-eth0" Aug 19 00:13:55.801282 containerd[2014]: 2025-08-19 00:13:55.620 [INFO][5035] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2" Namespace="calico-system" Pod="calico-kube-controllers-85bbd4d66-rjr5q" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--kube--controllers--85bbd4d66--rjr5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--10-k8s-calico--kube--controllers--85bbd4d66--rjr5q-eth0", GenerateName:"calico-kube-controllers-85bbd4d66-", Namespace:"calico-system", SelfLink:"", UID:"8766ae7b-04e2-43bb-8e63-39b0bb1900db", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85bbd4d66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-10", ContainerID:"", Pod:"calico-kube-controllers-85bbd4d66-rjr5q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3b6bf4cc9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:55.801282 containerd[2014]: 2025-08-19 00:13:55.622 [INFO][5035] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.199/32] ContainerID="c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2" Namespace="calico-system" Pod="calico-kube-controllers-85bbd4d66-rjr5q" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--kube--controllers--85bbd4d66--rjr5q-eth0" Aug 19 00:13:55.801282 containerd[2014]: 2025-08-19 00:13:55.623 [INFO][5035] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3b6bf4cc9b ContainerID="c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2" Namespace="calico-system" Pod="calico-kube-controllers-85bbd4d66-rjr5q" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--kube--controllers--85bbd4d66--rjr5q-eth0" Aug 19 00:13:55.801282 containerd[2014]: 2025-08-19 00:13:55.697 [INFO][5035] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2" Namespace="calico-system" Pod="calico-kube-controllers-85bbd4d66-rjr5q" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--kube--controllers--85bbd4d66--rjr5q-eth0" Aug 19 00:13:55.801282 containerd[2014]: 2025-08-19 
00:13:55.705 [INFO][5035] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2" Namespace="calico-system" Pod="calico-kube-controllers-85bbd4d66-rjr5q" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--kube--controllers--85bbd4d66--rjr5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--10-k8s-calico--kube--controllers--85bbd4d66--rjr5q-eth0", GenerateName:"calico-kube-controllers-85bbd4d66-", Namespace:"calico-system", SelfLink:"", UID:"8766ae7b-04e2-43bb-8e63-39b0bb1900db", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85bbd4d66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-10", ContainerID:"c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2", Pod:"calico-kube-controllers-85bbd4d66-rjr5q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3b6bf4cc9b", MAC:"ba:3e:be:16:38:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:55.801282 containerd[2014]: 2025-08-19 00:13:55.763 [INFO][5035] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2" Namespace="calico-system" Pod="calico-kube-controllers-85bbd4d66-rjr5q" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--kube--controllers--85bbd4d66--rjr5q-eth0" Aug 19 00:13:55.919967 containerd[2014]: time="2025-08-19T00:13:55.919868833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4kcg8,Uid:eb9c15ce-3378-4741-a684-0a8392a9106d,Namespace:kube-system,Attempt:0,} returns sandbox id \"66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e\"" Aug 19 00:13:56.026101 systemd-networkd[1892]: calib91b273ec18: Link UP Aug 19 00:13:56.029860 containerd[2014]: time="2025-08-19T00:13:56.028069869Z" level=info msg="CreateContainer within sandbox \"66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 00:13:56.042353 systemd-networkd[1892]: calib91b273ec18: Gained carrier Aug 19 00:13:56.060117 containerd[2014]: time="2025-08-19T00:13:56.059843230Z" level=info msg="connecting to shim c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2" address="unix:///run/containerd/s/2d18c1f80056ff20e0bee2f478a2c4a7404bf1d76a5063723e465b8997c5cba4" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:56.122852 containerd[2014]: time="2025-08-19T00:13:56.121523434Z" level=info msg="Container 
084dae790eb82f81720a692a35d2cd2f6c5076ffaeb61018e69d1cef9dfc4fd4: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:13:56.138785 containerd[2014]: 2025-08-19 00:13:54.939 [INFO][5029] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--j2skv-eth0 calico-apiserver-6554fc4f99- calico-apiserver 1ee75311-cda6-4d17-b2c7-d27304b8d82e 838 0 2025-08-19 00:13:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6554fc4f99 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-10 calico-apiserver-6554fc4f99-j2skv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib91b273ec18 [] [] }} ContainerID="d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4" Namespace="calico-apiserver" Pod="calico-apiserver-6554fc4f99-j2skv" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--j2skv-" Aug 19 00:13:56.138785 containerd[2014]: 2025-08-19 00:13:54.939 [INFO][5029] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4" Namespace="calico-apiserver" Pod="calico-apiserver-6554fc4f99-j2skv" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--j2skv-eth0" Aug 19 00:13:56.138785 containerd[2014]: 2025-08-19 00:13:55.100 [INFO][5085] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4" HandleID="k8s-pod-network.d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4" Workload="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--j2skv-eth0" Aug 19 00:13:56.138785 containerd[2014]: 2025-08-19 00:13:55.100 [INFO][5085] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4" HandleID="k8s-pod-network.d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4" Workload="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--j2skv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000422750), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-10", "pod":"calico-apiserver-6554fc4f99-j2skv", "timestamp":"2025-08-19 00:13:55.100504533 +0000 UTC"}, Hostname:"ip-172-31-30-10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:13:56.138785 containerd[2014]: 2025-08-19 00:13:55.101 [INFO][5085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:13:56.138785 containerd[2014]: 2025-08-19 00:13:55.600 [INFO][5085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 00:13:56.138785 containerd[2014]: 2025-08-19 00:13:55.600 [INFO][5085] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-10' Aug 19 00:13:56.138785 containerd[2014]: 2025-08-19 00:13:55.731 [INFO][5085] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4" host="ip-172-31-30-10" Aug 19 00:13:56.138785 containerd[2014]: 2025-08-19 00:13:55.769 [INFO][5085] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-10" Aug 19 00:13:56.138785 containerd[2014]: 2025-08-19 00:13:55.820 [INFO][5085] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:56.138785 containerd[2014]: 2025-08-19 00:13:55.836 [INFO][5085] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:56.138785 containerd[2014]: 2025-08-19 00:13:55.854 [INFO][5085] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="ip-172-31-30-10" Aug 19 00:13:56.138785 containerd[2014]: 2025-08-19 00:13:55.855 [INFO][5085] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4" host="ip-172-31-30-10" Aug 19 00:13:56.138785 containerd[2014]: 2025-08-19 00:13:55.867 [INFO][5085] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4 Aug 19 00:13:56.138785 containerd[2014]: 2025-08-19 00:13:55.891 [INFO][5085] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4" host="ip-172-31-30-10" Aug 19 00:13:56.138785 containerd[2014]: 2025-08-19 00:13:55.927 [INFO][5085] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.200/26] block=192.168.82.192/26 handle="k8s-pod-network.d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4" host="ip-172-31-30-10" Aug 19 00:13:56.138785 containerd[2014]: 2025-08-19 00:13:55.927 [INFO][5085] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.200/26] handle="k8s-pod-network.d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4" host="ip-172-31-30-10" Aug 19 00:13:56.138785 containerd[2014]: 2025-08-19 00:13:55.929 [INFO][5085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:13:56.138785 containerd[2014]: 2025-08-19 00:13:55.931 [INFO][5085] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.200/26] IPv6=[] ContainerID="d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4" HandleID="k8s-pod-network.d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4" Workload="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--j2skv-eth0" Aug 19 00:13:56.140119 containerd[2014]: 2025-08-19 00:13:55.977 [INFO][5029] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4" Namespace="calico-apiserver" Pod="calico-apiserver-6554fc4f99-j2skv" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--j2skv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--j2skv-eth0", GenerateName:"calico-apiserver-6554fc4f99-", Namespace:"calico-apiserver", SelfLink:"", UID:"1ee75311-cda6-4d17-b2c7-d27304b8d82e", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6554fc4f99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-10", ContainerID:"", Pod:"calico-apiserver-6554fc4f99-j2skv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib91b273ec18", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:56.140119 containerd[2014]: 2025-08-19 00:13:55.978 [INFO][5029] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.200/32] ContainerID="d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4" Namespace="calico-apiserver" Pod="calico-apiserver-6554fc4f99-j2skv" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--j2skv-eth0" Aug 19 00:13:56.140119 containerd[2014]: 2025-08-19 00:13:55.978 [INFO][5029] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib91b273ec18 ContainerID="d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4" Namespace="calico-apiserver" Pod="calico-apiserver-6554fc4f99-j2skv" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--j2skv-eth0" Aug 19 00:13:56.140119 containerd[2014]: 2025-08-19 00:13:56.045 [INFO][5029] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4" Namespace="calico-apiserver" Pod="calico-apiserver-6554fc4f99-j2skv" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--j2skv-eth0" Aug 19 00:13:56.140119 containerd[2014]: 2025-08-19 00:13:56.046 [INFO][5029] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4" Namespace="calico-apiserver" Pod="calico-apiserver-6554fc4f99-j2skv" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--j2skv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--j2skv-eth0", GenerateName:"calico-apiserver-6554fc4f99-", Namespace:"calico-apiserver", SelfLink:"", UID:"1ee75311-cda6-4d17-b2c7-d27304b8d82e", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6554fc4f99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-10", ContainerID:"d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4", Pod:"calico-apiserver-6554fc4f99-j2skv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib91b273ec18", MAC:"a6:07:90:2f:91:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:56.140119 containerd[2014]: 2025-08-19 00:13:56.102 [INFO][5029] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4" Namespace="calico-apiserver" Pod="calico-apiserver-6554fc4f99-j2skv" WorkloadEndpoint="ip--172--31--30--10-k8s-calico--apiserver--6554fc4f99--j2skv-eth0" Aug 19 00:13:56.149854 systemd[1]: Started cri-containerd-c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2.scope - libcontainer container c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2. 
Aug 19 00:13:56.170114 containerd[2014]: time="2025-08-19T00:13:56.170033650Z" level=info msg="CreateContainer within sandbox \"66fda8daa5b200d0ac2014bdd0d68faa4595707c8f6bd29dfdc59428aef4084e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"084dae790eb82f81720a692a35d2cd2f6c5076ffaeb61018e69d1cef9dfc4fd4\"" Aug 19 00:13:56.173818 containerd[2014]: time="2025-08-19T00:13:56.172675378Z" level=info msg="StartContainer for \"084dae790eb82f81720a692a35d2cd2f6c5076ffaeb61018e69d1cef9dfc4fd4\"" Aug 19 00:13:56.186148 containerd[2014]: time="2025-08-19T00:13:56.186075670Z" level=info msg="connecting to shim 084dae790eb82f81720a692a35d2cd2f6c5076ffaeb61018e69d1cef9dfc4fd4" address="unix:///run/containerd/s/78ceb198ed865b0acee549ea139e0b5f58223b49572d81a025699c48e35d61f9" protocol=ttrpc version=3 Aug 19 00:13:56.242635 containerd[2014]: time="2025-08-19T00:13:56.242546231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2j9hm,Uid:0457175a-30ff-4c6f-8d8a-6f85a6adf32c,Namespace:kube-system,Attempt:0,} returns sandbox id \"db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b\"" Aug 19 00:13:56.260966 containerd[2014]: time="2025-08-19T00:13:56.260191463Z" level=info msg="CreateContainer within sandbox \"db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 00:13:56.331348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3769628683.mount: Deactivated successfully. Aug 19 00:13:56.349679 containerd[2014]: time="2025-08-19T00:13:56.348671363Z" level=info msg="Container 56ec874884f45354fbfd11cfdd728fee2c5066d68228b13665a361ed88e5b48d: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:13:56.360978 containerd[2014]: time="2025-08-19T00:13:56.360134591Z" level=info msg="connecting to shim d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4" address="unix:///run/containerd/s/da869786147392e2e05d2f2f7716f9fe2d2d388969879975e9eb89633be2eea8" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:56.381878 systemd[1]: Started cri-containerd-084dae790eb82f81720a692a35d2cd2f6c5076ffaeb61018e69d1cef9dfc4fd4.scope - libcontainer container 084dae790eb82f81720a692a35d2cd2f6c5076ffaeb61018e69d1cef9dfc4fd4. Aug 19 00:13:56.394691 containerd[2014]: time="2025-08-19T00:13:56.393097091Z" level=info msg="CreateContainer within sandbox \"db4316837ab2d28df20da6f9f4c514657addef75ffb611bccb8177532c2f307b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"56ec874884f45354fbfd11cfdd728fee2c5066d68228b13665a361ed88e5b48d\"" Aug 19 00:13:56.398508 containerd[2014]: time="2025-08-19T00:13:56.396062519Z" level=info msg="StartContainer for \"56ec874884f45354fbfd11cfdd728fee2c5066d68228b13665a361ed88e5b48d\"" Aug 19 00:13:56.414482 containerd[2014]: time="2025-08-19T00:13:56.414362555Z" level=info msg="connecting to shim 56ec874884f45354fbfd11cfdd728fee2c5066d68228b13665a361ed88e5b48d" address="unix:///run/containerd/s/77bd0b943bb89c11fa5b8f6f0884f055e457350de33bc5d8d28a54c4af50e3e4" protocol=ttrpc version=3 Aug 19 00:13:56.474085 systemd[1]: Started cri-containerd-d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4.scope - libcontainer container d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4. 
Aug 19 00:13:56.499779 systemd-networkd[1892]: cali0a4721b07dc: Gained IPv6LL Aug 19 00:13:56.563786 systemd[1]: Started cri-containerd-56ec874884f45354fbfd11cfdd728fee2c5066d68228b13665a361ed88e5b48d.scope - libcontainer container 56ec874884f45354fbfd11cfdd728fee2c5066d68228b13665a361ed88e5b48d. Aug 19 00:13:56.603325 containerd[2014]: time="2025-08-19T00:13:56.603044076Z" level=info msg="StartContainer for \"084dae790eb82f81720a692a35d2cd2f6c5076ffaeb61018e69d1cef9dfc4fd4\" returns successfully" Aug 19 00:13:56.756149 containerd[2014]: time="2025-08-19T00:13:56.755142841Z" level=info msg="StartContainer for \"56ec874884f45354fbfd11cfdd728fee2c5066d68228b13665a361ed88e5b48d\" returns successfully" Aug 19 00:13:56.809669 containerd[2014]: time="2025-08-19T00:13:56.809596273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85bbd4d66-rjr5q,Uid:8766ae7b-04e2-43bb-8e63-39b0bb1900db,Namespace:calico-system,Attempt:0,} returns sandbox id \"c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2\"" Aug 19 00:13:56.883665 systemd-networkd[1892]: cali5fbf945e879: Gained IPv6LL Aug 19 00:13:56.947669 systemd-networkd[1892]: calid3b6bf4cc9b: Gained IPv6LL Aug 19 00:13:57.112033 containerd[2014]: time="2025-08-19T00:13:57.111877103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6554fc4f99-j2skv,Uid:1ee75311-cda6-4d17-b2c7-d27304b8d82e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4\"" Aug 19 00:13:57.133842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount673808021.mount: Deactivated successfully. Aug 19 00:13:57.413691 kubelet[3438]: I0819 00:13:57.413503 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4kcg8" podStartSLOduration=47.413476764 podStartE2EDuration="47.413476764s" podCreationTimestamp="2025-08-19 00:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:13:57.319909248 +0000 UTC m=+52.792037927" watchObservedRunningTime="2025-08-19 00:13:57.413476764 +0000 UTC m=+52.885605383" Aug 19 00:13:57.552221 kubelet[3438]: I0819 00:13:57.552105 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2j9hm" podStartSLOduration=47.552081481 podStartE2EDuration="47.552081481s" podCreationTimestamp="2025-08-19 00:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:13:57.530429881 +0000 UTC m=+53.002558524" watchObservedRunningTime="2025-08-19 00:13:57.552081481 +0000 UTC m=+53.024210100" Aug 19 00:13:57.843796 systemd-networkd[1892]: calib91b273ec18: Gained IPv6LL Aug 19 00:13:59.123672 containerd[2014]: time="2025-08-19T00:13:59.123449881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:59.144205 containerd[2014]: time="2025-08-19T00:13:59.144131845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Aug 19 00:13:59.183412 containerd[2014]: time="2025-08-19T00:13:59.182942005Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
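The pod_startup_latency_tracker entries above report podStartSLOduration as the gap between podCreationTimestamp and observedRunningTime. A quick sketch that reproduces the 47.413476764s figure for coredns-7c65d6cfc9-4kcg8 from the two timestamps in the log (the trailing monotonic clock reading is dropped before parsing):

```go
// Sketch of the arithmetic behind kubelet's "Observed pod startup duration":
// podStartSLOduration is roughly observedRunningTime - podCreationTimestamp.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-08-19 00:13:10 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-08-19 00:13:57.413476764 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println("startup duration:", running.Sub(created)) // 47.413476764s, matching the log
}
```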
Aug 19 00:13:59.230083 containerd[2014]: time="2025-08-19T00:13:59.230024449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:59.231944 containerd[2014]: time="2025-08-19T00:13:59.231874741Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 5.176755217s" Aug 19 00:13:59.232183 containerd[2014]: time="2025-08-19T00:13:59.232115797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 19 00:13:59.236191 containerd[2014]: time="2025-08-19T00:13:59.236107909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 19 00:13:59.239410 containerd[2014]: time="2025-08-19T00:13:59.239317249Z" level=info msg="CreateContainer within sandbox \"fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 19 00:13:59.344416 containerd[2014]: time="2025-08-19T00:13:59.341654294Z" level=info msg="Container 004d3f06a00ea986d8cf04b686a0c19551494173969ab4efab7ccda313011901: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:13:59.359208 containerd[2014]: time="2025-08-19T00:13:59.359156774Z" level=info msg="CreateContainer within sandbox \"fa733285b98f49efb3a7014e88fc88e58a984335219f8f5b7b91ddd4f45b95cf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"004d3f06a00ea986d8cf04b686a0c19551494173969ab4efab7ccda313011901\"" Aug 19 00:13:59.361853 containerd[2014]: time="2025-08-19T00:13:59.361781642Z" level=info msg="StartContainer for \"004d3f06a00ea986d8cf04b686a0c19551494173969ab4efab7ccda313011901\"" Aug 19 00:13:59.365788 containerd[2014]: time="2025-08-19T00:13:59.365687426Z" level=info msg="connecting to shim 004d3f06a00ea986d8cf04b686a0c19551494173969ab4efab7ccda313011901" address="unix:///run/containerd/s/47778ee38f698cd7a934b9f38501dfbfd682a23842f241aa28cb87bd1bedd28a" protocol=ttrpc version=3 Aug 19 00:13:59.407748 systemd[1]: Started cri-containerd-004d3f06a00ea986d8cf04b686a0c19551494173969ab4efab7ccda313011901.scope - libcontainer container 004d3f06a00ea986d8cf04b686a0c19551494173969ab4efab7ccda313011901. 
Aug 19 00:13:59.490236 containerd[2014]: time="2025-08-19T00:13:59.490190619Z" level=info msg="StartContainer for \"004d3f06a00ea986d8cf04b686a0c19551494173969ab4efab7ccda313011901\" returns successfully" Aug 19 00:14:00.330267 kubelet[3438]: I0819 00:14:00.329134 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6554fc4f99-fnmld" podStartSLOduration=30.664294271 podStartE2EDuration="36.329107647s" podCreationTimestamp="2025-08-19 00:13:24 +0000 UTC" firstStartedPulling="2025-08-19 00:13:53.569841453 +0000 UTC m=+49.041970060" lastFinishedPulling="2025-08-19 00:13:59.234654805 +0000 UTC m=+54.706783436" observedRunningTime="2025-08-19 00:14:00.327207495 +0000 UTC m=+55.799336126" watchObservedRunningTime="2025-08-19 00:14:00.329107647 +0000 UTC m=+55.801236446" Aug 19 00:14:00.412941 ntpd[1979]: Listen normally on 7 vxlan.calico 192.168.82.192:123 Aug 19 00:14:00.415998 ntpd[1979]: 19 Aug 00:14:00 ntpd[1979]: Listen normally on 7 vxlan.calico 192.168.82.192:123 Aug 19 00:14:00.415998 ntpd[1979]: 19 Aug 00:14:00 ntpd[1979]: Listen normally on 8 cali0dd7e9dcabd [fe80::ecee:eeff:feee:eeee%4]:123 Aug 19 00:14:00.415998 ntpd[1979]: 19 Aug 00:14:00 ntpd[1979]: Listen normally on 9 cali0eefa38b5c0 [fe80::ecee:eeff:feee:eeee%5]:123 Aug 19 00:14:00.415998 ntpd[1979]: 19 Aug 00:14:00 ntpd[1979]: Listen normally on 10 cali3d450a980e3 [fe80::ecee:eeff:feee:eeee%6]:123 Aug 19 00:14:00.415998 ntpd[1979]: 19 Aug 00:14:00 ntpd[1979]: Listen normally on 11 vxlan.calico [fe80::641b:38ff:fe6d:8952%7]:123 Aug 19 00:14:00.415998 ntpd[1979]: 19 Aug 00:14:00 ntpd[1979]: Listen normally on 12 calie6fd9d1279f [fe80::ecee:eeff:feee:eeee%10]:123 Aug 19 00:14:00.415998 ntpd[1979]: 19 Aug 00:14:00 ntpd[1979]: Listen normally on 13 cali0a4721b07dc [fe80::ecee:eeff:feee:eeee%11]:123 Aug 19 00:14:00.415998 ntpd[1979]: 19 Aug 00:14:00 ntpd[1979]: Listen normally on 14 cali5fbf945e879 [fe80::ecee:eeff:feee:eeee%12]:123 Aug 19 00:14:00.415998 ntpd[1979]: 19 Aug 00:14:00 ntpd[1979]: Listen normally on 15 calid3b6bf4cc9b [fe80::ecee:eeff:feee:eeee%13]:123 Aug 19 00:14:00.415998 ntpd[1979]: 19 Aug 00:14:00 ntpd[1979]: Listen normally on 16 calib91b273ec18 [fe80::ecee:eeff:feee:eeee%14]:123 Aug 19 00:14:00.414573 ntpd[1979]: Listen normally on 8 cali0dd7e9dcabd [fe80::ecee:eeff:feee:eeee%4]:123 Aug 19 00:14:00.414651 ntpd[1979]: Listen normally on 9 cali0eefa38b5c0 [fe80::ecee:eeff:feee:eeee%5]:123 Aug 19 00:14:00.414716 ntpd[1979]: Listen normally on 10 cali3d450a980e3 [fe80::ecee:eeff:feee:eeee%6]:123 Aug 19 00:14:00.414779 ntpd[1979]: Listen normally on 11 vxlan.calico [fe80::641b:38ff:fe6d:8952%7]:123 Aug 19 00:14:00.414850 ntpd[1979]: Listen normally on 12 calie6fd9d1279f [fe80::ecee:eeff:feee:eeee%10]:123 Aug 19 00:14:00.414911 ntpd[1979]: Listen normally on 13 cali0a4721b07dc [fe80::ecee:eeff:feee:eeee%11]:123 Aug 19 00:14:00.414972 ntpd[1979]: Listen normally on 14 cali5fbf945e879 [fe80::ecee:eeff:feee:eeee%12]:123 Aug 19 00:14:00.415032 ntpd[1979]: Listen normally on 15 calid3b6bf4cc9b [fe80::ecee:eeff:feee:eeee%13]:123 Aug 19 00:14:00.415097 ntpd[1979]: Listen normally on 16 calib91b273ec18 [fe80::ecee:eeff:feee:eeee%14]:123 Aug 19 00:14:01.306400 kubelet[3438]: I0819 00:14:01.306303 3438 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:14:02.130959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1376057142.mount: Deactivated successfully. 
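The ntpd entries show it binding port 123 on the link-local IPv6 address of each calico interface; the %4, %11 and similar suffixes are interface zones. A short sketch of how such scoped literals are handled in Go, using the interface name as the zone instead of the numeric index shown in the log (both forms are accepted):

```go
// Parse a zoned link-local IPv6 literal like the ones ntpd listens on above.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	addr, err := netip.ParseAddr("fe80::ecee:eeff:feee:eeee%calib91b273ec18")
	if err != nil {
		panic(err)
	}
	fmt.Println("link-local:", addr.IsLinkLocalUnicast()) // true
	fmt.Println("zone:", addr.Zone())                     // calib91b273ec18
	// ntpd listens on port 123 on each of these addresses:
	fmt.Println(netip.AddrPortFrom(addr, 123))
}
```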
Aug 19 00:14:03.276395 containerd[2014]: time="2025-08-19T00:14:03.275888789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:03.279162 containerd[2014]: time="2025-08-19T00:14:03.279095057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Aug 19 00:14:03.281409 containerd[2014]: time="2025-08-19T00:14:03.281105441Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:03.288402 containerd[2014]: time="2025-08-19T00:14:03.288276054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:03.291151 containerd[2014]: time="2025-08-19T00:14:03.290675190Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 4.054288509s" Aug 19 00:14:03.291151 containerd[2014]: time="2025-08-19T00:14:03.290767050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Aug 19 00:14:03.293894 containerd[2014]: time="2025-08-19T00:14:03.293769078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 19 00:14:03.296464 containerd[2014]: time="2025-08-19T00:14:03.296020782Z" level=info msg="CreateContainer within sandbox \"a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 19 00:14:03.314402 containerd[2014]: time="2025-08-19T00:14:03.313005534Z" level=info msg="Container e69b4af60e960e12e2940bc31fe6b416992a0ce4d07e35861365c32ddd62be85: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:14:03.338423 containerd[2014]: time="2025-08-19T00:14:03.338323266Z" level=info msg="CreateContainer within sandbox \"a1f6a70d13941d49359541966bfd270404f64d331746fa12823d1ce88f6f8d60\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e69b4af60e960e12e2940bc31fe6b416992a0ce4d07e35861365c32ddd62be85\"" Aug 19 00:14:03.340097 containerd[2014]: time="2025-08-19T00:14:03.339958710Z" level=info msg="StartContainer for \"e69b4af60e960e12e2940bc31fe6b416992a0ce4d07e35861365c32ddd62be85\"" Aug 19 00:14:03.345630 containerd[2014]: time="2025-08-19T00:14:03.345514230Z" level=info msg="connecting to shim e69b4af60e960e12e2940bc31fe6b416992a0ce4d07e35861365c32ddd62be85" address="unix:///run/containerd/s/f92ea6a45d92d10ca987af9d80e752e2b7923000d9cd11a0230a87bf51828269" protocol=ttrpc version=3 Aug 19 00:14:03.430694 systemd[1]: Started cri-containerd-e69b4af60e960e12e2940bc31fe6b416992a0ce4d07e35861365c32ddd62be85.scope - libcontainer container e69b4af60e960e12e2940bc31fe6b416992a0ce4d07e35861365c32ddd62be85. 
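Each pull is logged with a repo tag, a repo digest and a local config blob id. A tiny sketch that splits the goldmane repo digest reference from the log into repository and digest with plain string handling; containerd uses a proper reference parser for this, so treat it only as an illustration of the format:

```go
// Split an image reference of the form repo@sha256:<digest> into its parts.
package main

import (
	"fmt"
	"strings"
)

func main() {
	ref := "ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305"
	repo, digest, ok := strings.Cut(ref, "@")
	if !ok {
		fmt.Println("not a digest reference")
		return
	}
	fmt.Println("repository:", repo)
	fmt.Println("digest:    ", digest)
}
```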
Aug 19 00:14:03.514565 containerd[2014]: time="2025-08-19T00:14:03.514426495Z" level=info msg="StartContainer for \"e69b4af60e960e12e2940bc31fe6b416992a0ce4d07e35861365c32ddd62be85\" returns successfully" Aug 19 00:14:04.369069 kubelet[3438]: I0819 00:14:04.366814 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-8jbzv" podStartSLOduration=24.04098788 podStartE2EDuration="33.366788119s" podCreationTimestamp="2025-08-19 00:13:31 +0000 UTC" firstStartedPulling="2025-08-19 00:13:53.967442867 +0000 UTC m=+49.439571486" lastFinishedPulling="2025-08-19 00:14:03.293243034 +0000 UTC m=+58.765371725" observedRunningTime="2025-08-19 00:14:04.363331555 +0000 UTC m=+59.835460198" watchObservedRunningTime="2025-08-19 00:14:04.366788119 +0000 UTC m=+59.838916750" Aug 19 00:14:04.550181 containerd[2014]: time="2025-08-19T00:14:04.549950912Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e69b4af60e960e12e2940bc31fe6b416992a0ce4d07e35861365c32ddd62be85\" id:\"5c9ccb3c5394bdfa5daab67414c3fbe162d53756bd4ef2b12e1dfc612f016a7b\" pid:5515 exit_status:1 exited_at:{seconds:1755562444 nanos:546180968}" Aug 19 00:14:04.730759 containerd[2014]: time="2025-08-19T00:14:04.730167753Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e31d3495aedbecf436aecc51a613907be93946c1e4bd9eb9eafefb720888f66f\" id:\"24a183def27d87db555d9c75ce502e54236039c816a115a0fd43e9746fa58c90\" pid:5539 exited_at:{seconds:1755562444 nanos:729574425}" Aug 19 00:14:05.467136 containerd[2014]: time="2025-08-19T00:14:05.467068892Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e69b4af60e960e12e2940bc31fe6b416992a0ce4d07e35861365c32ddd62be85\" id:\"41b6ffe7e9dc2dcbdea9e5cce48a1e6ae26421e21421818b6342005d1bad5412\" pid:5570 exit_status:1 exited_at:{seconds:1755562445 nanos:466278176}" Aug 19 00:14:07.689742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1611911274.mount: Deactivated successfully. 
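The TaskExit events carry exited_at as seconds and nanoseconds since the Unix epoch rather than a formatted time. Converting the first value above back to UTC lines up with the surrounding journal timestamps, as this sketch shows:

```go
// Turn a TaskExit exited_at:{seconds, nanos} pair from the log into UTC time.
package main

import (
	"fmt"
	"time"
)

func main() {
	exitedAt := time.Unix(1755562444, 546180968).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-08-19T00:14:04.546180968Z
}
```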
Aug 19 00:14:07.725802 containerd[2014]: time="2025-08-19T00:14:07.725727684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:07.728291 containerd[2014]: time="2025-08-19T00:14:07.728219748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 19 00:14:07.730346 containerd[2014]: time="2025-08-19T00:14:07.730218060Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:07.740843 containerd[2014]: time="2025-08-19T00:14:07.740748084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:07.743407 containerd[2014]: time="2025-08-19T00:14:07.743110260Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 4.449276718s" Aug 19 00:14:07.743407 containerd[2014]: time="2025-08-19T00:14:07.743174412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 19 00:14:07.748500 containerd[2014]: time="2025-08-19T00:14:07.748402788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 19 00:14:07.750258 containerd[2014]: time="2025-08-19T00:14:07.750192300Z" level=info msg="CreateContainer within sandbox \"0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 19 00:14:07.777735 containerd[2014]: time="2025-08-19T00:14:07.777664980Z" level=info msg="Container 41a5e57b21dac4b45bc0d287b71ba28d13213222a3d516b623435131a3deb6a8: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:14:07.803072 containerd[2014]: time="2025-08-19T00:14:07.802921020Z" level=info msg="CreateContainer within sandbox \"0822ab8779b873a9cdb5707933a1f99891c465291ed0c8cf1948a72eaed851fa\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"41a5e57b21dac4b45bc0d287b71ba28d13213222a3d516b623435131a3deb6a8\"" Aug 19 00:14:07.804178 containerd[2014]: time="2025-08-19T00:14:07.804117660Z" level=info msg="StartContainer for \"41a5e57b21dac4b45bc0d287b71ba28d13213222a3d516b623435131a3deb6a8\"" Aug 19 00:14:07.808520 containerd[2014]: time="2025-08-19T00:14:07.808449828Z" level=info msg="connecting to shim 41a5e57b21dac4b45bc0d287b71ba28d13213222a3d516b623435131a3deb6a8" address="unix:///run/containerd/s/344acfef63e52f64b454cb406db9b6baf1ae476b9359f4aab68875e7d4b5d812" protocol=ttrpc version=3 Aug 19 00:14:07.902274 systemd[1]: Started cri-containerd-41a5e57b21dac4b45bc0d287b71ba28d13213222a3d516b623435131a3deb6a8.scope - libcontainer container 41a5e57b21dac4b45bc0d287b71ba28d13213222a3d516b623435131a3deb6a8. 
Aug 19 00:14:08.059063 containerd[2014]: time="2025-08-19T00:14:08.058784457Z" level=info msg="StartContainer for \"41a5e57b21dac4b45bc0d287b71ba28d13213222a3d516b623435131a3deb6a8\" returns successfully" Aug 19 00:14:08.576022 systemd[1]: Started sshd@9-172.31.30.10:22-147.75.109.163:49342.service - OpenSSH per-connection server daemon (147.75.109.163:49342). Aug 19 00:14:08.794586 sshd[5634]: Accepted publickey for core from 147.75.109.163 port 49342 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:08.799544 sshd-session[5634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:08.815408 systemd-logind[1986]: New session 10 of user core. Aug 19 00:14:08.823419 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 19 00:14:09.273456 sshd[5637]: Connection closed by 147.75.109.163 port 49342 Aug 19 00:14:09.277281 sshd-session[5634]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:09.290350 systemd[1]: sshd@9-172.31.30.10:22-147.75.109.163:49342.service: Deactivated successfully. Aug 19 00:14:09.302228 systemd[1]: session-10.scope: Deactivated successfully. Aug 19 00:14:09.309159 systemd-logind[1986]: Session 10 logged out. Waiting for processes to exit. Aug 19 00:14:09.314454 systemd-logind[1986]: Removed session 10. Aug 19 00:14:09.486510 containerd[2014]: time="2025-08-19T00:14:09.486430884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:09.489452 containerd[2014]: time="2025-08-19T00:14:09.488585604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Aug 19 00:14:09.493404 containerd[2014]: time="2025-08-19T00:14:09.491283312Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:09.501402 containerd[2014]: time="2025-08-19T00:14:09.500791164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:09.509700 containerd[2014]: time="2025-08-19T00:14:09.509539920Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.761069188s" Aug 19 00:14:09.509700 containerd[2014]: time="2025-08-19T00:14:09.509636352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Aug 19 00:14:09.513672 containerd[2014]: time="2025-08-19T00:14:09.513303384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 19 00:14:09.517548 containerd[2014]: time="2025-08-19T00:14:09.517495032Z" level=info msg="CreateContainer within sandbox \"2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 19 00:14:09.572486 containerd[2014]: time="2025-08-19T00:14:09.572133481Z" level=info msg="Container 0849e84bb90b2169dafc5514c8ed9b133405e7f86983a18a88ef0208702e7723: CDI devices 
from CRI Config.CDIDevices: []" Aug 19 00:14:09.601510 containerd[2014]: time="2025-08-19T00:14:09.601429261Z" level=info msg="CreateContainer within sandbox \"2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0849e84bb90b2169dafc5514c8ed9b133405e7f86983a18a88ef0208702e7723\"" Aug 19 00:14:09.603897 containerd[2014]: time="2025-08-19T00:14:09.603839161Z" level=info msg="StartContainer for \"0849e84bb90b2169dafc5514c8ed9b133405e7f86983a18a88ef0208702e7723\"" Aug 19 00:14:09.609930 containerd[2014]: time="2025-08-19T00:14:09.609856105Z" level=info msg="connecting to shim 0849e84bb90b2169dafc5514c8ed9b133405e7f86983a18a88ef0208702e7723" address="unix:///run/containerd/s/cd9a301fd05ce66d8acd26354b192da6311d796ff54b9cafc74ad7e3cda2b746" protocol=ttrpc version=3 Aug 19 00:14:09.666645 systemd[1]: Started cri-containerd-0849e84bb90b2169dafc5514c8ed9b133405e7f86983a18a88ef0208702e7723.scope - libcontainer container 0849e84bb90b2169dafc5514c8ed9b133405e7f86983a18a88ef0208702e7723. Aug 19 00:14:09.896643 containerd[2014]: time="2025-08-19T00:14:09.896590334Z" level=info msg="StartContainer for \"0849e84bb90b2169dafc5514c8ed9b133405e7f86983a18a88ef0208702e7723\" returns successfully" Aug 19 00:14:11.010734 containerd[2014]: time="2025-08-19T00:14:11.010651452Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e69b4af60e960e12e2940bc31fe6b416992a0ce4d07e35861365c32ddd62be85\" id:\"81e6ee734c4063e0ce51c9d82064713cb8b182c2cfc1af899f0711680ee95679\" pid:5699 exit_status:1 exited_at:{seconds:1755562451 nanos:10193712}" Aug 19 00:14:14.325032 systemd[1]: Started sshd@10-172.31.30.10:22-147.75.109.163:49348.service - OpenSSH per-connection server daemon (147.75.109.163:49348). Aug 19 00:14:14.626026 sshd[5716]: Accepted publickey for core from 147.75.109.163 port 49348 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:14.632988 sshd-session[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:14.647885 systemd-logind[1986]: New session 11 of user core. Aug 19 00:14:14.653740 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 19 00:14:15.136866 sshd[5719]: Connection closed by 147.75.109.163 port 49348 Aug 19 00:14:15.140716 sshd-session[5716]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:15.152265 systemd[1]: sshd@10-172.31.30.10:22-147.75.109.163:49348.service: Deactivated successfully. Aug 19 00:14:15.165093 systemd[1]: session-11.scope: Deactivated successfully. Aug 19 00:14:15.168900 systemd-logind[1986]: Session 11 logged out. Waiting for processes to exit. Aug 19 00:14:15.173942 systemd-logind[1986]: Removed session 11. 
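The sshd entries repeat one lifecycle per connection: Accepted publickey, pam_unix session open, a systemd session scope, then close and teardown. A hypothetical helper, written for exactly the line format shown above, that pulls the user, source address, port and key fingerprint out of the Accepted publickey line:

```go
// Extract fields from an sshd "Accepted publickey" line as it appears in this log.
package main

import (
	"fmt"
	"regexp"
)

var acceptedRe = regexp.MustCompile(`Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: (\S+) (\S+)`)

func main() {
	line := `Accepted publickey for core from 147.75.109.163 port 49348 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4`
	m := acceptedRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("user=%s addr=%s port=%s keytype=%s fingerprint=%s\n", m[1], m[2], m[3], m[4], m[5])
}
```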
Aug 19 00:14:15.817767 containerd[2014]: time="2025-08-19T00:14:15.817695332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:15.821266 containerd[2014]: time="2025-08-19T00:14:15.821190668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Aug 19 00:14:15.825412 containerd[2014]: time="2025-08-19T00:14:15.824013980Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:15.834137 containerd[2014]: time="2025-08-19T00:14:15.834067268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:15.837265 containerd[2014]: time="2025-08-19T00:14:15.835812320Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 6.322449416s" Aug 19 00:14:15.837523 containerd[2014]: time="2025-08-19T00:14:15.837490748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 19 00:14:15.840694 containerd[2014]: time="2025-08-19T00:14:15.840552692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 19 00:14:15.886779 containerd[2014]: time="2025-08-19T00:14:15.886676240Z" level=info msg="CreateContainer within sandbox \"c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 19 00:14:15.928730 containerd[2014]: time="2025-08-19T00:14:15.928670696Z" level=info msg="Container ccbd2fe81971b866618a0c896636eedbd02a8e062ebedd2d67a0087842e5cbcf: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:14:15.952047 containerd[2014]: time="2025-08-19T00:14:15.951984740Z" level=info msg="CreateContainer within sandbox \"c887fd74ba078e4351fc7ceab63372e8d3ca3db7bf903d8662e50b1e194feaf2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ccbd2fe81971b866618a0c896636eedbd02a8e062ebedd2d67a0087842e5cbcf\"" Aug 19 00:14:15.954328 containerd[2014]: time="2025-08-19T00:14:15.954261584Z" level=info msg="StartContainer for \"ccbd2fe81971b866618a0c896636eedbd02a8e062ebedd2d67a0087842e5cbcf\"" Aug 19 00:14:15.961083 containerd[2014]: time="2025-08-19T00:14:15.960891884Z" level=info msg="connecting to shim ccbd2fe81971b866618a0c896636eedbd02a8e062ebedd2d67a0087842e5cbcf" address="unix:///run/containerd/s/2d18c1f80056ff20e0bee2f478a2c4a7404bf1d76a5063723e465b8997c5cba4" protocol=ttrpc version=3 Aug 19 00:14:16.034320 systemd[1]: Started cri-containerd-ccbd2fe81971b866618a0c896636eedbd02a8e062ebedd2d67a0087842e5cbcf.scope - libcontainer container ccbd2fe81971b866618a0c896636eedbd02a8e062ebedd2d67a0087842e5cbcf. 
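This pull reports 48128336 bytes read over 6.322449416s; the quoted image "size" is a separate figure, so the bytes-read number is the better basis for a rough transfer-rate estimate. A small sketch of that arithmetic:

```go
// Rough effective transfer rate for the kube-controllers pull logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 48128336
	dur, err := time.ParseDuration("6.322449416s")
	if err != nil {
		panic(err)
	}
	mibps := float64(bytesRead) / dur.Seconds() / (1 << 20)
	fmt.Printf("~%.1f MiB/s\n", mibps) // about 7.3 MiB/s
}
```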
Aug 19 00:14:16.216497 containerd[2014]: time="2025-08-19T00:14:16.216237294Z" level=info msg="StartContainer for \"ccbd2fe81971b866618a0c896636eedbd02a8e062ebedd2d67a0087842e5cbcf\" returns successfully" Aug 19 00:14:16.244288 containerd[2014]: time="2025-08-19T00:14:16.243713718Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:16.244985 containerd[2014]: time="2025-08-19T00:14:16.244913022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 19 00:14:16.263997 containerd[2014]: time="2025-08-19T00:14:16.263899002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 422.537774ms" Aug 19 00:14:16.264321 containerd[2014]: time="2025-08-19T00:14:16.264262350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 19 00:14:16.271410 containerd[2014]: time="2025-08-19T00:14:16.271301166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 19 00:14:16.272109 containerd[2014]: time="2025-08-19T00:14:16.272059350Z" level=info msg="CreateContainer within sandbox \"d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 19 00:14:16.292119 containerd[2014]: time="2025-08-19T00:14:16.292052274Z" level=info msg="Container 8fc4eed100542d2c03862990413f5949e99953344ac98bf300622615c761a147: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:14:16.325399 containerd[2014]: time="2025-08-19T00:14:16.324144834Z" level=info msg="CreateContainer within sandbox \"d414405e11b72c78b82dbb118db1bf3389a4ef63105eecf2e9206d4f0a9bbbc4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8fc4eed100542d2c03862990413f5949e99953344ac98bf300622615c761a147\"" Aug 19 00:14:16.326816 containerd[2014]: time="2025-08-19T00:14:16.326750778Z" level=info msg="StartContainer for \"8fc4eed100542d2c03862990413f5949e99953344ac98bf300622615c761a147\"" Aug 19 00:14:16.337128 containerd[2014]: time="2025-08-19T00:14:16.336959574Z" level=info msg="connecting to shim 8fc4eed100542d2c03862990413f5949e99953344ac98bf300622615c761a147" address="unix:///run/containerd/s/da869786147392e2e05d2f2f7716f9fe2d2d388969879975e9eb89633be2eea8" protocol=ttrpc version=3 Aug 19 00:14:16.390561 systemd[1]: Started cri-containerd-8fc4eed100542d2c03862990413f5949e99953344ac98bf300622615c761a147.scope - libcontainer container 8fc4eed100542d2c03862990413f5949e99953344ac98bf300622615c761a147. 
Aug 19 00:14:16.473288 kubelet[3438]: I0819 00:14:16.472425 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-85bbd4d66-rjr5q" podStartSLOduration=26.450911292 podStartE2EDuration="45.471733123s" podCreationTimestamp="2025-08-19 00:13:31 +0000 UTC" firstStartedPulling="2025-08-19 00:13:56.819205321 +0000 UTC m=+52.291333928" lastFinishedPulling="2025-08-19 00:14:15.840027056 +0000 UTC m=+71.312155759" observedRunningTime="2025-08-19 00:14:16.467901763 +0000 UTC m=+71.940030382" watchObservedRunningTime="2025-08-19 00:14:16.471733123 +0000 UTC m=+71.943861754" Aug 19 00:14:16.475987 kubelet[3438]: I0819 00:14:16.474000 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-57d577bcbf-lq2p8" podStartSLOduration=9.803019329 podStartE2EDuration="25.473974243s" podCreationTimestamp="2025-08-19 00:13:51 +0000 UTC" firstStartedPulling="2025-08-19 00:13:52.075556986 +0000 UTC m=+47.547685605" lastFinishedPulling="2025-08-19 00:14:07.746511888 +0000 UTC m=+63.218640519" observedRunningTime="2025-08-19 00:14:08.394509455 +0000 UTC m=+63.866638182" watchObservedRunningTime="2025-08-19 00:14:16.473974243 +0000 UTC m=+71.946102934" Aug 19 00:14:16.565407 containerd[2014]: time="2025-08-19T00:14:16.563988379Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccbd2fe81971b866618a0c896636eedbd02a8e062ebedd2d67a0087842e5cbcf\" id:\"01428d80104731188295b23d797c85beece0ddc66b1d8a36081f2e7927b70408\" pid:5810 exit_status:1 exited_at:{seconds:1755562456 nanos:563506111}" Aug 19 00:14:16.656991 containerd[2014]: time="2025-08-19T00:14:16.656908628Z" level=info msg="StartContainer for \"8fc4eed100542d2c03862990413f5949e99953344ac98bf300622615c761a147\" returns successfully" Aug 19 00:14:17.702801 containerd[2014]: time="2025-08-19T00:14:17.702713781Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccbd2fe81971b866618a0c896636eedbd02a8e062ebedd2d67a0087842e5cbcf\" id:\"2a97a537ee27b072152018b5bb5296773aef19124431cc5ab27d3d16d158c6fd\" pid:5852 exited_at:{seconds:1755562457 nanos:701810613}" Aug 19 00:14:17.806931 kubelet[3438]: I0819 00:14:17.806838 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6554fc4f99-j2skv" podStartSLOduration=34.654837135 podStartE2EDuration="53.806814466s" podCreationTimestamp="2025-08-19 00:13:24 +0000 UTC" firstStartedPulling="2025-08-19 00:13:57.114343427 +0000 UTC m=+52.586472046" lastFinishedPulling="2025-08-19 00:14:16.266320686 +0000 UTC m=+71.738449377" observedRunningTime="2025-08-19 00:14:17.495536228 +0000 UTC m=+72.967664859" watchObservedRunningTime="2025-08-19 00:14:17.806814466 +0000 UTC m=+73.278943085" Aug 19 00:14:18.206624 containerd[2014]: time="2025-08-19T00:14:18.206509688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:18.209179 containerd[2014]: time="2025-08-19T00:14:18.208883156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Aug 19 00:14:18.210739 containerd[2014]: time="2025-08-19T00:14:18.210685208Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:18.216401 containerd[2014]: 
time="2025-08-19T00:14:18.216099176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:18.218409 containerd[2014]: time="2025-08-19T00:14:18.218288888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.946917594s" Aug 19 00:14:18.218709 containerd[2014]: time="2025-08-19T00:14:18.218665352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Aug 19 00:14:18.226816 containerd[2014]: time="2025-08-19T00:14:18.226763900Z" level=info msg="CreateContainer within sandbox \"2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 19 00:14:18.250861 containerd[2014]: time="2025-08-19T00:14:18.250790228Z" level=info msg="Container 13a67bfef811b824ba817ce728c988e2408548d0bf4592679f022bd2eff3961e: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:14:18.272647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4190628321.mount: Deactivated successfully. Aug 19 00:14:18.283125 containerd[2014]: time="2025-08-19T00:14:18.283052372Z" level=info msg="CreateContainer within sandbox \"2dea522ee5e4d29999afab84ea93dc8da4087bdf1dba52d96e9944340b06d886\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"13a67bfef811b824ba817ce728c988e2408548d0bf4592679f022bd2eff3961e\"" Aug 19 00:14:18.284727 containerd[2014]: time="2025-08-19T00:14:18.284510420Z" level=info msg="StartContainer for \"13a67bfef811b824ba817ce728c988e2408548d0bf4592679f022bd2eff3961e\"" Aug 19 00:14:18.291680 containerd[2014]: time="2025-08-19T00:14:18.291610628Z" level=info msg="connecting to shim 13a67bfef811b824ba817ce728c988e2408548d0bf4592679f022bd2eff3961e" address="unix:///run/containerd/s/cd9a301fd05ce66d8acd26354b192da6311d796ff54b9cafc74ad7e3cda2b746" protocol=ttrpc version=3 Aug 19 00:14:18.362011 systemd[1]: Started cri-containerd-13a67bfef811b824ba817ce728c988e2408548d0bf4592679f022bd2eff3961e.scope - libcontainer container 13a67bfef811b824ba817ce728c988e2408548d0bf4592679f022bd2eff3961e. 
Aug 19 00:14:18.423395 kubelet[3438]: I0819 00:14:18.423173 3438 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:14:18.991676 containerd[2014]: time="2025-08-19T00:14:18.991436580Z" level=info msg="StartContainer for \"13a67bfef811b824ba817ce728c988e2408548d0bf4592679f022bd2eff3961e\" returns successfully" Aug 19 00:14:19.485142 kubelet[3438]: I0819 00:14:19.484178 3438 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:14:20.043592 kubelet[3438]: I0819 00:14:20.041354 3438 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 19 00:14:20.043852 kubelet[3438]: I0819 00:14:20.043827 3438 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 19 00:14:20.199494 systemd[1]: Started sshd@11-172.31.30.10:22-147.75.109.163:56742.service - OpenSSH per-connection server daemon (147.75.109.163:56742). Aug 19 00:14:20.426656 sshd[5904]: Accepted publickey for core from 147.75.109.163 port 56742 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:20.430357 sshd-session[5904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:20.444056 systemd-logind[1986]: New session 12 of user core. Aug 19 00:14:20.449641 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 19 00:14:20.844677 sshd[5907]: Connection closed by 147.75.109.163 port 56742 Aug 19 00:14:20.845611 sshd-session[5904]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:20.857264 systemd[1]: sshd@11-172.31.30.10:22-147.75.109.163:56742.service: Deactivated successfully. Aug 19 00:14:20.857795 systemd-logind[1986]: Session 12 logged out. Waiting for processes to exit. Aug 19 00:14:20.863639 systemd[1]: session-12.scope: Deactivated successfully. Aug 19 00:14:20.888023 systemd[1]: Started sshd@12-172.31.30.10:22-147.75.109.163:56756.service - OpenSSH per-connection server daemon (147.75.109.163:56756). Aug 19 00:14:20.891462 systemd-logind[1986]: Removed session 12. Aug 19 00:14:21.102478 sshd[5920]: Accepted publickey for core from 147.75.109.163 port 56756 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:21.106627 sshd-session[5920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:21.120896 systemd-logind[1986]: New session 13 of user core. Aug 19 00:14:21.128831 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 19 00:14:21.565573 sshd[5923]: Connection closed by 147.75.109.163 port 56756 Aug 19 00:14:21.565990 sshd-session[5920]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:21.576807 systemd[1]: sshd@12-172.31.30.10:22-147.75.109.163:56756.service: Deactivated successfully. Aug 19 00:14:21.583231 systemd[1]: session-13.scope: Deactivated successfully. Aug 19 00:14:21.589260 systemd-logind[1986]: Session 13 logged out. Waiting for processes to exit. Aug 19 00:14:21.618335 systemd[1]: Started sshd@13-172.31.30.10:22-147.75.109.163:56772.service - OpenSSH per-connection server daemon (147.75.109.163:56772). Aug 19 00:14:21.621539 systemd-logind[1986]: Removed session 13. 
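The kubelet csi_plugin entries above show the Tigera CSI driver registering through the plugin socket at /var/lib/kubelet/plugins/csi.tigera.io/csi.sock. A hypothetical check, meaningful only on a node where that driver runs, that the registration path exists and really is a unix socket:

```go
// Verify the CSI registration path from the log is present and is a socket.
package main

import (
	"fmt"
	"os"
)

func main() {
	const sock = "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock"
	info, err := os.Stat(sock)
	if err != nil {
		fmt.Println("socket not found:", err)
		return
	}
	fmt.Println("is a unix socket:", info.Mode()&os.ModeSocket != 0)
}
```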
Aug 19 00:14:21.847243 sshd[5933]: Accepted publickey for core from 147.75.109.163 port 56772 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:21.851787 sshd-session[5933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:21.863474 systemd-logind[1986]: New session 14 of user core. Aug 19 00:14:21.871067 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 19 00:14:22.257545 sshd[5936]: Connection closed by 147.75.109.163 port 56772 Aug 19 00:14:22.258036 sshd-session[5933]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:22.269908 systemd-logind[1986]: Session 14 logged out. Waiting for processes to exit. Aug 19 00:14:22.271729 systemd[1]: sshd@13-172.31.30.10:22-147.75.109.163:56772.service: Deactivated successfully. Aug 19 00:14:22.283214 systemd[1]: session-14.scope: Deactivated successfully. Aug 19 00:14:22.286921 systemd-logind[1986]: Removed session 14. Aug 19 00:14:22.854068 kubelet[3438]: I0819 00:14:22.853975 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2xnqj" podStartSLOduration=28.125267569 podStartE2EDuration="51.853927479s" podCreationTimestamp="2025-08-19 00:13:31 +0000 UTC" firstStartedPulling="2025-08-19 00:13:54.494112418 +0000 UTC m=+49.966241025" lastFinishedPulling="2025-08-19 00:14:18.222772316 +0000 UTC m=+73.694900935" observedRunningTime="2025-08-19 00:14:19.508324498 +0000 UTC m=+74.980453129" watchObservedRunningTime="2025-08-19 00:14:22.853927479 +0000 UTC m=+78.326056110" Aug 19 00:14:27.300856 systemd[1]: Started sshd@14-172.31.30.10:22-147.75.109.163:56774.service - OpenSSH per-connection server daemon (147.75.109.163:56774). Aug 19 00:14:27.517448 sshd[5958]: Accepted publickey for core from 147.75.109.163 port 56774 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:27.522048 sshd-session[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:27.533525 systemd-logind[1986]: New session 15 of user core. Aug 19 00:14:27.540688 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 19 00:14:27.873425 sshd[5961]: Connection closed by 147.75.109.163 port 56774 Aug 19 00:14:27.874348 sshd-session[5958]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:27.885798 systemd[1]: sshd@14-172.31.30.10:22-147.75.109.163:56774.service: Deactivated successfully. Aug 19 00:14:27.892709 systemd[1]: session-15.scope: Deactivated successfully. Aug 19 00:14:27.896544 systemd-logind[1986]: Session 15 logged out. Waiting for processes to exit. Aug 19 00:14:27.899907 systemd-logind[1986]: Removed session 15. Aug 19 00:14:32.914853 systemd[1]: Started sshd@15-172.31.30.10:22-147.75.109.163:57942.service - OpenSSH per-connection server daemon (147.75.109.163:57942). Aug 19 00:14:33.128835 sshd[5973]: Accepted publickey for core from 147.75.109.163 port 57942 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:33.132768 sshd-session[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:33.146706 systemd-logind[1986]: New session 16 of user core. Aug 19 00:14:33.155975 systemd[1]: Started session-16.scope - Session 16 of User core. 
Aug 19 00:14:33.441579 sshd[5976]: Connection closed by 147.75.109.163 port 57942 Aug 19 00:14:33.444132 sshd-session[5973]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:33.451663 systemd[1]: sshd@15-172.31.30.10:22-147.75.109.163:57942.service: Deactivated successfully. Aug 19 00:14:33.458507 systemd[1]: session-16.scope: Deactivated successfully. Aug 19 00:14:33.461542 systemd-logind[1986]: Session 16 logged out. Waiting for processes to exit. Aug 19 00:14:33.466115 systemd-logind[1986]: Removed session 16. Aug 19 00:14:34.831314 containerd[2014]: time="2025-08-19T00:14:34.831247226Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e31d3495aedbecf436aecc51a613907be93946c1e4bd9eb9eafefb720888f66f\" id:\"5e2d2b8d52562b9ede2b89cc8dda4b45bf5b18ab1b4281e1c06e2c39bff3128b\" pid:6000 exited_at:{seconds:1755562474 nanos:830807018}" Aug 19 00:14:38.482857 systemd[1]: Started sshd@16-172.31.30.10:22-147.75.109.163:46598.service - OpenSSH per-connection server daemon (147.75.109.163:46598). Aug 19 00:14:38.688569 sshd[6019]: Accepted publickey for core from 147.75.109.163 port 46598 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:38.691754 sshd-session[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:38.704871 systemd-logind[1986]: New session 17 of user core. Aug 19 00:14:38.714722 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 19 00:14:38.997410 sshd[6022]: Connection closed by 147.75.109.163 port 46598 Aug 19 00:14:38.998313 sshd-session[6019]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:39.008299 systemd[1]: sshd@16-172.31.30.10:22-147.75.109.163:46598.service: Deactivated successfully. Aug 19 00:14:39.015268 systemd[1]: session-17.scope: Deactivated successfully. Aug 19 00:14:39.018816 systemd-logind[1986]: Session 17 logged out. Waiting for processes to exit. Aug 19 00:14:39.024597 systemd-logind[1986]: Removed session 17. Aug 19 00:14:40.443059 containerd[2014]: time="2025-08-19T00:14:40.442915146Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e69b4af60e960e12e2940bc31fe6b416992a0ce4d07e35861365c32ddd62be85\" id:\"726fd31886f933a9efd16bcf8128647c05a5201ec2501213f200b8bb04bd4b83\" pid:6047 exited_at:{seconds:1755562480 nanos:442456242}" Aug 19 00:14:41.005091 containerd[2014]: time="2025-08-19T00:14:41.005021561Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccbd2fe81971b866618a0c896636eedbd02a8e062ebedd2d67a0087842e5cbcf\" id:\"cfad034e4a4885458dc1ef1c22decd51a7c0b0bb66b94a5143b9b462cd1640fe\" pid:6070 exited_at:{seconds:1755562481 nanos:4620329}" Aug 19 00:14:41.259282 containerd[2014]: time="2025-08-19T00:14:41.258761022Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e69b4af60e960e12e2940bc31fe6b416992a0ce4d07e35861365c32ddd62be85\" id:\"4baa1c6de82ec5d9cbc81ad9df4d0e66f726843b3adb94368ae3327bfd3cb8ad\" pid:6089 exited_at:{seconds:1755562481 nanos:258296334}" Aug 19 00:14:44.037464 systemd[1]: Started sshd@17-172.31.30.10:22-147.75.109.163:46610.service - OpenSSH per-connection server daemon (147.75.109.163:46610). Aug 19 00:14:44.258265 sshd[6108]: Accepted publickey for core from 147.75.109.163 port 46610 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:44.261007 sshd-session[6108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:44.276668 systemd-logind[1986]: New session 18 of user core. 
Aug 19 00:14:44.288407 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 19 00:14:44.591418 sshd[6111]: Connection closed by 147.75.109.163 port 46610 Aug 19 00:14:44.590047 sshd-session[6108]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:44.602242 systemd[1]: sshd@17-172.31.30.10:22-147.75.109.163:46610.service: Deactivated successfully. Aug 19 00:14:44.610730 systemd[1]: session-18.scope: Deactivated successfully. Aug 19 00:14:44.633045 systemd-logind[1986]: Session 18 logged out. Waiting for processes to exit. Aug 19 00:14:44.636914 systemd[1]: Started sshd@18-172.31.30.10:22-147.75.109.163:46626.service - OpenSSH per-connection server daemon (147.75.109.163:46626). Aug 19 00:14:44.641252 systemd-logind[1986]: Removed session 18. Aug 19 00:14:44.841246 sshd[6123]: Accepted publickey for core from 147.75.109.163 port 46626 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:44.843972 sshd-session[6123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:44.857980 systemd-logind[1986]: New session 19 of user core. Aug 19 00:14:44.865688 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 19 00:14:45.498133 sshd[6126]: Connection closed by 147.75.109.163 port 46626 Aug 19 00:14:45.499175 sshd-session[6123]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:45.509493 systemd-logind[1986]: Session 19 logged out. Waiting for processes to exit. Aug 19 00:14:45.511503 systemd[1]: sshd@18-172.31.30.10:22-147.75.109.163:46626.service: Deactivated successfully. Aug 19 00:14:45.518542 systemd[1]: session-19.scope: Deactivated successfully. Aug 19 00:14:45.547587 systemd[1]: Started sshd@19-172.31.30.10:22-147.75.109.163:46634.service - OpenSSH per-connection server daemon (147.75.109.163:46634). Aug 19 00:14:45.548349 systemd-logind[1986]: Removed session 19. Aug 19 00:14:45.787483 sshd[6136]: Accepted publickey for core from 147.75.109.163 port 46634 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:45.789261 sshd-session[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:45.802631 systemd-logind[1986]: New session 20 of user core. Aug 19 00:14:45.809203 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 19 00:14:50.097405 sshd[6139]: Connection closed by 147.75.109.163 port 46634 Aug 19 00:14:50.099023 sshd-session[6136]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:50.111972 systemd[1]: sshd@19-172.31.30.10:22-147.75.109.163:46634.service: Deactivated successfully. Aug 19 00:14:50.120829 systemd[1]: session-20.scope: Deactivated successfully. Aug 19 00:14:50.122324 systemd[1]: session-20.scope: Consumed 1.179s CPU time, 70.6M memory peak. Aug 19 00:14:50.125339 systemd-logind[1986]: Session 20 logged out. Waiting for processes to exit. Aug 19 00:14:50.164835 systemd[1]: Started sshd@20-172.31.30.10:22-147.75.109.163:45100.service - OpenSSH per-connection server daemon (147.75.109.163:45100). Aug 19 00:14:50.169171 systemd-logind[1986]: Removed session 20. Aug 19 00:14:50.388335 sshd[6159]: Accepted publickey for core from 147.75.109.163 port 45100 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:50.393040 sshd-session[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:50.403596 systemd-logind[1986]: New session 21 of user core. 
Aug 19 00:14:50.413057 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 19 00:14:50.860717 containerd[2014]: time="2025-08-19T00:14:50.856897710Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccbd2fe81971b866618a0c896636eedbd02a8e062ebedd2d67a0087842e5cbcf\" id:\"8ec677ece38d685c9207469a1e497dd786adad9e3e9c4a906bac5e78be313c6a\" pid:6180 exited_at:{seconds:1755562490 nanos:856300590}" Aug 19 00:14:51.093512 sshd[6162]: Connection closed by 147.75.109.163 port 45100 Aug 19 00:14:51.095872 sshd-session[6159]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:51.104786 systemd-logind[1986]: Session 21 logged out. Waiting for processes to exit. Aug 19 00:14:51.105984 systemd[1]: sshd@20-172.31.30.10:22-147.75.109.163:45100.service: Deactivated successfully. Aug 19 00:14:51.111850 systemd[1]: session-21.scope: Deactivated successfully. Aug 19 00:14:51.135028 systemd-logind[1986]: Removed session 21. Aug 19 00:14:51.139052 systemd[1]: Started sshd@21-172.31.30.10:22-147.75.109.163:45112.service - OpenSSH per-connection server daemon (147.75.109.163:45112). Aug 19 00:14:51.341339 sshd[6192]: Accepted publickey for core from 147.75.109.163 port 45112 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:51.346178 sshd-session[6192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:51.360719 systemd-logind[1986]: New session 22 of user core. Aug 19 00:14:51.370716 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 19 00:14:51.766931 sshd[6195]: Connection closed by 147.75.109.163 port 45112 Aug 19 00:14:51.766710 sshd-session[6192]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:51.779777 systemd-logind[1986]: Session 22 logged out. Waiting for processes to exit. Aug 19 00:14:51.782034 systemd[1]: sshd@21-172.31.30.10:22-147.75.109.163:45112.service: Deactivated successfully. Aug 19 00:14:51.790876 systemd[1]: session-22.scope: Deactivated successfully. Aug 19 00:14:51.798729 systemd-logind[1986]: Removed session 22. Aug 19 00:14:56.812771 systemd[1]: Started sshd@22-172.31.30.10:22-147.75.109.163:45118.service - OpenSSH per-connection server daemon (147.75.109.163:45118). Aug 19 00:14:57.023128 sshd[6208]: Accepted publickey for core from 147.75.109.163 port 45118 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:57.025714 sshd-session[6208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:57.037497 systemd-logind[1986]: New session 23 of user core. Aug 19 00:14:57.044726 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 19 00:14:57.342531 sshd[6214]: Connection closed by 147.75.109.163 port 45118 Aug 19 00:14:57.342061 sshd-session[6208]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:57.351322 systemd[1]: sshd@22-172.31.30.10:22-147.75.109.163:45118.service: Deactivated successfully. Aug 19 00:14:57.358187 systemd[1]: session-23.scope: Deactivated successfully. Aug 19 00:14:57.363601 systemd-logind[1986]: Session 23 logged out. Waiting for processes to exit. Aug 19 00:14:57.368339 systemd-logind[1986]: Removed session 23. Aug 19 00:15:02.381823 systemd[1]: Started sshd@23-172.31.30.10:22-147.75.109.163:55248.service - OpenSSH per-connection server daemon (147.75.109.163:55248). 
Aug 19 00:15:02.580314 sshd[6228]: Accepted publickey for core from 147.75.109.163 port 55248 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:15:02.582131 sshd-session[6228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:15:02.591069 systemd-logind[1986]: New session 24 of user core. Aug 19 00:15:02.599670 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 19 00:15:02.894426 sshd[6231]: Connection closed by 147.75.109.163 port 55248 Aug 19 00:15:02.893350 sshd-session[6228]: pam_unix(sshd:session): session closed for user core Aug 19 00:15:02.903593 systemd[1]: sshd@23-172.31.30.10:22-147.75.109.163:55248.service: Deactivated successfully. Aug 19 00:15:02.911445 systemd[1]: session-24.scope: Deactivated successfully. Aug 19 00:15:02.915614 systemd-logind[1986]: Session 24 logged out. Waiting for processes to exit. Aug 19 00:15:02.919727 systemd-logind[1986]: Removed session 24. Aug 19 00:15:05.067800 containerd[2014]: time="2025-08-19T00:15:05.067161364Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e31d3495aedbecf436aecc51a613907be93946c1e4bd9eb9eafefb720888f66f\" id:\"92bff8ec441f3d5fec900e8692e90b3eb5459a4c509241213799efe819d710c2\" pid:6254 exited_at:{seconds:1755562505 nanos:66761080}" Aug 19 00:15:07.931747 systemd[1]: Started sshd@24-172.31.30.10:22-147.75.109.163:55264.service - OpenSSH per-connection server daemon (147.75.109.163:55264). Aug 19 00:15:08.143285 sshd[6269]: Accepted publickey for core from 147.75.109.163 port 55264 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:15:08.146536 sshd-session[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:15:08.158703 systemd-logind[1986]: New session 25 of user core. Aug 19 00:15:08.164701 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 19 00:15:08.501803 sshd[6272]: Connection closed by 147.75.109.163 port 55264 Aug 19 00:15:08.505021 sshd-session[6269]: pam_unix(sshd:session): session closed for user core Aug 19 00:15:08.516137 systemd-logind[1986]: Session 25 logged out. Waiting for processes to exit. Aug 19 00:15:08.519325 systemd[1]: sshd@24-172.31.30.10:22-147.75.109.163:55264.service: Deactivated successfully. Aug 19 00:15:08.528557 systemd[1]: session-25.scope: Deactivated successfully. Aug 19 00:15:08.534657 systemd-logind[1986]: Removed session 25. Aug 19 00:15:10.921016 containerd[2014]: time="2025-08-19T00:15:10.920663545Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccbd2fe81971b866618a0c896636eedbd02a8e062ebedd2d67a0087842e5cbcf\" id:\"7ac89729008935ce8cfe840446bf6d76d0d9c559b5ddcd0bafef241f013898d6\" pid:6296 exited_at:{seconds:1755562510 nanos:920111485}" Aug 19 00:15:11.037410 containerd[2014]: time="2025-08-19T00:15:11.036448318Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e69b4af60e960e12e2940bc31fe6b416992a0ce4d07e35861365c32ddd62be85\" id:\"743dc334ee30ad14acd630ac83aaf2e26aeebb3f0aa3c54bb2c76b17b8e78c7c\" pid:6316 exited_at:{seconds:1755562511 nanos:36022726}" Aug 19 00:15:13.544620 systemd[1]: Started sshd@25-172.31.30.10:22-147.75.109.163:45874.service - OpenSSH per-connection server daemon (147.75.109.163:45874). 
Aug 19 00:15:13.749514 sshd[6331]: Accepted publickey for core from 147.75.109.163 port 45874 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4
Aug 19 00:15:13.753098 sshd-session[6331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:15:13.767354 systemd-logind[1986]: New session 26 of user core.
Aug 19 00:15:13.774010 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 19 00:15:14.077297 sshd[6334]: Connection closed by 147.75.109.163 port 45874
Aug 19 00:15:14.077962 sshd-session[6331]: pam_unix(sshd:session): session closed for user core
Aug 19 00:15:14.088885 systemd[1]: sshd@25-172.31.30.10:22-147.75.109.163:45874.service: Deactivated successfully.
Aug 19 00:15:14.094199 systemd[1]: session-26.scope: Deactivated successfully.
Aug 19 00:15:14.096713 systemd-logind[1986]: Session 26 logged out. Waiting for processes to exit.
Aug 19 00:15:14.101526 systemd-logind[1986]: Removed session 26.
Aug 19 00:15:19.124498 systemd[1]: Started sshd@26-172.31.30.10:22-147.75.109.163:36748.service - OpenSSH per-connection server daemon (147.75.109.163:36748).
Aug 19 00:15:19.331935 sshd[6355]: Accepted publickey for core from 147.75.109.163 port 36748 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4
Aug 19 00:15:19.335267 sshd-session[6355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:15:19.345997 systemd-logind[1986]: New session 27 of user core.
Aug 19 00:15:19.353998 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 19 00:15:19.636025 sshd[6358]: Connection closed by 147.75.109.163 port 36748
Aug 19 00:15:19.637297 sshd-session[6355]: pam_unix(sshd:session): session closed for user core
Aug 19 00:15:19.645027 systemd[1]: sshd@26-172.31.30.10:22-147.75.109.163:36748.service: Deactivated successfully.
Aug 19 00:15:19.653032 systemd[1]: session-27.scope: Deactivated successfully.
Aug 19 00:15:19.659285 systemd-logind[1986]: Session 27 logged out. Waiting for processes to exit.
Aug 19 00:15:19.664179 systemd-logind[1986]: Removed session 27.
Aug 19 00:15:24.678697 systemd[1]: Started sshd@27-172.31.30.10:22-147.75.109.163:36750.service - OpenSSH per-connection server daemon (147.75.109.163:36750).
Aug 19 00:15:24.893605 sshd[6371]: Accepted publickey for core from 147.75.109.163 port 36750 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4
Aug 19 00:15:24.896634 sshd-session[6371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:15:24.905863 systemd-logind[1986]: New session 28 of user core.
Aug 19 00:15:24.912642 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 19 00:15:25.223610 sshd[6374]: Connection closed by 147.75.109.163 port 36750
Aug 19 00:15:25.225152 sshd-session[6371]: pam_unix(sshd:session): session closed for user core
Aug 19 00:15:25.233753 systemd[1]: sshd@27-172.31.30.10:22-147.75.109.163:36750.service: Deactivated successfully.
Aug 19 00:15:25.241793 systemd[1]: session-28.scope: Deactivated successfully.
Aug 19 00:15:25.245831 systemd-logind[1986]: Session 28 logged out. Waiting for processes to exit.
Aug 19 00:15:25.250912 systemd-logind[1986]: Removed session 28.
Aug 19 00:15:34.681417 containerd[2014]: time="2025-08-19T00:15:34.681326387Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e31d3495aedbecf436aecc51a613907be93946c1e4bd9eb9eafefb720888f66f\" id:\"3258ef89db9bfa383678c0c16b2618cbd355cfa46816b96a1581b4fc3c8fe865\" pid:6417 exited_at:{seconds:1755562534 nanos:680768459}"
Aug 19 00:15:39.603042 systemd[1]: cri-containerd-5a126ed5bf61e3e33b42f371bcf7ea040b9b9eb17eb324861cf877093d850620.scope: Deactivated successfully.
Aug 19 00:15:39.603760 systemd[1]: cri-containerd-5a126ed5bf61e3e33b42f371bcf7ea040b9b9eb17eb324861cf877093d850620.scope: Consumed 5.903s CPU time, 59.5M memory peak, 64K read from disk.
Aug 19 00:15:39.612605 containerd[2014]: time="2025-08-19T00:15:39.612439624Z" level=info msg="received exit event container_id:\"5a126ed5bf61e3e33b42f371bcf7ea040b9b9eb17eb324861cf877093d850620\" id:\"5a126ed5bf61e3e33b42f371bcf7ea040b9b9eb17eb324861cf877093d850620\" pid:3191 exit_status:1 exited_at:{seconds:1755562539 nanos:611833720}"
Aug 19 00:15:39.612605 containerd[2014]: time="2025-08-19T00:15:39.612553012Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a126ed5bf61e3e33b42f371bcf7ea040b9b9eb17eb324861cf877093d850620\" id:\"5a126ed5bf61e3e33b42f371bcf7ea040b9b9eb17eb324861cf877093d850620\" pid:3191 exit_status:1 exited_at:{seconds:1755562539 nanos:611833720}"
Aug 19 00:15:39.658901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a126ed5bf61e3e33b42f371bcf7ea040b9b9eb17eb324861cf877093d850620-rootfs.mount: Deactivated successfully.
Aug 19 00:15:39.814818 kubelet[3438]: I0819 00:15:39.814738 3438 scope.go:117] "RemoveContainer" containerID="5a126ed5bf61e3e33b42f371bcf7ea040b9b9eb17eb324861cf877093d850620"
Aug 19 00:15:39.819360 containerd[2014]: time="2025-08-19T00:15:39.819276413Z" level=info msg="CreateContainer within sandbox \"cf108861798fde2183245d08446ee467308e2dc532a2b7321bdfdfa6fa5432f4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Aug 19 00:15:39.839483 containerd[2014]: time="2025-08-19T00:15:39.839411213Z" level=info msg="Container 178a8299f9acc327c412ea5da1579ff001108cf1e7e1c957d809d6685c109c13: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:15:39.862417 containerd[2014]: time="2025-08-19T00:15:39.862138985Z" level=info msg="CreateContainer within sandbox \"cf108861798fde2183245d08446ee467308e2dc532a2b7321bdfdfa6fa5432f4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"178a8299f9acc327c412ea5da1579ff001108cf1e7e1c957d809d6685c109c13\""
Aug 19 00:15:39.863477 containerd[2014]: time="2025-08-19T00:15:39.863400185Z" level=info msg="StartContainer for \"178a8299f9acc327c412ea5da1579ff001108cf1e7e1c957d809d6685c109c13\""
Aug 19 00:15:39.865892 containerd[2014]: time="2025-08-19T00:15:39.865811933Z" level=info msg="connecting to shim 178a8299f9acc327c412ea5da1579ff001108cf1e7e1c957d809d6685c109c13" address="unix:///run/containerd/s/1f9043574ed62f612ace760f655f26be70c5d9a2fb7bce2a3116b519192e27e6" protocol=ttrpc version=3
Aug 19 00:15:39.906697 systemd[1]: Started cri-containerd-178a8299f9acc327c412ea5da1579ff001108cf1e7e1c957d809d6685c109c13.scope - libcontainer container 178a8299f9acc327c412ea5da1579ff001108cf1e7e1c957d809d6685c109c13.
Aug 19 00:15:39.988880 containerd[2014]: time="2025-08-19T00:15:39.988816362Z" level=info msg="StartContainer for \"178a8299f9acc327c412ea5da1579ff001108cf1e7e1c957d809d6685c109c13\" returns successfully"
Aug 19 00:15:40.343073 containerd[2014]: time="2025-08-19T00:15:40.342943732Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e69b4af60e960e12e2940bc31fe6b416992a0ce4d07e35861365c32ddd62be85\" id:\"314d82d15a7d894564d2f650d470c1dce42d8a2369d438a74986b607fe6dc214\" pid:6481 exited_at:{seconds:1755562540 nanos:340272352}"
Aug 19 00:15:40.912580 systemd[1]: cri-containerd-3ea1fea89cb2af2d19161059bfc52877063e7e19bba59163780e684b82496e9d.scope: Deactivated successfully.
Aug 19 00:15:40.913707 systemd[1]: cri-containerd-3ea1fea89cb2af2d19161059bfc52877063e7e19bba59163780e684b82496e9d.scope: Consumed 29.341s CPU time, 113.1M memory peak, 152K read from disk.
Aug 19 00:15:40.922587 containerd[2014]: time="2025-08-19T00:15:40.922421886Z" level=info msg="received exit event container_id:\"3ea1fea89cb2af2d19161059bfc52877063e7e19bba59163780e684b82496e9d\" id:\"3ea1fea89cb2af2d19161059bfc52877063e7e19bba59163780e684b82496e9d\" pid:3757 exit_status:1 exited_at:{seconds:1755562540 nanos:921548166}"
Aug 19 00:15:40.923142 containerd[2014]: time="2025-08-19T00:15:40.922726626Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ea1fea89cb2af2d19161059bfc52877063e7e19bba59163780e684b82496e9d\" id:\"3ea1fea89cb2af2d19161059bfc52877063e7e19bba59163780e684b82496e9d\" pid:3757 exit_status:1 exited_at:{seconds:1755562540 nanos:921548166}"
Aug 19 00:15:40.972639 containerd[2014]: time="2025-08-19T00:15:40.972576043Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccbd2fe81971b866618a0c896636eedbd02a8e062ebedd2d67a0087842e5cbcf\" id:\"881169bf1588663d9555ebad6e068c68ac2a36aa3aa17d24e39bc0f4577b4910\" pid:6505 exited_at:{seconds:1755562540 nanos:971635459}"
Aug 19 00:15:41.006078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ea1fea89cb2af2d19161059bfc52877063e7e19bba59163780e684b82496e9d-rootfs.mount: Deactivated successfully.
Aug 19 00:15:41.086159 containerd[2014]: time="2025-08-19T00:15:41.085978203Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e69b4af60e960e12e2940bc31fe6b416992a0ce4d07e35861365c32ddd62be85\" id:\"2fdbe9edfa921911cf17afd957f9baf5e4b66941d70f61ad7d6ad5990ba70243\" pid:6525 exited_at:{seconds:1755562541 nanos:85233579}"
Aug 19 00:15:41.835816 kubelet[3438]: I0819 00:15:41.835769 3438 scope.go:117] "RemoveContainer" containerID="3ea1fea89cb2af2d19161059bfc52877063e7e19bba59163780e684b82496e9d"
Aug 19 00:15:41.840472 containerd[2014]: time="2025-08-19T00:15:41.840347551Z" level=info msg="CreateContainer within sandbox \"51e38c3cf5fccbfff48f2fdf162a6727a133f5afc4cf619cede7c55282176058\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Aug 19 00:15:41.859601 containerd[2014]: time="2025-08-19T00:15:41.859534891Z" level=info msg="Container 60bba8b00eda976046d7a98a0e0abd715998feecfaac820737739529e165f797: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:15:41.890406 containerd[2014]: time="2025-08-19T00:15:41.889679887Z" level=info msg="CreateContainer within sandbox \"51e38c3cf5fccbfff48f2fdf162a6727a133f5afc4cf619cede7c55282176058\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"60bba8b00eda976046d7a98a0e0abd715998feecfaac820737739529e165f797\""
Aug 19 00:15:41.891359 containerd[2014]: time="2025-08-19T00:15:41.891295411Z" level=info msg="StartContainer for \"60bba8b00eda976046d7a98a0e0abd715998feecfaac820737739529e165f797\""
Aug 19 00:15:41.893277 containerd[2014]: time="2025-08-19T00:15:41.893200879Z" level=info msg="connecting to shim 60bba8b00eda976046d7a98a0e0abd715998feecfaac820737739529e165f797" address="unix:///run/containerd/s/dab534132c5b739293446850815fe905c73606a641caa5351f18434e552b6ac3" protocol=ttrpc version=3
Aug 19 00:15:41.939697 systemd[1]: Started cri-containerd-60bba8b00eda976046d7a98a0e0abd715998feecfaac820737739529e165f797.scope - libcontainer container 60bba8b00eda976046d7a98a0e0abd715998feecfaac820737739529e165f797.
Aug 19 00:15:41.996403 containerd[2014]: time="2025-08-19T00:15:41.996298508Z" level=info msg="StartContainer for \"60bba8b00eda976046d7a98a0e0abd715998feecfaac820737739529e165f797\" returns successfully"
Aug 19 00:15:45.455786 systemd[1]: cri-containerd-f4fe70e4d86024feba9d5c3595e56e995f4b14732d50fec051c58c69d553270d.scope: Deactivated successfully.
Aug 19 00:15:45.457735 systemd[1]: cri-containerd-f4fe70e4d86024feba9d5c3595e56e995f4b14732d50fec051c58c69d553270d.scope: Consumed 3.171s CPU time, 20.8M memory peak, 288K read from disk.
Aug 19 00:15:45.460786 containerd[2014]: time="2025-08-19T00:15:45.460737009Z" level=info msg="received exit event container_id:\"f4fe70e4d86024feba9d5c3595e56e995f4b14732d50fec051c58c69d553270d\" id:\"f4fe70e4d86024feba9d5c3595e56e995f4b14732d50fec051c58c69d553270d\" pid:3180 exit_status:1 exited_at:{seconds:1755562545 nanos:460089297}"
Aug 19 00:15:45.462274 containerd[2014]: time="2025-08-19T00:15:45.462222261Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4fe70e4d86024feba9d5c3595e56e995f4b14732d50fec051c58c69d553270d\" id:\"f4fe70e4d86024feba9d5c3595e56e995f4b14732d50fec051c58c69d553270d\" pid:3180 exit_status:1 exited_at:{seconds:1755562545 nanos:460089297}"
Aug 19 00:15:45.506505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4fe70e4d86024feba9d5c3595e56e995f4b14732d50fec051c58c69d553270d-rootfs.mount: Deactivated successfully.
Aug 19 00:15:45.859663 kubelet[3438]: I0819 00:15:45.858911 3438 scope.go:117] "RemoveContainer" containerID="f4fe70e4d86024feba9d5c3595e56e995f4b14732d50fec051c58c69d553270d"
Aug 19 00:15:45.863201 containerd[2014]: time="2025-08-19T00:15:45.863123903Z" level=info msg="CreateContainer within sandbox \"6454cfb6bb1fea6978d46cf19ab86c89a37578b15647c56101b16e0b6faf0d9f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Aug 19 00:15:45.888416 containerd[2014]: time="2025-08-19T00:15:45.887569403Z" level=info msg="Container 98740faf78a7e9e2686a9de967068e2fa7eafff8ecdf0f5bc8efb9fb777388d6: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:15:45.889546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2326197825.mount: Deactivated successfully.
Aug 19 00:15:45.905783 containerd[2014]: time="2025-08-19T00:15:45.905732891Z" level=info msg="CreateContainer within sandbox \"6454cfb6bb1fea6978d46cf19ab86c89a37578b15647c56101b16e0b6faf0d9f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"98740faf78a7e9e2686a9de967068e2fa7eafff8ecdf0f5bc8efb9fb777388d6\""
Aug 19 00:15:45.906921 containerd[2014]: time="2025-08-19T00:15:45.906865583Z" level=info msg="StartContainer for \"98740faf78a7e9e2686a9de967068e2fa7eafff8ecdf0f5bc8efb9fb777388d6\""
Aug 19 00:15:45.909718 containerd[2014]: time="2025-08-19T00:15:45.909647639Z" level=info msg="connecting to shim 98740faf78a7e9e2686a9de967068e2fa7eafff8ecdf0f5bc8efb9fb777388d6" address="unix:///run/containerd/s/1c97eb2a32f82cb40b7c0b5d3745392b332c7d2e4ca590fa24d592b78d858db4" protocol=ttrpc version=3
Aug 19 00:15:45.949671 systemd[1]: Started cri-containerd-98740faf78a7e9e2686a9de967068e2fa7eafff8ecdf0f5bc8efb9fb777388d6.scope - libcontainer container 98740faf78a7e9e2686a9de967068e2fa7eafff8ecdf0f5bc8efb9fb777388d6.
Aug 19 00:15:46.033024 containerd[2014]: time="2025-08-19T00:15:46.032961692Z" level=info msg="StartContainer for \"98740faf78a7e9e2686a9de967068e2fa7eafff8ecdf0f5bc8efb9fb777388d6\" returns successfully"
Aug 19 00:15:48.625245 kubelet[3438]: E0819 00:15:48.625158 3438 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-10?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Aug 19 00:15:50.697973 containerd[2014]: time="2025-08-19T00:15:50.697887195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccbd2fe81971b866618a0c896636eedbd02a8e062ebedd2d67a0087842e5cbcf\" id:\"617ca521e3755a4a580f2366004983bd459a5d06de9c97cf41bfdc53c1b17484\" pid:6639 exit_status:1 exited_at:{seconds:1755562550 nanos:697423107}"
Aug 19 00:15:53.379343 systemd[1]: cri-containerd-60bba8b00eda976046d7a98a0e0abd715998feecfaac820737739529e165f797.scope: Deactivated successfully.
Aug 19 00:15:53.379927 systemd[1]: cri-containerd-60bba8b00eda976046d7a98a0e0abd715998feecfaac820737739529e165f797.scope: Consumed 434ms CPU time, 39.9M memory peak, 1.1M read from disk.
Aug 19 00:15:53.384343 containerd[2014]: time="2025-08-19T00:15:53.383357824Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60bba8b00eda976046d7a98a0e0abd715998feecfaac820737739529e165f797\" id:\"60bba8b00eda976046d7a98a0e0abd715998feecfaac820737739529e165f797\" pid:6565 exit_status:1 exited_at:{seconds:1755562553 nanos:382135420}"
Aug 19 00:15:53.385969 containerd[2014]: time="2025-08-19T00:15:53.383359048Z" level=info msg="received exit event container_id:\"60bba8b00eda976046d7a98a0e0abd715998feecfaac820737739529e165f797\" id:\"60bba8b00eda976046d7a98a0e0abd715998feecfaac820737739529e165f797\" pid:6565 exit_status:1 exited_at:{seconds:1755562553 nanos:382135420}"
Aug 19 00:15:53.422099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60bba8b00eda976046d7a98a0e0abd715998feecfaac820737739529e165f797-rootfs.mount: Deactivated successfully.
Aug 19 00:15:53.897308 kubelet[3438]: I0819 00:15:53.897155 3438 scope.go:117] "RemoveContainer" containerID="3ea1fea89cb2af2d19161059bfc52877063e7e19bba59163780e684b82496e9d"
Aug 19 00:15:53.898562 kubelet[3438]: I0819 00:15:53.898524 3438 scope.go:117] "RemoveContainer" containerID="60bba8b00eda976046d7a98a0e0abd715998feecfaac820737739529e165f797"
Aug 19 00:15:53.902348 kubelet[3438]: E0819 00:15:53.902201 3438 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-5vrvm_tigera-operator(9aa23eb4-974b-4bc2-a216-648d1fcd05d2)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-5vrvm" podUID="9aa23eb4-974b-4bc2-a216-648d1fcd05d2"
Aug 19 00:15:53.902785 containerd[2014]: time="2025-08-19T00:15:53.902559163Z" level=info msg="RemoveContainer for \"3ea1fea89cb2af2d19161059bfc52877063e7e19bba59163780e684b82496e9d\""
Aug 19 00:15:53.914409 containerd[2014]: time="2025-08-19T00:15:53.914272735Z" level=info msg="RemoveContainer for \"3ea1fea89cb2af2d19161059bfc52877063e7e19bba59163780e684b82496e9d\" returns successfully"
Aug 19 00:15:58.626859 kubelet[3438]: E0819 00:15:58.626791 3438 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-10?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"