Jul 15 23:12:15.106396 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jul 15 23:12:15.106449 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 15 22:00:45 -00 2025
Jul 15 23:12:15.106474 kernel: KASLR disabled due to lack of seed
Jul 15 23:12:15.106491 kernel: efi: EFI v2.7 by EDK II
Jul 15 23:12:15.106506 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598
Jul 15 23:12:15.106521 kernel: secureboot: Secure boot disabled
Jul 15 23:12:15.106539 kernel: ACPI: Early table checksum verification disabled
Jul 15 23:12:15.106554 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jul 15 23:12:15.106569 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 15 23:12:15.106584 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 15 23:12:15.106599 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jul 15 23:12:15.106619 kernel: ACPI: FACS 0x0000000078630000 000040
Jul 15 23:12:15.106633 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 15 23:12:15.106649 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jul 15 23:12:15.106666 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jul 15 23:12:15.106682 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jul 15 23:12:15.106701 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 15 23:12:15.106718 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jul 15 23:12:15.106734 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jul 15 23:12:15.106750 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jul 15 23:12:15.106766 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jul 15 23:12:15.106782 kernel: printk: legacy bootconsole [uart0] enabled
Jul 15 23:12:15.106799 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 15 23:12:15.106815 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 15 23:12:15.106831 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff]
Jul 15 23:12:15.106847 kernel: Zone ranges:
Jul 15 23:12:15.106863 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 15 23:12:15.106882 kernel: DMA32 empty
Jul 15 23:12:15.106898 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jul 15 23:12:15.106913 kernel: Device empty
Jul 15 23:12:15.106929 kernel: Movable zone start for each node
Jul 15 23:12:15.106946 kernel: Early memory node ranges
Jul 15 23:12:15.106961 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jul 15 23:12:15.106977 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jul 15 23:12:15.106992 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jul 15 23:12:15.107008 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jul 15 23:12:15.107023 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jul 15 23:12:15.107039 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jul 15 23:12:15.107054 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jul 15 23:12:15.107075 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jul 15 23:12:15.107097 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 15 23:12:15.107114 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jul 15 23:12:15.107131 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Jul 15 23:12:15.107147 kernel: psci: probing for conduit method from ACPI.
Jul 15 23:12:15.107167 kernel: psci: PSCIv1.0 detected in firmware.
Jul 15 23:12:15.107184 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 15 23:12:15.107200 kernel: psci: Trusted OS migration not required
Jul 15 23:12:15.107216 kernel: psci: SMC Calling Convention v1.1
Jul 15 23:12:15.107232 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jul 15 23:12:15.107249 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 15 23:12:15.107293 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 15 23:12:15.107315 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 15 23:12:15.107332 kernel: Detected PIPT I-cache on CPU0
Jul 15 23:12:15.107348 kernel: CPU features: detected: GIC system register CPU interface
Jul 15 23:12:15.107365 kernel: CPU features: detected: Spectre-v2
Jul 15 23:12:15.107387 kernel: CPU features: detected: Spectre-v3a
Jul 15 23:12:15.107404 kernel: CPU features: detected: Spectre-BHB
Jul 15 23:12:15.107420 kernel: CPU features: detected: ARM erratum 1742098
Jul 15 23:12:15.107436 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jul 15 23:12:15.107452 kernel: alternatives: applying boot alternatives
Jul 15 23:12:15.107471 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578
Jul 15 23:12:15.107489 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 23:12:15.107505 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 23:12:15.107521 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 23:12:15.107537 kernel: Fallback order for Node 0: 0
Jul 15 23:12:15.107557 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Jul 15 23:12:15.107574 kernel: Policy zone: Normal
Jul 15 23:12:15.107590 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 23:12:15.107608 kernel: software IO TLB: area num 2.
Jul 15 23:12:15.107626 kernel: software IO TLB: mapped [mem 0x0000000074557000-0x0000000078557000] (64MB)
Jul 15 23:12:15.107643 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 15 23:12:15.107659 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 23:12:15.107677 kernel: rcu: RCU event tracing is enabled.
Jul 15 23:12:15.107694 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 15 23:12:15.107710 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 23:12:15.107727 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 23:12:15.107743 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 23:12:15.107764 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 15 23:12:15.107780 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 23:12:15.107797 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 23:12:15.107813 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 15 23:12:15.107829 kernel: GICv3: 96 SPIs implemented
Jul 15 23:12:15.107845 kernel: GICv3: 0 Extended SPIs implemented
Jul 15 23:12:15.107861 kernel: Root IRQ handler: gic_handle_irq
Jul 15 23:12:15.107878 kernel: GICv3: GICv3 features: 16 PPIs
Jul 15 23:12:15.107894 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 15 23:12:15.107910 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jul 15 23:12:15.107926 kernel: ITS [mem 0x10080000-0x1009ffff]
Jul 15 23:12:15.107942 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Jul 15 23:12:15.107964 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Jul 15 23:12:15.107980 kernel: GICv3: using LPI property table @0x0000000400110000
Jul 15 23:12:15.107996 kernel: ITS: Using hypervisor restricted LPI range [128]
Jul 15 23:12:15.108012 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Jul 15 23:12:15.108029 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 23:12:15.108045 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jul 15 23:12:15.108061 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jul 15 23:12:15.108077 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jul 15 23:12:15.108094 kernel: Console: colour dummy device 80x25
Jul 15 23:12:15.108111 kernel: printk: legacy console [tty1] enabled
Jul 15 23:12:15.108127 kernel: ACPI: Core revision 20240827
Jul 15 23:12:15.108170 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jul 15 23:12:15.108188 kernel: pid_max: default: 32768 minimum: 301
Jul 15 23:12:15.108205 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 23:12:15.108222 kernel: landlock: Up and running.
Jul 15 23:12:15.108238 kernel: SELinux: Initializing.
Jul 15 23:12:15.108255 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 23:12:15.110372 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 23:12:15.110400 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 23:12:15.110419 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 23:12:15.110445 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 23:12:15.110462 kernel: Remapping and enabling EFI services.
Jul 15 23:12:15.110479 kernel: smp: Bringing up secondary CPUs ...
Jul 15 23:12:15.110496 kernel: Detected PIPT I-cache on CPU1
Jul 15 23:12:15.110513 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jul 15 23:12:15.110530 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Jul 15 23:12:15.110548 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jul 15 23:12:15.110565 kernel: smp: Brought up 1 node, 2 CPUs
Jul 15 23:12:15.110581 kernel: SMP: Total of 2 processors activated.
Jul 15 23:12:15.110612 kernel: CPU: All CPU(s) started at EL1
Jul 15 23:12:15.110630 kernel: CPU features: detected: 32-bit EL0 Support
Jul 15 23:12:15.110651 kernel: CPU features: detected: 32-bit EL1 Support
Jul 15 23:12:15.110670 kernel: CPU features: detected: CRC32 instructions
Jul 15 23:12:15.110687 kernel: alternatives: applying system-wide alternatives
Jul 15 23:12:15.110705 kernel: Memory: 3796516K/4030464K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 212600K reserved, 16384K cma-reserved)
Jul 15 23:12:15.110723 kernel: devtmpfs: initialized
Jul 15 23:12:15.110745 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 23:12:15.110763 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 15 23:12:15.110781 kernel: 16912 pages in range for non-PLT usage
Jul 15 23:12:15.110799 kernel: 508432 pages in range for PLT usage
Jul 15 23:12:15.110816 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 23:12:15.110833 kernel: SMBIOS 3.0.0 present.
Jul 15 23:12:15.110850 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jul 15 23:12:15.110868 kernel: DMI: Memory slots populated: 0/0
Jul 15 23:12:15.110886 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 23:12:15.110908 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 15 23:12:15.110926 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 15 23:12:15.110944 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 15 23:12:15.110961 kernel: audit: initializing netlink subsys (disabled)
Jul 15 23:12:15.110979 kernel: audit: type=2000 audit(0.226:1): state=initialized audit_enabled=0 res=1
Jul 15 23:12:15.110996 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 23:12:15.111014 kernel: cpuidle: using governor menu
Jul 15 23:12:15.111032 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 15 23:12:15.111049 kernel: ASID allocator initialised with 65536 entries
Jul 15 23:12:15.111071 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 23:12:15.111089 kernel: Serial: AMBA PL011 UART driver
Jul 15 23:12:15.111107 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 23:12:15.111124 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 23:12:15.111142 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 15 23:12:15.111159 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 15 23:12:15.111176 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 23:12:15.111194 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 23:12:15.111211 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 15 23:12:15.111233 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 15 23:12:15.111250 kernel: ACPI: Added _OSI(Module Device)
Jul 15 23:12:15.111290 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 23:12:15.111312 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 23:12:15.111335 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 23:12:15.111353 kernel: ACPI: Interpreter enabled
Jul 15 23:12:15.111370 kernel: ACPI: Using GIC for interrupt routing
Jul 15 23:12:15.111387 kernel: ACPI: MCFG table detected, 1 entries
Jul 15 23:12:15.111405 kernel: ACPI: CPU0 has been hot-added
Jul 15 23:12:15.111428 kernel: ACPI: CPU1 has been hot-added
Jul 15 23:12:15.111446 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jul 15 23:12:15.111747 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 23:12:15.111935 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 15 23:12:15.112119 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 15 23:12:15.115455 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jul 15 23:12:15.115676 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jul 15 23:12:15.115710 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jul 15 23:12:15.115731 kernel: acpiphp: Slot [1] registered
Jul 15 23:12:15.115749 kernel: acpiphp: Slot [2] registered
Jul 15 23:12:15.115766 kernel: acpiphp: Slot [3] registered
Jul 15 23:12:15.115784 kernel: acpiphp: Slot [4] registered
Jul 15 23:12:15.115802 kernel: acpiphp: Slot [5] registered
Jul 15 23:12:15.115819 kernel: acpiphp: Slot [6] registered
Jul 15 23:12:15.115836 kernel: acpiphp: Slot [7] registered
Jul 15 23:12:15.115853 kernel: acpiphp: Slot [8] registered
Jul 15 23:12:15.115871 kernel: acpiphp: Slot [9] registered
Jul 15 23:12:15.115892 kernel: acpiphp: Slot [10] registered
Jul 15 23:12:15.115910 kernel: acpiphp: Slot [11] registered
Jul 15 23:12:15.115927 kernel: acpiphp: Slot [12] registered
Jul 15 23:12:15.115945 kernel: acpiphp: Slot [13] registered
Jul 15 23:12:15.115962 kernel: acpiphp: Slot [14] registered
Jul 15 23:12:15.115979 kernel: acpiphp: Slot [15] registered
Jul 15 23:12:15.115996 kernel: acpiphp: Slot [16] registered
Jul 15 23:12:15.116014 kernel: acpiphp: Slot [17] registered
Jul 15 23:12:15.116031 kernel: acpiphp: Slot [18] registered
Jul 15 23:12:15.116052 kernel: acpiphp: Slot [19] registered
Jul 15 23:12:15.116070 kernel: acpiphp: Slot [20] registered
Jul 15 23:12:15.116087 kernel: acpiphp: Slot [21] registered
Jul 15 23:12:15.116105 kernel: acpiphp: Slot [22] registered
Jul 15 23:12:15.116123 kernel: acpiphp: Slot [23] registered
Jul 15 23:12:15.116163 kernel: acpiphp: Slot [24] registered
Jul 15 23:12:15.116184 kernel: acpiphp: Slot [25] registered
Jul 15 23:12:15.116202 kernel: acpiphp: Slot [26] registered
Jul 15 23:12:15.116219 kernel: acpiphp: Slot [27] registered
Jul 15 23:12:15.116237 kernel: acpiphp: Slot [28] registered
Jul 15 23:12:15.116260 kernel: acpiphp: Slot [29] registered
Jul 15 23:12:15.116418 kernel: acpiphp: Slot [30] registered
Jul 15 23:12:15.116436 kernel: acpiphp: Slot [31] registered
Jul 15 23:12:15.116453 kernel: PCI host bridge to bus 0000:00
Jul 15 23:12:15.116688 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jul 15 23:12:15.116864 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 15 23:12:15.117036 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jul 15 23:12:15.117211 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jul 15 23:12:15.118547 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Jul 15 23:12:15.118787 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Jul 15 23:12:15.118981 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Jul 15 23:12:15.119183 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Jul 15 23:12:15.120535 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Jul 15 23:12:15.120755 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 15 23:12:15.120969 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Jul 15 23:12:15.121159 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Jul 15 23:12:15.123030 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Jul 15 23:12:15.123251 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Jul 15 23:12:15.123528 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 15 23:12:15.123724 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
Jul 15 23:12:15.123913 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
Jul 15 23:12:15.124114 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
Jul 15 23:12:15.124349 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
Jul 15 23:12:15.124546 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
Jul 15 23:12:15.124722 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jul 15 23:12:15.124889 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 15 23:12:15.125061 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jul 15 23:12:15.125092 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 15 23:12:15.125111 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 15 23:12:15.125131 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 15 23:12:15.125149 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 15 23:12:15.125166 kernel: iommu: Default domain type: Translated
Jul 15 23:12:15.125184 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 15 23:12:15.125202 kernel: efivars: Registered efivars operations
Jul 15 23:12:15.125219 kernel: vgaarb: loaded
Jul 15 23:12:15.125237 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 15 23:12:15.125255 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 23:12:15.125408 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 23:12:15.125427 kernel: pnp: PnP ACPI init
Jul 15 23:12:15.125638 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jul 15 23:12:15.125664 kernel: pnp: PnP ACPI: found 1 devices
Jul 15 23:12:15.125682 kernel: NET: Registered PF_INET protocol family
Jul 15 23:12:15.125700 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 23:12:15.125718 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 23:12:15.125735 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 23:12:15.125759 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 23:12:15.125777 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 23:12:15.125795 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 23:12:15.125812 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:12:15.125830 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:12:15.125847 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 23:12:15.125865 kernel: PCI: CLS 0 bytes, default 64
Jul 15 23:12:15.125882 kernel: kvm [1]: HYP mode not available
Jul 15 23:12:15.125899 kernel: Initialise system trusted keyrings
Jul 15 23:12:15.125921 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 23:12:15.125939 kernel: Key type asymmetric registered
Jul 15 23:12:15.125956 kernel: Asymmetric key parser 'x509' registered
Jul 15 23:12:15.125973 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 15 23:12:15.125991 kernel: io scheduler mq-deadline registered
Jul 15 23:12:15.126008 kernel: io scheduler kyber registered
Jul 15 23:12:15.126025 kernel: io scheduler bfq registered
Jul 15 23:12:15.126238 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jul 15 23:12:15.126336 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 15 23:12:15.126358 kernel: ACPI: button: Power Button [PWRB]
Jul 15 23:12:15.126377 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jul 15 23:12:15.126395 kernel: ACPI: button: Sleep Button [SLPB]
Jul 15 23:12:15.126413 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 23:12:15.126432 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 15 23:12:15.126638 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jul 15 23:12:15.126664 kernel: printk: legacy console [ttyS0] disabled
Jul 15 23:12:15.126682 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jul 15 23:12:15.126706 kernel: printk: legacy console [ttyS0] enabled
Jul 15 23:12:15.126724 kernel: printk: legacy bootconsole [uart0] disabled
Jul 15 23:12:15.126741 kernel: thunder_xcv, ver 1.0
Jul 15 23:12:15.126759 kernel: thunder_bgx, ver 1.0
Jul 15 23:12:15.126776 kernel: nicpf, ver 1.0
Jul 15 23:12:15.126793 kernel: nicvf, ver 1.0
Jul 15 23:12:15.126996 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 15 23:12:15.127172 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-15T23:12:14 UTC (1752621134)
Jul 15 23:12:15.127200 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 15 23:12:15.127219 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Jul 15 23:12:15.127236 kernel: watchdog: NMI not fully supported
Jul 15 23:12:15.127254 kernel: NET: Registered PF_INET6 protocol family
Jul 15 23:12:15.127294 kernel: watchdog: Hard watchdog permanently disabled
Jul 15 23:12:15.127341 kernel: Segment Routing with IPv6
Jul 15 23:12:15.127360 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 23:12:15.127378 kernel: NET: Registered PF_PACKET protocol family
Jul 15 23:12:15.127395 kernel: Key type dns_resolver registered
Jul 15 23:12:15.127419 kernel: registered taskstats version 1
Jul 15 23:12:15.127437 kernel: Loading compiled-in X.509 certificates
Jul 15 23:12:15.127455 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 2e049b1166d7080a2074348abe7e86e115624bdd'
Jul 15 23:12:15.127473 kernel: Demotion targets for Node 0: null
Jul 15 23:12:15.127490 kernel: Key type .fscrypt registered
Jul 15 23:12:15.127507 kernel: Key type fscrypt-provisioning registered
Jul 15 23:12:15.127524 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 23:12:15.127542 kernel: ima: Allocated hash algorithm: sha1
Jul 15 23:12:15.127559 kernel: ima: No architecture policies found
Jul 15 23:12:15.127581 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 15 23:12:15.127599 kernel: clk: Disabling unused clocks
Jul 15 23:12:15.127617 kernel: PM: genpd: Disabling unused power domains
Jul 15 23:12:15.127634 kernel: Warning: unable to open an initial console.
Jul 15 23:12:15.127652 kernel: Freeing unused kernel memory: 39488K
Jul 15 23:12:15.127669 kernel: Run /init as init process
Jul 15 23:12:15.127686 kernel: with arguments:
Jul 15 23:12:15.127703 kernel: /init
Jul 15 23:12:15.127720 kernel: with environment:
Jul 15 23:12:15.127737 kernel: HOME=/
Jul 15 23:12:15.127758 kernel: TERM=linux
Jul 15 23:12:15.127776 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 23:12:15.127795 systemd[1]: Successfully made /usr/ read-only.
Jul 15 23:12:15.127819 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:12:15.127840 systemd[1]: Detected virtualization amazon.
Jul 15 23:12:15.127859 systemd[1]: Detected architecture arm64.
Jul 15 23:12:15.127877 systemd[1]: Running in initrd.
Jul 15 23:12:15.127899 systemd[1]: No hostname configured, using default hostname.
Jul 15 23:12:15.127919 systemd[1]: Hostname set to .
Jul 15 23:12:15.127938 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 23:12:15.127957 systemd[1]: Queued start job for default target initrd.target.
Jul 15 23:12:15.127976 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:12:15.127995 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:12:15.128015 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 23:12:15.128034 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:12:15.128059 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 23:12:15.128079 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 23:12:15.128101 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 23:12:15.128121 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 23:12:15.128161 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:12:15.128184 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:12:15.128203 systemd[1]: Reached target paths.target - Path Units.
Jul 15 23:12:15.128229 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:12:15.128248 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:12:15.129163 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 23:12:15.129198 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 23:12:15.129220 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 23:12:15.129241 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 23:12:15.129280 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 23:12:15.129328 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:12:15.129357 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:12:15.129377 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:12:15.129396 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 23:12:15.129416 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 23:12:15.129436 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 23:12:15.129456 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 23:12:15.129478 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 15 23:12:15.129500 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 23:12:15.129520 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 23:12:15.129546 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 23:12:15.129566 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:12:15.129585 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 23:12:15.129606 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:12:15.129629 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 23:12:15.129694 systemd-journald[258]: Collecting audit messages is disabled.
Jul 15 23:12:15.129737 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 23:12:15.129757 systemd-journald[258]: Journal started
Jul 15 23:12:15.129800 systemd-journald[258]: Runtime Journal (/run/log/journal/ec2183ce73b9bf96e87bcaf27f05dcb8) is 8M, max 75.3M, 67.3M free.
Jul 15 23:12:15.082588 systemd-modules-load[260]: Inserted module 'overlay'
Jul 15 23:12:15.136338 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 23:12:15.136378 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 23:12:15.146123 kernel: Bridge firewalling registered
Jul 15 23:12:15.145226 systemd-modules-load[260]: Inserted module 'br_netfilter'
Jul 15 23:12:15.148479 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 23:12:15.154861 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:12:15.161402 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:12:15.167798 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 23:12:15.175726 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:12:15.188292 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:12:15.208551 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 23:12:15.225362 systemd-tmpfiles[275]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 15 23:12:15.243056 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:12:15.249070 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:12:15.257690 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 23:12:15.270325 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 23:12:15.276665 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:12:15.285404 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 23:12:15.322137 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578
Jul 15 23:12:15.375386 systemd-resolved[296]: Positive Trust Anchors:
Jul 15 23:12:15.375413 systemd-resolved[296]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 23:12:15.375476 systemd-resolved[296]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 23:12:15.474309 kernel: SCSI subsystem initialized
Jul 15 23:12:15.482299 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 23:12:15.494313 kernel: iscsi: registered transport (tcp)
Jul 15 23:12:15.516482 kernel: iscsi: registered transport (qla4xxx)
Jul 15 23:12:15.516578 kernel: QLogic iSCSI HBA Driver
Jul 15 23:12:15.550441 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 23:12:15.581799 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:12:15.587953 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 23:12:15.641627 kernel: random: crng init done
Jul 15 23:12:15.641836 systemd-resolved[296]: Defaulting to hostname 'linux'.
Jul 15 23:12:15.645409 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 23:12:15.650497 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:12:15.684318 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 23:12:15.692147 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 23:12:15.773324 kernel: raid6: neonx8 gen() 6502 MB/s
Jul 15 23:12:15.790299 kernel: raid6: neonx4 gen() 6543 MB/s
Jul 15 23:12:15.807297 kernel: raid6: neonx2 gen() 5444 MB/s
Jul 15 23:12:15.824298 kernel: raid6: neonx1 gen() 3959 MB/s
Jul 15 23:12:15.841297 kernel: raid6: int64x8 gen() 3657 MB/s
Jul 15 23:12:15.858298 kernel: raid6: int64x4 gen() 3707 MB/s
Jul 15 23:12:15.875297 kernel: raid6: int64x2 gen() 3600 MB/s
Jul 15 23:12:15.893259 kernel: raid6: int64x1 gen() 2767 MB/s
Jul 15 23:12:15.893315 kernel: raid6: using algorithm neonx4 gen() 6543 MB/s
Jul 15 23:12:15.911278 kernel: raid6: .... xor() 4885 MB/s, rmw enabled
Jul 15 23:12:15.911317 kernel: raid6: using neon recovery algorithm
Jul 15 23:12:15.919800 kernel: xor: measuring software checksum speed
Jul 15 23:12:15.919851 kernel: 8regs : 12931 MB/sec
Jul 15 23:12:15.920987 kernel: 32regs : 13039 MB/sec
Jul 15 23:12:15.923316 kernel: arm64_neon : 8693 MB/sec
Jul 15 23:12:15.923348 kernel: xor: using function: 32regs (13039 MB/sec)
Jul 15 23:12:16.015323 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 15 23:12:16.026730 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 23:12:16.033743 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:12:16.083614 systemd-udevd[508]: Using default interface naming scheme 'v255'.
Jul 15 23:12:16.095612 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:12:16.103431 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 15 23:12:16.147706 dracut-pre-trigger[513]: rd.md=0: removing MD RAID activation
Jul 15 23:12:16.191169 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 23:12:16.196393 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 23:12:16.330547 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:12:16.344771 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 15 23:12:16.487291 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jul 15 23:12:16.487363 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 15 23:12:16.498328 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 15 23:12:16.508513 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 15 23:12:16.508575 kernel: GPT:9289727 != 16777215
Jul 15 23:12:16.508609 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 15 23:12:16.510323 kernel: GPT:9289727 != 16777215
Jul 15 23:12:16.512054 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 15 23:12:16.513065 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 15 23:12:16.524570 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 15 23:12:16.524634 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jul 15 23:12:16.526539 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:12:16.529027 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:12:16.536434 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 15 23:12:16.538677 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 15 23:12:16.539218 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:12:16.540715 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:12:16.551558 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:12:16.565300 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:53:d2:fe:b3:2d
Jul 15 23:12:16.567416 (udev-worker)[550]: Network interface NamePolicy= disabled on kernel command line.
Jul 15 23:12:16.592294 kernel: nvme nvme0: using unchecked data buffer
Jul 15 23:12:16.611127 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:12:16.717176 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 15 23:12:16.759508 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 15 23:12:16.766410 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 15 23:12:16.814316 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 15 23:12:16.840243 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 15 23:12:16.867179 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 15 23:12:16.872651 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 23:12:16.875372 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:12:16.883913 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 23:12:16.887801 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 15 23:12:16.900971 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 15 23:12:16.917011 disk-uuid[685]: Primary Header is updated.
Jul 15 23:12:16.917011 disk-uuid[685]: Secondary Entries is updated.
Jul 15 23:12:16.917011 disk-uuid[685]: Secondary Header is updated.
Jul 15 23:12:16.930370 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 15 23:12:16.950449 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 23:12:17.953365 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 15 23:12:17.954313 disk-uuid[686]: The operation has completed successfully.
Jul 15 23:12:18.134848 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 15 23:12:18.135409 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 15 23:12:18.220144 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 15 23:12:18.248802 sh[954]: Success
Jul 15 23:12:18.277145 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 15 23:12:18.277219 kernel: device-mapper: uevent: version 1.0.3
Jul 15 23:12:18.280320 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 15 23:12:18.291324 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 15 23:12:18.395648 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 15 23:12:18.402806 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 15 23:12:18.424980 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 15 23:12:18.447495 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 15 23:12:18.447558 kernel: BTRFS: device fsid e70e9257-c19d-4e0a-b2ee-631da7d0eb2b devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (977)
Jul 15 23:12:18.452321 kernel: BTRFS info (device dm-0): first mount of filesystem e70e9257-c19d-4e0a-b2ee-631da7d0eb2b
Jul 15 23:12:18.452373 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 15 23:12:18.452398 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 15 23:12:18.494919 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 15 23:12:18.499332 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 23:12:18.504224 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 15 23:12:18.509637 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 15 23:12:18.521833 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 15 23:12:18.572556 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1010)
Jul 15 23:12:18.572616 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:12:18.574319 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 23:12:18.577219 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 15 23:12:18.602324 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:12:18.605395 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 15 23:12:18.610015 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 15 23:12:18.703511 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 23:12:18.713365 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 23:12:18.799471 systemd-networkd[1147]: lo: Link UP
Jul 15 23:12:18.799493 systemd-networkd[1147]: lo: Gained carrier
Jul 15 23:12:18.806379 systemd-networkd[1147]: Enumeration completed
Jul 15 23:12:18.806524 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 23:12:18.811397 systemd-networkd[1147]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:12:18.811410 systemd-networkd[1147]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 23:12:18.813530 systemd[1]: Reached target network.target - Network.
Jul 15 23:12:18.833377 systemd-networkd[1147]: eth0: Link UP
Jul 15 23:12:18.833384 systemd-networkd[1147]: eth0: Gained carrier
Jul 15 23:12:18.833406 systemd-networkd[1147]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:12:18.869600 systemd-networkd[1147]: eth0: DHCPv4 address 172.31.27.40/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 15 23:12:18.940387 ignition[1082]: Ignition 2.21.0
Jul 15 23:12:18.940418 ignition[1082]: Stage: fetch-offline
Jul 15 23:12:18.941310 ignition[1082]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:12:18.941335 ignition[1082]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:12:18.949013 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 23:12:18.941923 ignition[1082]: Ignition finished successfully
Jul 15 23:12:18.959203 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 15 23:12:19.014471 ignition[1159]: Ignition 2.21.0
Jul 15 23:12:19.014962 ignition[1159]: Stage: fetch
Jul 15 23:12:19.022459 ignition[1159]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:12:19.022496 ignition[1159]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:12:19.022813 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:12:19.047991 ignition[1159]: PUT result: OK
Jul 15 23:12:19.051344 ignition[1159]: parsed url from cmdline: ""
Jul 15 23:12:19.051370 ignition[1159]: no config URL provided
Jul 15 23:12:19.051386 ignition[1159]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 23:12:19.051413 ignition[1159]: no config at "/usr/lib/ignition/user.ign"
Jul 15 23:12:19.051447 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:12:19.053386 ignition[1159]: PUT result: OK
Jul 15 23:12:19.054349 ignition[1159]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 15 23:12:19.061365 ignition[1159]: GET result: OK
Jul 15 23:12:19.061890 ignition[1159]: parsing config with SHA512: d534f0aedbe6e058aa2364b5d415e925ad075762b07694e9fc6b606634f3f45ea3b5f0842dbfb702cc093df2035a8c7a3c7d3c6452d12a1c8c66f9b82cdc2cee
Jul 15 23:12:19.073336 unknown[1159]: fetched base config from "system"
Jul 15 23:12:19.073357 unknown[1159]: fetched base config from "system"
Jul 15 23:12:19.075748 ignition[1159]: fetch: fetch complete
Jul 15 23:12:19.073371 unknown[1159]: fetched user config from "aws"
Jul 15 23:12:19.075763 ignition[1159]: fetch: fetch passed
Jul 15 23:12:19.075862 ignition[1159]: Ignition finished successfully
Jul 15 23:12:19.088324 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 15 23:12:19.093051 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 15 23:12:19.141863 ignition[1165]: Ignition 2.21.0
Jul 15 23:12:19.142429 ignition[1165]: Stage: kargs
Jul 15 23:12:19.142977 ignition[1165]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:12:19.143002 ignition[1165]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:12:19.143170 ignition[1165]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:12:19.152655 ignition[1165]: PUT result: OK
Jul 15 23:12:19.161778 ignition[1165]: kargs: kargs passed
Jul 15 23:12:19.161881 ignition[1165]: Ignition finished successfully
Jul 15 23:12:19.167840 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 15 23:12:19.175469 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 15 23:12:19.211714 ignition[1171]: Ignition 2.21.0
Jul 15 23:12:19.212236 ignition[1171]: Stage: disks
Jul 15 23:12:19.212806 ignition[1171]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:12:19.212838 ignition[1171]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:12:19.212987 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:12:19.217894 ignition[1171]: PUT result: OK
Jul 15 23:12:19.227339 ignition[1171]: disks: disks passed
Jul 15 23:12:19.227455 ignition[1171]: Ignition finished successfully
Jul 15 23:12:19.233098 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 15 23:12:19.237904 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 15 23:12:19.240754 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 15 23:12:19.249152 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 23:12:19.251595 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 23:12:19.258768 systemd[1]: Reached target basic.target - Basic System.
Jul 15 23:12:19.264450 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 15 23:12:19.334386 systemd-fsck[1179]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 15 23:12:19.338105 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 15 23:12:19.343764 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 15 23:12:19.486306 kernel: EXT4-fs (nvme0n1p9): mounted filesystem db08fdf6-07fd-45a1-bb3b-a7d0399d70fd r/w with ordered data mode. Quota mode: none.
Jul 15 23:12:19.487197 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 15 23:12:19.491524 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 15 23:12:19.498344 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 23:12:19.502027 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 15 23:12:19.511518 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 15 23:12:19.511608 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 15 23:12:19.511659 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 23:12:19.538801 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 15 23:12:19.543821 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 15 23:12:19.562293 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1198)
Jul 15 23:12:19.567800 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:12:19.567867 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 23:12:19.569476 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 15 23:12:19.579299 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 23:12:19.665332 initrd-setup-root[1223]: cut: /sysroot/etc/passwd: No such file or directory
Jul 15 23:12:19.675478 initrd-setup-root[1230]: cut: /sysroot/etc/group: No such file or directory
Jul 15 23:12:19.684113 initrd-setup-root[1237]: cut: /sysroot/etc/shadow: No such file or directory
Jul 15 23:12:19.692345 initrd-setup-root[1244]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 15 23:12:19.841844 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 15 23:12:19.849808 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 15 23:12:19.856401 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 15 23:12:19.882894 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 15 23:12:19.886549 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:12:19.921368 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 15 23:12:19.935133 ignition[1312]: INFO : Ignition 2.21.0
Jul 15 23:12:19.935133 ignition[1312]: INFO : Stage: mount
Jul 15 23:12:19.938681 ignition[1312]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:12:19.938681 ignition[1312]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:12:19.938681 ignition[1312]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:12:19.946375 ignition[1312]: INFO : PUT result: OK
Jul 15 23:12:19.952802 ignition[1312]: INFO : mount: mount passed
Jul 15 23:12:19.954845 ignition[1312]: INFO : Ignition finished successfully
Jul 15 23:12:19.958337 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 15 23:12:19.964667 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 15 23:12:20.490365 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 23:12:20.535302 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1324)
Jul 15 23:12:20.539612 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:12:20.539668 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 23:12:20.540937 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 15 23:12:20.549062 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 23:12:20.601670 ignition[1341]: INFO : Ignition 2.21.0
Jul 15 23:12:20.601670 ignition[1341]: INFO : Stage: files
Jul 15 23:12:20.606654 ignition[1341]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:12:20.606654 ignition[1341]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:12:20.606654 ignition[1341]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:12:20.606654 ignition[1341]: INFO : PUT result: OK
Jul 15 23:12:20.618468 ignition[1341]: DEBUG : files: compiled without relabeling support, skipping
Jul 15 23:12:20.622081 ignition[1341]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 15 23:12:20.622081 ignition[1341]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 15 23:12:20.633533 ignition[1341]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 15 23:12:20.636877 ignition[1341]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 15 23:12:20.640336 unknown[1341]: wrote ssh authorized keys file for user: core
Jul 15 23:12:20.643015 ignition[1341]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 15 23:12:20.647761 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 15 23:12:20.647761 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 15 23:12:20.730358 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 15 23:12:20.769405 systemd-networkd[1147]: eth0: Gained IPv6LL
Jul 15 23:12:20.893578 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 15 23:12:20.899022 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 15 23:12:20.899022 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 15 23:12:20.899022 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 23:12:20.899022 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 23:12:20.899022 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 23:12:20.899022 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 23:12:20.899022 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 23:12:20.899022 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 23:12:20.930914 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 23:12:20.930914 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 23:12:20.930914 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 15 23:12:20.930914 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 15 23:12:20.930914 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 15 23:12:20.930914 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 15 23:12:21.616423 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 15 23:12:21.974680 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 15 23:12:21.979492 ignition[1341]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 15 23:12:21.979492 ignition[1341]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 23:12:21.988226 ignition[1341]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 23:12:21.988226 ignition[1341]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 15 23:12:21.988226 ignition[1341]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 15 23:12:21.988226 ignition[1341]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 15 23:12:21.988226 ignition[1341]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 23:12:21.988226 ignition[1341]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 23:12:21.988226 ignition[1341]: INFO : files: files passed
Jul 15 23:12:21.988226 ignition[1341]: INFO : Ignition finished successfully
Jul 15 23:12:22.010543 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 15 23:12:22.015873 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 15 23:12:22.036879 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 15 23:12:22.046565 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 15 23:12:22.046787 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 15 23:12:22.074584 initrd-setup-root-after-ignition[1371]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:12:22.074584 initrd-setup-root-after-ignition[1371]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:12:22.081843 initrd-setup-root-after-ignition[1375]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:12:22.080113 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 23:12:22.090227 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 15 23:12:22.097470 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 15 23:12:22.199341 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 23:12:22.200316 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 15 23:12:22.207423 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 15 23:12:22.209861 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 15 23:12:22.216772 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 15 23:12:22.220442 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 15 23:12:22.263198 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 23:12:22.271134 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 15 23:12:22.324520 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:12:22.330067 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:12:22.333175 systemd[1]: Stopped target timers.target - Timer Units.
Jul 15 23:12:22.339306 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 23:12:22.339549 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 23:12:22.346993 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 15 23:12:22.351418 systemd[1]: Stopped target basic.target - Basic System.
Jul 15 23:12:22.354085 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 15 23:12:22.360765 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 23:12:22.363474 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 15 23:12:22.370993 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 23:12:22.374095 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 15 23:12:22.380812 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 23:12:22.383803 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 15 23:12:22.391210 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 15 23:12:22.393826 systemd[1]: Stopped target swap.target - Swaps.
Jul 15 23:12:22.399482 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 23:12:22.399707 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 23:12:22.405643 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:12:22.411326 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:12:22.414330 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 15 23:12:22.416431 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:12:22.420117 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 23:12:22.420373 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 15 23:12:22.430182 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 23:12:22.430465 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 23:12:22.438246 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 23:12:22.438469 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 15 23:12:22.446684 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 15 23:12:22.455152 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 15 23:12:22.465489 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 23:12:22.467502 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:12:22.481692 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 23:12:22.482299 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 23:12:22.510946 ignition[1395]: INFO : Ignition 2.21.0
Jul 15 23:12:22.510946 ignition[1395]: INFO : Stage: umount
Jul 15 23:12:22.516349 ignition[1395]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:12:22.516349 ignition[1395]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 15 23:12:22.516349 ignition[1395]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 15 23:12:22.512781 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 23:12:22.513235 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 15 23:12:22.533910 ignition[1395]: INFO : PUT result: OK
Jul 15 23:12:22.522244 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 23:12:22.540922 ignition[1395]: INFO : umount: umount passed
Jul 15 23:12:22.546429 ignition[1395]: INFO : Ignition finished successfully
Jul 15 23:12:22.546143 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 23:12:22.550656 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 15 23:12:22.557023 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 23:12:22.557126 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 15 23:12:22.559768 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 23:12:22.559899 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 15 23:12:22.566343 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 15 23:12:22.566885 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 15 23:12:22.570763 systemd[1]: Stopped target network.target - Network.
Jul 15 23:12:22.573703 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 15 23:12:22.573808 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 23:12:22.581816 systemd[1]: Stopped target paths.target - Path Units.
Jul 15 23:12:22.584113 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 15 23:12:22.592660 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:12:22.595445 systemd[1]: Stopped target slices.target - Slice Units.
Jul 15 23:12:22.598082 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 15 23:12:22.608284 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 23:12:22.609121 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 23:12:22.618991 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 23:12:22.619073 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 23:12:22.621713 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 23:12:22.621810 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 15 23:12:22.631460 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 15 23:12:22.631557 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 15 23:12:22.634897 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 15 23:12:22.637653 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 15 23:12:22.663946 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 23:12:22.665649 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 15 23:12:22.671488 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 15 23:12:22.671912 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 23:12:22.672956 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 15 23:12:22.686378 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 15 23:12:22.690200 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 15 23:12:22.695141 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 23:12:22.695225 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:12:22.703627 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 15 23:12:22.705772 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 23:12:22.705887 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 23:12:22.709258 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 23:12:22.709476 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:12:22.720756 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 23:12:22.720859 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:12:22.728296 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 15 23:12:22.728403 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:12:22.731524 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:12:22.737637 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 23:12:22.737762 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:12:22.760035 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 15 23:12:22.760280 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 15 23:12:22.771658 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 15 23:12:22.771788 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 15 23:12:22.791579 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 15 23:12:22.794485 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:12:22.798227 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 15 23:12:22.798390 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:12:22.807925 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 15 23:12:22.808017 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:12:22.812999 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 15 23:12:22.813091 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 23:12:22.815694 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 15 23:12:22.815778 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 15 23:12:22.820939 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 23:12:22.821032 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 23:12:22.831543 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 15 23:12:22.834403 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 15 23:12:22.834542 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:12:22.845595 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 15 23:12:22.845704 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:12:22.864279 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 15 23:12:22.864380 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:12:22.873345 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 23:12:22.873455 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:12:22.886966 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:12:22.887076 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:12:22.894818 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 15 23:12:22.894926 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 15 23:12:22.895009 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 15 23:12:22.895097 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:12:22.896112 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 15 23:12:22.896359 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 15 23:12:22.904773 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 15 23:12:22.906591 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 15 23:12:22.915757 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 15 23:12:22.929243 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 15 23:12:22.970909 systemd[1]: Switching root.
Jul 15 23:12:23.024366 systemd-journald[258]: Journal stopped
Jul 15 23:12:25.062499 systemd-journald[258]: Received SIGTERM from PID 1 (systemd).
Jul 15 23:12:25.062622 kernel: SELinux: policy capability network_peer_controls=1
Jul 15 23:12:25.062662 kernel: SELinux: policy capability open_perms=1
Jul 15 23:12:25.062762 kernel: SELinux: policy capability extended_socket_class=1
Jul 15 23:12:25.062799 kernel: SELinux: policy capability always_check_network=0
Jul 15 23:12:25.062828 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 15 23:12:25.062857 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 15 23:12:25.062886 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 15 23:12:25.062917 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 15 23:12:25.062953 kernel: SELinux: policy capability userspace_initial_context=0
Jul 15 23:12:25.062983 kernel: audit: type=1403 audit(1752621143.257:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 15 23:12:25.063021 systemd[1]: Successfully loaded SELinux policy in 69.932ms.
Jul 15 23:12:25.063067 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.617ms.
Jul 15 23:12:25.063102 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:12:25.063133 systemd[1]: Detected virtualization amazon.
Jul 15 23:12:25.063162 systemd[1]: Detected architecture arm64.
Jul 15 23:12:25.063192 systemd[1]: Detected first boot.
Jul 15 23:12:25.063222 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 23:12:25.063252 zram_generator::config[1438]: No configuration found.
Jul 15 23:12:25.063872 kernel: NET: Registered PF_VSOCK protocol family
Jul 15 23:12:25.063912 systemd[1]: Populated /etc with preset unit settings.
Jul 15 23:12:25.063951 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 15 23:12:25.063980 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 15 23:12:25.064010 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 15 23:12:25.064039 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 15 23:12:25.064090 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 15 23:12:25.064125 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 15 23:12:25.064290 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 15 23:12:25.064323 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 15 23:12:25.064358 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 15 23:12:25.064398 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 15 23:12:25.064428 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 15 23:12:25.064457 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 15 23:12:25.064485 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:12:25.064515 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:12:25.065689 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 15 23:12:25.065733 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 15 23:12:25.065764 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 15 23:12:25.065808 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:12:25.065839 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 15 23:12:25.065867 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:12:25.065898 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:12:25.065928 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 15 23:12:25.065956 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 15 23:12:25.065984 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 15 23:12:25.066014 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 15 23:12:25.066658 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:12:25.066696 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 23:12:25.066725 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:12:25.066757 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:12:25.066787 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 15 23:12:25.066815 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 15 23:12:25.066845 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 15 23:12:25.066873 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:12:25.067463 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:12:25.067512 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:12:25.067551 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 15 23:12:25.067582 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 15 23:12:25.067609 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 15 23:12:25.067641 systemd[1]: Mounting media.mount - External Media Directory...
Jul 15 23:12:25.067671 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 15 23:12:25.067699 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 15 23:12:25.067725 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 15 23:12:25.067754 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 15 23:12:25.067786 systemd[1]: Reached target machines.target - Containers.
Jul 15 23:12:25.070047 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 15 23:12:25.070085 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:12:25.070119 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 23:12:25.070147 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 15 23:12:25.070175 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:12:25.070205 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 23:12:25.070235 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:12:25.070289 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 15 23:12:25.070320 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:12:25.070349 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 15 23:12:25.070382 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 15 23:12:25.070409 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 15 23:12:25.070539 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 15 23:12:25.070577 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 15 23:12:25.070617 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:12:25.070650 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 23:12:25.070681 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 23:12:25.070708 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 23:12:25.070735 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 15 23:12:25.070763 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 15 23:12:25.070789 kernel: loop: module loaded
Jul 15 23:12:25.070817 kernel: fuse: init (API version 7.41)
Jul 15 23:12:25.070844 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 23:12:25.070876 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 15 23:12:25.070906 systemd[1]: Stopped verity-setup.service.
Jul 15 23:12:25.070935 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 15 23:12:25.070967 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 15 23:12:25.071636 systemd[1]: Mounted media.mount - External Media Directory.
Jul 15 23:12:25.071677 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 15 23:12:25.071711 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 15 23:12:25.071742 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 15 23:12:25.071774 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:12:25.071802 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 15 23:12:25.071829 kernel: ACPI: bus type drm_connector registered
Jul 15 23:12:25.071856 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 15 23:12:25.071891 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:12:25.071919 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:12:25.071947 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 23:12:25.071974 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 23:12:25.072002 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:12:25.072030 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:12:25.073345 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 15 23:12:25.073388 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 15 23:12:25.073417 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:12:25.073454 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:12:25.073483 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:12:25.073513 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 15 23:12:25.073544 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 15 23:12:25.073572 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 15 23:12:25.073605 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 15 23:12:25.073633 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 15 23:12:25.073660 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 23:12:25.073692 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 15 23:12:25.073723 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 15 23:12:25.073755 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:12:25.073783 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 15 23:12:25.083004 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 23:12:25.083064 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 15 23:12:25.083095 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 23:12:25.083125 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:12:25.083201 systemd-journald[1521]: Collecting audit messages is disabled.
Jul 15 23:12:25.083260 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 15 23:12:25.083313 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 23:12:25.083371 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:12:25.083409 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 15 23:12:25.083438 systemd-journald[1521]: Journal started
Jul 15 23:12:25.083487 systemd-journald[1521]: Runtime Journal (/run/log/journal/ec2183ce73b9bf96e87bcaf27f05dcb8) is 8M, max 75.3M, 67.3M free.
Jul 15 23:12:24.295388 systemd[1]: Queued start job for default target multi-user.target.
Jul 15 23:12:25.091691 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 23:12:24.310941 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 15 23:12:24.311771 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 15 23:12:25.091071 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 15 23:12:25.097913 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 23:12:25.115629 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 15 23:12:25.119518 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 15 23:12:25.191980 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 15 23:12:25.195170 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 15 23:12:25.208164 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 15 23:12:25.239492 systemd-journald[1521]: Time spent on flushing to /var/log/journal/ec2183ce73b9bf96e87bcaf27f05dcb8 is 124.235ms for 933 entries.
Jul 15 23:12:25.239492 systemd-journald[1521]: System Journal (/var/log/journal/ec2183ce73b9bf96e87bcaf27f05dcb8) is 8M, max 195.6M, 187.6M free.
Jul 15 23:12:25.378458 systemd-journald[1521]: Received client request to flush runtime journal.
Jul 15 23:12:25.378529 kernel: loop0: detected capacity change from 0 to 138376
Jul 15 23:12:25.378563 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 15 23:12:25.378595 kernel: loop1: detected capacity change from 0 to 61240
Jul 15 23:12:25.283086 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 15 23:12:25.296965 systemd-tmpfiles[1555]: ACLs are not supported, ignoring.
Jul 15 23:12:25.296989 systemd-tmpfiles[1555]: ACLs are not supported, ignoring.
Jul 15 23:12:25.298531 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:12:25.305725 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:12:25.318255 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 15 23:12:25.325083 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:12:25.335692 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 15 23:12:25.388354 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 15 23:12:25.431469 kernel: loop2: detected capacity change from 0 to 207008
Jul 15 23:12:25.477170 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 15 23:12:25.482660 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 23:12:25.501314 kernel: loop3: detected capacity change from 0 to 107312
Jul 15 23:12:25.553224 systemd-tmpfiles[1596]: ACLs are not supported, ignoring.
Jul 15 23:12:25.554377 systemd-tmpfiles[1596]: ACLs are not supported, ignoring.
Jul 15 23:12:25.563342 kernel: loop4: detected capacity change from 0 to 138376
Jul 15 23:12:25.565529 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:12:25.597345 kernel: loop5: detected capacity change from 0 to 61240
Jul 15 23:12:25.618294 kernel: loop6: detected capacity change from 0 to 207008
Jul 15 23:12:25.652303 kernel: loop7: detected capacity change from 0 to 107312
Jul 15 23:12:25.682369 (sd-merge)[1599]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jul 15 23:12:25.686455 (sd-merge)[1599]: Merged extensions into '/usr'.
Jul 15 23:12:25.699441 systemd[1]: Reload requested from client PID 1554 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 15 23:12:25.699475 systemd[1]: Reloading...
Jul 15 23:12:25.904657 zram_generator::config[1630]: No configuration found.
Jul 15 23:12:26.072492 ldconfig[1547]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 15 23:12:26.174977 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:12:26.369006 systemd[1]: Reloading finished in 668 ms.
Jul 15 23:12:26.390395 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 15 23:12:26.398813 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 15 23:12:26.415626 systemd[1]: Starting ensure-sysext.service...
Jul 15 23:12:26.423524 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 23:12:26.463760 systemd[1]: Reload requested from client PID 1679 ('systemctl') (unit ensure-sysext.service)...
Jul 15 23:12:26.463789 systemd[1]: Reloading...
Jul 15 23:12:26.514574 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 15 23:12:26.514643 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 15 23:12:26.515184 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 15 23:12:26.516413 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 15 23:12:26.520463 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 15 23:12:26.521304 systemd-tmpfiles[1680]: ACLs are not supported, ignoring.
Jul 15 23:12:26.521455 systemd-tmpfiles[1680]: ACLs are not supported, ignoring.
Jul 15 23:12:26.532564 systemd-tmpfiles[1680]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 23:12:26.532734 systemd-tmpfiles[1680]: Skipping /boot
Jul 15 23:12:26.559721 systemd-tmpfiles[1680]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 23:12:26.559893 systemd-tmpfiles[1680]: Skipping /boot
Jul 15 23:12:26.617329 zram_generator::config[1710]: No configuration found.
Jul 15 23:12:26.804431 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:12:26.984142 systemd[1]: Reloading finished in 519 ms.
Jul 15 23:12:27.011412 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 15 23:12:27.030833 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:12:27.046894 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 23:12:27.062749 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 15 23:12:27.074753 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 15 23:12:27.091072 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 23:12:27.102674 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:12:27.112864 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 15 23:12:27.127944 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:12:27.132847 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:12:27.145883 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:12:27.155878 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:12:27.161246 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:12:27.161542 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:12:27.180012 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:12:27.187843 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 23:12:27.198632 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:12:27.198893 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:12:27.199250 systemd[1]: Reached target time-set.target - System Time Set.
Jul 15 23:12:27.223204 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 15 23:12:27.235370 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 15 23:12:27.279136 systemd[1]: Finished ensure-sysext.service.
Jul 15 23:12:27.285008 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 15 23:12:27.297259 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:12:27.298447 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:12:27.306080 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:12:27.306144 systemd-udevd[1772]: Using default interface naming scheme 'v255'.
Jul 15 23:12:27.307403 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:12:27.314553 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:12:27.315247 augenrules[1793]: No rules
Jul 15 23:12:27.316695 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:12:27.322139 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 23:12:27.322803 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 23:12:27.328925 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 23:12:27.329417 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 23:12:27.348934 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 23:12:27.349608 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 23:12:27.355734 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 15 23:12:27.358392 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 15 23:12:27.367072 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 15 23:12:27.393872 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:12:27.400326 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 23:12:27.414676 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 15 23:12:27.470650 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 15 23:12:27.622360 (udev-worker)[1814]: Network interface NamePolicy= disabled on kernel command line.
Jul 15 23:12:27.673436 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 15 23:12:27.921204 systemd-networkd[1808]: lo: Link UP
Jul 15 23:12:27.921225 systemd-networkd[1808]: lo: Gained carrier
Jul 15 23:12:27.926069 systemd-networkd[1808]: Enumeration completed
Jul 15 23:12:27.926397 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 23:12:27.929329 systemd-networkd[1808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:12:27.929338 systemd-networkd[1808]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 23:12:27.935020 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 15 23:12:27.942785 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 15 23:12:27.987732 systemd-networkd[1808]: eth0: Link UP
Jul 15 23:12:28.010751 systemd-networkd[1808]: eth0: Gained carrier
Jul 15 23:12:28.010801 systemd-networkd[1808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:12:28.061505 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 15 23:12:28.064487 systemd-networkd[1808]: eth0: DHCPv4 address 172.31.27.40/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 15 23:12:28.074337 systemd-resolved[1770]: Positive Trust Anchors:
Jul 15 23:12:28.074373 systemd-resolved[1770]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 23:12:28.074437 systemd-resolved[1770]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 23:12:28.092826 systemd-resolved[1770]: Defaulting to hostname 'linux'.
Jul 15 23:12:28.097571 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 23:12:28.100378 systemd[1]: Reached target network.target - Network.
Jul 15 23:12:28.102557 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:12:28.105380 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 23:12:28.107866 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 15 23:12:28.110624 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 15 23:12:28.113788 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 15 23:12:28.116372 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 15 23:12:28.119130 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 15 23:12:28.122088 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 15 23:12:28.122147 systemd[1]: Reached target paths.target - Path Units.
Jul 15 23:12:28.124327 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 23:12:28.130525 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 15 23:12:28.135976 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 15 23:12:28.149314 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 15 23:12:28.153676 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 15 23:12:28.156585 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 15 23:12:28.173047 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 15 23:12:28.176203 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 15 23:12:28.181394 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 15 23:12:28.184671 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 23:12:28.186909 systemd[1]: Reached target basic.target - Basic System.
Jul 15 23:12:28.189017 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 15 23:12:28.189067 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 15 23:12:28.192798 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 15 23:12:28.198808 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 15 23:12:28.203572 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 15 23:12:28.208652 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 15 23:12:28.217640 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 15 23:12:28.235678 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 15 23:12:28.236912 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 15 23:12:28.242080 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 15 23:12:28.251611 systemd[1]: Started ntpd.service - Network Time Service.
Jul 15 23:12:28.261597 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 15 23:12:28.273135 systemd[1]: Starting setup-oem.service - Setup OEM...
Jul 15 23:12:28.284782 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 15 23:12:28.294007 jq[1934]: false
Jul 15 23:12:28.297571 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 15 23:12:28.308718 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 15 23:12:28.313225 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 15 23:12:28.314169 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 15 23:12:28.321636 systemd[1]: Starting update-engine.service - Update Engine...
Jul 15 23:12:28.332553 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 15 23:12:28.339238 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 15 23:12:28.342666 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 15 23:12:28.345355 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 15 23:12:28.350008 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 15 23:12:28.352770 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 15 23:12:28.436975 tar[1950]: linux-arm64/LICENSE
Jul 15 23:12:28.442289 tar[1950]: linux-arm64/helm
Jul 15 23:12:28.468521 extend-filesystems[1935]: Found /dev/nvme0n1p6
Jul 15 23:12:28.495717 systemd[1]: motdgen.service: Deactivated successfully.
Jul 15 23:12:28.496170 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 15 23:12:28.504180 jq[1947]: true
Jul 15 23:12:28.517320 extend-filesystems[1935]: Found /dev/nvme0n1p9
Jul 15 23:12:28.533423 extend-filesystems[1935]: Checking size of /dev/nvme0n1p9
Jul 15 23:12:28.541916 (ntainerd)[1979]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 15 23:12:28.578380 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 15 23:12:28.578064 dbus-daemon[1930]: [system] SELinux support is enabled
Jul 15 23:12:28.586920 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 15 23:12:28.586986 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 15 23:12:28.589906 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 15 23:12:28.589947 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 15 23:12:28.605322 jq[1983]: true
Jul 15 23:12:28.635307 extend-filesystems[1935]: Resized partition /dev/nvme0n1p9
Jul 15 23:12:28.645718 dbus-daemon[1930]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1808 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 15 23:12:28.652819 extend-filesystems[2001]: resize2fs 1.47.2 (1-Jan-2025)
Jul 15 23:12:28.656174 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jul 15 23:12:28.660688 systemd[1]: Finished setup-oem.service - Setup OEM.
Jul 15 23:12:28.671623 coreos-metadata[1929]: Jul 15 23:12:28.671 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jul 15 23:12:28.680751 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jul 15 23:12:28.682832 coreos-metadata[1929]: Jul 15 23:12:28.682 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jul 15 23:12:28.695369 coreos-metadata[1929]: Jul 15 23:12:28.695 INFO Fetch successful
Jul 15 23:12:28.695369 coreos-metadata[1929]: Jul 15 23:12:28.695 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jul 15 23:12:28.699492 coreos-metadata[1929]: Jul 15 23:12:28.699 INFO Fetch successful
Jul 15 23:12:28.699492 coreos-metadata[1929]: Jul 15 23:12:28.699 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jul 15 23:12:28.700965 coreos-metadata[1929]: Jul 15 23:12:28.700 INFO Fetch successful
Jul 15 23:12:28.700965 coreos-metadata[1929]: Jul 15 23:12:28.700 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jul 15 23:12:28.706501 coreos-metadata[1929]: Jul 15 23:12:28.706 INFO Fetch successful
Jul 15 23:12:28.706501 coreos-metadata[1929]: Jul 15 23:12:28.706 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jul 15 23:12:28.709173 coreos-metadata[1929]: Jul 15 23:12:28.709 INFO Fetch failed with 404: resource not found
Jul 15 23:12:28.709173 coreos-metadata[1929]: Jul 15 23:12:28.709 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jul 15 23:12:28.713611 coreos-metadata[1929]: Jul 15 23:12:28.713 INFO Fetch successful
Jul 15 23:12:28.713611 coreos-metadata[1929]: Jul 15 23:12:28.713 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jul 15 23:12:28.714810 coreos-metadata[1929]: Jul 15 23:12:28.714 INFO Fetch successful
Jul 15 23:12:28.714810 coreos-metadata[1929]: Jul 15 23:12:28.714 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jul 15 23:12:28.720295 coreos-metadata[1929]: Jul 15 23:12:28.718 INFO Fetch successful
Jul 15 23:12:28.720295 coreos-metadata[1929]: Jul 15 23:12:28.718 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jul 15 23:12:28.734352 update_engine[1945]: I20250715 23:12:28.728880 1945 main.cc:92] Flatcar Update Engine starting
Jul 15 23:12:28.740873 coreos-metadata[1929]: Jul 15 23:12:28.732 INFO Fetch successful
Jul 15 23:12:28.740873 coreos-metadata[1929]: Jul 15 23:12:28.732 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jul 15 23:12:28.740873 coreos-metadata[1929]: Jul 15 23:12:28.735 INFO Fetch successful
Jul 15 23:12:28.736196 systemd[1]: Started update-engine.service - Update Engine.
Jul 15 23:12:28.741155 update_engine[1945]: I20250715 23:12:28.740528 1945 update_check_scheduler.cc:74] Next update check in 3m4s
Jul 15 23:12:28.741401 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 15 23:12:28.789309 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jul 15 23:12:28.810303 extend-filesystems[2001]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jul 15 23:12:28.810303 extend-filesystems[2001]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 15 23:12:28.810303 extend-filesystems[2001]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jul 15 23:12:28.823949 extend-filesystems[1935]: Resized filesystem in /dev/nvme0n1p9
Jul 15 23:12:28.820577 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 15 23:12:28.825646 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 15 23:12:28.833255 bash[2022]: Updated "/home/core/.ssh/authorized_keys"
Jul 15 23:12:28.867363 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 15 23:12:28.885383 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 15 23:12:28.937642 systemd[1]: Starting sshkeys.service...
Jul 15 23:12:28.944515 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 15 23:12:29.003325 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 15 23:12:29.006156 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 15 23:12:29.050104 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 15 23:12:29.058424 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 15 23:12:29.196485 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 15 23:12:29.223034 containerd[1979]: time="2025-07-15T23:12:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 15 23:12:29.244878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:12:29.250429 containerd[1979]: time="2025-07-15T23:12:29.249425037Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 15 23:12:29.327303 coreos-metadata[2045]: Jul 15 23:12:29.326 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 15 23:12:29.330117 coreos-metadata[2045]: Jul 15 23:12:29.330 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 15 23:12:29.333285 coreos-metadata[2045]: Jul 15 23:12:29.333 INFO Fetch successful Jul 15 23:12:29.333285 coreos-metadata[2045]: Jul 15 23:12:29.333 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 15 23:12:29.335958 coreos-metadata[2045]: Jul 15 23:12:29.335 INFO Fetch successful Jul 15 23:12:29.338055 unknown[2045]: wrote ssh authorized keys file for user: core Jul 15 23:12:29.344007 containerd[1979]: time="2025-07-15T23:12:29.342477405Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.26µs" Jul 15 23:12:29.344007 containerd[1979]: time="2025-07-15T23:12:29.342548937Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 23:12:29.344007 containerd[1979]: time="2025-07-15T23:12:29.342588681Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 23:12:29.344007 containerd[1979]: time="2025-07-15T23:12:29.342908133Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 23:12:29.346411 containerd[1979]: time="2025-07-15T23:12:29.342944457Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 23:12:29.346539 containerd[1979]: time="2025-07-15T23:12:29.346456461Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 
15 23:12:29.346672 containerd[1979]: time="2025-07-15T23:12:29.346626909Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 23:12:29.346727 containerd[1979]: time="2025-07-15T23:12:29.346665129Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 23:12:29.349901 containerd[1979]: time="2025-07-15T23:12:29.347239077Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 23:12:29.349901 containerd[1979]: time="2025-07-15T23:12:29.348805305Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 23:12:29.349901 containerd[1979]: time="2025-07-15T23:12:29.348864789Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 23:12:29.349901 containerd[1979]: time="2025-07-15T23:12:29.348887445Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 23:12:29.349901 containerd[1979]: time="2025-07-15T23:12:29.349110297Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 23:12:29.352293 containerd[1979]: time="2025-07-15T23:12:29.351505809Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 23:12:29.358301 containerd[1979]: time="2025-07-15T23:12:29.354374865Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jul 15 23:12:29.358301 containerd[1979]: time="2025-07-15T23:12:29.354425613Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 23:12:29.358301 containerd[1979]: time="2025-07-15T23:12:29.354486705Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 23:12:29.358301 containerd[1979]: time="2025-07-15T23:12:29.354912129Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 23:12:29.358301 containerd[1979]: time="2025-07-15T23:12:29.355092897Z" level=info msg="metadata content store policy set" policy=shared Jul 15 23:12:29.370251 containerd[1979]: time="2025-07-15T23:12:29.370192689Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 23:12:29.370838 containerd[1979]: time="2025-07-15T23:12:29.370588773Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 23:12:29.370838 containerd[1979]: time="2025-07-15T23:12:29.370661373Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 23:12:29.370838 containerd[1979]: time="2025-07-15T23:12:29.370700769Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 23:12:29.370838 containerd[1979]: time="2025-07-15T23:12:29.370731309Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 23:12:29.370838 containerd[1979]: time="2025-07-15T23:12:29.370767489Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 23:12:29.370838 containerd[1979]: time="2025-07-15T23:12:29.370799817Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 23:12:29.374299 
containerd[1979]: time="2025-07-15T23:12:29.372342153Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 23:12:29.374299 containerd[1979]: time="2025-07-15T23:12:29.372395961Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 23:12:29.374299 containerd[1979]: time="2025-07-15T23:12:29.372428901Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 23:12:29.374299 containerd[1979]: time="2025-07-15T23:12:29.372454893Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 23:12:29.374299 containerd[1979]: time="2025-07-15T23:12:29.372502557Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 23:12:29.374299 containerd[1979]: time="2025-07-15T23:12:29.372757305Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 23:12:29.374299 containerd[1979]: time="2025-07-15T23:12:29.372795921Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 23:12:29.374299 containerd[1979]: time="2025-07-15T23:12:29.372832665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 23:12:29.374299 containerd[1979]: time="2025-07-15T23:12:29.372861873Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 23:12:29.374299 containerd[1979]: time="2025-07-15T23:12:29.372889521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 23:12:29.374299 containerd[1979]: time="2025-07-15T23:12:29.372915993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 23:12:29.374299 containerd[1979]: 
time="2025-07-15T23:12:29.372946965Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 23:12:29.374299 containerd[1979]: time="2025-07-15T23:12:29.372979329Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 23:12:29.374299 containerd[1979]: time="2025-07-15T23:12:29.373008921Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 23:12:29.374299 containerd[1979]: time="2025-07-15T23:12:29.373040817Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 23:12:29.374973 containerd[1979]: time="2025-07-15T23:12:29.373067553Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 23:12:29.382794 containerd[1979]: time="2025-07-15T23:12:29.378403773Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 23:12:29.382794 containerd[1979]: time="2025-07-15T23:12:29.381198933Z" level=info msg="Start snapshots syncer" Jul 15 23:12:29.382794 containerd[1979]: time="2025-07-15T23:12:29.381320529Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 23:12:29.388467 containerd[1979]: time="2025-07-15T23:12:29.386072961Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 23:12:29.388467 containerd[1979]: time="2025-07-15T23:12:29.388324809Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 23:12:29.392634 containerd[1979]: time="2025-07-15T23:12:29.390428313Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 23:12:29.392634 containerd[1979]: time="2025-07-15T23:12:29.392510133Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 23:12:29.396380 containerd[1979]: time="2025-07-15T23:12:29.393343881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 23:12:29.396380 containerd[1979]: time="2025-07-15T23:12:29.395819877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 23:12:29.396380 containerd[1979]: time="2025-07-15T23:12:29.395898513Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 23:12:29.396380 containerd[1979]: time="2025-07-15T23:12:29.395956413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 23:12:29.396380 containerd[1979]: time="2025-07-15T23:12:29.395994681Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 23:12:29.396380 containerd[1979]: time="2025-07-15T23:12:29.396066585Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 23:12:29.396380 containerd[1979]: time="2025-07-15T23:12:29.396156873Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 23:12:29.396380 containerd[1979]: time="2025-07-15T23:12:29.396193905Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 23:12:29.396380 containerd[1979]: time="2025-07-15T23:12:29.396252369Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 23:12:29.399767 containerd[1979]: time="2025-07-15T23:12:29.398225842Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 23:12:29.399767 containerd[1979]: time="2025-07-15T23:12:29.398363686Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 23:12:29.399767 containerd[1979]: time="2025-07-15T23:12:29.399251794Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 23:12:29.399767 containerd[1979]: time="2025-07-15T23:12:29.399606406Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 23:12:29.399767 containerd[1979]: time="2025-07-15T23:12:29.399637882Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 23:12:29.399767 containerd[1979]: time="2025-07-15T23:12:29.399694750Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 23:12:29.400837 containerd[1979]: time="2025-07-15T23:12:29.400354558Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 23:12:29.402328 containerd[1979]: time="2025-07-15T23:12:29.400949638Z" level=info msg="runtime interface created" Jul 15 23:12:29.402328 containerd[1979]: time="2025-07-15T23:12:29.400978354Z" level=info msg="created NRI interface" Jul 15 23:12:29.402958 containerd[1979]: time="2025-07-15T23:12:29.402528442Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 23:12:29.402958 containerd[1979]: time="2025-07-15T23:12:29.402636178Z" level=info msg="Connect containerd service" Jul 15 23:12:29.402958 containerd[1979]: time="2025-07-15T23:12:29.402782686Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 23:12:29.403505 
locksmithd[2015]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 23:12:29.409454 systemd-networkd[1808]: eth0: Gained IPv6LL Jul 15 23:12:29.414627 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 23:12:29.418398 containerd[1979]: time="2025-07-15T23:12:29.417718474Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 23:12:29.418857 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 23:12:29.427695 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 15 23:12:29.434977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:12:29.442916 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 23:12:29.446309 update-ssh-keys[2057]: Updated "/home/core/.ssh/authorized_keys" Jul 15 23:12:29.451989 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 15 23:12:29.466175 systemd[1]: Finished sshkeys.service. Jul 15 23:12:29.666954 ntpd[1937]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 21:30:38 UTC 2025 (1): Starting
Jul 15 23:12:29.667020 ntpd[1937]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 15 23:12:29.667039 ntpd[1937]: ---------------------------------------------------- Jul 15 23:12:29.667056 ntpd[1937]: ntp-4 is maintained by Network Time Foundation, Jul 15 23:12:29.667074 ntpd[1937]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 15 23:12:29.667090 ntpd[1937]: corporation. Support and training for ntp-4 are Jul 15 23:12:29.667107 ntpd[1937]: available at https://www.nwtime.org/support Jul 15 23:12:29.667123 ntpd[1937]: ---------------------------------------------------- Jul 15 23:12:29.679015 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 23:12:29.690436 ntpd[1937]: proto: precision = 0.096 usec (-23) Jul 15 23:12:29.696112 ntpd[1937]: basedate set to 2025-07-03 Jul 15 23:12:29.696161 ntpd[1937]: gps base set to 2025-07-06 (week 2374) Jul 15 23:12:29.710856 ntpd[1937]: Listen and drop on 0 v6wildcard [::]:123 Jul 15 23:12:29.710970 ntpd[1937]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 15 23:12:29.711245 ntpd[1937]: Listen normally on 2 lo 127.0.0.1:123
Jul 15 23:12:29.725580 ntpd[1937]: Listen normally on 3 eth0 172.31.27.40:123 Jul 15 23:12:29.725653 ntpd[1937]: Listen normally on 4 lo [::1]:123 Jul 15 23:12:29.725730 ntpd[1937]: Listen normally on 5 eth0 [fe80::453:d2ff:fefe:b32d%2]:123 Jul 15 23:12:29.725804 ntpd[1937]: Listening on routing socket on fd #22 for interface updates Jul 15 23:12:29.730296 amazon-ssm-agent[2065]: Initializing new seelog logger Jul 15 23:12:29.731378 amazon-ssm-agent[2065]: New Seelog Logger Creation Complete Jul 15 23:12:29.731984 amazon-ssm-agent[2065]: 2025/07/15 23:12:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 23:12:29.732376 amazon-ssm-agent[2065]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 23:12:29.733353 amazon-ssm-agent[2065]: 2025/07/15 23:12:29 processing appconfig overrides Jul 15 23:12:29.734130 amazon-ssm-agent[2065]: 2025/07/15 23:12:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 23:12:29.734352 amazon-ssm-agent[2065]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 23:12:29.734501 amazon-ssm-agent[2065]: 2025/07/15 23:12:29 processing appconfig overrides Jul 15 23:12:29.734790 amazon-ssm-agent[2065]: 2025/07/15 23:12:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 23:12:29.734870 amazon-ssm-agent[2065]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 15 23:12:29.735185 amazon-ssm-agent[2065]: 2025/07/15 23:12:29 processing appconfig overrides Jul 15 23:12:29.736472 amazon-ssm-agent[2065]: 2025-07-15 23:12:29.7340 INFO Proxy environment variables: Jul 15 23:12:29.750424 amazon-ssm-agent[2065]: 2025/07/15 23:12:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 23:12:29.750424 amazon-ssm-agent[2065]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 23:12:29.750424 amazon-ssm-agent[2065]: 2025/07/15 23:12:29 processing appconfig overrides Jul 15 23:12:29.753639 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 23:12:29.760758 ntpd[1937]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 23:12:29.760819 ntpd[1937]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 23:12:29.841375 amazon-ssm-agent[2065]: 2025-07-15 23:12:29.7340 INFO no_proxy: Jul 15 23:12:29.849365 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:12:29.938418 amazon-ssm-agent[2065]: 2025-07-15 23:12:29.7340 INFO https_proxy: Jul 15 23:12:30.016985 systemd-logind[1944]: Watching system buttons on /dev/input/event0 (Power Button) Jul 15 23:12:30.017044 systemd-logind[1944]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 15 23:12:30.017481 systemd-logind[1944]: New seat seat0. Jul 15 23:12:30.021384 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 15 23:12:30.044577 amazon-ssm-agent[2065]: 2025-07-15 23:12:29.7340 INFO http_proxy: Jul 15 23:12:30.098841 containerd[1979]: time="2025-07-15T23:12:30.098400405Z" level=info msg="Start subscribing containerd event" Jul 15 23:12:30.100303 containerd[1979]: time="2025-07-15T23:12:30.098986653Z" level=info msg="Start recovering state" Jul 15 23:12:30.100303 containerd[1979]: time="2025-07-15T23:12:30.099428097Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 23:12:30.100303 containerd[1979]: time="2025-07-15T23:12:30.099514701Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 23:12:30.100620 containerd[1979]: time="2025-07-15T23:12:30.100589841Z" level=info msg="Start event monitor" Jul 15 23:12:30.100765 containerd[1979]: time="2025-07-15T23:12:30.100737837Z" level=info msg="Start cni network conf syncer for default" Jul 15 23:12:30.100859 containerd[1979]: time="2025-07-15T23:12:30.100834389Z" level=info msg="Start streaming server" Jul 15 23:12:30.100953 containerd[1979]: time="2025-07-15T23:12:30.100929885Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 23:12:30.101044 containerd[1979]: time="2025-07-15T23:12:30.101020521Z" level=info msg="runtime interface starting up..." Jul 15 23:12:30.101132 containerd[1979]: time="2025-07-15T23:12:30.101108709Z" level=info msg="starting plugins..." Jul 15 23:12:30.101246 containerd[1979]: time="2025-07-15T23:12:30.101223189Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 23:12:30.104734 containerd[1979]: time="2025-07-15T23:12:30.103608885Z" level=info msg="containerd successfully booted in 0.884789s" Jul 15 23:12:30.103753 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 15 23:12:30.149295 amazon-ssm-agent[2065]: 2025-07-15 23:12:29.7345 INFO Checking if agent identity type OnPrem can be assumed Jul 15 23:12:30.245422 amazon-ssm-agent[2065]: 2025-07-15 23:12:29.7346 INFO Checking if agent identity type EC2 can be assumed Jul 15 23:12:30.344843 amazon-ssm-agent[2065]: 2025-07-15 23:12:30.0384 INFO Agent will take identity from EC2 Jul 15 23:12:30.447156 amazon-ssm-agent[2065]: 2025-07-15 23:12:30.0536 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jul 15 23:12:30.550362 amazon-ssm-agent[2065]: 2025-07-15 23:12:30.0536 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 15 23:12:30.652286 amazon-ssm-agent[2065]: 2025-07-15 23:12:30.0536 INFO [amazon-ssm-agent] Starting Core Agent Jul 15 23:12:30.752393 amazon-ssm-agent[2065]: 2025-07-15 23:12:30.0536 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jul 15 23:12:30.852778 amazon-ssm-agent[2065]: 2025-07-15 23:12:30.0536 INFO [Registrar] Starting registrar module Jul 15 23:12:30.953330 amazon-ssm-agent[2065]: 2025-07-15 23:12:30.0615 INFO [EC2Identity] Checking disk for registration info Jul 15 23:12:30.994709 dbus-daemon[1930]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 15 23:12:30.996763 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 15 23:12:31.007145 dbus-daemon[1930]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2002 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 15 23:12:31.020538 systemd[1]: Starting polkit.service - Authorization Manager... 
Jul 15 23:12:31.059307 amazon-ssm-agent[2065]: 2025-07-15 23:12:30.0616 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jul 15 23:12:31.158248 amazon-ssm-agent[2065]: 2025-07-15 23:12:30.0616 INFO [EC2Identity] Generating registration keypair Jul 15 23:12:31.334094 polkitd[2187]: Started polkitd version 126 Jul 15 23:12:31.353888 polkitd[2187]: Loading rules from directory /etc/polkit-1/rules.d Jul 15 23:12:31.356343 polkitd[2187]: Loading rules from directory /run/polkit-1/rules.d Jul 15 23:12:31.356426 polkitd[2187]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 15 23:12:31.357057 polkitd[2187]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 15 23:12:31.357119 polkitd[2187]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 15 23:12:31.357201 polkitd[2187]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 15 23:12:31.360209 polkitd[2187]: Finished loading, compiling and executing 2 rules Jul 15 23:12:31.360725 systemd[1]: Started polkit.service - Authorization Manager. Jul 15 23:12:31.366424 dbus-daemon[1930]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 15 23:12:31.368063 polkitd[2187]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 15 23:12:31.400757 systemd-hostnamed[2002]: Hostname set to (transient) Jul 15 23:12:31.401423 systemd-resolved[1770]: System hostname changed to 'ip-172-31-27-40'. Jul 15 23:12:31.517280 tar[1950]: linux-arm64/README.md Jul 15 23:12:31.562575 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jul 15 23:12:32.170818 amazon-ssm-agent[2065]: 2025-07-15 23:12:32.1705 INFO [EC2Identity] Checking write access before registering Jul 15 23:12:32.216820 amazon-ssm-agent[2065]: 2025/07/15 23:12:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 23:12:32.216992 amazon-ssm-agent[2065]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 15 23:12:32.217225 amazon-ssm-agent[2065]: 2025/07/15 23:12:32 processing appconfig overrides Jul 15 23:12:32.251685 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:12:32.255876 amazon-ssm-agent[2065]: 2025-07-15 23:12:32.1713 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jul 15 23:12:32.255876 amazon-ssm-agent[2065]: 2025-07-15 23:12:32.2165 INFO [EC2Identity] EC2 registration was successful. Jul 15 23:12:32.255876 amazon-ssm-agent[2065]: 2025-07-15 23:12:32.2166 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jul 15 23:12:32.255876 amazon-ssm-agent[2065]: 2025-07-15 23:12:32.2167 INFO [CredentialRefresher] credentialRefresher has started Jul 15 23:12:32.255876 amazon-ssm-agent[2065]: 2025-07-15 23:12:32.2167 INFO [CredentialRefresher] Starting credentials refresher loop Jul 15 23:12:32.255876 amazon-ssm-agent[2065]: 2025-07-15 23:12:32.2515 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 15 23:12:32.255876 amazon-ssm-agent[2065]: 2025-07-15 23:12:32.2554 INFO [CredentialRefresher] Credentials ready Jul 15 23:12:32.271995 amazon-ssm-agent[2065]: 2025-07-15 23:12:32.2556 INFO [CredentialRefresher] Next credential rotation will be in 29.9999328153 minutes Jul 15 23:12:32.273839 (kubelet)[2215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:12:32.839318 sshd_keygen[1989]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 23:12:32.878161 systemd[1]: Finished 
sshd-keygen.service - Generate sshd host keys. Jul 15 23:12:32.885719 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 23:12:32.892066 systemd[1]: Started sshd@0-172.31.27.40:22-139.178.89.65:55664.service - OpenSSH per-connection server daemon (139.178.89.65:55664). Jul 15 23:12:32.925480 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 23:12:32.925898 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 23:12:32.935949 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 23:12:32.985023 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 23:12:32.992790 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 23:12:32.997957 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 15 23:12:33.004654 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 23:12:33.007074 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 23:12:33.009609 systemd[1]: Startup finished in 3.738s (kernel) + 8.568s (initrd) + 9.820s (userspace) = 22.127s. Jul 15 23:12:33.202029 sshd[2229]: Accepted publickey for core from 139.178.89.65 port 55664 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:12:33.206599 sshd-session[2229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:33.222102 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 23:12:33.226251 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 23:12:33.245392 systemd-logind[1944]: New session 1 of user core. Jul 15 23:12:33.274932 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 23:12:33.286698 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 15 23:12:33.305567 amazon-ssm-agent[2065]: 2025-07-15 23:12:33.3030 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 15 23:12:33.307777 (systemd)[2247]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 23:12:33.317665 systemd-logind[1944]: New session c1 of user core. Jul 15 23:12:33.409803 amazon-ssm-agent[2065]: 2025-07-15 23:12:33.3070 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2248) started Jul 15 23:12:33.494738 kubelet[2215]: E0715 23:12:33.494619 2215 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:12:33.501286 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:12:33.504646 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:12:33.507485 systemd[1]: kubelet.service: Consumed 1.400s CPU time, 256M memory peak. Jul 15 23:12:33.510399 amazon-ssm-agent[2065]: 2025-07-15 23:12:33.3070 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 15 23:12:33.696688 systemd[2247]: Queued start job for default target default.target. Jul 15 23:12:33.704589 systemd[2247]: Created slice app.slice - User Application Slice. Jul 15 23:12:33.704658 systemd[2247]: Reached target paths.target - Paths. Jul 15 23:12:33.704741 systemd[2247]: Reached target timers.target - Timers. Jul 15 23:12:33.707160 systemd[2247]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 23:12:33.749129 systemd[2247]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Jul 15 23:12:33.749718 systemd[2247]: Reached target sockets.target - Sockets. Jul 15 23:12:33.749938 systemd[2247]: Reached target basic.target - Basic System. Jul 15 23:12:33.750129 systemd[2247]: Reached target default.target - Main User Target. Jul 15 23:12:33.750189 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 23:12:33.750441 systemd[2247]: Startup finished in 415ms. Jul 15 23:12:33.762214 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 23:12:33.913596 systemd[1]: Started sshd@1-172.31.27.40:22-139.178.89.65:55668.service - OpenSSH per-connection server daemon (139.178.89.65:55668). Jul 15 23:12:34.117830 sshd[2271]: Accepted publickey for core from 139.178.89.65 port 55668 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:12:34.120196 sshd-session[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:34.129645 systemd-logind[1944]: New session 2 of user core. Jul 15 23:12:34.140574 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 15 23:12:34.267657 sshd[2273]: Connection closed by 139.178.89.65 port 55668 Jul 15 23:12:34.266855 sshd-session[2271]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:34.273614 systemd[1]: sshd@1-172.31.27.40:22-139.178.89.65:55668.service: Deactivated successfully. Jul 15 23:12:34.278048 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 23:12:34.279608 systemd-logind[1944]: Session 2 logged out. Waiting for processes to exit. Jul 15 23:12:34.282812 systemd-logind[1944]: Removed session 2. Jul 15 23:12:34.301903 systemd[1]: Started sshd@2-172.31.27.40:22-139.178.89.65:55678.service - OpenSSH per-connection server daemon (139.178.89.65:55678). 
Jul 15 23:12:34.505735 sshd[2279]: Accepted publickey for core from 139.178.89.65 port 55678 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:12:34.508424 sshd-session[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:34.516699 systemd-logind[1944]: New session 3 of user core. Jul 15 23:12:34.532533 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 23:12:34.649168 sshd[2281]: Connection closed by 139.178.89.65 port 55678 Jul 15 23:12:34.650025 sshd-session[2279]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:34.655745 systemd-logind[1944]: Session 3 logged out. Waiting for processes to exit. Jul 15 23:12:34.657532 systemd[1]: sshd@2-172.31.27.40:22-139.178.89.65:55678.service: Deactivated successfully. Jul 15 23:12:34.660944 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 23:12:34.664184 systemd-logind[1944]: Removed session 3. Jul 15 23:12:34.686625 systemd[1]: Started sshd@3-172.31.27.40:22-139.178.89.65:55694.service - OpenSSH per-connection server daemon (139.178.89.65:55694). Jul 15 23:12:34.888302 sshd[2287]: Accepted publickey for core from 139.178.89.65 port 55694 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:12:34.890304 sshd-session[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:34.899992 systemd-logind[1944]: New session 4 of user core. Jul 15 23:12:34.905573 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 23:12:35.032562 sshd[2289]: Connection closed by 139.178.89.65 port 55694 Jul 15 23:12:35.033587 sshd-session[2287]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:35.038630 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 23:12:35.039740 systemd[1]: sshd@3-172.31.27.40:22-139.178.89.65:55694.service: Deactivated successfully. Jul 15 23:12:35.044760 systemd-logind[1944]: Session 4 logged out. 
Waiting for processes to exit. Jul 15 23:12:35.047495 systemd-logind[1944]: Removed session 4. Jul 15 23:12:35.068755 systemd[1]: Started sshd@4-172.31.27.40:22-139.178.89.65:55696.service - OpenSSH per-connection server daemon (139.178.89.65:55696). Jul 15 23:12:35.269123 sshd[2295]: Accepted publickey for core from 139.178.89.65 port 55696 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:12:35.271570 sshd-session[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:35.279391 systemd-logind[1944]: New session 5 of user core. Jul 15 23:12:35.292486 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 15 23:12:35.410799 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 23:12:35.411437 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:12:35.431536 sudo[2298]: pam_unix(sudo:session): session closed for user root Jul 15 23:12:35.455369 sshd[2297]: Connection closed by 139.178.89.65 port 55696 Jul 15 23:12:35.456611 sshd-session[2295]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:35.464718 systemd[1]: sshd@4-172.31.27.40:22-139.178.89.65:55696.service: Deactivated successfully. Jul 15 23:12:35.468004 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 23:12:35.469698 systemd-logind[1944]: Session 5 logged out. Waiting for processes to exit. Jul 15 23:12:35.473108 systemd-logind[1944]: Removed session 5. Jul 15 23:12:35.494153 systemd[1]: Started sshd@5-172.31.27.40:22-139.178.89.65:55702.service - OpenSSH per-connection server daemon (139.178.89.65:55702). 
Jul 15 23:12:35.701519 sshd[2304]: Accepted publickey for core from 139.178.89.65 port 55702 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:12:35.704300 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:35.713656 systemd-logind[1944]: New session 6 of user core. Jul 15 23:12:35.721542 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 15 23:12:35.826205 sudo[2308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 23:12:35.827397 sudo[2308]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:12:35.834963 sudo[2308]: pam_unix(sudo:session): session closed for user root Jul 15 23:12:35.844564 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 23:12:35.845166 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:12:35.863460 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 23:12:35.924570 augenrules[2330]: No rules Jul 15 23:12:35.927078 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 23:12:35.927621 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 23:12:35.930454 sudo[2307]: pam_unix(sudo:session): session closed for user root Jul 15 23:12:35.954479 sshd[2306]: Connection closed by 139.178.89.65 port 55702 Jul 15 23:12:35.955694 sshd-session[2304]: pam_unix(sshd:session): session closed for user core Jul 15 23:12:35.961866 systemd-logind[1944]: Session 6 logged out. Waiting for processes to exit. Jul 15 23:12:35.962030 systemd[1]: sshd@5-172.31.27.40:22-139.178.89.65:55702.service: Deactivated successfully. Jul 15 23:12:35.966239 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 23:12:35.971369 systemd-logind[1944]: Removed session 6. 
Jul 15 23:12:35.992119 systemd[1]: Started sshd@6-172.31.27.40:22-139.178.89.65:55708.service - OpenSSH per-connection server daemon (139.178.89.65:55708). Jul 15 23:12:36.195814 sshd[2339]: Accepted publickey for core from 139.178.89.65 port 55708 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:12:36.198732 sshd-session[2339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:12:36.206314 systemd-logind[1944]: New session 7 of user core. Jul 15 23:12:36.225529 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 23:12:36.326216 sudo[2342]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 23:12:36.326873 sudo[2342]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:12:36.900917 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 15 23:12:36.913758 (dockerd)[2360]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 23:12:37.311249 dockerd[2360]: time="2025-07-15T23:12:37.310907647Z" level=info msg="Starting up" Jul 15 23:12:37.312475 dockerd[2360]: time="2025-07-15T23:12:37.312410594Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 23:12:37.453106 systemd[1]: var-lib-docker-metacopy\x2dcheck3501129084-merged.mount: Deactivated successfully. Jul 15 23:12:37.465931 dockerd[2360]: time="2025-07-15T23:12:37.465868285Z" level=info msg="Loading containers: start." Jul 15 23:12:37.480349 kernel: Initializing XFRM netlink socket Jul 15 23:12:37.786784 (udev-worker)[2385]: Network interface NamePolicy= disabled on kernel command line. Jul 15 23:12:37.860467 systemd-networkd[1808]: docker0: Link UP Jul 15 23:12:37.868310 dockerd[2360]: time="2025-07-15T23:12:37.868235319Z" level=info msg="Loading containers: done." 
Jul 15 23:12:37.891824 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3272639418-merged.mount: Deactivated successfully. Jul 15 23:12:37.900457 dockerd[2360]: time="2025-07-15T23:12:37.900306635Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 23:12:37.900656 dockerd[2360]: time="2025-07-15T23:12:37.900509224Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 15 23:12:37.900746 dockerd[2360]: time="2025-07-15T23:12:37.900699615Z" level=info msg="Initializing buildkit" Jul 15 23:12:37.944584 dockerd[2360]: time="2025-07-15T23:12:37.944517758Z" level=info msg="Completed buildkit initialization" Jul 15 23:12:37.961290 dockerd[2360]: time="2025-07-15T23:12:37.961140677Z" level=info msg="Daemon has completed initialization" Jul 15 23:12:37.961502 dockerd[2360]: time="2025-07-15T23:12:37.961458536Z" level=info msg="API listen on /run/docker.sock" Jul 15 23:12:37.961750 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 23:12:39.182014 containerd[1979]: time="2025-07-15T23:12:39.181942390Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Jul 15 23:12:39.744424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3528918396.mount: Deactivated successfully. 
Jul 15 23:12:41.099703 containerd[1979]: time="2025-07-15T23:12:41.099632917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:41.101437 containerd[1979]: time="2025-07-15T23:12:41.101386117Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=26327781" Jul 15 23:12:41.102455 containerd[1979]: time="2025-07-15T23:12:41.102359972Z" level=info msg="ImageCreate event name:\"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:41.106901 containerd[1979]: time="2025-07-15T23:12:41.106801291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:41.109159 containerd[1979]: time="2025-07-15T23:12:41.108776194Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"26324581\" in 1.926766019s" Jul 15 23:12:41.109159 containerd[1979]: time="2025-07-15T23:12:41.108829597Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\"" Jul 15 23:12:41.110425 containerd[1979]: time="2025-07-15T23:12:41.110369751Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Jul 15 23:12:42.711947 containerd[1979]: time="2025-07-15T23:12:42.711867715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:42.713775 containerd[1979]: time="2025-07-15T23:12:42.713718980Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=22529696" Jul 15 23:12:42.715585 containerd[1979]: time="2025-07-15T23:12:42.715531586Z" level=info msg="ImageCreate event name:\"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:42.722428 containerd[1979]: time="2025-07-15T23:12:42.722238654Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"24065486\" in 1.611811742s" Jul 15 23:12:42.722662 containerd[1979]: time="2025-07-15T23:12:42.722497191Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\"" Jul 15 23:12:42.722830 containerd[1979]: time="2025-07-15T23:12:42.722358005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:42.724041 containerd[1979]: time="2025-07-15T23:12:42.723918281Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Jul 15 23:12:43.752133 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 23:12:43.756595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 15 23:12:44.090294 containerd[1979]: time="2025-07-15T23:12:44.090225811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:44.094788 containerd[1979]: time="2025-07-15T23:12:44.094324960Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=17484138" Jul 15 23:12:44.098544 containerd[1979]: time="2025-07-15T23:12:44.098001593Z" level=info msg="ImageCreate event name:\"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:44.103474 containerd[1979]: time="2025-07-15T23:12:44.103395108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:44.105631 containerd[1979]: time="2025-07-15T23:12:44.105563872Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"19019946\" in 1.381586774s" Jul 15 23:12:44.105631 containerd[1979]: time="2025-07-15T23:12:44.105624911Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\"" Jul 15 23:12:44.107181 containerd[1979]: time="2025-07-15T23:12:44.107134593Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Jul 15 23:12:44.121928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 15 23:12:44.134087 (kubelet)[2636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:12:44.212384 kubelet[2636]: E0715 23:12:44.212325 2636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:12:44.219530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:12:44.219849 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:12:44.221478 systemd[1]: kubelet.service: Consumed 322ms CPU time, 107.4M memory peak. Jul 15 23:12:45.363312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount886684423.mount: Deactivated successfully. Jul 15 23:12:45.912693 containerd[1979]: time="2025-07-15T23:12:45.912503791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:45.914393 containerd[1979]: time="2025-07-15T23:12:45.914325582Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=27378405" Jul 15 23:12:45.915600 containerd[1979]: time="2025-07-15T23:12:45.915524801Z" level=info msg="ImageCreate event name:\"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:45.918545 containerd[1979]: time="2025-07-15T23:12:45.918464230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:45.920699 containerd[1979]: time="2025-07-15T23:12:45.920233794Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"27377424\" in 1.812857088s" Jul 15 23:12:45.920789 containerd[1979]: time="2025-07-15T23:12:45.920694080Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\"" Jul 15 23:12:45.921750 containerd[1979]: time="2025-07-15T23:12:45.921704217Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 15 23:12:46.468296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount461142320.mount: Deactivated successfully. Jul 15 23:12:47.644339 containerd[1979]: time="2025-07-15T23:12:47.644237261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:47.647428 containerd[1979]: time="2025-07-15T23:12:47.647363636Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 15 23:12:47.651284 containerd[1979]: time="2025-07-15T23:12:47.651186190Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:47.656831 containerd[1979]: time="2025-07-15T23:12:47.656747429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:47.658974 containerd[1979]: time="2025-07-15T23:12:47.658791391Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.737032054s" Jul 15 23:12:47.658974 containerd[1979]: time="2025-07-15T23:12:47.658842741Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 15 23:12:47.659713 containerd[1979]: time="2025-07-15T23:12:47.659675357Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 23:12:48.148461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2972843903.mount: Deactivated successfully. Jul 15 23:12:48.163307 containerd[1979]: time="2025-07-15T23:12:48.162555230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:12:48.165374 containerd[1979]: time="2025-07-15T23:12:48.165335772Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 15 23:12:48.168027 containerd[1979]: time="2025-07-15T23:12:48.167952324Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:12:48.172385 containerd[1979]: time="2025-07-15T23:12:48.172338079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:12:48.173649 containerd[1979]: time="2025-07-15T23:12:48.173591446Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 513.726538ms" Jul 15 23:12:48.173768 containerd[1979]: time="2025-07-15T23:12:48.173649111Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 15 23:12:48.174801 containerd[1979]: time="2025-07-15T23:12:48.174523496Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 15 23:12:48.792682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3279352186.mount: Deactivated successfully. Jul 15 23:12:51.564044 containerd[1979]: time="2025-07-15T23:12:51.563974850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:51.565579 containerd[1979]: time="2025-07-15T23:12:51.565503982Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" Jul 15 23:12:51.566866 containerd[1979]: time="2025-07-15T23:12:51.566755031Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:51.572076 containerd[1979]: time="2025-07-15T23:12:51.571974424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:12:51.574568 containerd[1979]: time="2025-07-15T23:12:51.574382707Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag 
\"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.399807718s" Jul 15 23:12:51.574568 containerd[1979]: time="2025-07-15T23:12:51.574433433Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 15 23:12:54.234448 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 15 23:12:54.239675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:12:54.588511 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:12:54.600940 (kubelet)[2790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:12:54.679868 kubelet[2790]: E0715 23:12:54.679790 2790 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:12:54.684422 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:12:54.684756 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:12:54.685480 systemd[1]: kubelet.service: Consumed 295ms CPU time, 105.1M memory peak. Jul 15 23:12:59.026402 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:12:59.026747 systemd[1]: kubelet.service: Consumed 295ms CPU time, 105.1M memory peak. Jul 15 23:12:59.030646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:12:59.085255 systemd[1]: Reload requested from client PID 2804 ('systemctl') (unit session-7.scope)... 
Jul 15 23:12:59.085320 systemd[1]: Reloading... Jul 15 23:12:59.333398 zram_generator::config[2852]: No configuration found. Jul 15 23:12:59.530895 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:12:59.789029 systemd[1]: Reloading finished in 703 ms. Jul 15 23:12:59.891473 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 23:12:59.891658 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 23:12:59.892153 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:12:59.892240 systemd[1]: kubelet.service: Consumed 223ms CPU time, 95M memory peak. Jul 15 23:12:59.896787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:13:00.234939 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:13:00.248834 (kubelet)[2912]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:13:00.332841 kubelet[2912]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:13:00.334343 kubelet[2912]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 23:13:00.334343 kubelet[2912]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 23:13:00.334343 kubelet[2912]: I0715 23:13:00.333469 2912 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:13:01.438232 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 15 23:13:01.723862 kubelet[2912]: I0715 23:13:01.723711 2912 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 23:13:01.723862 kubelet[2912]: I0715 23:13:01.723762 2912 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:13:01.724811 kubelet[2912]: I0715 23:13:01.724243 2912 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 23:13:01.783751 kubelet[2912]: E0715 23:13:01.783666 2912 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.27.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.27.40:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:13:01.785472 kubelet[2912]: I0715 23:13:01.784600 2912 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:13:01.799884 kubelet[2912]: I0715 23:13:01.799840 2912 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:13:01.805876 kubelet[2912]: I0715 23:13:01.805838 2912 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 23:13:01.808063 kubelet[2912]: I0715 23:13:01.808009 2912 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:13:01.808508 kubelet[2912]: I0715 23:13:01.808190 2912 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-40","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 23:13:01.808868 kubelet[2912]: I0715 23:13:01.808845 2912 topology_manager.go:138] "Creating topology manager with none 
policy" Jul 15 23:13:01.808966 kubelet[2912]: I0715 23:13:01.808949 2912 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 23:13:01.809396 kubelet[2912]: I0715 23:13:01.809377 2912 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:13:01.815400 kubelet[2912]: I0715 23:13:01.815367 2912 kubelet.go:446] "Attempting to sync node with API server" Jul 15 23:13:01.815549 kubelet[2912]: I0715 23:13:01.815529 2912 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 23:13:01.815684 kubelet[2912]: I0715 23:13:01.815666 2912 kubelet.go:352] "Adding apiserver pod source" Jul 15 23:13:01.815795 kubelet[2912]: I0715 23:13:01.815776 2912 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 23:13:01.823845 kubelet[2912]: W0715 23:13:01.823755 2912 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.27.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-40&limit=500&resourceVersion=0": dial tcp 172.31.27.40:6443: connect: connection refused Jul 15 23:13:01.823997 kubelet[2912]: E0715 23:13:01.823869 2912 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.27.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-40&limit=500&resourceVersion=0\": dial tcp 172.31.27.40:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:13:01.825321 kubelet[2912]: W0715 23:13:01.824622 2912 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.27.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.27.40:6443: connect: connection refused Jul 15 23:13:01.825321 kubelet[2912]: E0715 23:13:01.824710 2912 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://172.31.27.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.40:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:13:01.825321 kubelet[2912]: I0715 23:13:01.824831 2912 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 15 23:13:01.825879 kubelet[2912]: I0715 23:13:01.825838 2912 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 23:13:01.826101 kubelet[2912]: W0715 23:13:01.826068 2912 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 23:13:01.828288 kubelet[2912]: I0715 23:13:01.828213 2912 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 23:13:01.828288 kubelet[2912]: I0715 23:13:01.828294 2912 server.go:1287] "Started kubelet" Jul 15 23:13:01.838317 kubelet[2912]: I0715 23:13:01.837840 2912 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 23:13:01.839857 kubelet[2912]: E0715 23:13:01.839027 2912 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.40:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-40.18528fadd8b2ae05 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-40,UID:ip-172-31-27-40,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-40,},FirstTimestamp:2025-07-15 23:13:01.828247045 +0000 UTC m=+1.573059785,LastTimestamp:2025-07-15 23:13:01.828247045 +0000 UTC m=+1.573059785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-40,}" Jul 15 23:13:01.845428 kubelet[2912]: E0715 23:13:01.845387 2912 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 23:13:01.849113 kubelet[2912]: I0715 23:13:01.849058 2912 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 23:13:01.849619 kubelet[2912]: E0715 23:13:01.849575 2912 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-40\" not found" Jul 15 23:13:01.849619 kubelet[2912]: I0715 23:13:01.849123 2912 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 23:13:01.852299 kubelet[2912]: I0715 23:13:01.851084 2912 server.go:479] "Adding debug handlers to kubelet server" Jul 15 23:13:01.852975 kubelet[2912]: I0715 23:13:01.852902 2912 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:13:01.853489 kubelet[2912]: I0715 23:13:01.853457 2912 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:13:01.854004 kubelet[2912]: I0715 23:13:01.853973 2912 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 23:13:01.854211 kubelet[2912]: I0715 23:13:01.849089 2912 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 23:13:01.855328 kubelet[2912]: I0715 23:13:01.855295 2912 reconciler.go:26] "Reconciler: start to sync state" Jul 15 23:13:01.856758 kubelet[2912]: I0715 23:13:01.856713 2912 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:13:01.857505 kubelet[2912]: E0715 
23:13:01.857461 2912 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-40?timeout=10s\": dial tcp 172.31.27.40:6443: connect: connection refused" interval="200ms" Jul 15 23:13:01.859155 kubelet[2912]: I0715 23:13:01.859119 2912 factory.go:221] Registration of the containerd container factory successfully Jul 15 23:13:01.859371 kubelet[2912]: I0715 23:13:01.859349 2912 factory.go:221] Registration of the systemd container factory successfully Jul 15 23:13:01.870296 kubelet[2912]: I0715 23:13:01.868799 2912 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 23:13:01.870980 kubelet[2912]: I0715 23:13:01.870923 2912 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 23:13:01.870980 kubelet[2912]: I0715 23:13:01.870975 2912 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 23:13:01.871131 kubelet[2912]: I0715 23:13:01.871007 2912 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 15 23:13:01.871131 kubelet[2912]: I0715 23:13:01.871022 2912 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 23:13:01.871131 kubelet[2912]: E0715 23:13:01.871086 2912 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:13:01.886440 kubelet[2912]: W0715 23:13:01.886333 2912 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.27.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.40:6443: connect: connection refused Jul 15 23:13:01.886710 kubelet[2912]: E0715 23:13:01.886438 2912 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.27.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.40:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:13:01.897814 kubelet[2912]: W0715 23:13:01.897721 2912 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.27.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.40:6443: connect: connection refused Jul 15 23:13:01.897977 kubelet[2912]: E0715 23:13:01.897829 2912 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.27.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.40:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:13:01.914502 kubelet[2912]: I0715 23:13:01.914382 2912 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 23:13:01.914762 kubelet[2912]: I0715 23:13:01.914492 2912 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 
23:13:01.914762 kubelet[2912]: I0715 23:13:01.914650 2912 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:13:01.918202 kubelet[2912]: I0715 23:13:01.918163 2912 policy_none.go:49] "None policy: Start" Jul 15 23:13:01.918202 kubelet[2912]: I0715 23:13:01.918205 2912 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 23:13:01.918408 kubelet[2912]: I0715 23:13:01.918231 2912 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:13:01.928553 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 23:13:01.946956 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 23:13:01.949876 kubelet[2912]: E0715 23:13:01.949827 2912 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-40\" not found" Jul 15 23:13:01.954846 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 23:13:01.971928 kubelet[2912]: E0715 23:13:01.971855 2912 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 23:13:01.976944 kubelet[2912]: I0715 23:13:01.975922 2912 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 23:13:01.976944 kubelet[2912]: I0715 23:13:01.976249 2912 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:13:01.976944 kubelet[2912]: I0715 23:13:01.976292 2912 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:13:01.979922 kubelet[2912]: I0715 23:13:01.977223 2912 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:13:01.979922 kubelet[2912]: E0715 23:13:01.979416 2912 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 15 23:13:01.979922 kubelet[2912]: E0715 23:13:01.979505 2912 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-40\" not found" Jul 15 23:13:02.059357 kubelet[2912]: E0715 23:13:02.059011 2912 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-40?timeout=10s\": dial tcp 172.31.27.40:6443: connect: connection refused" interval="400ms" Jul 15 23:13:02.081131 kubelet[2912]: I0715 23:13:02.081061 2912 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-40" Jul 15 23:13:02.081885 kubelet[2912]: E0715 23:13:02.081837 2912 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.40:6443/api/v1/nodes\": dial tcp 172.31.27.40:6443: connect: connection refused" node="ip-172-31-27-40" Jul 15 23:13:02.122535 kubelet[2912]: E0715 23:13:02.122355 2912 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.40:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-40.18528fadd8b2ae05 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-40,UID:ip-172-31-27-40,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-40,},FirstTimestamp:2025-07-15 23:13:01.828247045 +0000 UTC m=+1.573059785,LastTimestamp:2025-07-15 23:13:01.828247045 +0000 UTC m=+1.573059785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-40,}" Jul 15 23:13:02.193852 systemd[1]: Created slice 
kubepods-burstable-pod2a2788714fb942e8a8a2d463095a81b5.slice - libcontainer container kubepods-burstable-pod2a2788714fb942e8a8a2d463095a81b5.slice. Jul 15 23:13:02.222450 kubelet[2912]: E0715 23:13:02.222412 2912 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-40\" not found" node="ip-172-31-27-40" Jul 15 23:13:02.232932 systemd[1]: Created slice kubepods-burstable-podfc90c4df7aa99f75273cfcc8319d5376.slice - libcontainer container kubepods-burstable-podfc90c4df7aa99f75273cfcc8319d5376.slice. Jul 15 23:13:02.240300 kubelet[2912]: E0715 23:13:02.240055 2912 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-40\" not found" node="ip-172-31-27-40" Jul 15 23:13:02.244695 systemd[1]: Created slice kubepods-burstable-podb5517b0cf8a6f9753e0a8044bba4107f.slice - libcontainer container kubepods-burstable-podb5517b0cf8a6f9753e0a8044bba4107f.slice. Jul 15 23:13:02.248826 kubelet[2912]: E0715 23:13:02.248712 2912 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-40\" not found" node="ip-172-31-27-40" Jul 15 23:13:02.258066 kubelet[2912]: I0715 23:13:02.257993 2912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b5517b0cf8a6f9753e0a8044bba4107f-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-40\" (UID: \"b5517b0cf8a6f9753e0a8044bba4107f\") " pod="kube-system/kube-scheduler-ip-172-31-27-40" Jul 15 23:13:02.258066 kubelet[2912]: I0715 23:13:02.258058 2912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a2788714fb942e8a8a2d463095a81b5-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-40\" (UID: \"2a2788714fb942e8a8a2d463095a81b5\") " 
pod="kube-system/kube-apiserver-ip-172-31-27-40" Jul 15 23:13:02.258066 kubelet[2912]: I0715 23:13:02.258101 2912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a2788714fb942e8a8a2d463095a81b5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-40\" (UID: \"2a2788714fb942e8a8a2d463095a81b5\") " pod="kube-system/kube-apiserver-ip-172-31-27-40" Jul 15 23:13:02.258582 kubelet[2912]: I0715 23:13:02.258138 2912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc90c4df7aa99f75273cfcc8319d5376-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-40\" (UID: \"fc90c4df7aa99f75273cfcc8319d5376\") " pod="kube-system/kube-controller-manager-ip-172-31-27-40" Jul 15 23:13:02.258582 kubelet[2912]: I0715 23:13:02.258186 2912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fc90c4df7aa99f75273cfcc8319d5376-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-40\" (UID: \"fc90c4df7aa99f75273cfcc8319d5376\") " pod="kube-system/kube-controller-manager-ip-172-31-27-40" Jul 15 23:13:02.258582 kubelet[2912]: I0715 23:13:02.258227 2912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc90c4df7aa99f75273cfcc8319d5376-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-40\" (UID: \"fc90c4df7aa99f75273cfcc8319d5376\") " pod="kube-system/kube-controller-manager-ip-172-31-27-40" Jul 15 23:13:02.258582 kubelet[2912]: I0715 23:13:02.258290 2912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/2a2788714fb942e8a8a2d463095a81b5-ca-certs\") pod \"kube-apiserver-ip-172-31-27-40\" (UID: \"2a2788714fb942e8a8a2d463095a81b5\") " pod="kube-system/kube-apiserver-ip-172-31-27-40" Jul 15 23:13:02.258582 kubelet[2912]: I0715 23:13:02.258330 2912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc90c4df7aa99f75273cfcc8319d5376-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-40\" (UID: \"fc90c4df7aa99f75273cfcc8319d5376\") " pod="kube-system/kube-controller-manager-ip-172-31-27-40" Jul 15 23:13:02.258821 kubelet[2912]: I0715 23:13:02.258366 2912 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc90c4df7aa99f75273cfcc8319d5376-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-40\" (UID: \"fc90c4df7aa99f75273cfcc8319d5376\") " pod="kube-system/kube-controller-manager-ip-172-31-27-40" Jul 15 23:13:02.286972 kubelet[2912]: I0715 23:13:02.286469 2912 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-40" Jul 15 23:13:02.286972 kubelet[2912]: E0715 23:13:02.286924 2912 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.40:6443/api/v1/nodes\": dial tcp 172.31.27.40:6443: connect: connection refused" node="ip-172-31-27-40" Jul 15 23:13:02.460559 kubelet[2912]: E0715 23:13:02.460501 2912 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-40?timeout=10s\": dial tcp 172.31.27.40:6443: connect: connection refused" interval="800ms" Jul 15 23:13:02.524774 containerd[1979]: time="2025-07-15T23:13:02.524611834Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-40,Uid:2a2788714fb942e8a8a2d463095a81b5,Namespace:kube-system,Attempt:0,}" Jul 15 23:13:02.541886 containerd[1979]: time="2025-07-15T23:13:02.541547546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-40,Uid:fc90c4df7aa99f75273cfcc8319d5376,Namespace:kube-system,Attempt:0,}" Jul 15 23:13:02.551420 containerd[1979]: time="2025-07-15T23:13:02.551353734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-40,Uid:b5517b0cf8a6f9753e0a8044bba4107f,Namespace:kube-system,Attempt:0,}" Jul 15 23:13:02.587118 containerd[1979]: time="2025-07-15T23:13:02.586691066Z" level=info msg="connecting to shim dbc1b06b73c014e14d30d580bcd74b6346ee0d6406f53f760ed1fb90e0962082" address="unix:///run/containerd/s/62ee0f41648e230e08c700b780dceaced28b4c7b8c9dc7c6d5819d25c7070656" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:13:02.640593 containerd[1979]: time="2025-07-15T23:13:02.640139198Z" level=info msg="connecting to shim 290ef57c7cebce6a1d45a9e3cc90890a96f66988dc740d6bb66bf55651a32551" address="unix:///run/containerd/s/28d6bbddb1c9637289c6b5650e1adad98a3dd694241d8d42d1070de0581037e1" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:13:02.665480 containerd[1979]: time="2025-07-15T23:13:02.664450111Z" level=info msg="connecting to shim a652227f22b41d1ccfa08da46f11a6016d160aade0bb8d13b71056aab4e25dbc" address="unix:///run/containerd/s/8ffac4e307ec529794471b4c303479be398fdc82c69a20044b1c6fecbee5f0cb" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:13:02.689644 systemd[1]: Started cri-containerd-dbc1b06b73c014e14d30d580bcd74b6346ee0d6406f53f760ed1fb90e0962082.scope - libcontainer container dbc1b06b73c014e14d30d580bcd74b6346ee0d6406f53f760ed1fb90e0962082. 
Jul 15 23:13:02.692115 kubelet[2912]: I0715 23:13:02.692047 2912 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-40" Jul 15 23:13:02.694798 kubelet[2912]: E0715 23:13:02.694677 2912 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.40:6443/api/v1/nodes\": dial tcp 172.31.27.40:6443: connect: connection refused" node="ip-172-31-27-40" Jul 15 23:13:02.713631 systemd[1]: Started cri-containerd-290ef57c7cebce6a1d45a9e3cc90890a96f66988dc740d6bb66bf55651a32551.scope - libcontainer container 290ef57c7cebce6a1d45a9e3cc90890a96f66988dc740d6bb66bf55651a32551. Jul 15 23:13:02.745594 systemd[1]: Started cri-containerd-a652227f22b41d1ccfa08da46f11a6016d160aade0bb8d13b71056aab4e25dbc.scope - libcontainer container a652227f22b41d1ccfa08da46f11a6016d160aade0bb8d13b71056aab4e25dbc. Jul 15 23:13:02.854049 containerd[1979]: time="2025-07-15T23:13:02.853009037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-40,Uid:2a2788714fb942e8a8a2d463095a81b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbc1b06b73c014e14d30d580bcd74b6346ee0d6406f53f760ed1fb90e0962082\"" Jul 15 23:13:02.859075 containerd[1979]: time="2025-07-15T23:13:02.858723052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-40,Uid:fc90c4df7aa99f75273cfcc8319d5376,Namespace:kube-system,Attempt:0,} returns sandbox id \"290ef57c7cebce6a1d45a9e3cc90890a96f66988dc740d6bb66bf55651a32551\"" Jul 15 23:13:02.862495 containerd[1979]: time="2025-07-15T23:13:02.862433627Z" level=info msg="CreateContainer within sandbox \"dbc1b06b73c014e14d30d580bcd74b6346ee0d6406f53f760ed1fb90e0962082\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 23:13:02.870936 containerd[1979]: time="2025-07-15T23:13:02.870848440Z" level=info msg="CreateContainer within sandbox \"290ef57c7cebce6a1d45a9e3cc90890a96f66988dc740d6bb66bf55651a32551\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 23:13:02.896106 containerd[1979]: time="2025-07-15T23:13:02.896040582Z" level=info msg="Container 9ae1c14eb3e5096408729a547f9197eb000f323c29771a963ec805ddce7a8811: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:13:02.899552 containerd[1979]: time="2025-07-15T23:13:02.899403895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-40,Uid:b5517b0cf8a6f9753e0a8044bba4107f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a652227f22b41d1ccfa08da46f11a6016d160aade0bb8d13b71056aab4e25dbc\"" Jul 15 23:13:02.904926 containerd[1979]: time="2025-07-15T23:13:02.904845122Z" level=info msg="CreateContainer within sandbox \"a652227f22b41d1ccfa08da46f11a6016d160aade0bb8d13b71056aab4e25dbc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 23:13:02.908351 containerd[1979]: time="2025-07-15T23:13:02.908141466Z" level=info msg="Container 642f9eb4e2dab3fafd5293768012b66d9f79b426eefae32fb6a8b7171db764e3: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:13:02.910124 kubelet[2912]: W0715 23:13:02.909900 2912 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.27.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-40&limit=500&resourceVersion=0": dial tcp 172.31.27.40:6443: connect: connection refused Jul 15 23:13:02.910124 kubelet[2912]: E0715 23:13:02.909996 2912 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.27.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-40&limit=500&resourceVersion=0\": dial tcp 172.31.27.40:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:13:02.919352 containerd[1979]: time="2025-07-15T23:13:02.919226365Z" level=info msg="CreateContainer within sandbox 
\"290ef57c7cebce6a1d45a9e3cc90890a96f66988dc740d6bb66bf55651a32551\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9ae1c14eb3e5096408729a547f9197eb000f323c29771a963ec805ddce7a8811\"" Jul 15 23:13:02.920144 containerd[1979]: time="2025-07-15T23:13:02.920086727Z" level=info msg="StartContainer for \"9ae1c14eb3e5096408729a547f9197eb000f323c29771a963ec805ddce7a8811\"" Jul 15 23:13:02.922079 containerd[1979]: time="2025-07-15T23:13:02.922016944Z" level=info msg="connecting to shim 9ae1c14eb3e5096408729a547f9197eb000f323c29771a963ec805ddce7a8811" address="unix:///run/containerd/s/28d6bbddb1c9637289c6b5650e1adad98a3dd694241d8d42d1070de0581037e1" protocol=ttrpc version=3 Jul 15 23:13:02.938827 containerd[1979]: time="2025-07-15T23:13:02.938769828Z" level=info msg="CreateContainer within sandbox \"dbc1b06b73c014e14d30d580bcd74b6346ee0d6406f53f760ed1fb90e0962082\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"642f9eb4e2dab3fafd5293768012b66d9f79b426eefae32fb6a8b7171db764e3\"" Jul 15 23:13:02.942367 containerd[1979]: time="2025-07-15T23:13:02.941896011Z" level=info msg="StartContainer for \"642f9eb4e2dab3fafd5293768012b66d9f79b426eefae32fb6a8b7171db764e3\"" Jul 15 23:13:02.943754 containerd[1979]: time="2025-07-15T23:13:02.943692637Z" level=info msg="Container 4acab7997997e5702dcbdae71d54e087e903764486852c46a00e6ec8cfabc093: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:13:02.947439 containerd[1979]: time="2025-07-15T23:13:02.947377698Z" level=info msg="connecting to shim 642f9eb4e2dab3fafd5293768012b66d9f79b426eefae32fb6a8b7171db764e3" address="unix:///run/containerd/s/62ee0f41648e230e08c700b780dceaced28b4c7b8c9dc7c6d5819d25c7070656" protocol=ttrpc version=3 Jul 15 23:13:02.960740 systemd[1]: Started cri-containerd-9ae1c14eb3e5096408729a547f9197eb000f323c29771a963ec805ddce7a8811.scope - libcontainer container 9ae1c14eb3e5096408729a547f9197eb000f323c29771a963ec805ddce7a8811. 
Jul 15 23:13:02.997879 systemd[1]: Started cri-containerd-642f9eb4e2dab3fafd5293768012b66d9f79b426eefae32fb6a8b7171db764e3.scope - libcontainer container 642f9eb4e2dab3fafd5293768012b66d9f79b426eefae32fb6a8b7171db764e3. Jul 15 23:13:03.005302 containerd[1979]: time="2025-07-15T23:13:03.005063575Z" level=info msg="CreateContainer within sandbox \"a652227f22b41d1ccfa08da46f11a6016d160aade0bb8d13b71056aab4e25dbc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4acab7997997e5702dcbdae71d54e087e903764486852c46a00e6ec8cfabc093\"" Jul 15 23:13:03.009421 containerd[1979]: time="2025-07-15T23:13:03.009361675Z" level=info msg="StartContainer for \"4acab7997997e5702dcbdae71d54e087e903764486852c46a00e6ec8cfabc093\"" Jul 15 23:13:03.013311 containerd[1979]: time="2025-07-15T23:13:03.012542845Z" level=info msg="connecting to shim 4acab7997997e5702dcbdae71d54e087e903764486852c46a00e6ec8cfabc093" address="unix:///run/containerd/s/8ffac4e307ec529794471b4c303479be398fdc82c69a20044b1c6fecbee5f0cb" protocol=ttrpc version=3 Jul 15 23:13:03.072602 systemd[1]: Started cri-containerd-4acab7997997e5702dcbdae71d54e087e903764486852c46a00e6ec8cfabc093.scope - libcontainer container 4acab7997997e5702dcbdae71d54e087e903764486852c46a00e6ec8cfabc093. 
Jul 15 23:13:03.149660 containerd[1979]: time="2025-07-15T23:13:03.149220362Z" level=info msg="StartContainer for \"9ae1c14eb3e5096408729a547f9197eb000f323c29771a963ec805ddce7a8811\" returns successfully" Jul 15 23:13:03.154752 kubelet[2912]: W0715 23:13:03.154691 2912 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.27.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.40:6443: connect: connection refused Jul 15 23:13:03.156502 kubelet[2912]: E0715 23:13:03.156401 2912 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.27.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.40:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:13:03.162983 containerd[1979]: time="2025-07-15T23:13:03.162917372Z" level=info msg="StartContainer for \"642f9eb4e2dab3fafd5293768012b66d9f79b426eefae32fb6a8b7171db764e3\" returns successfully" Jul 15 23:13:03.189709 kubelet[2912]: W0715 23:13:03.189598 2912 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.27.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.40:6443: connect: connection refused Jul 15 23:13:03.189709 kubelet[2912]: E0715 23:13:03.189666 2912 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.27.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.40:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:13:03.262359 kubelet[2912]: E0715 23:13:03.262165 2912 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.27.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-40?timeout=10s\": dial tcp 172.31.27.40:6443: connect: connection refused" interval="1.6s" Jul 15 23:13:03.265085 containerd[1979]: time="2025-07-15T23:13:03.264494664Z" level=info msg="StartContainer for \"4acab7997997e5702dcbdae71d54e087e903764486852c46a00e6ec8cfabc093\" returns successfully" Jul 15 23:13:03.501296 kubelet[2912]: I0715 23:13:03.498312 2912 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-40" Jul 15 23:13:03.930972 kubelet[2912]: E0715 23:13:03.930613 2912 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-40\" not found" node="ip-172-31-27-40" Jul 15 23:13:03.940637 kubelet[2912]: E0715 23:13:03.940603 2912 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-40\" not found" node="ip-172-31-27-40" Jul 15 23:13:03.947011 kubelet[2912]: E0715 23:13:03.946955 2912 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-40\" not found" node="ip-172-31-27-40" Jul 15 23:13:04.951208 kubelet[2912]: E0715 23:13:04.950553 2912 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-40\" not found" node="ip-172-31-27-40" Jul 15 23:13:04.951208 kubelet[2912]: E0715 23:13:04.950702 2912 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-40\" not found" node="ip-172-31-27-40" Jul 15 23:13:04.951208 kubelet[2912]: E0715 23:13:04.950934 2912 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-40\" not found" node="ip-172-31-27-40" Jul 15 23:13:05.953047 kubelet[2912]: E0715 23:13:05.952986 2912 kubelet.go:3190] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-40\" not found" node="ip-172-31-27-40" Jul 15 23:13:05.954224 kubelet[2912]: E0715 23:13:05.954185 2912 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-40\" not found" node="ip-172-31-27-40" Jul 15 23:13:06.773917 kubelet[2912]: E0715 23:13:06.773814 2912 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-40\" not found" node="ip-172-31-27-40" Jul 15 23:13:07.355547 kubelet[2912]: E0715 23:13:07.355487 2912 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-27-40\" not found" node="ip-172-31-27-40" Jul 15 23:13:07.555302 kubelet[2912]: I0715 23:13:07.554061 2912 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-27-40" Jul 15 23:13:07.650741 kubelet[2912]: I0715 23:13:07.650061 2912 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-40" Jul 15 23:13:07.667451 kubelet[2912]: E0715 23:13:07.666486 2912 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-27-40\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-27-40" Jul 15 23:13:07.667774 kubelet[2912]: I0715 23:13:07.667673 2912 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-27-40" Jul 15 23:13:07.673927 kubelet[2912]: E0715 23:13:07.673812 2912 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-27-40\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-27-40" Jul 15 23:13:07.673927 kubelet[2912]: I0715 23:13:07.673880 2912 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ip-172-31-27-40" Jul 15 23:13:07.680330 kubelet[2912]: E0715 23:13:07.679917 2912 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-27-40\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-27-40" Jul 15 23:13:07.764067 kubelet[2912]: I0715 23:13:07.763943 2912 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-27-40" Jul 15 23:13:07.770040 kubelet[2912]: E0715 23:13:07.769982 2912 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-27-40\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-27-40" Jul 15 23:13:07.828420 kubelet[2912]: I0715 23:13:07.828368 2912 apiserver.go:52] "Watching apiserver" Jul 15 23:13:07.849984 kubelet[2912]: I0715 23:13:07.849932 2912 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 23:13:09.305980 kubelet[2912]: I0715 23:13:09.305929 2912 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-40" Jul 15 23:13:09.740492 systemd[1]: Reload requested from client PID 3182 ('systemctl') (unit session-7.scope)... Jul 15 23:13:09.740990 systemd[1]: Reloading... Jul 15 23:13:09.949523 zram_generator::config[3227]: No configuration found. Jul 15 23:13:10.136147 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:13:10.426412 systemd[1]: Reloading finished in 684 ms. 
Jul 15 23:13:10.495640 kubelet[2912]: I0715 23:13:10.495550 2912 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:13:10.496100 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:13:10.514382 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 23:13:10.514994 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:13:10.515091 systemd[1]: kubelet.service: Consumed 2.264s CPU time, 128.3M memory peak. Jul 15 23:13:10.518949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:13:10.878504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:13:10.900576 (kubelet)[3286]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:13:10.996312 kubelet[3286]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:13:10.996312 kubelet[3286]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 23:13:10.996312 kubelet[3286]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 23:13:10.996312 kubelet[3286]: I0715 23:13:10.995744 3286 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:13:11.014728 kubelet[3286]: I0715 23:13:11.014675 3286 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 23:13:11.015333 kubelet[3286]: I0715 23:13:11.015259 3286 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:13:11.016072 kubelet[3286]: I0715 23:13:11.016041 3286 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 23:13:11.020582 kubelet[3286]: I0715 23:13:11.020457 3286 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 23:13:11.032738 kubelet[3286]: I0715 23:13:11.032694 3286 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:13:11.049920 kubelet[3286]: I0715 23:13:11.049722 3286 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:13:11.061016 kubelet[3286]: I0715 23:13:11.060802 3286 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 23:13:11.064242 kubelet[3286]: I0715 23:13:11.061241 3286 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:13:11.064242 kubelet[3286]: I0715 23:13:11.063425 3286 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-40","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 23:13:11.064242 kubelet[3286]: I0715 23:13:11.063967 3286 topology_manager.go:138] "Creating topology manager with none 
policy" Jul 15 23:13:11.064242 kubelet[3286]: I0715 23:13:11.063991 3286 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 23:13:11.065793 kubelet[3286]: I0715 23:13:11.065366 3286 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:13:11.065793 kubelet[3286]: I0715 23:13:11.065704 3286 kubelet.go:446] "Attempting to sync node with API server" Jul 15 23:13:11.065793 kubelet[3286]: I0715 23:13:11.065734 3286 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 23:13:11.065793 kubelet[3286]: I0715 23:13:11.065788 3286 kubelet.go:352] "Adding apiserver pod source" Jul 15 23:13:11.066069 kubelet[3286]: I0715 23:13:11.065820 3286 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 23:13:11.075606 kubelet[3286]: I0715 23:13:11.075534 3286 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 15 23:13:11.083305 kubelet[3286]: I0715 23:13:11.082464 3286 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 23:13:11.083305 kubelet[3286]: I0715 23:13:11.083196 3286 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 23:13:11.083305 kubelet[3286]: I0715 23:13:11.083238 3286 server.go:1287] "Started kubelet" Jul 15 23:13:11.092802 kubelet[3286]: I0715 23:13:11.092236 3286 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 23:13:11.100453 kubelet[3286]: I0715 23:13:11.100327 3286 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 23:13:11.104291 kubelet[3286]: I0715 23:13:11.103063 3286 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:13:11.106411 kubelet[3286]: I0715 23:13:11.105146 3286 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 
23:13:11.118834 kubelet[3286]: I0715 23:13:11.118782 3286 server.go:479] "Adding debug handlers to kubelet server" Jul 15 23:13:11.126289 kubelet[3286]: I0715 23:13:11.107058 3286 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 23:13:11.126289 kubelet[3286]: I0715 23:13:11.107084 3286 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 23:13:11.126289 kubelet[3286]: E0715 23:13:11.108562 3286 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-40\" not found" Jul 15 23:13:11.126289 kubelet[3286]: I0715 23:13:11.125042 3286 reconciler.go:26] "Reconciler: start to sync state" Jul 15 23:13:11.130385 kubelet[3286]: I0715 23:13:11.129824 3286 factory.go:221] Registration of the systemd container factory successfully Jul 15 23:13:11.131663 kubelet[3286]: I0715 23:13:11.131618 3286 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:13:11.134575 kubelet[3286]: I0715 23:13:11.105818 3286 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:13:11.163059 kubelet[3286]: I0715 23:13:11.162448 3286 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 23:13:11.174301 kubelet[3286]: I0715 23:13:11.173724 3286 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 23:13:11.174301 kubelet[3286]: I0715 23:13:11.173775 3286 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 23:13:11.174301 kubelet[3286]: I0715 23:13:11.173814 3286 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 15 23:13:11.174301 kubelet[3286]: I0715 23:13:11.173829 3286 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 23:13:11.174301 kubelet[3286]: E0715 23:13:11.173901 3286 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:13:11.205886 kubelet[3286]: I0715 23:13:11.205835 3286 factory.go:221] Registration of the containerd container factory successfully Jul 15 23:13:11.207484 kubelet[3286]: E0715 23:13:11.207445 3286 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 23:13:11.234108 kubelet[3286]: E0715 23:13:11.233326 3286 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-40\" not found" Jul 15 23:13:11.276476 kubelet[3286]: E0715 23:13:11.274522 3286 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 23:13:11.390507 kubelet[3286]: I0715 23:13:11.389530 3286 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 23:13:11.390507 kubelet[3286]: I0715 23:13:11.389562 3286 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 23:13:11.390507 kubelet[3286]: I0715 23:13:11.389597 3286 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:13:11.390507 kubelet[3286]: I0715 23:13:11.389865 3286 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 23:13:11.390507 kubelet[3286]: I0715 23:13:11.389886 3286 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 23:13:11.390507 kubelet[3286]: I0715 23:13:11.389924 3286 policy_none.go:49] "None policy: Start" Jul 15 23:13:11.390507 kubelet[3286]: I0715 23:13:11.389940 3286 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 23:13:11.390507 kubelet[3286]: I0715 23:13:11.389959 3286 state_mem.go:35] "Initializing 
new in-memory state store" Jul 15 23:13:11.390507 kubelet[3286]: I0715 23:13:11.390133 3286 state_mem.go:75] "Updated machine memory state" Jul 15 23:13:11.415780 kubelet[3286]: I0715 23:13:11.415720 3286 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 23:13:11.418245 kubelet[3286]: I0715 23:13:11.418186 3286 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:13:11.419041 kubelet[3286]: I0715 23:13:11.418229 3286 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:13:11.419135 kubelet[3286]: I0715 23:13:11.419111 3286 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:13:11.429508 kubelet[3286]: E0715 23:13:11.429449 3286 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 23:13:11.476402 kubelet[3286]: I0715 23:13:11.476202 3286 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-27-40" Jul 15 23:13:11.477783 kubelet[3286]: I0715 23:13:11.477651 3286 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-27-40" Jul 15 23:13:11.479381 kubelet[3286]: I0715 23:13:11.479172 3286 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-40" Jul 15 23:13:11.498155 kubelet[3286]: E0715 23:13:11.498078 3286 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-27-40\" already exists" pod="kube-system/kube-apiserver-ip-172-31-27-40" Jul 15 23:13:11.529471 kubelet[3286]: I0715 23:13:11.529414 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b5517b0cf8a6f9753e0a8044bba4107f-kubeconfig\") pod 
\"kube-scheduler-ip-172-31-27-40\" (UID: \"b5517b0cf8a6f9753e0a8044bba4107f\") " pod="kube-system/kube-scheduler-ip-172-31-27-40" Jul 15 23:13:11.530006 kubelet[3286]: I0715 23:13:11.529921 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a2788714fb942e8a8a2d463095a81b5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-40\" (UID: \"2a2788714fb942e8a8a2d463095a81b5\") " pod="kube-system/kube-apiserver-ip-172-31-27-40" Jul 15 23:13:11.530187 kubelet[3286]: I0715 23:13:11.530129 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc90c4df7aa99f75273cfcc8319d5376-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-40\" (UID: \"fc90c4df7aa99f75273cfcc8319d5376\") " pod="kube-system/kube-controller-manager-ip-172-31-27-40" Jul 15 23:13:11.530399 kubelet[3286]: I0715 23:13:11.530336 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a2788714fb942e8a8a2d463095a81b5-ca-certs\") pod \"kube-apiserver-ip-172-31-27-40\" (UID: \"2a2788714fb942e8a8a2d463095a81b5\") " pod="kube-system/kube-apiserver-ip-172-31-27-40" Jul 15 23:13:11.530565 kubelet[3286]: I0715 23:13:11.530379 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a2788714fb942e8a8a2d463095a81b5-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-40\" (UID: \"2a2788714fb942e8a8a2d463095a81b5\") " pod="kube-system/kube-apiserver-ip-172-31-27-40" Jul 15 23:13:11.530783 kubelet[3286]: I0715 23:13:11.530718 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/fc90c4df7aa99f75273cfcc8319d5376-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-40\" (UID: \"fc90c4df7aa99f75273cfcc8319d5376\") " pod="kube-system/kube-controller-manager-ip-172-31-27-40" Jul 15 23:13:11.531023 kubelet[3286]: I0715 23:13:11.530924 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc90c4df7aa99f75273cfcc8319d5376-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-40\" (UID: \"fc90c4df7aa99f75273cfcc8319d5376\") " pod="kube-system/kube-controller-manager-ip-172-31-27-40" Jul 15 23:13:11.531255 kubelet[3286]: I0715 23:13:11.531177 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc90c4df7aa99f75273cfcc8319d5376-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-40\" (UID: \"fc90c4df7aa99f75273cfcc8319d5376\") " pod="kube-system/kube-controller-manager-ip-172-31-27-40" Jul 15 23:13:11.531536 kubelet[3286]: I0715 23:13:11.531443 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc90c4df7aa99f75273cfcc8319d5376-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-40\" (UID: \"fc90c4df7aa99f75273cfcc8319d5376\") " pod="kube-system/kube-controller-manager-ip-172-31-27-40" Jul 15 23:13:11.551691 kubelet[3286]: I0715 23:13:11.551176 3286 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-40" Jul 15 23:13:11.569077 kubelet[3286]: I0715 23:13:11.569011 3286 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-27-40" Jul 15 23:13:11.569207 kubelet[3286]: I0715 23:13:11.569137 3286 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-27-40" Jul 15 23:13:12.075240 kubelet[3286]: I0715 23:13:12.074881 
3286 apiserver.go:52] "Watching apiserver" Jul 15 23:13:12.126067 kubelet[3286]: I0715 23:13:12.125985 3286 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 23:13:12.312660 kubelet[3286]: I0715 23:13:12.312568 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-27-40" podStartSLOduration=1.312549205 podStartE2EDuration="1.312549205s" podCreationTimestamp="2025-07-15 23:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:13:12.312139788 +0000 UTC m=+1.403833616" watchObservedRunningTime="2025-07-15 23:13:12.312549205 +0000 UTC m=+1.404243021" Jul 15 23:13:12.316308 kubelet[3286]: I0715 23:13:12.314955 3286 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-40" Jul 15 23:13:12.328638 kubelet[3286]: E0715 23:13:12.328505 3286 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-27-40\" already exists" pod="kube-system/kube-apiserver-ip-172-31-27-40" Jul 15 23:13:12.365345 kubelet[3286]: I0715 23:13:12.363772 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-27-40" podStartSLOduration=3.363745359 podStartE2EDuration="3.363745359s" podCreationTimestamp="2025-07-15 23:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:13:12.336703092 +0000 UTC m=+1.428397004" watchObservedRunningTime="2025-07-15 23:13:12.363745359 +0000 UTC m=+1.455439187" Jul 15 23:13:12.365345 kubelet[3286]: I0715 23:13:12.364003 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-27-40" podStartSLOduration=1.363992778 podStartE2EDuration="1.363992778s" 
podCreationTimestamp="2025-07-15 23:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:13:12.360703891 +0000 UTC m=+1.452397719" watchObservedRunningTime="2025-07-15 23:13:12.363992778 +0000 UTC m=+1.455686594" Jul 15 23:13:13.882759 update_engine[1945]: I20250715 23:13:13.882409 1945 update_attempter.cc:509] Updating boot flags... Jul 15 23:13:15.652065 kubelet[3286]: I0715 23:13:15.652027 3286 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 23:13:15.655633 containerd[1979]: time="2025-07-15T23:13:15.655559778Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 23:13:15.657361 kubelet[3286]: I0715 23:13:15.656554 3286 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 23:13:16.621316 systemd[1]: Created slice kubepods-besteffort-pod260920e4_a68a_435e_a0bc_f8e93c1de4d8.slice - libcontainer container kubepods-besteffort-pod260920e4_a68a_435e_a0bc_f8e93c1de4d8.slice. 
Jul 15 23:13:16.670598 kubelet[3286]: I0715 23:13:16.670552 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/260920e4-a68a-435e-a0bc-f8e93c1de4d8-lib-modules\") pod \"kube-proxy-f9rpn\" (UID: \"260920e4-a68a-435e-a0bc-f8e93c1de4d8\") " pod="kube-system/kube-proxy-f9rpn" Jul 15 23:13:16.671354 kubelet[3286]: I0715 23:13:16.670871 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/260920e4-a68a-435e-a0bc-f8e93c1de4d8-xtables-lock\") pod \"kube-proxy-f9rpn\" (UID: \"260920e4-a68a-435e-a0bc-f8e93c1de4d8\") " pod="kube-system/kube-proxy-f9rpn" Jul 15 23:13:16.672320 kubelet[3286]: I0715 23:13:16.672060 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/260920e4-a68a-435e-a0bc-f8e93c1de4d8-kube-proxy\") pod \"kube-proxy-f9rpn\" (UID: \"260920e4-a68a-435e-a0bc-f8e93c1de4d8\") " pod="kube-system/kube-proxy-f9rpn" Jul 15 23:13:16.672320 kubelet[3286]: I0715 23:13:16.672203 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8fq8\" (UniqueName: \"kubernetes.io/projected/260920e4-a68a-435e-a0bc-f8e93c1de4d8-kube-api-access-x8fq8\") pod \"kube-proxy-f9rpn\" (UID: \"260920e4-a68a-435e-a0bc-f8e93c1de4d8\") " pod="kube-system/kube-proxy-f9rpn" Jul 15 23:13:16.766791 systemd[1]: Created slice kubepods-besteffort-pod40a38156_cce1_4cc4_a234_8d982eb5d3a7.slice - libcontainer container kubepods-besteffort-pod40a38156_cce1_4cc4_a234_8d982eb5d3a7.slice. 
Jul 15 23:13:16.775394 kubelet[3286]: I0715 23:13:16.775338 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/40a38156-cce1-4cc4-a234-8d982eb5d3a7-var-lib-calico\") pod \"tigera-operator-747864d56d-2f85n\" (UID: \"40a38156-cce1-4cc4-a234-8d982eb5d3a7\") " pod="tigera-operator/tigera-operator-747864d56d-2f85n" Jul 15 23:13:16.775571 kubelet[3286]: I0715 23:13:16.775498 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgcqs\" (UniqueName: \"kubernetes.io/projected/40a38156-cce1-4cc4-a234-8d982eb5d3a7-kube-api-access-zgcqs\") pod \"tigera-operator-747864d56d-2f85n\" (UID: \"40a38156-cce1-4cc4-a234-8d982eb5d3a7\") " pod="tigera-operator/tigera-operator-747864d56d-2f85n" Jul 15 23:13:16.938483 containerd[1979]: time="2025-07-15T23:13:16.937896611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f9rpn,Uid:260920e4-a68a-435e-a0bc-f8e93c1de4d8,Namespace:kube-system,Attempt:0,}" Jul 15 23:13:16.975401 containerd[1979]: time="2025-07-15T23:13:16.974550821Z" level=info msg="connecting to shim 18937ae45a6e5385bbfc0d399fca87f1899c4c2cb8500bdbbfe0b280342fc77a" address="unix:///run/containerd/s/39c90a7e37780f558bc5dc8464501c8d70fcefcd565ca41e2babf523f90d1466" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:13:17.022594 systemd[1]: Started cri-containerd-18937ae45a6e5385bbfc0d399fca87f1899c4c2cb8500bdbbfe0b280342fc77a.scope - libcontainer container 18937ae45a6e5385bbfc0d399fca87f1899c4c2cb8500bdbbfe0b280342fc77a. 
Jul 15 23:13:17.073940 containerd[1979]: time="2025-07-15T23:13:17.073840754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f9rpn,Uid:260920e4-a68a-435e-a0bc-f8e93c1de4d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"18937ae45a6e5385bbfc0d399fca87f1899c4c2cb8500bdbbfe0b280342fc77a\"" Jul 15 23:13:17.081537 containerd[1979]: time="2025-07-15T23:13:17.080646451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-2f85n,Uid:40a38156-cce1-4cc4-a234-8d982eb5d3a7,Namespace:tigera-operator,Attempt:0,}" Jul 15 23:13:17.082826 containerd[1979]: time="2025-07-15T23:13:17.082758247Z" level=info msg="CreateContainer within sandbox \"18937ae45a6e5385bbfc0d399fca87f1899c4c2cb8500bdbbfe0b280342fc77a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 23:13:17.118695 containerd[1979]: time="2025-07-15T23:13:17.118606698Z" level=info msg="Container 1c3dda2abfd6ce88f654710422579d7b1db674276d6aa04fbcf445a75034f765: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:13:17.130601 containerd[1979]: time="2025-07-15T23:13:17.130430219Z" level=info msg="connecting to shim 2849a106320265e8801265fd772bffeb7e0b58403018ceb351be33a0c2a8511b" address="unix:///run/containerd/s/b55300d2e855e414d39ecb4d5d510f30262132432a28687904c2f04f563b434e" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:13:17.142781 containerd[1979]: time="2025-07-15T23:13:17.142717892Z" level=info msg="CreateContainer within sandbox \"18937ae45a6e5385bbfc0d399fca87f1899c4c2cb8500bdbbfe0b280342fc77a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1c3dda2abfd6ce88f654710422579d7b1db674276d6aa04fbcf445a75034f765\"" Jul 15 23:13:17.145711 containerd[1979]: time="2025-07-15T23:13:17.145640945Z" level=info msg="StartContainer for \"1c3dda2abfd6ce88f654710422579d7b1db674276d6aa04fbcf445a75034f765\"" Jul 15 23:13:17.154075 containerd[1979]: time="2025-07-15T23:13:17.153933681Z" level=info msg="connecting to shim 
1c3dda2abfd6ce88f654710422579d7b1db674276d6aa04fbcf445a75034f765" address="unix:///run/containerd/s/39c90a7e37780f558bc5dc8464501c8d70fcefcd565ca41e2babf523f90d1466" protocol=ttrpc version=3 Jul 15 23:13:17.188614 systemd[1]: Started cri-containerd-2849a106320265e8801265fd772bffeb7e0b58403018ceb351be33a0c2a8511b.scope - libcontainer container 2849a106320265e8801265fd772bffeb7e0b58403018ceb351be33a0c2a8511b. Jul 15 23:13:17.216840 systemd[1]: Started cri-containerd-1c3dda2abfd6ce88f654710422579d7b1db674276d6aa04fbcf445a75034f765.scope - libcontainer container 1c3dda2abfd6ce88f654710422579d7b1db674276d6aa04fbcf445a75034f765. Jul 15 23:13:17.318847 containerd[1979]: time="2025-07-15T23:13:17.318777371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-2f85n,Uid:40a38156-cce1-4cc4-a234-8d982eb5d3a7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2849a106320265e8801265fd772bffeb7e0b58403018ceb351be33a0c2a8511b\"" Jul 15 23:13:17.324943 containerd[1979]: time="2025-07-15T23:13:17.323886908Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 15 23:13:17.359627 containerd[1979]: time="2025-07-15T23:13:17.359538305Z" level=info msg="StartContainer for \"1c3dda2abfd6ce88f654710422579d7b1db674276d6aa04fbcf445a75034f765\" returns successfully" Jul 15 23:13:18.384448 kubelet[3286]: I0715 23:13:18.384129 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f9rpn" podStartSLOduration=2.383879667 podStartE2EDuration="2.383879667s" podCreationTimestamp="2025-07-15 23:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:13:18.383779765 +0000 UTC m=+7.475473605" watchObservedRunningTime="2025-07-15 23:13:18.383879667 +0000 UTC m=+7.475573495" Jul 15 23:13:18.465620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount209935482.mount: Deactivated successfully. 
Jul 15 23:13:19.478311 containerd[1979]: time="2025-07-15T23:13:19.477774993Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:13:19.481060 containerd[1979]: time="2025-07-15T23:13:19.481002446Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 15 23:13:19.483880 containerd[1979]: time="2025-07-15T23:13:19.483778870Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:13:19.488341 containerd[1979]: time="2025-07-15T23:13:19.488189394Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:13:19.489823 containerd[1979]: time="2025-07-15T23:13:19.489621806Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.164463726s" Jul 15 23:13:19.489823 containerd[1979]: time="2025-07-15T23:13:19.489679663Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 15 23:13:19.494454 containerd[1979]: time="2025-07-15T23:13:19.494380132Z" level=info msg="CreateContainer within sandbox \"2849a106320265e8801265fd772bffeb7e0b58403018ceb351be33a0c2a8511b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 15 23:13:19.512186 containerd[1979]: time="2025-07-15T23:13:19.512091143Z" level=info msg="Container 
577f74cb451c96167e966a4040cd5bf9f7f1779fae2913a61e8916b34c17588e: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:13:19.527140 containerd[1979]: time="2025-07-15T23:13:19.526996064Z" level=info msg="CreateContainer within sandbox \"2849a106320265e8801265fd772bffeb7e0b58403018ceb351be33a0c2a8511b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"577f74cb451c96167e966a4040cd5bf9f7f1779fae2913a61e8916b34c17588e\"" Jul 15 23:13:19.528916 containerd[1979]: time="2025-07-15T23:13:19.528852360Z" level=info msg="StartContainer for \"577f74cb451c96167e966a4040cd5bf9f7f1779fae2913a61e8916b34c17588e\"" Jul 15 23:13:19.531551 containerd[1979]: time="2025-07-15T23:13:19.531434130Z" level=info msg="connecting to shim 577f74cb451c96167e966a4040cd5bf9f7f1779fae2913a61e8916b34c17588e" address="unix:///run/containerd/s/b55300d2e855e414d39ecb4d5d510f30262132432a28687904c2f04f563b434e" protocol=ttrpc version=3 Jul 15 23:13:19.568602 systemd[1]: Started cri-containerd-577f74cb451c96167e966a4040cd5bf9f7f1779fae2913a61e8916b34c17588e.scope - libcontainer container 577f74cb451c96167e966a4040cd5bf9f7f1779fae2913a61e8916b34c17588e. 
Jul 15 23:13:19.632741 containerd[1979]: time="2025-07-15T23:13:19.632678183Z" level=info msg="StartContainer for \"577f74cb451c96167e966a4040cd5bf9f7f1779fae2913a61e8916b34c17588e\" returns successfully" Jul 15 23:13:22.174933 kubelet[3286]: I0715 23:13:22.174829 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-2f85n" podStartSLOduration=4.004958589 podStartE2EDuration="6.174804485s" podCreationTimestamp="2025-07-15 23:13:16 +0000 UTC" firstStartedPulling="2025-07-15 23:13:17.321531163 +0000 UTC m=+6.413224979" lastFinishedPulling="2025-07-15 23:13:19.491377071 +0000 UTC m=+8.583070875" observedRunningTime="2025-07-15 23:13:20.390628338 +0000 UTC m=+9.482322154" watchObservedRunningTime="2025-07-15 23:13:22.174804485 +0000 UTC m=+11.266498301" Jul 15 23:13:28.267447 sudo[2342]: pam_unix(sudo:session): session closed for user root Jul 15 23:13:28.295332 sshd[2341]: Connection closed by 139.178.89.65 port 55708 Jul 15 23:13:28.296174 sshd-session[2339]: pam_unix(sshd:session): session closed for user core Jul 15 23:13:28.309041 systemd-logind[1944]: Session 7 logged out. Waiting for processes to exit. Jul 15 23:13:28.309043 systemd[1]: sshd@6-172.31.27.40:22-139.178.89.65:55708.service: Deactivated successfully. Jul 15 23:13:28.323711 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 23:13:28.324186 systemd[1]: session-7.scope: Consumed 11.006s CPU time, 235.6M memory peak. Jul 15 23:13:28.332014 systemd-logind[1944]: Removed session 7. Jul 15 23:13:42.079104 systemd[1]: Created slice kubepods-besteffort-podd9012f69_0901_47e1_a070_b1d255fc80f8.slice - libcontainer container kubepods-besteffort-podd9012f69_0901_47e1_a070_b1d255fc80f8.slice. 
Jul 15 23:13:42.147542 kubelet[3286]: I0715 23:13:42.147483 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9012f69-0901-47e1-a070-b1d255fc80f8-tigera-ca-bundle\") pod \"calico-typha-fc94c88c4-lcf6l\" (UID: \"d9012f69-0901-47e1-a070-b1d255fc80f8\") " pod="calico-system/calico-typha-fc94c88c4-lcf6l" Jul 15 23:13:42.149792 kubelet[3286]: I0715 23:13:42.149578 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d9012f69-0901-47e1-a070-b1d255fc80f8-typha-certs\") pod \"calico-typha-fc94c88c4-lcf6l\" (UID: \"d9012f69-0901-47e1-a070-b1d255fc80f8\") " pod="calico-system/calico-typha-fc94c88c4-lcf6l" Jul 15 23:13:42.149792 kubelet[3286]: I0715 23:13:42.149700 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6m8n\" (UniqueName: \"kubernetes.io/projected/d9012f69-0901-47e1-a070-b1d255fc80f8-kube-api-access-c6m8n\") pod \"calico-typha-fc94c88c4-lcf6l\" (UID: \"d9012f69-0901-47e1-a070-b1d255fc80f8\") " pod="calico-system/calico-typha-fc94c88c4-lcf6l" Jul 15 23:13:42.385947 containerd[1979]: time="2025-07-15T23:13:42.385481066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fc94c88c4-lcf6l,Uid:d9012f69-0901-47e1-a070-b1d255fc80f8,Namespace:calico-system,Attempt:0,}" Jul 15 23:13:42.464336 containerd[1979]: time="2025-07-15T23:13:42.463694621Z" level=info msg="connecting to shim 0073150cad96b197da3dc28a5f90dfc53a628766dacaf5576335fa8cef1a7873" address="unix:///run/containerd/s/40e49a9cc346226a0234f22a251e91f6ba8936f55ac6ba4d886acf6e2850011d" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:13:42.544257 systemd[1]: Started cri-containerd-0073150cad96b197da3dc28a5f90dfc53a628766dacaf5576335fa8cef1a7873.scope - libcontainer container 
0073150cad96b197da3dc28a5f90dfc53a628766dacaf5576335fa8cef1a7873. Jul 15 23:13:42.570499 systemd[1]: Created slice kubepods-besteffort-pod1bd2e25a_df19_4c01_bd5e_ba517139a294.slice - libcontainer container kubepods-besteffort-pod1bd2e25a_df19_4c01_bd5e_ba517139a294.slice. Jul 15 23:13:42.653742 kubelet[3286]: I0715 23:13:42.653596 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1bd2e25a-df19-4c01-bd5e-ba517139a294-flexvol-driver-host\") pod \"calico-node-h8ff6\" (UID: \"1bd2e25a-df19-4c01-bd5e-ba517139a294\") " pod="calico-system/calico-node-h8ff6" Jul 15 23:13:42.653742 kubelet[3286]: I0715 23:13:42.653666 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1bd2e25a-df19-4c01-bd5e-ba517139a294-policysync\") pod \"calico-node-h8ff6\" (UID: \"1bd2e25a-df19-4c01-bd5e-ba517139a294\") " pod="calico-system/calico-node-h8ff6" Jul 15 23:13:42.653742 kubelet[3286]: I0715 23:13:42.653709 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1bd2e25a-df19-4c01-bd5e-ba517139a294-xtables-lock\") pod \"calico-node-h8ff6\" (UID: \"1bd2e25a-df19-4c01-bd5e-ba517139a294\") " pod="calico-system/calico-node-h8ff6" Jul 15 23:13:42.653742 kubelet[3286]: I0715 23:13:42.653743 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1bd2e25a-df19-4c01-bd5e-ba517139a294-cni-bin-dir\") pod \"calico-node-h8ff6\" (UID: \"1bd2e25a-df19-4c01-bd5e-ba517139a294\") " pod="calico-system/calico-node-h8ff6" Jul 15 23:13:42.654035 kubelet[3286]: I0715 23:13:42.653784 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" 
(UniqueName: \"kubernetes.io/host-path/1bd2e25a-df19-4c01-bd5e-ba517139a294-var-run-calico\") pod \"calico-node-h8ff6\" (UID: \"1bd2e25a-df19-4c01-bd5e-ba517139a294\") " pod="calico-system/calico-node-h8ff6" Jul 15 23:13:42.654035 kubelet[3286]: I0715 23:13:42.653821 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1bd2e25a-df19-4c01-bd5e-ba517139a294-cni-log-dir\") pod \"calico-node-h8ff6\" (UID: \"1bd2e25a-df19-4c01-bd5e-ba517139a294\") " pod="calico-system/calico-node-h8ff6" Jul 15 23:13:42.654035 kubelet[3286]: I0715 23:13:42.653854 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1bd2e25a-df19-4c01-bd5e-ba517139a294-node-certs\") pod \"calico-node-h8ff6\" (UID: \"1bd2e25a-df19-4c01-bd5e-ba517139a294\") " pod="calico-system/calico-node-h8ff6" Jul 15 23:13:42.654035 kubelet[3286]: I0715 23:13:42.653888 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bd2e25a-df19-4c01-bd5e-ba517139a294-tigera-ca-bundle\") pod \"calico-node-h8ff6\" (UID: \"1bd2e25a-df19-4c01-bd5e-ba517139a294\") " pod="calico-system/calico-node-h8ff6" Jul 15 23:13:42.654035 kubelet[3286]: I0715 23:13:42.653931 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1bd2e25a-df19-4c01-bd5e-ba517139a294-cni-net-dir\") pod \"calico-node-h8ff6\" (UID: \"1bd2e25a-df19-4c01-bd5e-ba517139a294\") " pod="calico-system/calico-node-h8ff6" Jul 15 23:13:42.654326 kubelet[3286]: I0715 23:13:42.653968 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/1bd2e25a-df19-4c01-bd5e-ba517139a294-lib-modules\") pod \"calico-node-h8ff6\" (UID: \"1bd2e25a-df19-4c01-bd5e-ba517139a294\") " pod="calico-system/calico-node-h8ff6" Jul 15 23:13:42.654326 kubelet[3286]: I0715 23:13:42.654006 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1bd2e25a-df19-4c01-bd5e-ba517139a294-var-lib-calico\") pod \"calico-node-h8ff6\" (UID: \"1bd2e25a-df19-4c01-bd5e-ba517139a294\") " pod="calico-system/calico-node-h8ff6" Jul 15 23:13:42.654326 kubelet[3286]: I0715 23:13:42.654113 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn2fl\" (UniqueName: \"kubernetes.io/projected/1bd2e25a-df19-4c01-bd5e-ba517139a294-kube-api-access-bn2fl\") pod \"calico-node-h8ff6\" (UID: \"1bd2e25a-df19-4c01-bd5e-ba517139a294\") " pod="calico-system/calico-node-h8ff6" Jul 15 23:13:42.660671 kubelet[3286]: E0715 23:13:42.660577 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9g96k" podUID="da728a4b-58e6-4528-95e7-1b142a44e9e3" Jul 15 23:13:42.754843 kubelet[3286]: I0715 23:13:42.754760 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/da728a4b-58e6-4528-95e7-1b142a44e9e3-varrun\") pod \"csi-node-driver-9g96k\" (UID: \"da728a4b-58e6-4528-95e7-1b142a44e9e3\") " pod="calico-system/csi-node-driver-9g96k" Jul 15 23:13:42.755725 kubelet[3286]: I0715 23:13:42.755690 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9b5d\" (UniqueName: 
\"kubernetes.io/projected/da728a4b-58e6-4528-95e7-1b142a44e9e3-kube-api-access-c9b5d\") pod \"csi-node-driver-9g96k\" (UID: \"da728a4b-58e6-4528-95e7-1b142a44e9e3\") " pod="calico-system/csi-node-driver-9g96k" Jul 15 23:13:42.756360 kubelet[3286]: I0715 23:13:42.756323 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da728a4b-58e6-4528-95e7-1b142a44e9e3-kubelet-dir\") pod \"csi-node-driver-9g96k\" (UID: \"da728a4b-58e6-4528-95e7-1b142a44e9e3\") " pod="calico-system/csi-node-driver-9g96k" Jul 15 23:13:42.756663 kubelet[3286]: I0715 23:13:42.756636 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/da728a4b-58e6-4528-95e7-1b142a44e9e3-socket-dir\") pod \"csi-node-driver-9g96k\" (UID: \"da728a4b-58e6-4528-95e7-1b142a44e9e3\") " pod="calico-system/csi-node-driver-9g96k" Jul 15 23:13:42.757087 kubelet[3286]: I0715 23:13:42.756995 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/da728a4b-58e6-4528-95e7-1b142a44e9e3-registration-dir\") pod \"csi-node-driver-9g96k\" (UID: \"da728a4b-58e6-4528-95e7-1b142a44e9e3\") " pod="calico-system/csi-node-driver-9g96k" Jul 15 23:13:42.758548 kubelet[3286]: E0715 23:13:42.758481 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.758548 kubelet[3286]: W0715 23:13:42.758525 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.758548 kubelet[3286]: E0715 23:13:42.758574 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.759686 kubelet[3286]: E0715 23:13:42.759165 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.759686 kubelet[3286]: W0715 23:13:42.759190 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.759686 kubelet[3286]: E0715 23:13:42.759220 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.761008 kubelet[3286]: E0715 23:13:42.760928 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.761008 kubelet[3286]: W0715 23:13:42.760966 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.761381 kubelet[3286]: E0715 23:13:42.761256 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.763700 kubelet[3286]: E0715 23:13:42.763662 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.764039 kubelet[3286]: W0715 23:13:42.763879 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.764039 kubelet[3286]: E0715 23:13:42.763964 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.764737 kubelet[3286]: E0715 23:13:42.764671 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.764737 kubelet[3286]: W0715 23:13:42.764703 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.765115 kubelet[3286]: E0715 23:13:42.765012 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.766424 kubelet[3286]: E0715 23:13:42.766387 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.767317 kubelet[3286]: W0715 23:13:42.766583 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.767676 kubelet[3286]: E0715 23:13:42.767607 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.769299 kubelet[3286]: E0715 23:13:42.768546 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.769631 kubelet[3286]: W0715 23:13:42.769448 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.769631 kubelet[3286]: E0715 23:13:42.769535 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.770247 kubelet[3286]: E0715 23:13:42.770202 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.770483 kubelet[3286]: W0715 23:13:42.770405 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.770554 kubelet[3286]: E0715 23:13:42.770485 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.772587 kubelet[3286]: E0715 23:13:42.772528 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.772832 kubelet[3286]: W0715 23:13:42.772753 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.772919 kubelet[3286]: E0715 23:13:42.772846 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.774300 kubelet[3286]: E0715 23:13:42.774220 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.774816 kubelet[3286]: W0715 23:13:42.774257 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.774816 kubelet[3286]: E0715 23:13:42.774770 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.776084 kubelet[3286]: E0715 23:13:42.776030 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.777399 kubelet[3286]: W0715 23:13:42.776317 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.777399 kubelet[3286]: E0715 23:13:42.776413 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.778084 kubelet[3286]: E0715 23:13:42.778052 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.778292 kubelet[3286]: W0715 23:13:42.778208 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.778380 kubelet[3286]: E0715 23:13:42.778303 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.779445 kubelet[3286]: E0715 23:13:42.779398 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.779730 kubelet[3286]: W0715 23:13:42.779613 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.779730 kubelet[3286]: E0715 23:13:42.779674 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.780332 kubelet[3286]: E0715 23:13:42.780209 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.781396 kubelet[3286]: W0715 23:13:42.780257 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.781396 kubelet[3286]: E0715 23:13:42.781419 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.781838 kubelet[3286]: E0715 23:13:42.781802 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.781838 kubelet[3286]: W0715 23:13:42.781832 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.783442 kubelet[3286]: E0715 23:13:42.782058 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.783442 kubelet[3286]: E0715 23:13:42.782129 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.783442 kubelet[3286]: W0715 23:13:42.782146 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.783442 kubelet[3286]: E0715 23:13:42.782181 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.783442 kubelet[3286]: E0715 23:13:42.782556 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.783442 kubelet[3286]: W0715 23:13:42.782576 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.783442 kubelet[3286]: E0715 23:13:42.782599 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.783442 kubelet[3286]: E0715 23:13:42.782908 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.783442 kubelet[3286]: W0715 23:13:42.782925 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.783442 kubelet[3286]: E0715 23:13:42.782944 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.784611 kubelet[3286]: E0715 23:13:42.784564 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.784611 kubelet[3286]: W0715 23:13:42.784602 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.784791 kubelet[3286]: E0715 23:13:42.784635 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.801017 kubelet[3286]: E0715 23:13:42.795502 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.801163 kubelet[3286]: W0715 23:13:42.801014 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.801163 kubelet[3286]: E0715 23:13:42.801056 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.818559 kubelet[3286]: E0715 23:13:42.818373 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.819131 kubelet[3286]: W0715 23:13:42.819013 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.820430 kubelet[3286]: E0715 23:13:42.820368 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.869530 kubelet[3286]: E0715 23:13:42.869492 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.870494 kubelet[3286]: W0715 23:13:42.869879 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.870494 kubelet[3286]: E0715 23:13:42.869928 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.873054 kubelet[3286]: E0715 23:13:42.872839 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.874454 kubelet[3286]: W0715 23:13:42.874391 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.874865 kubelet[3286]: E0715 23:13:42.874765 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.875426 kubelet[3286]: E0715 23:13:42.875377 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.875426 kubelet[3286]: W0715 23:13:42.875415 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.875580 kubelet[3286]: E0715 23:13:42.875460 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.876250 kubelet[3286]: E0715 23:13:42.876086 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.876607 kubelet[3286]: W0715 23:13:42.876247 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.877075 kubelet[3286]: E0715 23:13:42.876798 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.878095 kubelet[3286]: E0715 23:13:42.878034 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.878095 kubelet[3286]: W0715 23:13:42.878078 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.880363 kubelet[3286]: E0715 23:13:42.878231 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.880565 kubelet[3286]: E0715 23:13:42.880522 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.880625 kubelet[3286]: W0715 23:13:42.880560 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.880890 kubelet[3286]: E0715 23:13:42.880709 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.881401 kubelet[3286]: E0715 23:13:42.881323 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.881401 kubelet[3286]: W0715 23:13:42.881386 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.882323 kubelet[3286]: E0715 23:13:42.881613 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.882688 containerd[1979]: time="2025-07-15T23:13:42.882585225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h8ff6,Uid:1bd2e25a-df19-4c01-bd5e-ba517139a294,Namespace:calico-system,Attempt:0,}" Jul 15 23:13:42.882777 kubelet[3286]: E0715 23:13:42.882750 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.882867 kubelet[3286]: W0715 23:13:42.882775 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.883289 kubelet[3286]: E0715 23:13:42.882989 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.884208 kubelet[3286]: E0715 23:13:42.883866 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.884208 kubelet[3286]: W0715 23:13:42.884018 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.884487 kubelet[3286]: E0715 23:13:42.884359 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.885867 kubelet[3286]: E0715 23:13:42.885658 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.885867 kubelet[3286]: W0715 23:13:42.885845 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.886530 kubelet[3286]: E0715 23:13:42.886187 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.887101 kubelet[3286]: E0715 23:13:42.887053 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.887101 kubelet[3286]: W0715 23:13:42.887078 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.887434 kubelet[3286]: E0715 23:13:42.887400 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.888426 kubelet[3286]: E0715 23:13:42.888374 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.888426 kubelet[3286]: W0715 23:13:42.888415 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.890087 kubelet[3286]: E0715 23:13:42.889522 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.890087 kubelet[3286]: E0715 23:13:42.890001 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.890087 kubelet[3286]: W0715 23:13:42.890025 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.891340 kubelet[3286]: E0715 23:13:42.890352 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.891620 kubelet[3286]: E0715 23:13:42.891578 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.891719 kubelet[3286]: W0715 23:13:42.891615 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.891971 kubelet[3286]: E0715 23:13:42.891869 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.893533 kubelet[3286]: E0715 23:13:42.893484 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.893533 kubelet[3286]: W0715 23:13:42.893534 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.895091 kubelet[3286]: E0715 23:13:42.895025 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.897645 kubelet[3286]: E0715 23:13:42.897575 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.897645 kubelet[3286]: W0715 23:13:42.897614 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.898583 kubelet[3286]: E0715 23:13:42.897740 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.898583 kubelet[3286]: E0715 23:13:42.898026 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.898583 kubelet[3286]: W0715 23:13:42.898042 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.898583 kubelet[3286]: E0715 23:13:42.898382 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.900092 kubelet[3286]: E0715 23:13:42.899537 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.900092 kubelet[3286]: W0715 23:13:42.899572 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.900380 kubelet[3286]: E0715 23:13:42.900234 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.900785 kubelet[3286]: E0715 23:13:42.900647 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.900785 kubelet[3286]: W0715 23:13:42.900680 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.902290 kubelet[3286]: E0715 23:13:42.901733 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.902290 kubelet[3286]: E0715 23:13:42.902129 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.902290 kubelet[3286]: W0715 23:13:42.902150 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.902290 kubelet[3286]: E0715 23:13:42.902207 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.906215 kubelet[3286]: E0715 23:13:42.903808 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.906215 kubelet[3286]: W0715 23:13:42.903851 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.906215 kubelet[3286]: E0715 23:13:42.905603 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.906215 kubelet[3286]: E0715 23:13:42.905719 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.906215 kubelet[3286]: W0715 23:13:42.905740 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.906567 kubelet[3286]: E0715 23:13:42.906253 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.906567 kubelet[3286]: W0715 23:13:42.906318 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.906567 kubelet[3286]: E0715 23:13:42.906346 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.908478 kubelet[3286]: E0715 23:13:42.907203 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.908478 kubelet[3286]: W0715 23:13:42.907244 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.908478 kubelet[3286]: E0715 23:13:42.907372 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.908478 kubelet[3286]: E0715 23:13:42.907485 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.908743 kubelet[3286]: E0715 23:13:42.908255 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.908743 kubelet[3286]: W0715 23:13:42.908592 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.908743 kubelet[3286]: E0715 23:13:42.908628 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:42.924747 containerd[1979]: time="2025-07-15T23:13:42.924679150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fc94c88c4-lcf6l,Uid:d9012f69-0901-47e1-a070-b1d255fc80f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"0073150cad96b197da3dc28a5f90dfc53a628766dacaf5576335fa8cef1a7873\"" Jul 15 23:13:42.928620 containerd[1979]: time="2025-07-15T23:13:42.928561422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 15 23:13:42.955005 kubelet[3286]: E0715 23:13:42.954973 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:42.955005 kubelet[3286]: W0715 23:13:42.955042 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:42.955005 kubelet[3286]: E0715 23:13:42.955075 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:42.974666 containerd[1979]: time="2025-07-15T23:13:42.974523872Z" level=info msg="connecting to shim 3aef5e8bd1c456debede0f5385a6a878728cca02746dac71c09c02bb65cefe35" address="unix:///run/containerd/s/cea08d3482979f11a51841da35d5fc318f2f1753e4889eecfa99395e055624c3" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:13:43.043990 systemd[1]: Started cri-containerd-3aef5e8bd1c456debede0f5385a6a878728cca02746dac71c09c02bb65cefe35.scope - libcontainer container 3aef5e8bd1c456debede0f5385a6a878728cca02746dac71c09c02bb65cefe35. 
Jul 15 23:13:43.137089 containerd[1979]: time="2025-07-15T23:13:43.137003748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h8ff6,Uid:1bd2e25a-df19-4c01-bd5e-ba517139a294,Namespace:calico-system,Attempt:0,} returns sandbox id \"3aef5e8bd1c456debede0f5385a6a878728cca02746dac71c09c02bb65cefe35\"" Jul 15 23:13:44.175308 kubelet[3286]: E0715 23:13:44.174657 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9g96k" podUID="da728a4b-58e6-4528-95e7-1b142a44e9e3" Jul 15 23:13:44.332641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1510445815.mount: Deactivated successfully. Jul 15 23:13:45.045298 containerd[1979]: time="2025-07-15T23:13:45.045017320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:13:45.047499 containerd[1979]: time="2025-07-15T23:13:45.047368323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 15 23:13:45.048527 containerd[1979]: time="2025-07-15T23:13:45.048422738Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:13:45.051199 containerd[1979]: time="2025-07-15T23:13:45.051118421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:13:45.052534 containerd[1979]: time="2025-07-15T23:13:45.052475699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id 
\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.123627958s" Jul 15 23:13:45.052667 containerd[1979]: time="2025-07-15T23:13:45.052532211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 15 23:13:45.054631 containerd[1979]: time="2025-07-15T23:13:45.054295413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 15 23:13:45.089657 containerd[1979]: time="2025-07-15T23:13:45.089550948Z" level=info msg="CreateContainer within sandbox \"0073150cad96b197da3dc28a5f90dfc53a628766dacaf5576335fa8cef1a7873\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 15 23:13:45.103305 containerd[1979]: time="2025-07-15T23:13:45.101663345Z" level=info msg="Container b3c96f5ac3293896547d08eb4e139395b6bf1feb88557a0820b27bc846979f69: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:13:45.123533 containerd[1979]: time="2025-07-15T23:13:45.123459014Z" level=info msg="CreateContainer within sandbox \"0073150cad96b197da3dc28a5f90dfc53a628766dacaf5576335fa8cef1a7873\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b3c96f5ac3293896547d08eb4e139395b6bf1feb88557a0820b27bc846979f69\"" Jul 15 23:13:45.126048 containerd[1979]: time="2025-07-15T23:13:45.125934050Z" level=info msg="StartContainer for \"b3c96f5ac3293896547d08eb4e139395b6bf1feb88557a0820b27bc846979f69\"" Jul 15 23:13:45.128683 containerd[1979]: time="2025-07-15T23:13:45.128599202Z" level=info msg="connecting to shim b3c96f5ac3293896547d08eb4e139395b6bf1feb88557a0820b27bc846979f69" address="unix:///run/containerd/s/40e49a9cc346226a0234f22a251e91f6ba8936f55ac6ba4d886acf6e2850011d" protocol=ttrpc version=3 Jul 15 
23:13:45.162569 systemd[1]: Started cri-containerd-b3c96f5ac3293896547d08eb4e139395b6bf1feb88557a0820b27bc846979f69.scope - libcontainer container b3c96f5ac3293896547d08eb4e139395b6bf1feb88557a0820b27bc846979f69. Jul 15 23:13:45.250225 containerd[1979]: time="2025-07-15T23:13:45.250026410Z" level=info msg="StartContainer for \"b3c96f5ac3293896547d08eb4e139395b6bf1feb88557a0820b27bc846979f69\" returns successfully" Jul 15 23:13:45.535725 kubelet[3286]: E0715 23:13:45.535658 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.535725 kubelet[3286]: W0715 23:13:45.535704 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.536985 kubelet[3286]: E0715 23:13:45.535739 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:45.536985 kubelet[3286]: E0715 23:13:45.536556 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.536985 kubelet[3286]: W0715 23:13:45.536580 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.536985 kubelet[3286]: E0715 23:13:45.536649 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:45.537164 kubelet[3286]: E0715 23:13:45.537015 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.537164 kubelet[3286]: W0715 23:13:45.537034 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.537164 kubelet[3286]: E0715 23:13:45.537055 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:45.538739 kubelet[3286]: E0715 23:13:45.538203 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.538739 kubelet[3286]: W0715 23:13:45.538229 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.538739 kubelet[3286]: E0715 23:13:45.538258 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:45.539626 kubelet[3286]: E0715 23:13:45.539514 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.539626 kubelet[3286]: W0715 23:13:45.539540 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.539626 kubelet[3286]: E0715 23:13:45.539572 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:45.540174 kubelet[3286]: E0715 23:13:45.540135 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.540174 kubelet[3286]: W0715 23:13:45.540167 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.540362 kubelet[3286]: E0715 23:13:45.540193 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:45.541452 kubelet[3286]: E0715 23:13:45.541401 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.541452 kubelet[3286]: W0715 23:13:45.541442 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.541452 kubelet[3286]: E0715 23:13:45.541475 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:45.542754 kubelet[3286]: E0715 23:13:45.542702 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.542754 kubelet[3286]: W0715 23:13:45.542743 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.542882 kubelet[3286]: E0715 23:13:45.542777 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:45.544073 kubelet[3286]: E0715 23:13:45.543959 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.544073 kubelet[3286]: W0715 23:13:45.543998 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.544073 kubelet[3286]: E0715 23:13:45.544030 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:45.545519 kubelet[3286]: E0715 23:13:45.544459 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.545519 kubelet[3286]: W0715 23:13:45.544480 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.545519 kubelet[3286]: E0715 23:13:45.544505 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:45.545712 kubelet[3286]: E0715 23:13:45.545529 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.545712 kubelet[3286]: W0715 23:13:45.545553 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.545712 kubelet[3286]: E0715 23:13:45.545584 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:45.546606 kubelet[3286]: E0715 23:13:45.546559 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.546606 kubelet[3286]: W0715 23:13:45.546595 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.547082 kubelet[3286]: E0715 23:13:45.546625 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:45.547082 kubelet[3286]: E0715 23:13:45.546952 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.547082 kubelet[3286]: W0715 23:13:45.546971 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.547082 kubelet[3286]: E0715 23:13:45.546992 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:45.549548 kubelet[3286]: E0715 23:13:45.548589 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.549548 kubelet[3286]: W0715 23:13:45.548617 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.549548 kubelet[3286]: E0715 23:13:45.548648 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:45.549548 kubelet[3286]: E0715 23:13:45.548959 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.549548 kubelet[3286]: W0715 23:13:45.548975 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.549548 kubelet[3286]: E0715 23:13:45.548995 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:45.604386 kubelet[3286]: I0715 23:13:45.604240 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-fc94c88c4-lcf6l" podStartSLOduration=1.477089141 podStartE2EDuration="3.604188081s" podCreationTimestamp="2025-07-15 23:13:42 +0000 UTC" firstStartedPulling="2025-07-15 23:13:42.927020128 +0000 UTC m=+32.018713944" lastFinishedPulling="2025-07-15 23:13:45.054119068 +0000 UTC m=+34.145812884" observedRunningTime="2025-07-15 23:13:45.604106813 +0000 UTC m=+34.695800617" watchObservedRunningTime="2025-07-15 23:13:45.604188081 +0000 UTC m=+34.695881897" Jul 15 23:13:45.611313 kubelet[3286]: E0715 23:13:45.611231 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.611541 kubelet[3286]: W0715 23:13:45.611464 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.611541 kubelet[3286]: E0715 23:13:45.611506 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:45.612339 kubelet[3286]: E0715 23:13:45.612239 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.612339 kubelet[3286]: W0715 23:13:45.612303 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.613316 kubelet[3286]: E0715 23:13:45.613180 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:45.614578 kubelet[3286]: E0715 23:13:45.614361 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.614578 kubelet[3286]: W0715 23:13:45.614519 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.615940 kubelet[3286]: E0715 23:13:45.615385 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:45.616985 kubelet[3286]: E0715 23:13:45.616906 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.616985 kubelet[3286]: W0715 23:13:45.616942 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.618376 kubelet[3286]: E0715 23:13:45.617245 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:45.619093 kubelet[3286]: E0715 23:13:45.619016 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.619093 kubelet[3286]: W0715 23:13:45.619050 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.619410 kubelet[3286]: E0715 23:13:45.619364 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:45.620756 kubelet[3286]: E0715 23:13:45.620575 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.621101 kubelet[3286]: W0715 23:13:45.620627 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.621101 kubelet[3286]: E0715 23:13:45.621047 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:45.623375 kubelet[3286]: E0715 23:13:45.623286 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.623375 kubelet[3286]: W0715 23:13:45.623331 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.623762 kubelet[3286]: E0715 23:13:45.623692 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:45.624493 kubelet[3286]: E0715 23:13:45.624415 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.624493 kubelet[3286]: W0715 23:13:45.624445 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.624820 kubelet[3286]: E0715 23:13:45.624789 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:45.625374 kubelet[3286]: E0715 23:13:45.625311 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.625634 kubelet[3286]: W0715 23:13:45.625339 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.626122 kubelet[3286]: E0715 23:13:45.625756 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:45.628837 kubelet[3286]: E0715 23:13:45.628456 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.629327 kubelet[3286]: W0715 23:13:45.629147 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.630316 kubelet[3286]: E0715 23:13:45.629512 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:45.632969 kubelet[3286]: E0715 23:13:45.632880 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.632969 kubelet[3286]: W0715 23:13:45.632920 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.633240 kubelet[3286]: E0715 23:13:45.633226 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.634497 kubelet[3286]: W0715 23:13:45.633244 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.634497 kubelet[3286]: E0715 23:13:45.634035 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.634497 kubelet[3286]: W0715 23:13:45.634063 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.634497 kubelet[3286]: E0715 23:13:45.634094 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:45.634497 kubelet[3286]: E0715 23:13:45.634138 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:45.634497 kubelet[3286]: E0715 23:13:45.634496 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.634814 kubelet[3286]: W0715 23:13:45.634515 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.634814 kubelet[3286]: E0715 23:13:45.634539 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:45.636161 kubelet[3286]: E0715 23:13:45.634963 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:45.636161 kubelet[3286]: E0715 23:13:45.635346 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.636161 kubelet[3286]: W0715 23:13:45.635491 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.636161 kubelet[3286]: E0715 23:13:45.635543 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:45.637759 kubelet[3286]: E0715 23:13:45.637710 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.637759 kubelet[3286]: W0715 23:13:45.637746 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.638303 kubelet[3286]: E0715 23:13:45.637917 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:45.639504 kubelet[3286]: E0715 23:13:45.639447 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.639504 kubelet[3286]: W0715 23:13:45.639489 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.639795 kubelet[3286]: E0715 23:13:45.639523 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:45.640170 kubelet[3286]: E0715 23:13:45.640133 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:45.640170 kubelet[3286]: W0715 23:13:45.640164 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:45.640357 kubelet[3286]: E0715 23:13:45.640189 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:46.175512 kubelet[3286]: E0715 23:13:46.175426 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9g96k" podUID="da728a4b-58e6-4528-95e7-1b142a44e9e3" Jul 15 23:13:46.346112 containerd[1979]: time="2025-07-15T23:13:46.345953824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:13:46.347856 containerd[1979]: time="2025-07-15T23:13:46.347778124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 15 23:13:46.351306 containerd[1979]: time="2025-07-15T23:13:46.350794415Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:13:46.355834 containerd[1979]: time="2025-07-15T23:13:46.355787723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:13:46.357617 containerd[1979]: time="2025-07-15T23:13:46.357553613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.303203778s" Jul 15 23:13:46.357617 containerd[1979]: time="2025-07-15T23:13:46.357613787Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 15 23:13:46.363407 containerd[1979]: time="2025-07-15T23:13:46.363347805Z" level=info msg="CreateContainer within sandbox \"3aef5e8bd1c456debede0f5385a6a878728cca02746dac71c09c02bb65cefe35\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 23:13:46.386143 containerd[1979]: time="2025-07-15T23:13:46.383453293Z" level=info msg="Container bbb15f98b9c1361e1ad7e39d7a7533f131713f7b2ad3b9797b82a19962648a0c: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:13:46.407830 containerd[1979]: time="2025-07-15T23:13:46.407776212Z" level=info msg="CreateContainer within sandbox \"3aef5e8bd1c456debede0f5385a6a878728cca02746dac71c09c02bb65cefe35\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bbb15f98b9c1361e1ad7e39d7a7533f131713f7b2ad3b9797b82a19962648a0c\"" Jul 15 23:13:46.409593 containerd[1979]: time="2025-07-15T23:13:46.409539497Z" level=info msg="StartContainer for \"bbb15f98b9c1361e1ad7e39d7a7533f131713f7b2ad3b9797b82a19962648a0c\"" Jul 15 23:13:46.415144 containerd[1979]: time="2025-07-15T23:13:46.414909433Z" level=info msg="connecting to shim bbb15f98b9c1361e1ad7e39d7a7533f131713f7b2ad3b9797b82a19962648a0c" address="unix:///run/containerd/s/cea08d3482979f11a51841da35d5fc318f2f1753e4889eecfa99395e055624c3" protocol=ttrpc version=3 Jul 15 23:13:46.458857 systemd[1]: Started cri-containerd-bbb15f98b9c1361e1ad7e39d7a7533f131713f7b2ad3b9797b82a19962648a0c.scope - libcontainer container bbb15f98b9c1361e1ad7e39d7a7533f131713f7b2ad3b9797b82a19962648a0c. 
Jul 15 23:13:46.559754 kubelet[3286]: E0715 23:13:46.559699 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.560245 kubelet[3286]: W0715 23:13:46.559740 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.560245 kubelet[3286]: E0715 23:13:46.559803 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:46.562182 kubelet[3286]: E0715 23:13:46.562043 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.562182 kubelet[3286]: W0715 23:13:46.562082 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.562182 kubelet[3286]: E0715 23:13:46.562174 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:46.563820 kubelet[3286]: E0715 23:13:46.563727 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.563820 kubelet[3286]: W0715 23:13:46.563764 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.563820 kubelet[3286]: E0715 23:13:46.563795 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:46.565606 kubelet[3286]: E0715 23:13:46.565394 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.565606 kubelet[3286]: W0715 23:13:46.565424 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.565606 kubelet[3286]: E0715 23:13:46.565456 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:46.566461 kubelet[3286]: E0715 23:13:46.566357 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.566461 kubelet[3286]: W0715 23:13:46.566399 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.566461 kubelet[3286]: E0715 23:13:46.566431 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:46.567497 kubelet[3286]: E0715 23:13:46.567425 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.567497 kubelet[3286]: W0715 23:13:46.567465 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.567497 kubelet[3286]: E0715 23:13:46.567497 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:46.568758 kubelet[3286]: E0715 23:13:46.568707 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.568758 kubelet[3286]: W0715 23:13:46.568747 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.568946 kubelet[3286]: E0715 23:13:46.568780 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:46.570089 kubelet[3286]: E0715 23:13:46.569979 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.570089 kubelet[3286]: W0715 23:13:46.570018 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.571358 kubelet[3286]: E0715 23:13:46.570172 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:46.571358 kubelet[3286]: E0715 23:13:46.570917 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.571358 kubelet[3286]: W0715 23:13:46.570942 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.571358 kubelet[3286]: E0715 23:13:46.571009 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:46.572430 kubelet[3286]: E0715 23:13:46.571634 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.572430 kubelet[3286]: W0715 23:13:46.571696 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.572430 kubelet[3286]: E0715 23:13:46.571723 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:46.573573 kubelet[3286]: E0715 23:13:46.572453 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.573573 kubelet[3286]: W0715 23:13:46.572653 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.573573 kubelet[3286]: E0715 23:13:46.572741 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:46.574055 kubelet[3286]: E0715 23:13:46.573941 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.574055 kubelet[3286]: W0715 23:13:46.574023 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.574222 kubelet[3286]: E0715 23:13:46.574095 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:46.575450 kubelet[3286]: E0715 23:13:46.575162 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.575746 kubelet[3286]: W0715 23:13:46.575667 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.575746 kubelet[3286]: E0715 23:13:46.575717 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:46.579707 kubelet[3286]: E0715 23:13:46.578373 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.579707 kubelet[3286]: W0715 23:13:46.578417 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.579707 kubelet[3286]: E0715 23:13:46.578453 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:46.579707 kubelet[3286]: E0715 23:13:46.579454 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.579707 kubelet[3286]: W0715 23:13:46.579485 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.579707 kubelet[3286]: E0715 23:13:46.579516 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:46.609993 containerd[1979]: time="2025-07-15T23:13:46.609894330Z" level=info msg="StartContainer for \"bbb15f98b9c1361e1ad7e39d7a7533f131713f7b2ad3b9797b82a19962648a0c\" returns successfully" Jul 15 23:13:46.626222 kubelet[3286]: E0715 23:13:46.626182 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.626653 kubelet[3286]: W0715 23:13:46.626401 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.626653 kubelet[3286]: E0715 23:13:46.626441 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:46.627507 kubelet[3286]: E0715 23:13:46.627477 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.627707 kubelet[3286]: W0715 23:13:46.627631 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.627859 kubelet[3286]: E0715 23:13:46.627799 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:46.628326 kubelet[3286]: E0715 23:13:46.628289 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.628326 kubelet[3286]: W0715 23:13:46.628321 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.628522 kubelet[3286]: E0715 23:13:46.628360 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:46.629068 kubelet[3286]: E0715 23:13:46.629001 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.629068 kubelet[3286]: W0715 23:13:46.629028 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.629354 kubelet[3286]: E0715 23:13:46.629258 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:46.630193 kubelet[3286]: E0715 23:13:46.630157 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.630193 kubelet[3286]: W0715 23:13:46.630191 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.631033 kubelet[3286]: E0715 23:13:46.630231 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:46.632839 kubelet[3286]: E0715 23:13:46.632802 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.633089 kubelet[3286]: W0715 23:13:46.632975 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.633232 kubelet[3286]: E0715 23:13:46.633187 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:13:46.633806 kubelet[3286]: E0715 23:13:46.633727 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:13:46.634091 kubelet[3286]: W0715 23:13:46.633922 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:13:46.634257 kubelet[3286]: E0715 23:13:46.634227 3286 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:13:46.638181 systemd[1]: cri-containerd-bbb15f98b9c1361e1ad7e39d7a7533f131713f7b2ad3b9797b82a19962648a0c.scope: Deactivated successfully. Jul 15 23:13:46.647329 containerd[1979]: time="2025-07-15T23:13:46.646705627Z" level=info msg="received exit event container_id:\"bbb15f98b9c1361e1ad7e39d7a7533f131713f7b2ad3b9797b82a19962648a0c\" id:\"bbb15f98b9c1361e1ad7e39d7a7533f131713f7b2ad3b9797b82a19962648a0c\" pid:4193 exited_at:{seconds:1752621226 nanos:646159162}" Jul 15 23:13:46.648830 containerd[1979]: time="2025-07-15T23:13:46.648635255Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbb15f98b9c1361e1ad7e39d7a7533f131713f7b2ad3b9797b82a19962648a0c\" id:\"bbb15f98b9c1361e1ad7e39d7a7533f131713f7b2ad3b9797b82a19962648a0c\" pid:4193 exited_at:{seconds:1752621226 nanos:646159162}" Jul 15 23:13:46.688941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbb15f98b9c1361e1ad7e39d7a7533f131713f7b2ad3b9797b82a19962648a0c-rootfs.mount: Deactivated successfully. 
Jul 15 23:13:47.504695 containerd[1979]: time="2025-07-15T23:13:47.504649002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 23:13:48.175211 kubelet[3286]: E0715 23:13:48.175144 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9g96k" podUID="da728a4b-58e6-4528-95e7-1b142a44e9e3" Jul 15 23:13:50.186027 kubelet[3286]: E0715 23:13:50.185970 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9g96k" podUID="da728a4b-58e6-4528-95e7-1b142a44e9e3" Jul 15 23:13:50.370677 containerd[1979]: time="2025-07-15T23:13:50.370595771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:13:50.372330 containerd[1979]: time="2025-07-15T23:13:50.372219379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 15 23:13:50.373645 containerd[1979]: time="2025-07-15T23:13:50.373567196Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:13:50.377047 containerd[1979]: time="2025-07-15T23:13:50.376968832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:13:50.378540 containerd[1979]: time="2025-07-15T23:13:50.378360148Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" 
with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.872924033s" Jul 15 23:13:50.378540 containerd[1979]: time="2025-07-15T23:13:50.378414127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 15 23:13:50.383051 containerd[1979]: time="2025-07-15T23:13:50.382979732Z" level=info msg="CreateContainer within sandbox \"3aef5e8bd1c456debede0f5385a6a878728cca02746dac71c09c02bb65cefe35\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 23:13:50.400295 containerd[1979]: time="2025-07-15T23:13:50.396535671Z" level=info msg="Container 8e0b5b66cfab0113279a06912f5813fa16b3faf8ecbb57a8bcc300aa1881706d: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:13:50.408726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2039681306.mount: Deactivated successfully. 
Jul 15 23:13:50.415070 containerd[1979]: time="2025-07-15T23:13:50.415017047Z" level=info msg="CreateContainer within sandbox \"3aef5e8bd1c456debede0f5385a6a878728cca02746dac71c09c02bb65cefe35\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8e0b5b66cfab0113279a06912f5813fa16b3faf8ecbb57a8bcc300aa1881706d\"" Jul 15 23:13:50.417937 containerd[1979]: time="2025-07-15T23:13:50.417824338Z" level=info msg="StartContainer for \"8e0b5b66cfab0113279a06912f5813fa16b3faf8ecbb57a8bcc300aa1881706d\"" Jul 15 23:13:50.421983 containerd[1979]: time="2025-07-15T23:13:50.421642630Z" level=info msg="connecting to shim 8e0b5b66cfab0113279a06912f5813fa16b3faf8ecbb57a8bcc300aa1881706d" address="unix:///run/containerd/s/cea08d3482979f11a51841da35d5fc318f2f1753e4889eecfa99395e055624c3" protocol=ttrpc version=3 Jul 15 23:13:50.463615 systemd[1]: Started cri-containerd-8e0b5b66cfab0113279a06912f5813fa16b3faf8ecbb57a8bcc300aa1881706d.scope - libcontainer container 8e0b5b66cfab0113279a06912f5813fa16b3faf8ecbb57a8bcc300aa1881706d. Jul 15 23:13:50.560022 containerd[1979]: time="2025-07-15T23:13:50.559881516Z" level=info msg="StartContainer for \"8e0b5b66cfab0113279a06912f5813fa16b3faf8ecbb57a8bcc300aa1881706d\" returns successfully" Jul 15 23:13:51.627490 systemd[1]: cri-containerd-8e0b5b66cfab0113279a06912f5813fa16b3faf8ecbb57a8bcc300aa1881706d.scope: Deactivated successfully. Jul 15 23:13:51.628006 systemd[1]: cri-containerd-8e0b5b66cfab0113279a06912f5813fa16b3faf8ecbb57a8bcc300aa1881706d.scope: Consumed 934ms CPU time, 186.1M memory peak, 165.8M written to disk. 
Jul 15 23:13:51.630401 containerd[1979]: time="2025-07-15T23:13:51.630208224Z" level=info msg="received exit event container_id:\"8e0b5b66cfab0113279a06912f5813fa16b3faf8ecbb57a8bcc300aa1881706d\" id:\"8e0b5b66cfab0113279a06912f5813fa16b3faf8ecbb57a8bcc300aa1881706d\" pid:4285 exited_at:{seconds:1752621231 nanos:629826829}" Jul 15 23:13:51.631970 containerd[1979]: time="2025-07-15T23:13:51.631901646Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e0b5b66cfab0113279a06912f5813fa16b3faf8ecbb57a8bcc300aa1881706d\" id:\"8e0b5b66cfab0113279a06912f5813fa16b3faf8ecbb57a8bcc300aa1881706d\" pid:4285 exited_at:{seconds:1752621231 nanos:629826829}" Jul 15 23:13:51.647295 kubelet[3286]: I0715 23:13:51.646780 3286 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 23:13:51.718996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e0b5b66cfab0113279a06912f5813fa16b3faf8ecbb57a8bcc300aa1881706d-rootfs.mount: Deactivated successfully. Jul 15 23:13:51.759690 systemd[1]: Created slice kubepods-burstable-pod4a7d591a_2c05_4fd1_a124_09131db809b2.slice - libcontainer container kubepods-burstable-pod4a7d591a_2c05_4fd1_a124_09131db809b2.slice. Jul 15 23:13:51.805225 systemd[1]: Created slice kubepods-burstable-podd017922f_d4f0_435f_a351_9384dd832f62.slice - libcontainer container kubepods-burstable-podd017922f_d4f0_435f_a351_9384dd832f62.slice. Jul 15 23:13:51.837983 systemd[1]: Created slice kubepods-besteffort-pod7fa3748b_a119_4d81_ba60_a82a1b8307c6.slice - libcontainer container kubepods-besteffort-pod7fa3748b_a119_4d81_ba60_a82a1b8307c6.slice. Jul 15 23:13:51.865787 systemd[1]: Created slice kubepods-besteffort-pod15f3f713_eca6_4f48_a8cc_c3aba3011d1b.slice - libcontainer container kubepods-besteffort-pod15f3f713_eca6_4f48_a8cc_c3aba3011d1b.slice. 
Jul 15 23:13:51.873310 kubelet[3286]: I0715 23:13:51.868392 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bdcee783-50df-4b4a-aa3b-c71fef356049-calico-apiserver-certs\") pod \"calico-apiserver-5b6b6cb6bb-xlx2p\" (UID: \"bdcee783-50df-4b4a-aa3b-c71fef356049\") " pod="calico-apiserver/calico-apiserver-5b6b6cb6bb-xlx2p" Jul 15 23:13:51.873310 kubelet[3286]: I0715 23:13:51.871745 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/113c6c0a-e718-45fb-a6c8-5faeddafbff6-goldmane-key-pair\") pod \"goldmane-768f4c5c69-bwq4x\" (UID: \"113c6c0a-e718-45fb-a6c8-5faeddafbff6\") " pod="calico-system/goldmane-768f4c5c69-bwq4x" Jul 15 23:13:51.873310 kubelet[3286]: I0715 23:13:51.871787 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbkkx\" (UniqueName: \"kubernetes.io/projected/7fa3748b-a119-4d81-ba60-a82a1b8307c6-kube-api-access-kbkkx\") pod \"calico-apiserver-5b6b6cb6bb-jqf7f\" (UID: \"7fa3748b-a119-4d81-ba60-a82a1b8307c6\") " pod="calico-apiserver/calico-apiserver-5b6b6cb6bb-jqf7f" Jul 15 23:13:51.873310 kubelet[3286]: I0715 23:13:51.871838 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a7d591a-2c05-4fd1-a124-09131db809b2-config-volume\") pod \"coredns-668d6bf9bc-k84vb\" (UID: \"4a7d591a-2c05-4fd1-a124-09131db809b2\") " pod="kube-system/coredns-668d6bf9bc-k84vb" Jul 15 23:13:51.873310 kubelet[3286]: I0715 23:13:51.871877 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d017922f-d4f0-435f-a351-9384dd832f62-config-volume\") pod \"coredns-668d6bf9bc-94gb2\" (UID: 
\"d017922f-d4f0-435f-a351-9384dd832f62\") " pod="kube-system/coredns-668d6bf9bc-94gb2" Jul 15 23:13:51.873738 kubelet[3286]: I0715 23:13:51.871914 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhxb4\" (UniqueName: \"kubernetes.io/projected/113c6c0a-e718-45fb-a6c8-5faeddafbff6-kube-api-access-fhxb4\") pod \"goldmane-768f4c5c69-bwq4x\" (UID: \"113c6c0a-e718-45fb-a6c8-5faeddafbff6\") " pod="calico-system/goldmane-768f4c5c69-bwq4x" Jul 15 23:13:51.873738 kubelet[3286]: I0715 23:13:51.871963 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sknq9\" (UniqueName: \"kubernetes.io/projected/15f3f713-eca6-4f48-a8cc-c3aba3011d1b-kube-api-access-sknq9\") pod \"calico-kube-controllers-59d8d4b544-wspv7\" (UID: \"15f3f713-eca6-4f48-a8cc-c3aba3011d1b\") " pod="calico-system/calico-kube-controllers-59d8d4b544-wspv7" Jul 15 23:13:51.873738 kubelet[3286]: I0715 23:13:51.871999 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr646\" (UniqueName: \"kubernetes.io/projected/4a7d591a-2c05-4fd1-a124-09131db809b2-kube-api-access-tr646\") pod \"coredns-668d6bf9bc-k84vb\" (UID: \"4a7d591a-2c05-4fd1-a124-09131db809b2\") " pod="kube-system/coredns-668d6bf9bc-k84vb" Jul 15 23:13:51.873738 kubelet[3286]: I0715 23:13:51.872041 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/113c6c0a-e718-45fb-a6c8-5faeddafbff6-config\") pod \"goldmane-768f4c5c69-bwq4x\" (UID: \"113c6c0a-e718-45fb-a6c8-5faeddafbff6\") " pod="calico-system/goldmane-768f4c5c69-bwq4x" Jul 15 23:13:51.873738 kubelet[3286]: I0715 23:13:51.872084 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95h4s\" (UniqueName: 
\"kubernetes.io/projected/bdcee783-50df-4b4a-aa3b-c71fef356049-kube-api-access-95h4s\") pod \"calico-apiserver-5b6b6cb6bb-xlx2p\" (UID: \"bdcee783-50df-4b4a-aa3b-c71fef356049\") " pod="calico-apiserver/calico-apiserver-5b6b6cb6bb-xlx2p" Jul 15 23:13:51.874080 kubelet[3286]: I0715 23:13:51.872131 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvfd2\" (UniqueName: \"kubernetes.io/projected/d017922f-d4f0-435f-a351-9384dd832f62-kube-api-access-jvfd2\") pod \"coredns-668d6bf9bc-94gb2\" (UID: \"d017922f-d4f0-435f-a351-9384dd832f62\") " pod="kube-system/coredns-668d6bf9bc-94gb2" Jul 15 23:13:51.874080 kubelet[3286]: I0715 23:13:51.872167 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/113c6c0a-e718-45fb-a6c8-5faeddafbff6-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-bwq4x\" (UID: \"113c6c0a-e718-45fb-a6c8-5faeddafbff6\") " pod="calico-system/goldmane-768f4c5c69-bwq4x" Jul 15 23:13:51.874080 kubelet[3286]: I0715 23:13:51.872212 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15f3f713-eca6-4f48-a8cc-c3aba3011d1b-tigera-ca-bundle\") pod \"calico-kube-controllers-59d8d4b544-wspv7\" (UID: \"15f3f713-eca6-4f48-a8cc-c3aba3011d1b\") " pod="calico-system/calico-kube-controllers-59d8d4b544-wspv7" Jul 15 23:13:51.874906 kubelet[3286]: I0715 23:13:51.872252 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7fa3748b-a119-4d81-ba60-a82a1b8307c6-calico-apiserver-certs\") pod \"calico-apiserver-5b6b6cb6bb-jqf7f\" (UID: \"7fa3748b-a119-4d81-ba60-a82a1b8307c6\") " pod="calico-apiserver/calico-apiserver-5b6b6cb6bb-jqf7f" Jul 15 23:13:51.904200 systemd[1]: Created slice 
kubepods-besteffort-podbdcee783_50df_4b4a_aa3b_c71fef356049.slice - libcontainer container kubepods-besteffort-podbdcee783_50df_4b4a_aa3b_c71fef356049.slice. Jul 15 23:13:51.928316 systemd[1]: Created slice kubepods-besteffort-pod113c6c0a_e718_45fb_a6c8_5faeddafbff6.slice - libcontainer container kubepods-besteffort-pod113c6c0a_e718_45fb_a6c8_5faeddafbff6.slice. Jul 15 23:13:51.947347 systemd[1]: Created slice kubepods-besteffort-podf19c8081_9f32_4a2d_b75d_e396954e5d7e.slice - libcontainer container kubepods-besteffort-podf19c8081_9f32_4a2d_b75d_e396954e5d7e.slice. Jul 15 23:13:51.979039 kubelet[3286]: I0715 23:13:51.978956 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f19c8081-9f32-4a2d-b75d-e396954e5d7e-whisker-backend-key-pair\") pod \"whisker-fbf646d7f-xnpm6\" (UID: \"f19c8081-9f32-4a2d-b75d-e396954e5d7e\") " pod="calico-system/whisker-fbf646d7f-xnpm6" Jul 15 23:13:51.979039 kubelet[3286]: I0715 23:13:51.979044 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9t6r\" (UniqueName: \"kubernetes.io/projected/f19c8081-9f32-4a2d-b75d-e396954e5d7e-kube-api-access-m9t6r\") pod \"whisker-fbf646d7f-xnpm6\" (UID: \"f19c8081-9f32-4a2d-b75d-e396954e5d7e\") " pod="calico-system/whisker-fbf646d7f-xnpm6" Jul 15 23:13:51.980159 kubelet[3286]: I0715 23:13:51.980037 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f19c8081-9f32-4a2d-b75d-e396954e5d7e-whisker-ca-bundle\") pod \"whisker-fbf646d7f-xnpm6\" (UID: \"f19c8081-9f32-4a2d-b75d-e396954e5d7e\") " pod="calico-system/whisker-fbf646d7f-xnpm6" Jul 15 23:13:52.099618 containerd[1979]: time="2025-07-15T23:13:52.099544126Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-k84vb,Uid:4a7d591a-2c05-4fd1-a124-09131db809b2,Namespace:kube-system,Attempt:0,}" Jul 15 23:13:52.157736 containerd[1979]: time="2025-07-15T23:13:52.157510054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b6b6cb6bb-jqf7f,Uid:7fa3748b-a119-4d81-ba60-a82a1b8307c6,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:13:52.189487 containerd[1979]: time="2025-07-15T23:13:52.189247171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59d8d4b544-wspv7,Uid:15f3f713-eca6-4f48-a8cc-c3aba3011d1b,Namespace:calico-system,Attempt:0,}" Jul 15 23:13:52.193530 systemd[1]: Created slice kubepods-besteffort-podda728a4b_58e6_4528_95e7_1b142a44e9e3.slice - libcontainer container kubepods-besteffort-podda728a4b_58e6_4528_95e7_1b142a44e9e3.slice. Jul 15 23:13:52.203659 containerd[1979]: time="2025-07-15T23:13:52.203201313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9g96k,Uid:da728a4b-58e6-4528-95e7-1b142a44e9e3,Namespace:calico-system,Attempt:0,}" Jul 15 23:13:52.216764 containerd[1979]: time="2025-07-15T23:13:52.216589216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b6b6cb6bb-xlx2p,Uid:bdcee783-50df-4b4a-aa3b-c71fef356049,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:13:52.247259 containerd[1979]: time="2025-07-15T23:13:52.247191141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-bwq4x,Uid:113c6c0a-e718-45fb-a6c8-5faeddafbff6,Namespace:calico-system,Attempt:0,}" Jul 15 23:13:52.261233 containerd[1979]: time="2025-07-15T23:13:52.261130395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fbf646d7f-xnpm6,Uid:f19c8081-9f32-4a2d-b75d-e396954e5d7e,Namespace:calico-system,Attempt:0,}" Jul 15 23:13:52.427347 containerd[1979]: time="2025-07-15T23:13:52.426910493Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-94gb2,Uid:d017922f-d4f0-435f-a351-9384dd832f62,Namespace:kube-system,Attempt:0,}" Jul 15 23:13:52.595779 containerd[1979]: time="2025-07-15T23:13:52.595535293Z" level=error msg="Failed to destroy network for sandbox \"79e650fd451958fac8fae75ec23cc76ebceb7bd0a8b7ace5ebdc1bb86a2734e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.596842 containerd[1979]: time="2025-07-15T23:13:52.596684423Z" level=error msg="Failed to destroy network for sandbox \"909dfef956fea7a7b3a4e01235826ac8e244eb1cee5bdf2d1f48dc8ec902d3e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.600118 containerd[1979]: time="2025-07-15T23:13:52.599198335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 23:13:52.603254 containerd[1979]: time="2025-07-15T23:13:52.602480596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b6b6cb6bb-jqf7f,Uid:7fa3748b-a119-4d81-ba60-a82a1b8307c6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"79e650fd451958fac8fae75ec23cc76ebceb7bd0a8b7ace5ebdc1bb86a2734e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.609524 containerd[1979]: time="2025-07-15T23:13:52.605538728Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b6b6cb6bb-xlx2p,Uid:bdcee783-50df-4b4a-aa3b-c71fef356049,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"909dfef956fea7a7b3a4e01235826ac8e244eb1cee5bdf2d1f48dc8ec902d3e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.617187 kubelet[3286]: E0715 23:13:52.615703 3286 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"909dfef956fea7a7b3a4e01235826ac8e244eb1cee5bdf2d1f48dc8ec902d3e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.617187 kubelet[3286]: E0715 23:13:52.615821 3286 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"909dfef956fea7a7b3a4e01235826ac8e244eb1cee5bdf2d1f48dc8ec902d3e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b6b6cb6bb-xlx2p" Jul 15 23:13:52.617187 kubelet[3286]: E0715 23:13:52.615855 3286 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"909dfef956fea7a7b3a4e01235826ac8e244eb1cee5bdf2d1f48dc8ec902d3e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b6b6cb6bb-xlx2p" Jul 15 23:13:52.617727 kubelet[3286]: E0715 23:13:52.615922 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b6b6cb6bb-xlx2p_calico-apiserver(bdcee783-50df-4b4a-aa3b-c71fef356049)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-5b6b6cb6bb-xlx2p_calico-apiserver(bdcee783-50df-4b4a-aa3b-c71fef356049)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"909dfef956fea7a7b3a4e01235826ac8e244eb1cee5bdf2d1f48dc8ec902d3e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b6b6cb6bb-xlx2p" podUID="bdcee783-50df-4b4a-aa3b-c71fef356049" Jul 15 23:13:52.617727 kubelet[3286]: E0715 23:13:52.615999 3286 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79e650fd451958fac8fae75ec23cc76ebceb7bd0a8b7ace5ebdc1bb86a2734e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.617727 kubelet[3286]: E0715 23:13:52.616040 3286 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79e650fd451958fac8fae75ec23cc76ebceb7bd0a8b7ace5ebdc1bb86a2734e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b6b6cb6bb-jqf7f" Jul 15 23:13:52.617966 kubelet[3286]: E0715 23:13:52.616065 3286 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79e650fd451958fac8fae75ec23cc76ebceb7bd0a8b7ace5ebdc1bb86a2734e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b6b6cb6bb-jqf7f" Jul 
15 23:13:52.620196 kubelet[3286]: E0715 23:13:52.619978 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b6b6cb6bb-jqf7f_calico-apiserver(7fa3748b-a119-4d81-ba60-a82a1b8307c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b6b6cb6bb-jqf7f_calico-apiserver(7fa3748b-a119-4d81-ba60-a82a1b8307c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79e650fd451958fac8fae75ec23cc76ebceb7bd0a8b7ace5ebdc1bb86a2734e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b6b6cb6bb-jqf7f" podUID="7fa3748b-a119-4d81-ba60-a82a1b8307c6" Jul 15 23:13:52.642235 containerd[1979]: time="2025-07-15T23:13:52.641833947Z" level=error msg="Failed to destroy network for sandbox \"b6a807944276cddfd480ad0f47a4de57b58b319c2923b94e38f0ecb11cf2a4c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.648377 containerd[1979]: time="2025-07-15T23:13:52.646035422Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9g96k,Uid:da728a4b-58e6-4528-95e7-1b142a44e9e3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6a807944276cddfd480ad0f47a4de57b58b319c2923b94e38f0ecb11cf2a4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.650811 kubelet[3286]: E0715 23:13:52.650753 3286 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b6a807944276cddfd480ad0f47a4de57b58b319c2923b94e38f0ecb11cf2a4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.653127 kubelet[3286]: E0715 23:13:52.652603 3286 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6a807944276cddfd480ad0f47a4de57b58b319c2923b94e38f0ecb11cf2a4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9g96k" Jul 15 23:13:52.653127 kubelet[3286]: E0715 23:13:52.652655 3286 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6a807944276cddfd480ad0f47a4de57b58b319c2923b94e38f0ecb11cf2a4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9g96k" Jul 15 23:13:52.653127 kubelet[3286]: E0715 23:13:52.652735 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9g96k_calico-system(da728a4b-58e6-4528-95e7-1b142a44e9e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9g96k_calico-system(da728a4b-58e6-4528-95e7-1b142a44e9e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6a807944276cddfd480ad0f47a4de57b58b319c2923b94e38f0ecb11cf2a4c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9g96k" 
podUID="da728a4b-58e6-4528-95e7-1b142a44e9e3" Jul 15 23:13:52.679358 containerd[1979]: time="2025-07-15T23:13:52.678895532Z" level=error msg="Failed to destroy network for sandbox \"df235cc264c1498f3a8caf09ea3b373ee5bc79ce75ca1aa8f4e60ac996e9eebf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.687051 containerd[1979]: time="2025-07-15T23:13:52.686963649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k84vb,Uid:4a7d591a-2c05-4fd1-a124-09131db809b2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df235cc264c1498f3a8caf09ea3b373ee5bc79ce75ca1aa8f4e60ac996e9eebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.688239 kubelet[3286]: E0715 23:13:52.687590 3286 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df235cc264c1498f3a8caf09ea3b373ee5bc79ce75ca1aa8f4e60ac996e9eebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.688239 kubelet[3286]: E0715 23:13:52.687680 3286 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df235cc264c1498f3a8caf09ea3b373ee5bc79ce75ca1aa8f4e60ac996e9eebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k84vb" Jul 15 23:13:52.688239 kubelet[3286]: E0715 
23:13:52.687714 3286 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df235cc264c1498f3a8caf09ea3b373ee5bc79ce75ca1aa8f4e60ac996e9eebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k84vb" Jul 15 23:13:52.688509 kubelet[3286]: E0715 23:13:52.687789 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k84vb_kube-system(4a7d591a-2c05-4fd1-a124-09131db809b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k84vb_kube-system(4a7d591a-2c05-4fd1-a124-09131db809b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df235cc264c1498f3a8caf09ea3b373ee5bc79ce75ca1aa8f4e60ac996e9eebf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k84vb" podUID="4a7d591a-2c05-4fd1-a124-09131db809b2" Jul 15 23:13:52.695637 containerd[1979]: time="2025-07-15T23:13:52.695427278Z" level=error msg="Failed to destroy network for sandbox \"fd67ae151ef328ace37b28e0ea28958b29e03c111d19607dff17e80dc3fdc85e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.697188 containerd[1979]: time="2025-07-15T23:13:52.697082125Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59d8d4b544-wspv7,Uid:15f3f713-eca6-4f48-a8cc-c3aba3011d1b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fd67ae151ef328ace37b28e0ea28958b29e03c111d19607dff17e80dc3fdc85e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.699250 kubelet[3286]: E0715 23:13:52.698381 3286 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd67ae151ef328ace37b28e0ea28958b29e03c111d19607dff17e80dc3fdc85e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.699250 kubelet[3286]: E0715 23:13:52.698543 3286 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd67ae151ef328ace37b28e0ea28958b29e03c111d19607dff17e80dc3fdc85e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59d8d4b544-wspv7" Jul 15 23:13:52.699250 kubelet[3286]: E0715 23:13:52.698614 3286 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd67ae151ef328ace37b28e0ea28958b29e03c111d19607dff17e80dc3fdc85e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59d8d4b544-wspv7" Jul 15 23:13:52.700190 kubelet[3286]: E0715 23:13:52.698736 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59d8d4b544-wspv7_calico-system(15f3f713-eca6-4f48-a8cc-c3aba3011d1b)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-59d8d4b544-wspv7_calico-system(15f3f713-eca6-4f48-a8cc-c3aba3011d1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd67ae151ef328ace37b28e0ea28958b29e03c111d19607dff17e80dc3fdc85e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59d8d4b544-wspv7" podUID="15f3f713-eca6-4f48-a8cc-c3aba3011d1b" Jul 15 23:13:52.728574 containerd[1979]: time="2025-07-15T23:13:52.728497902Z" level=error msg="Failed to destroy network for sandbox \"ca6ec57d2ab15ea76771e39474668a6147b7709f8d52a9aff54b9a27f6b53dc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.733428 containerd[1979]: time="2025-07-15T23:13:52.733321936Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fbf646d7f-xnpm6,Uid:f19c8081-9f32-4a2d-b75d-e396954e5d7e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca6ec57d2ab15ea76771e39474668a6147b7709f8d52a9aff54b9a27f6b53dc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.736219 kubelet[3286]: E0715 23:13:52.734723 3286 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca6ec57d2ab15ea76771e39474668a6147b7709f8d52a9aff54b9a27f6b53dc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.736219 
kubelet[3286]: E0715 23:13:52.736153 3286 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca6ec57d2ab15ea76771e39474668a6147b7709f8d52a9aff54b9a27f6b53dc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-fbf646d7f-xnpm6" Jul 15 23:13:52.738703 kubelet[3286]: E0715 23:13:52.736682 3286 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca6ec57d2ab15ea76771e39474668a6147b7709f8d52a9aff54b9a27f6b53dc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-fbf646d7f-xnpm6" Jul 15 23:13:52.739602 kubelet[3286]: E0715 23:13:52.736820 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-fbf646d7f-xnpm6_calico-system(f19c8081-9f32-4a2d-b75d-e396954e5d7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-fbf646d7f-xnpm6_calico-system(f19c8081-9f32-4a2d-b75d-e396954e5d7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca6ec57d2ab15ea76771e39474668a6147b7709f8d52a9aff54b9a27f6b53dc2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-fbf646d7f-xnpm6" podUID="f19c8081-9f32-4a2d-b75d-e396954e5d7e" Jul 15 23:13:52.754402 systemd[1]: run-netns-cni\x2dc6a37cfa\x2df25b\x2dc965\x2d2a98\x2d0d1c8359a46c.mount: Deactivated successfully. 
Jul 15 23:13:52.776073 containerd[1979]: time="2025-07-15T23:13:52.776001286Z" level=error msg="Failed to destroy network for sandbox \"11c168cc5be01b100b85815929c9bca410ecf952af66967e6f4057f888706950\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.781557 systemd[1]: run-netns-cni\x2dde181caf\x2d7877\x2d04dd\x2d89ce\x2d8d6eb731b85e.mount: Deactivated successfully. Jul 15 23:13:52.782759 containerd[1979]: time="2025-07-15T23:13:52.782678255Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-bwq4x,Uid:113c6c0a-e718-45fb-a6c8-5faeddafbff6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"11c168cc5be01b100b85815929c9bca410ecf952af66967e6f4057f888706950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.784376 kubelet[3286]: E0715 23:13:52.783085 3286 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11c168cc5be01b100b85815929c9bca410ecf952af66967e6f4057f888706950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.784376 kubelet[3286]: E0715 23:13:52.783184 3286 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11c168cc5be01b100b85815929c9bca410ecf952af66967e6f4057f888706950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-768f4c5c69-bwq4x" Jul 15 23:13:52.784376 kubelet[3286]: E0715 23:13:52.783220 3286 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11c168cc5be01b100b85815929c9bca410ecf952af66967e6f4057f888706950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-bwq4x" Jul 15 23:13:52.784607 kubelet[3286]: E0715 23:13:52.783591 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-bwq4x_calico-system(113c6c0a-e718-45fb-a6c8-5faeddafbff6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-bwq4x_calico-system(113c6c0a-e718-45fb-a6c8-5faeddafbff6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11c168cc5be01b100b85815929c9bca410ecf952af66967e6f4057f888706950\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-bwq4x" podUID="113c6c0a-e718-45fb-a6c8-5faeddafbff6" Jul 15 23:13:52.811968 containerd[1979]: time="2025-07-15T23:13:52.811891206Z" level=error msg="Failed to destroy network for sandbox \"0018b3b3ecf4c0a895d0b87d4b92d1141bc266f6d13ccf7fac76db83107776d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.816143 containerd[1979]: time="2025-07-15T23:13:52.815493222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-94gb2,Uid:d017922f-d4f0-435f-a351-9384dd832f62,Namespace:kube-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0018b3b3ecf4c0a895d0b87d4b92d1141bc266f6d13ccf7fac76db83107776d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.816889 kubelet[3286]: E0715 23:13:52.816827 3286 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0018b3b3ecf4c0a895d0b87d4b92d1141bc266f6d13ccf7fac76db83107776d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:13:52.816978 kubelet[3286]: E0715 23:13:52.816927 3286 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0018b3b3ecf4c0a895d0b87d4b92d1141bc266f6d13ccf7fac76db83107776d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-94gb2" Jul 15 23:13:52.816978 kubelet[3286]: E0715 23:13:52.816963 3286 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0018b3b3ecf4c0a895d0b87d4b92d1141bc266f6d13ccf7fac76db83107776d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-94gb2" Jul 15 23:13:52.817126 kubelet[3286]: E0715 23:13:52.817023 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-94gb2_kube-system(d017922f-d4f0-435f-a351-9384dd832f62)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-94gb2_kube-system(d017922f-d4f0-435f-a351-9384dd832f62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0018b3b3ecf4c0a895d0b87d4b92d1141bc266f6d13ccf7fac76db83107776d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-94gb2" podUID="d017922f-d4f0-435f-a351-9384dd832f62" Jul 15 23:13:52.817511 systemd[1]: run-netns-cni\x2dd823c411\x2df46a\x2d4497\x2d020d\x2d82992097ddcf.mount: Deactivated successfully. Jul 15 23:13:58.765380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount124625074.mount: Deactivated successfully. Jul 15 23:13:58.830358 containerd[1979]: time="2025-07-15T23:13:58.830238488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:13:58.833413 containerd[1979]: time="2025-07-15T23:13:58.833317568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 15 23:13:58.834640 containerd[1979]: time="2025-07-15T23:13:58.834568616Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:13:58.838912 containerd[1979]: time="2025-07-15T23:13:58.838823252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:13:58.840131 containerd[1979]: time="2025-07-15T23:13:58.839925584Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id 
\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 6.237239779s" Jul 15 23:13:58.840131 containerd[1979]: time="2025-07-15T23:13:58.839983964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 15 23:13:58.886311 containerd[1979]: time="2025-07-15T23:13:58.886130264Z" level=info msg="CreateContainer within sandbox \"3aef5e8bd1c456debede0f5385a6a878728cca02746dac71c09c02bb65cefe35\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 23:13:58.916251 containerd[1979]: time="2025-07-15T23:13:58.913540616Z" level=info msg="Container 7fd049586726e91d092e6e4a2b7e5b5c6e816b9f53053be9bf6fd875ef30dfa1: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:13:58.941515 containerd[1979]: time="2025-07-15T23:13:58.941437616Z" level=info msg="CreateContainer within sandbox \"3aef5e8bd1c456debede0f5385a6a878728cca02746dac71c09c02bb65cefe35\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7fd049586726e91d092e6e4a2b7e5b5c6e816b9f53053be9bf6fd875ef30dfa1\"" Jul 15 23:13:58.943497 containerd[1979]: time="2025-07-15T23:13:58.943417040Z" level=info msg="StartContainer for \"7fd049586726e91d092e6e4a2b7e5b5c6e816b9f53053be9bf6fd875ef30dfa1\"" Jul 15 23:13:58.946701 containerd[1979]: time="2025-07-15T23:13:58.946573604Z" level=info msg="connecting to shim 7fd049586726e91d092e6e4a2b7e5b5c6e816b9f53053be9bf6fd875ef30dfa1" address="unix:///run/containerd/s/cea08d3482979f11a51841da35d5fc318f2f1753e4889eecfa99395e055624c3" protocol=ttrpc version=3 Jul 15 23:13:58.988576 systemd[1]: Started cri-containerd-7fd049586726e91d092e6e4a2b7e5b5c6e816b9f53053be9bf6fd875ef30dfa1.scope - libcontainer container 
7fd049586726e91d092e6e4a2b7e5b5c6e816b9f53053be9bf6fd875ef30dfa1. Jul 15 23:13:59.094308 containerd[1979]: time="2025-07-15T23:13:59.093803045Z" level=info msg="StartContainer for \"7fd049586726e91d092e6e4a2b7e5b5c6e816b9f53053be9bf6fd875ef30dfa1\" returns successfully" Jul 15 23:13:59.347166 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 23:13:59.347334 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 15 23:13:59.745466 kubelet[3286]: I0715 23:13:59.745351 3286 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9t6r\" (UniqueName: \"kubernetes.io/projected/f19c8081-9f32-4a2d-b75d-e396954e5d7e-kube-api-access-m9t6r\") pod \"f19c8081-9f32-4a2d-b75d-e396954e5d7e\" (UID: \"f19c8081-9f32-4a2d-b75d-e396954e5d7e\") " Jul 15 23:13:59.745466 kubelet[3286]: I0715 23:13:59.745453 3286 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f19c8081-9f32-4a2d-b75d-e396954e5d7e-whisker-backend-key-pair\") pod \"f19c8081-9f32-4a2d-b75d-e396954e5d7e\" (UID: \"f19c8081-9f32-4a2d-b75d-e396954e5d7e\") " Jul 15 23:13:59.746110 kubelet[3286]: I0715 23:13:59.745499 3286 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f19c8081-9f32-4a2d-b75d-e396954e5d7e-whisker-ca-bundle\") pod \"f19c8081-9f32-4a2d-b75d-e396954e5d7e\" (UID: \"f19c8081-9f32-4a2d-b75d-e396954e5d7e\") " Jul 15 23:13:59.750598 kubelet[3286]: I0715 23:13:59.750488 3286 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f19c8081-9f32-4a2d-b75d-e396954e5d7e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f19c8081-9f32-4a2d-b75d-e396954e5d7e" (UID: "f19c8081-9f32-4a2d-b75d-e396954e5d7e"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 23:13:59.751981 kubelet[3286]: I0715 23:13:59.751865 3286 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19c8081-9f32-4a2d-b75d-e396954e5d7e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f19c8081-9f32-4a2d-b75d-e396954e5d7e" (UID: "f19c8081-9f32-4a2d-b75d-e396954e5d7e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 23:13:59.755379 kubelet[3286]: I0715 23:13:59.755304 3286 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19c8081-9f32-4a2d-b75d-e396954e5d7e-kube-api-access-m9t6r" (OuterVolumeSpecName: "kube-api-access-m9t6r") pod "f19c8081-9f32-4a2d-b75d-e396954e5d7e" (UID: "f19c8081-9f32-4a2d-b75d-e396954e5d7e"). InnerVolumeSpecName "kube-api-access-m9t6r". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 23:13:59.766193 systemd[1]: var-lib-kubelet-pods-f19c8081\x2d9f32\x2d4a2d\x2db75d\x2de396954e5d7e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm9t6r.mount: Deactivated successfully. Jul 15 23:13:59.768547 systemd[1]: var-lib-kubelet-pods-f19c8081\x2d9f32\x2d4a2d\x2db75d\x2de396954e5d7e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 15 23:13:59.847004 kubelet[3286]: I0715 23:13:59.846938 3286 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f19c8081-9f32-4a2d-b75d-e396954e5d7e-whisker-backend-key-pair\") on node \"ip-172-31-27-40\" DevicePath \"\"" Jul 15 23:13:59.847467 kubelet[3286]: I0715 23:13:59.847121 3286 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f19c8081-9f32-4a2d-b75d-e396954e5d7e-whisker-ca-bundle\") on node \"ip-172-31-27-40\" DevicePath \"\"" Jul 15 23:13:59.847467 kubelet[3286]: I0715 23:13:59.847148 3286 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m9t6r\" (UniqueName: \"kubernetes.io/projected/f19c8081-9f32-4a2d-b75d-e396954e5d7e-kube-api-access-m9t6r\") on node \"ip-172-31-27-40\" DevicePath \"\"" Jul 15 23:13:59.946958 systemd[1]: Removed slice kubepods-besteffort-podf19c8081_9f32_4a2d_b75d_e396954e5d7e.slice - libcontainer container kubepods-besteffort-podf19c8081_9f32_4a2d_b75d_e396954e5d7e.slice. Jul 15 23:13:59.975048 kubelet[3286]: I0715 23:13:59.974873 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-h8ff6" podStartSLOduration=2.2758614870000002 podStartE2EDuration="17.974817033s" podCreationTimestamp="2025-07-15 23:13:42 +0000 UTC" firstStartedPulling="2025-07-15 23:13:43.142337978 +0000 UTC m=+32.234031794" lastFinishedPulling="2025-07-15 23:13:58.841293524 +0000 UTC m=+47.932987340" observedRunningTime="2025-07-15 23:13:59.664595456 +0000 UTC m=+48.756289284" watchObservedRunningTime="2025-07-15 23:13:59.974817033 +0000 UTC m=+49.066510849" Jul 15 23:14:00.062692 systemd[1]: Created slice kubepods-besteffort-podbe5d55b2_d49c_444d_937f_d481b62c9c10.slice - libcontainer container kubepods-besteffort-podbe5d55b2_d49c_444d_937f_d481b62c9c10.slice. 
Jul 15 23:14:00.149952 kubelet[3286]: I0715 23:14:00.149866 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be5d55b2-d49c-444d-937f-d481b62c9c10-whisker-ca-bundle\") pod \"whisker-848dcb55d5-qfxl8\" (UID: \"be5d55b2-d49c-444d-937f-d481b62c9c10\") " pod="calico-system/whisker-848dcb55d5-qfxl8" Jul 15 23:14:00.150111 kubelet[3286]: I0715 23:14:00.149968 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjqpk\" (UniqueName: \"kubernetes.io/projected/be5d55b2-d49c-444d-937f-d481b62c9c10-kube-api-access-sjqpk\") pod \"whisker-848dcb55d5-qfxl8\" (UID: \"be5d55b2-d49c-444d-937f-d481b62c9c10\") " pod="calico-system/whisker-848dcb55d5-qfxl8" Jul 15 23:14:00.150111 kubelet[3286]: I0715 23:14:00.150039 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/be5d55b2-d49c-444d-937f-d481b62c9c10-whisker-backend-key-pair\") pod \"whisker-848dcb55d5-qfxl8\" (UID: \"be5d55b2-d49c-444d-937f-d481b62c9c10\") " pod="calico-system/whisker-848dcb55d5-qfxl8" Jul 15 23:14:00.372057 containerd[1979]: time="2025-07-15T23:14:00.371794987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-848dcb55d5-qfxl8,Uid:be5d55b2-d49c-444d-937f-d481b62c9c10,Namespace:calico-system,Attempt:0,}" Jul 15 23:14:00.708358 (udev-worker)[4579]: Network interface NamePolicy= disabled on kernel command line. 
Jul 15 23:14:00.711211 systemd-networkd[1808]: cali552d2cceccf: Link UP Jul 15 23:14:00.713550 systemd-networkd[1808]: cali552d2cceccf: Gained carrier Jul 15 23:14:00.747448 containerd[1979]: 2025-07-15 23:14:00.421 [INFO][4609] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 23:14:00.747448 containerd[1979]: 2025-07-15 23:14:00.525 [INFO][4609] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--40-k8s-whisker--848dcb55d5--qfxl8-eth0 whisker-848dcb55d5- calico-system be5d55b2-d49c-444d-937f-d481b62c9c10 888 0 2025-07-15 23:14:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:848dcb55d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-27-40 whisker-848dcb55d5-qfxl8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali552d2cceccf [] [] }} ContainerID="ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845" Namespace="calico-system" Pod="whisker-848dcb55d5-qfxl8" WorkloadEndpoint="ip--172--31--27--40-k8s-whisker--848dcb55d5--qfxl8-" Jul 15 23:14:00.747448 containerd[1979]: 2025-07-15 23:14:00.525 [INFO][4609] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845" Namespace="calico-system" Pod="whisker-848dcb55d5-qfxl8" WorkloadEndpoint="ip--172--31--27--40-k8s-whisker--848dcb55d5--qfxl8-eth0" Jul 15 23:14:00.747448 containerd[1979]: 2025-07-15 23:14:00.605 [INFO][4620] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845" HandleID="k8s-pod-network.ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845" Workload="ip--172--31--27--40-k8s-whisker--848dcb55d5--qfxl8-eth0" Jul 15 23:14:00.747899 containerd[1979]: 2025-07-15 23:14:00.606 
[INFO][4620] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845" HandleID="k8s-pod-network.ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845" Workload="ip--172--31--27--40-k8s-whisker--848dcb55d5--qfxl8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000602180), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-40", "pod":"whisker-848dcb55d5-qfxl8", "timestamp":"2025-07-15 23:14:00.605969973 +0000 UTC"}, Hostname:"ip-172-31-27-40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:14:00.747899 containerd[1979]: 2025-07-15 23:14:00.606 [INFO][4620] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:14:00.747899 containerd[1979]: 2025-07-15 23:14:00.606 [INFO][4620] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:14:00.747899 containerd[1979]: 2025-07-15 23:14:00.606 [INFO][4620] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-40' Jul 15 23:14:00.747899 containerd[1979]: 2025-07-15 23:14:00.623 [INFO][4620] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845" host="ip-172-31-27-40" Jul 15 23:14:00.747899 containerd[1979]: 2025-07-15 23:14:00.637 [INFO][4620] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-27-40" Jul 15 23:14:00.747899 containerd[1979]: 2025-07-15 23:14:00.647 [INFO][4620] ipam/ipam.go 511: Trying affinity for 192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:00.747899 containerd[1979]: 2025-07-15 23:14:00.650 [INFO][4620] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:00.747899 containerd[1979]: 2025-07-15 23:14:00.656 [INFO][4620] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:00.747899 containerd[1979]: 2025-07-15 23:14:00.656 [INFO][4620] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845" host="ip-172-31-27-40" Jul 15 23:14:00.749023 containerd[1979]: 2025-07-15 23:14:00.660 [INFO][4620] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845 Jul 15 23:14:00.749023 containerd[1979]: 2025-07-15 23:14:00.669 [INFO][4620] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845" host="ip-172-31-27-40" Jul 15 23:14:00.749023 containerd[1979]: 2025-07-15 23:14:00.684 [INFO][4620] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.65/26] block=192.168.79.64/26 
handle="k8s-pod-network.ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845" host="ip-172-31-27-40" Jul 15 23:14:00.749023 containerd[1979]: 2025-07-15 23:14:00.684 [INFO][4620] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.65/26] handle="k8s-pod-network.ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845" host="ip-172-31-27-40" Jul 15 23:14:00.749023 containerd[1979]: 2025-07-15 23:14:00.684 [INFO][4620] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:14:00.749023 containerd[1979]: 2025-07-15 23:14:00.684 [INFO][4620] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.65/26] IPv6=[] ContainerID="ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845" HandleID="k8s-pod-network.ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845" Workload="ip--172--31--27--40-k8s-whisker--848dcb55d5--qfxl8-eth0" Jul 15 23:14:00.749454 containerd[1979]: 2025-07-15 23:14:00.694 [INFO][4609] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845" Namespace="calico-system" Pod="whisker-848dcb55d5-qfxl8" WorkloadEndpoint="ip--172--31--27--40-k8s-whisker--848dcb55d5--qfxl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--40-k8s-whisker--848dcb55d5--qfxl8-eth0", GenerateName:"whisker-848dcb55d5-", Namespace:"calico-system", SelfLink:"", UID:"be5d55b2-d49c-444d-937f-d481b62c9c10", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"848dcb55d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-40", ContainerID:"", Pod:"whisker-848dcb55d5-qfxl8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.79.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali552d2cceccf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:00.749454 containerd[1979]: 2025-07-15 23:14:00.694 [INFO][4609] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.65/32] ContainerID="ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845" Namespace="calico-system" Pod="whisker-848dcb55d5-qfxl8" WorkloadEndpoint="ip--172--31--27--40-k8s-whisker--848dcb55d5--qfxl8-eth0" Jul 15 23:14:00.749725 containerd[1979]: 2025-07-15 23:14:00.694 [INFO][4609] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali552d2cceccf ContainerID="ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845" Namespace="calico-system" Pod="whisker-848dcb55d5-qfxl8" WorkloadEndpoint="ip--172--31--27--40-k8s-whisker--848dcb55d5--qfxl8-eth0" Jul 15 23:14:00.749725 containerd[1979]: 2025-07-15 23:14:00.712 [INFO][4609] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845" Namespace="calico-system" Pod="whisker-848dcb55d5-qfxl8" WorkloadEndpoint="ip--172--31--27--40-k8s-whisker--848dcb55d5--qfxl8-eth0" Jul 15 23:14:00.749889 containerd[1979]: 2025-07-15 23:14:00.712 [INFO][4609] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845" Namespace="calico-system" 
Pod="whisker-848dcb55d5-qfxl8" WorkloadEndpoint="ip--172--31--27--40-k8s-whisker--848dcb55d5--qfxl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--40-k8s-whisker--848dcb55d5--qfxl8-eth0", GenerateName:"whisker-848dcb55d5-", Namespace:"calico-system", SelfLink:"", UID:"be5d55b2-d49c-444d-937f-d481b62c9c10", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"848dcb55d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-40", ContainerID:"ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845", Pod:"whisker-848dcb55d5-qfxl8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.79.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali552d2cceccf", MAC:"e6:d6:db:da:dc:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:00.750063 containerd[1979]: 2025-07-15 23:14:00.742 [INFO][4609] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845" Namespace="calico-system" Pod="whisker-848dcb55d5-qfxl8" WorkloadEndpoint="ip--172--31--27--40-k8s-whisker--848dcb55d5--qfxl8-eth0" Jul 15 23:14:00.812452 containerd[1979]: 
time="2025-07-15T23:14:00.812357506Z" level=info msg="connecting to shim ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845" address="unix:///run/containerd/s/903c089193375358d136e465de5a363d2ae093d171b8715e254aa9c28dec8c46" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:14:00.854191 containerd[1979]: time="2025-07-15T23:14:00.854123782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fd049586726e91d092e6e4a2b7e5b5c6e816b9f53053be9bf6fd875ef30dfa1\" id:\"45c397b1b3cbeec828c466790f63bc7f0e5254c1144fb271a7fbebf619340d2b\" pid:4638 exit_status:1 exited_at:{seconds:1752621240 nanos:853446250}" Jul 15 23:14:00.888586 systemd[1]: Started cri-containerd-ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845.scope - libcontainer container ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845. Jul 15 23:14:00.961417 containerd[1979]: time="2025-07-15T23:14:00.961173898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-848dcb55d5-qfxl8,Uid:be5d55b2-d49c-444d-937f-d481b62c9c10,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845\"" Jul 15 23:14:00.965349 containerd[1979]: time="2025-07-15T23:14:00.965112814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 23:14:01.183383 kubelet[3286]: I0715 23:14:01.183245 3286 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f19c8081-9f32-4a2d-b75d-e396954e5d7e" path="/var/lib/kubelet/pods/f19c8081-9f32-4a2d-b75d-e396954e5d7e/volumes" Jul 15 23:14:02.063734 containerd[1979]: time="2025-07-15T23:14:02.063671000Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fd049586726e91d092e6e4a2b7e5b5c6e816b9f53053be9bf6fd875ef30dfa1\" id:\"93fd8fb0cca588e521710a9b4ac4aa75d071e3d06da2ef9d812769ccf6904c12\" pid:4812 exit_status:1 exited_at:{seconds:1752621242 nanos:62705372}" Jul 15 23:14:02.082107 systemd-networkd[1808]: 
cali552d2cceccf: Gained IPv6LL Jul 15 23:14:02.447484 systemd-networkd[1808]: vxlan.calico: Link UP Jul 15 23:14:02.448725 systemd-networkd[1808]: vxlan.calico: Gained carrier Jul 15 23:14:02.516699 containerd[1979]: time="2025-07-15T23:14:02.515748898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:02.522971 (udev-worker)[4583]: Network interface NamePolicy= disabled on kernel command line. Jul 15 23:14:02.525047 containerd[1979]: time="2025-07-15T23:14:02.524977474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 15 23:14:02.526325 containerd[1979]: time="2025-07-15T23:14:02.526022422Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:02.536185 containerd[1979]: time="2025-07-15T23:14:02.536131690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:02.539007 containerd[1979]: time="2025-07-15T23:14:02.538857886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.572334928s" Jul 15 23:14:02.541541 containerd[1979]: time="2025-07-15T23:14:02.541491970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 15 23:14:02.551345 containerd[1979]: 
time="2025-07-15T23:14:02.551295658Z" level=info msg="CreateContainer within sandbox \"ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 23:14:02.585542 containerd[1979]: time="2025-07-15T23:14:02.585487966Z" level=info msg="Container 6e9841bab18c24202ff5b383488c8799cf9476c0523663c6894b31b22450b863: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:14:02.603176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4055865610.mount: Deactivated successfully. Jul 15 23:14:02.618577 containerd[1979]: time="2025-07-15T23:14:02.618531587Z" level=info msg="CreateContainer within sandbox \"ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6e9841bab18c24202ff5b383488c8799cf9476c0523663c6894b31b22450b863\"" Jul 15 23:14:02.619850 containerd[1979]: time="2025-07-15T23:14:02.619795523Z" level=info msg="StartContainer for \"6e9841bab18c24202ff5b383488c8799cf9476c0523663c6894b31b22450b863\"" Jul 15 23:14:02.622989 containerd[1979]: time="2025-07-15T23:14:02.622927343Z" level=info msg="connecting to shim 6e9841bab18c24202ff5b383488c8799cf9476c0523663c6894b31b22450b863" address="unix:///run/containerd/s/903c089193375358d136e465de5a363d2ae093d171b8715e254aa9c28dec8c46" protocol=ttrpc version=3 Jul 15 23:14:02.685929 systemd[1]: Started cri-containerd-6e9841bab18c24202ff5b383488c8799cf9476c0523663c6894b31b22450b863.scope - libcontainer container 6e9841bab18c24202ff5b383488c8799cf9476c0523663c6894b31b22450b863. 
Jul 15 23:14:02.806576 containerd[1979]: time="2025-07-15T23:14:02.806511900Z" level=info msg="StartContainer for \"6e9841bab18c24202ff5b383488c8799cf9476c0523663c6894b31b22450b863\" returns successfully" Jul 15 23:14:02.811296 containerd[1979]: time="2025-07-15T23:14:02.810513636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 23:14:03.175715 containerd[1979]: time="2025-07-15T23:14:03.174983085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k84vb,Uid:4a7d591a-2c05-4fd1-a124-09131db809b2,Namespace:kube-system,Attempt:0,}" Jul 15 23:14:03.390192 systemd-networkd[1808]: cali9d12322ba68: Link UP Jul 15 23:14:03.394510 systemd-networkd[1808]: cali9d12322ba68: Gained carrier Jul 15 23:14:03.423647 containerd[1979]: 2025-07-15 23:14:03.261 [INFO][4957] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--40-k8s-coredns--668d6bf9bc--k84vb-eth0 coredns-668d6bf9bc- kube-system 4a7d591a-2c05-4fd1-a124-09131db809b2 814 0 2025-07-15 23:13:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-27-40 coredns-668d6bf9bc-k84vb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9d12322ba68 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87" Namespace="kube-system" Pod="coredns-668d6bf9bc-k84vb" WorkloadEndpoint="ip--172--31--27--40-k8s-coredns--668d6bf9bc--k84vb-" Jul 15 23:14:03.423647 containerd[1979]: 2025-07-15 23:14:03.261 [INFO][4957] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87" Namespace="kube-system" Pod="coredns-668d6bf9bc-k84vb" 
WorkloadEndpoint="ip--172--31--27--40-k8s-coredns--668d6bf9bc--k84vb-eth0" Jul 15 23:14:03.423647 containerd[1979]: 2025-07-15 23:14:03.315 [INFO][4968] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87" HandleID="k8s-pod-network.7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87" Workload="ip--172--31--27--40-k8s-coredns--668d6bf9bc--k84vb-eth0" Jul 15 23:14:03.423971 containerd[1979]: 2025-07-15 23:14:03.315 [INFO][4968] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87" HandleID="k8s-pod-network.7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87" Workload="ip--172--31--27--40-k8s-coredns--668d6bf9bc--k84vb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-27-40", "pod":"coredns-668d6bf9bc-k84vb", "timestamp":"2025-07-15 23:14:03.315044098 +0000 UTC"}, Hostname:"ip-172-31-27-40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:14:03.423971 containerd[1979]: 2025-07-15 23:14:03.315 [INFO][4968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:14:03.423971 containerd[1979]: 2025-07-15 23:14:03.315 [INFO][4968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:14:03.423971 containerd[1979]: 2025-07-15 23:14:03.315 [INFO][4968] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-40' Jul 15 23:14:03.423971 containerd[1979]: 2025-07-15 23:14:03.329 [INFO][4968] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87" host="ip-172-31-27-40" Jul 15 23:14:03.423971 containerd[1979]: 2025-07-15 23:14:03.337 [INFO][4968] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-27-40" Jul 15 23:14:03.423971 containerd[1979]: 2025-07-15 23:14:03.346 [INFO][4968] ipam/ipam.go 511: Trying affinity for 192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:03.423971 containerd[1979]: 2025-07-15 23:14:03.349 [INFO][4968] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:03.423971 containerd[1979]: 2025-07-15 23:14:03.353 [INFO][4968] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:03.423971 containerd[1979]: 2025-07-15 23:14:03.353 [INFO][4968] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87" host="ip-172-31-27-40" Jul 15 23:14:03.427778 containerd[1979]: 2025-07-15 23:14:03.356 [INFO][4968] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87 Jul 15 23:14:03.427778 containerd[1979]: 2025-07-15 23:14:03.363 [INFO][4968] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87" host="ip-172-31-27-40" Jul 15 23:14:03.427778 containerd[1979]: 2025-07-15 23:14:03.375 [INFO][4968] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.66/26] block=192.168.79.64/26 
handle="k8s-pod-network.7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87" host="ip-172-31-27-40" Jul 15 23:14:03.427778 containerd[1979]: 2025-07-15 23:14:03.376 [INFO][4968] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.66/26] handle="k8s-pod-network.7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87" host="ip-172-31-27-40" Jul 15 23:14:03.427778 containerd[1979]: 2025-07-15 23:14:03.376 [INFO][4968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:14:03.427778 containerd[1979]: 2025-07-15 23:14:03.376 [INFO][4968] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.66/26] IPv6=[] ContainerID="7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87" HandleID="k8s-pod-network.7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87" Workload="ip--172--31--27--40-k8s-coredns--668d6bf9bc--k84vb-eth0" Jul 15 23:14:03.430105 containerd[1979]: 2025-07-15 23:14:03.380 [INFO][4957] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87" Namespace="kube-system" Pod="coredns-668d6bf9bc-k84vb" WorkloadEndpoint="ip--172--31--27--40-k8s-coredns--668d6bf9bc--k84vb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--40-k8s-coredns--668d6bf9bc--k84vb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4a7d591a-2c05-4fd1-a124-09131db809b2", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 13, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-40", ContainerID:"", Pod:"coredns-668d6bf9bc-k84vb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d12322ba68", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:03.430105 containerd[1979]: 2025-07-15 23:14:03.381 [INFO][4957] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.66/32] ContainerID="7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87" Namespace="kube-system" Pod="coredns-668d6bf9bc-k84vb" WorkloadEndpoint="ip--172--31--27--40-k8s-coredns--668d6bf9bc--k84vb-eth0" Jul 15 23:14:03.430105 containerd[1979]: 2025-07-15 23:14:03.381 [INFO][4957] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d12322ba68 ContainerID="7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87" Namespace="kube-system" Pod="coredns-668d6bf9bc-k84vb" WorkloadEndpoint="ip--172--31--27--40-k8s-coredns--668d6bf9bc--k84vb-eth0" Jul 15 23:14:03.430105 containerd[1979]: 2025-07-15 23:14:03.395 [INFO][4957] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-k84vb" WorkloadEndpoint="ip--172--31--27--40-k8s-coredns--668d6bf9bc--k84vb-eth0" Jul 15 23:14:03.430105 containerd[1979]: 2025-07-15 23:14:03.397 [INFO][4957] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87" Namespace="kube-system" Pod="coredns-668d6bf9bc-k84vb" WorkloadEndpoint="ip--172--31--27--40-k8s-coredns--668d6bf9bc--k84vb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--40-k8s-coredns--668d6bf9bc--k84vb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4a7d591a-2c05-4fd1-a124-09131db809b2", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 13, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-40", ContainerID:"7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87", Pod:"coredns-668d6bf9bc-k84vb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d12322ba68", MAC:"9a:67:f7:2e:74:af", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:03.430105 containerd[1979]: 2025-07-15 23:14:03.415 [INFO][4957] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87" Namespace="kube-system" Pod="coredns-668d6bf9bc-k84vb" WorkloadEndpoint="ip--172--31--27--40-k8s-coredns--668d6bf9bc--k84vb-eth0" Jul 15 23:14:03.492441 containerd[1979]: time="2025-07-15T23:14:03.491015015Z" level=info msg="connecting to shim 7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87" address="unix:///run/containerd/s/4c44e398fba0ed95dfd66574f159fb2d94e1fa9741a935dc31f1696c9d20663c" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:14:03.540578 systemd[1]: Started cri-containerd-7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87.scope - libcontainer container 7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87. 
Jul 15 23:14:03.626206 containerd[1979]: time="2025-07-15T23:14:03.626068848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k84vb,Uid:4a7d591a-2c05-4fd1-a124-09131db809b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87\"" Jul 15 23:14:03.631556 containerd[1979]: time="2025-07-15T23:14:03.631486872Z" level=info msg="CreateContainer within sandbox \"7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:14:03.657325 containerd[1979]: time="2025-07-15T23:14:03.655647204Z" level=info msg="Container f599f285fa8fd3d2e8fda20ee92cc8163903326fb17ba588ea11eb19ae8216b7: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:14:03.662224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount140328823.mount: Deactivated successfully. Jul 15 23:14:03.685015 containerd[1979]: time="2025-07-15T23:14:03.684834252Z" level=info msg="CreateContainer within sandbox \"7312cafe709c12b2cf383fbb9d8a34e4e1c3904512a33ccdadc3509cbf953b87\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f599f285fa8fd3d2e8fda20ee92cc8163903326fb17ba588ea11eb19ae8216b7\"" Jul 15 23:14:03.687291 containerd[1979]: time="2025-07-15T23:14:03.686940540Z" level=info msg="StartContainer for \"f599f285fa8fd3d2e8fda20ee92cc8163903326fb17ba588ea11eb19ae8216b7\"" Jul 15 23:14:03.690260 containerd[1979]: time="2025-07-15T23:14:03.690189432Z" level=info msg="connecting to shim f599f285fa8fd3d2e8fda20ee92cc8163903326fb17ba588ea11eb19ae8216b7" address="unix:///run/containerd/s/4c44e398fba0ed95dfd66574f159fb2d94e1fa9741a935dc31f1696c9d20663c" protocol=ttrpc version=3 Jul 15 23:14:03.742921 systemd[1]: Started cri-containerd-f599f285fa8fd3d2e8fda20ee92cc8163903326fb17ba588ea11eb19ae8216b7.scope - libcontainer container f599f285fa8fd3d2e8fda20ee92cc8163903326fb17ba588ea11eb19ae8216b7. 
Jul 15 23:14:03.810291 containerd[1979]: time="2025-07-15T23:14:03.808870272Z" level=info msg="StartContainer for \"f599f285fa8fd3d2e8fda20ee92cc8163903326fb17ba588ea11eb19ae8216b7\" returns successfully" Jul 15 23:14:04.321651 systemd-networkd[1808]: vxlan.calico: Gained IPv6LL Jul 15 23:14:04.778503 kubelet[3286]: I0715 23:14:04.776613 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k84vb" podStartSLOduration=48.776584357 podStartE2EDuration="48.776584357s" podCreationTimestamp="2025-07-15 23:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:14:04.727472821 +0000 UTC m=+53.819166661" watchObservedRunningTime="2025-07-15 23:14:04.776584357 +0000 UTC m=+53.868278161" Jul 15 23:14:04.834751 systemd-networkd[1808]: cali9d12322ba68: Gained IPv6LL Jul 15 23:14:04.937963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount908416344.mount: Deactivated successfully. 
Jul 15 23:14:04.959902 containerd[1979]: time="2025-07-15T23:14:04.959817134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:04.961607 containerd[1979]: time="2025-07-15T23:14:04.961520618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 15 23:14:04.963118 containerd[1979]: time="2025-07-15T23:14:04.962933330Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:04.966784 containerd[1979]: time="2025-07-15T23:14:04.966698102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:04.968405 containerd[1979]: time="2025-07-15T23:14:04.968193230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 2.156669758s" Jul 15 23:14:04.968405 containerd[1979]: time="2025-07-15T23:14:04.968243630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 15 23:14:04.973106 containerd[1979]: time="2025-07-15T23:14:04.973040366Z" level=info msg="CreateContainer within sandbox \"ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 23:14:04.986613 
containerd[1979]: time="2025-07-15T23:14:04.986525546Z" level=info msg="Container 56e4f672c3f8cd22ebdaea1176fd330b2f25f41fb05b92af85e43ebee0920f68: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:14:05.003168 containerd[1979]: time="2025-07-15T23:14:05.003024262Z" level=info msg="CreateContainer within sandbox \"ea53c741f142c086ebc0464e63ddf379b26b05dcc77099b1300ace223a67e845\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"56e4f672c3f8cd22ebdaea1176fd330b2f25f41fb05b92af85e43ebee0920f68\"" Jul 15 23:14:05.005413 containerd[1979]: time="2025-07-15T23:14:05.004455142Z" level=info msg="StartContainer for \"56e4f672c3f8cd22ebdaea1176fd330b2f25f41fb05b92af85e43ebee0920f68\"" Jul 15 23:14:05.007963 containerd[1979]: time="2025-07-15T23:14:05.007897582Z" level=info msg="connecting to shim 56e4f672c3f8cd22ebdaea1176fd330b2f25f41fb05b92af85e43ebee0920f68" address="unix:///run/containerd/s/903c089193375358d136e465de5a363d2ae093d171b8715e254aa9c28dec8c46" protocol=ttrpc version=3 Jul 15 23:14:05.055603 systemd[1]: Started cri-containerd-56e4f672c3f8cd22ebdaea1176fd330b2f25f41fb05b92af85e43ebee0920f68.scope - libcontainer container 56e4f672c3f8cd22ebdaea1176fd330b2f25f41fb05b92af85e43ebee0920f68. 
Jul 15 23:14:05.164648 containerd[1979]: time="2025-07-15T23:14:05.164575415Z" level=info msg="StartContainer for \"56e4f672c3f8cd22ebdaea1176fd330b2f25f41fb05b92af85e43ebee0920f68\" returns successfully" Jul 15 23:14:05.177927 containerd[1979]: time="2025-07-15T23:14:05.177813755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59d8d4b544-wspv7,Uid:15f3f713-eca6-4f48-a8cc-c3aba3011d1b,Namespace:calico-system,Attempt:0,}" Jul 15 23:14:05.507537 systemd-networkd[1808]: cali4685294dd41: Link UP Jul 15 23:14:05.511139 systemd-networkd[1808]: cali4685294dd41: Gained carrier Jul 15 23:14:05.563620 containerd[1979]: 2025-07-15 23:14:05.284 [INFO][5108] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--40-k8s-calico--kube--controllers--59d8d4b544--wspv7-eth0 calico-kube-controllers-59d8d4b544- calico-system 15f3f713-eca6-4f48-a8cc-c3aba3011d1b 820 0 2025-07-15 23:13:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59d8d4b544 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-27-40 calico-kube-controllers-59d8d4b544-wspv7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4685294dd41 [] [] }} ContainerID="a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac" Namespace="calico-system" Pod="calico-kube-controllers-59d8d4b544-wspv7" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--kube--controllers--59d8d4b544--wspv7-" Jul 15 23:14:05.563620 containerd[1979]: 2025-07-15 23:14:05.285 [INFO][5108] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac" Namespace="calico-system" Pod="calico-kube-controllers-59d8d4b544-wspv7" 
WorkloadEndpoint="ip--172--31--27--40-k8s-calico--kube--controllers--59d8d4b544--wspv7-eth0" Jul 15 23:14:05.563620 containerd[1979]: 2025-07-15 23:14:05.397 [INFO][5119] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac" HandleID="k8s-pod-network.a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac" Workload="ip--172--31--27--40-k8s-calico--kube--controllers--59d8d4b544--wspv7-eth0" Jul 15 23:14:05.563620 containerd[1979]: 2025-07-15 23:14:05.398 [INFO][5119] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac" HandleID="k8s-pod-network.a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac" Workload="ip--172--31--27--40-k8s-calico--kube--controllers--59d8d4b544--wspv7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030dbe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-40", "pod":"calico-kube-controllers-59d8d4b544-wspv7", "timestamp":"2025-07-15 23:14:05.397746276 +0000 UTC"}, Hostname:"ip-172-31-27-40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:14:05.563620 containerd[1979]: 2025-07-15 23:14:05.398 [INFO][5119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:14:05.563620 containerd[1979]: 2025-07-15 23:14:05.398 [INFO][5119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:14:05.563620 containerd[1979]: 2025-07-15 23:14:05.399 [INFO][5119] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-40' Jul 15 23:14:05.563620 containerd[1979]: 2025-07-15 23:14:05.421 [INFO][5119] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac" host="ip-172-31-27-40" Jul 15 23:14:05.563620 containerd[1979]: 2025-07-15 23:14:05.432 [INFO][5119] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-27-40" Jul 15 23:14:05.563620 containerd[1979]: 2025-07-15 23:14:05.442 [INFO][5119] ipam/ipam.go 511: Trying affinity for 192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:05.563620 containerd[1979]: 2025-07-15 23:14:05.448 [INFO][5119] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:05.563620 containerd[1979]: 2025-07-15 23:14:05.453 [INFO][5119] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:05.563620 containerd[1979]: 2025-07-15 23:14:05.453 [INFO][5119] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac" host="ip-172-31-27-40" Jul 15 23:14:05.563620 containerd[1979]: 2025-07-15 23:14:05.458 [INFO][5119] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac Jul 15 23:14:05.563620 containerd[1979]: 2025-07-15 23:14:05.470 [INFO][5119] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac" host="ip-172-31-27-40" Jul 15 23:14:05.563620 containerd[1979]: 2025-07-15 23:14:05.489 [INFO][5119] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.67/26] block=192.168.79.64/26 
handle="k8s-pod-network.a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac" host="ip-172-31-27-40" Jul 15 23:14:05.563620 containerd[1979]: 2025-07-15 23:14:05.489 [INFO][5119] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.67/26] handle="k8s-pod-network.a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac" host="ip-172-31-27-40" Jul 15 23:14:05.563620 containerd[1979]: 2025-07-15 23:14:05.489 [INFO][5119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:14:05.563620 containerd[1979]: 2025-07-15 23:14:05.489 [INFO][5119] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.67/26] IPv6=[] ContainerID="a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac" HandleID="k8s-pod-network.a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac" Workload="ip--172--31--27--40-k8s-calico--kube--controllers--59d8d4b544--wspv7-eth0" Jul 15 23:14:05.565585 containerd[1979]: 2025-07-15 23:14:05.493 [INFO][5108] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac" Namespace="calico-system" Pod="calico-kube-controllers-59d8d4b544-wspv7" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--kube--controllers--59d8d4b544--wspv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--40-k8s-calico--kube--controllers--59d8d4b544--wspv7-eth0", GenerateName:"calico-kube-controllers-59d8d4b544-", Namespace:"calico-system", SelfLink:"", UID:"15f3f713-eca6-4f48-a8cc-c3aba3011d1b", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 13, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59d8d4b544", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-40", ContainerID:"", Pod:"calico-kube-controllers-59d8d4b544-wspv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4685294dd41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:05.565585 containerd[1979]: 2025-07-15 23:14:05.494 [INFO][5108] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.67/32] ContainerID="a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac" Namespace="calico-system" Pod="calico-kube-controllers-59d8d4b544-wspv7" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--kube--controllers--59d8d4b544--wspv7-eth0" Jul 15 23:14:05.565585 containerd[1979]: 2025-07-15 23:14:05.494 [INFO][5108] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4685294dd41 ContainerID="a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac" Namespace="calico-system" Pod="calico-kube-controllers-59d8d4b544-wspv7" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--kube--controllers--59d8d4b544--wspv7-eth0" Jul 15 23:14:05.565585 containerd[1979]: 2025-07-15 23:14:05.513 [INFO][5108] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac" Namespace="calico-system" Pod="calico-kube-controllers-59d8d4b544-wspv7" 
WorkloadEndpoint="ip--172--31--27--40-k8s-calico--kube--controllers--59d8d4b544--wspv7-eth0" Jul 15 23:14:05.565585 containerd[1979]: 2025-07-15 23:14:05.515 [INFO][5108] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac" Namespace="calico-system" Pod="calico-kube-controllers-59d8d4b544-wspv7" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--kube--controllers--59d8d4b544--wspv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--40-k8s-calico--kube--controllers--59d8d4b544--wspv7-eth0", GenerateName:"calico-kube-controllers-59d8d4b544-", Namespace:"calico-system", SelfLink:"", UID:"15f3f713-eca6-4f48-a8cc-c3aba3011d1b", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 13, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59d8d4b544", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-40", ContainerID:"a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac", Pod:"calico-kube-controllers-59d8d4b544-wspv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4685294dd41", MAC:"ca:51:1a:07:d0:0b", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:05.565585 containerd[1979]: 2025-07-15 23:14:05.553 [INFO][5108] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac" Namespace="calico-system" Pod="calico-kube-controllers-59d8d4b544-wspv7" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--kube--controllers--59d8d4b544--wspv7-eth0" Jul 15 23:14:05.637052 containerd[1979]: time="2025-07-15T23:14:05.636990494Z" level=info msg="connecting to shim a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac" address="unix:///run/containerd/s/676e065a39ffa2180343b821fef62fb4834f1dac3719c366d55fdb657adfe9a7" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:14:05.716676 systemd[1]: Started cri-containerd-a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac.scope - libcontainer container a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac. 
Jul 15 23:14:05.736934 kubelet[3286]: I0715 23:14:05.736738 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-848dcb55d5-qfxl8" podStartSLOduration=1.731353174 podStartE2EDuration="5.736689854s" podCreationTimestamp="2025-07-15 23:14:00 +0000 UTC" firstStartedPulling="2025-07-15 23:14:00.964592854 +0000 UTC m=+50.056286658" lastFinishedPulling="2025-07-15 23:14:04.969929522 +0000 UTC m=+54.061623338" observedRunningTime="2025-07-15 23:14:05.736318598 +0000 UTC m=+54.828012438" watchObservedRunningTime="2025-07-15 23:14:05.736689854 +0000 UTC m=+54.828383670" Jul 15 23:14:06.036616 containerd[1979]: time="2025-07-15T23:14:06.036492612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59d8d4b544-wspv7,Uid:15f3f713-eca6-4f48-a8cc-c3aba3011d1b,Namespace:calico-system,Attempt:0,} returns sandbox id \"a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac\"" Jul 15 23:14:06.042763 containerd[1979]: time="2025-07-15T23:14:06.040860840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 23:14:06.176303 containerd[1979]: time="2025-07-15T23:14:06.176157636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9g96k,Uid:da728a4b-58e6-4528-95e7-1b142a44e9e3,Namespace:calico-system,Attempt:0,}" Jul 15 23:14:06.401576 systemd-networkd[1808]: calie78972ce905: Link UP Jul 15 23:14:06.402018 systemd-networkd[1808]: calie78972ce905: Gained carrier Jul 15 23:14:06.434411 containerd[1979]: 2025-07-15 23:14:06.247 [INFO][5185] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--40-k8s-csi--node--driver--9g96k-eth0 csi-node-driver- calico-system da728a4b-58e6-4528-95e7-1b142a44e9e3 688 0 2025-07-15 23:13:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-27-40 csi-node-driver-9g96k eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie78972ce905 [] [] }} ContainerID="99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d" Namespace="calico-system" Pod="csi-node-driver-9g96k" WorkloadEndpoint="ip--172--31--27--40-k8s-csi--node--driver--9g96k-" Jul 15 23:14:06.434411 containerd[1979]: 2025-07-15 23:14:06.248 [INFO][5185] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d" Namespace="calico-system" Pod="csi-node-driver-9g96k" WorkloadEndpoint="ip--172--31--27--40-k8s-csi--node--driver--9g96k-eth0" Jul 15 23:14:06.434411 containerd[1979]: 2025-07-15 23:14:06.312 [INFO][5198] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d" HandleID="k8s-pod-network.99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d" Workload="ip--172--31--27--40-k8s-csi--node--driver--9g96k-eth0" Jul 15 23:14:06.434411 containerd[1979]: 2025-07-15 23:14:06.313 [INFO][5198] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d" HandleID="k8s-pod-network.99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d" Workload="ip--172--31--27--40-k8s-csi--node--driver--9g96k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-40", "pod":"csi-node-driver-9g96k", "timestamp":"2025-07-15 23:14:06.312804409 +0000 UTC"}, Hostname:"ip-172-31-27-40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:14:06.434411 containerd[1979]: 2025-07-15 23:14:06.313 [INFO][5198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:14:06.434411 containerd[1979]: 2025-07-15 23:14:06.313 [INFO][5198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:14:06.434411 containerd[1979]: 2025-07-15 23:14:06.313 [INFO][5198] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-40' Jul 15 23:14:06.434411 containerd[1979]: 2025-07-15 23:14:06.337 [INFO][5198] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d" host="ip-172-31-27-40" Jul 15 23:14:06.434411 containerd[1979]: 2025-07-15 23:14:06.345 [INFO][5198] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-27-40" Jul 15 23:14:06.434411 containerd[1979]: 2025-07-15 23:14:06.354 [INFO][5198] ipam/ipam.go 511: Trying affinity for 192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:06.434411 containerd[1979]: 2025-07-15 23:14:06.357 [INFO][5198] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:06.434411 containerd[1979]: 2025-07-15 23:14:06.361 [INFO][5198] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:06.434411 containerd[1979]: 2025-07-15 23:14:06.361 [INFO][5198] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d" host="ip-172-31-27-40" Jul 15 23:14:06.434411 containerd[1979]: 2025-07-15 23:14:06.364 [INFO][5198] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d Jul 15 23:14:06.434411 containerd[1979]: 2025-07-15 23:14:06.375 [INFO][5198] ipam/ipam.go 1243: 
Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d" host="ip-172-31-27-40" Jul 15 23:14:06.434411 containerd[1979]: 2025-07-15 23:14:06.389 [INFO][5198] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.68/26] block=192.168.79.64/26 handle="k8s-pod-network.99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d" host="ip-172-31-27-40" Jul 15 23:14:06.434411 containerd[1979]: 2025-07-15 23:14:06.389 [INFO][5198] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.68/26] handle="k8s-pod-network.99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d" host="ip-172-31-27-40" Jul 15 23:14:06.434411 containerd[1979]: 2025-07-15 23:14:06.389 [INFO][5198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:14:06.434411 containerd[1979]: 2025-07-15 23:14:06.389 [INFO][5198] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.68/26] IPv6=[] ContainerID="99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d" HandleID="k8s-pod-network.99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d" Workload="ip--172--31--27--40-k8s-csi--node--driver--9g96k-eth0" Jul 15 23:14:06.437502 containerd[1979]: 2025-07-15 23:14:06.395 [INFO][5185] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d" Namespace="calico-system" Pod="csi-node-driver-9g96k" WorkloadEndpoint="ip--172--31--27--40-k8s-csi--node--driver--9g96k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--40-k8s-csi--node--driver--9g96k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"da728a4b-58e6-4528-95e7-1b142a44e9e3", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 13, 42, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-40", ContainerID:"", Pod:"csi-node-driver-9g96k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie78972ce905", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:06.437502 containerd[1979]: 2025-07-15 23:14:06.395 [INFO][5185] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.68/32] ContainerID="99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d" Namespace="calico-system" Pod="csi-node-driver-9g96k" WorkloadEndpoint="ip--172--31--27--40-k8s-csi--node--driver--9g96k-eth0" Jul 15 23:14:06.437502 containerd[1979]: 2025-07-15 23:14:06.395 [INFO][5185] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie78972ce905 ContainerID="99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d" Namespace="calico-system" Pod="csi-node-driver-9g96k" WorkloadEndpoint="ip--172--31--27--40-k8s-csi--node--driver--9g96k-eth0" Jul 15 23:14:06.437502 containerd[1979]: 2025-07-15 23:14:06.402 [INFO][5185] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d" Namespace="calico-system" Pod="csi-node-driver-9g96k" WorkloadEndpoint="ip--172--31--27--40-k8s-csi--node--driver--9g96k-eth0" Jul 15 23:14:06.437502 containerd[1979]: 2025-07-15 23:14:06.403 [INFO][5185] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d" Namespace="calico-system" Pod="csi-node-driver-9g96k" WorkloadEndpoint="ip--172--31--27--40-k8s-csi--node--driver--9g96k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--40-k8s-csi--node--driver--9g96k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"da728a4b-58e6-4528-95e7-1b142a44e9e3", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 13, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-40", ContainerID:"99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d", Pod:"csi-node-driver-9g96k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, 
InterfaceName:"calie78972ce905", MAC:"4e:af:12:44:f6:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:06.437502 containerd[1979]: 2025-07-15 23:14:06.429 [INFO][5185] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d" Namespace="calico-system" Pod="csi-node-driver-9g96k" WorkloadEndpoint="ip--172--31--27--40-k8s-csi--node--driver--9g96k-eth0" Jul 15 23:14:06.475640 containerd[1979]: time="2025-07-15T23:14:06.475523498Z" level=info msg="connecting to shim 99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d" address="unix:///run/containerd/s/95df0dfb02e283971031ea9b8effb6f2697b54466ba15e93572459cf438dd5f8" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:14:06.537888 systemd[1]: Started cri-containerd-99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d.scope - libcontainer container 99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d. 
Jul 15 23:14:06.597443 containerd[1979]: time="2025-07-15T23:14:06.597368702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9g96k,Uid:da728a4b-58e6-4528-95e7-1b142a44e9e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d\"" Jul 15 23:14:06.625547 systemd-networkd[1808]: cali4685294dd41: Gained IPv6LL Jul 15 23:14:07.176570 containerd[1979]: time="2025-07-15T23:14:07.176484625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-94gb2,Uid:d017922f-d4f0-435f-a351-9384dd832f62,Namespace:kube-system,Attempt:0,}" Jul 15 23:14:07.177109 containerd[1979]: time="2025-07-15T23:14:07.177054313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b6b6cb6bb-xlx2p,Uid:bdcee783-50df-4b4a-aa3b-c71fef356049,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:14:07.515498 systemd-networkd[1808]: calibd1115e1084: Link UP Jul 15 23:14:07.516107 systemd-networkd[1808]: calibd1115e1084: Gained carrier Jul 15 23:14:07.521542 systemd-networkd[1808]: calie78972ce905: Gained IPv6LL Jul 15 23:14:07.562084 containerd[1979]: 2025-07-15 23:14:07.308 [INFO][5264] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--xlx2p-eth0 calico-apiserver-5b6b6cb6bb- calico-apiserver bdcee783-50df-4b4a-aa3b-c71fef356049 822 0 2025-07-15 23:13:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b6b6cb6bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-27-40 calico-apiserver-5b6b6cb6bb-xlx2p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibd1115e1084 [] [] }} 
ContainerID="a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96" Namespace="calico-apiserver" Pod="calico-apiserver-5b6b6cb6bb-xlx2p" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--xlx2p-" Jul 15 23:14:07.562084 containerd[1979]: 2025-07-15 23:14:07.309 [INFO][5264] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96" Namespace="calico-apiserver" Pod="calico-apiserver-5b6b6cb6bb-xlx2p" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--xlx2p-eth0" Jul 15 23:14:07.562084 containerd[1979]: 2025-07-15 23:14:07.422 [INFO][5284] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96" HandleID="k8s-pod-network.a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96" Workload="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--xlx2p-eth0" Jul 15 23:14:07.562084 containerd[1979]: 2025-07-15 23:14:07.422 [INFO][5284] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96" HandleID="k8s-pod-network.a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96" Workload="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--xlx2p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000325bd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-27-40", "pod":"calico-apiserver-5b6b6cb6bb-xlx2p", "timestamp":"2025-07-15 23:14:07.422362154 +0000 UTC"}, Hostname:"ip-172-31-27-40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:14:07.562084 containerd[1979]: 2025-07-15 23:14:07.422 [INFO][5284] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 15 23:14:07.562084 containerd[1979]: 2025-07-15 23:14:07.422 [INFO][5284] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:14:07.562084 containerd[1979]: 2025-07-15 23:14:07.422 [INFO][5284] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-40' Jul 15 23:14:07.562084 containerd[1979]: 2025-07-15 23:14:07.452 [INFO][5284] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96" host="ip-172-31-27-40" Jul 15 23:14:07.562084 containerd[1979]: 2025-07-15 23:14:07.460 [INFO][5284] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-27-40" Jul 15 23:14:07.562084 containerd[1979]: 2025-07-15 23:14:07.468 [INFO][5284] ipam/ipam.go 511: Trying affinity for 192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:07.562084 containerd[1979]: 2025-07-15 23:14:07.472 [INFO][5284] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:07.562084 containerd[1979]: 2025-07-15 23:14:07.477 [INFO][5284] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:07.562084 containerd[1979]: 2025-07-15 23:14:07.477 [INFO][5284] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96" host="ip-172-31-27-40" Jul 15 23:14:07.562084 containerd[1979]: 2025-07-15 23:14:07.480 [INFO][5284] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96 Jul 15 23:14:07.562084 containerd[1979]: 2025-07-15 23:14:07.486 [INFO][5284] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96" host="ip-172-31-27-40" Jul 15 23:14:07.562084 containerd[1979]: 
2025-07-15 23:14:07.502 [INFO][5284] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.69/26] block=192.168.79.64/26 handle="k8s-pod-network.a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96" host="ip-172-31-27-40" Jul 15 23:14:07.562084 containerd[1979]: 2025-07-15 23:14:07.502 [INFO][5284] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.69/26] handle="k8s-pod-network.a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96" host="ip-172-31-27-40" Jul 15 23:14:07.562084 containerd[1979]: 2025-07-15 23:14:07.502 [INFO][5284] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:14:07.562084 containerd[1979]: 2025-07-15 23:14:07.502 [INFO][5284] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.69/26] IPv6=[] ContainerID="a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96" HandleID="k8s-pod-network.a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96" Workload="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--xlx2p-eth0" Jul 15 23:14:07.565799 containerd[1979]: 2025-07-15 23:14:07.507 [INFO][5264] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96" Namespace="calico-apiserver" Pod="calico-apiserver-5b6b6cb6bb-xlx2p" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--xlx2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--xlx2p-eth0", GenerateName:"calico-apiserver-5b6b6cb6bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"bdcee783-50df-4b4a-aa3b-c71fef356049", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 13, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b6b6cb6bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-40", ContainerID:"", Pod:"calico-apiserver-5b6b6cb6bb-xlx2p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd1115e1084", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:07.565799 containerd[1979]: 2025-07-15 23:14:07.507 [INFO][5264] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.69/32] ContainerID="a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96" Namespace="calico-apiserver" Pod="calico-apiserver-5b6b6cb6bb-xlx2p" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--xlx2p-eth0" Jul 15 23:14:07.565799 containerd[1979]: 2025-07-15 23:14:07.508 [INFO][5264] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd1115e1084 ContainerID="a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96" Namespace="calico-apiserver" Pod="calico-apiserver-5b6b6cb6bb-xlx2p" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--xlx2p-eth0" Jul 15 23:14:07.565799 containerd[1979]: 2025-07-15 23:14:07.517 [INFO][5264] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96" Namespace="calico-apiserver" Pod="calico-apiserver-5b6b6cb6bb-xlx2p" 
WorkloadEndpoint="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--xlx2p-eth0" Jul 15 23:14:07.565799 containerd[1979]: 2025-07-15 23:14:07.518 [INFO][5264] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96" Namespace="calico-apiserver" Pod="calico-apiserver-5b6b6cb6bb-xlx2p" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--xlx2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--xlx2p-eth0", GenerateName:"calico-apiserver-5b6b6cb6bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"bdcee783-50df-4b4a-aa3b-c71fef356049", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 13, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b6b6cb6bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-40", ContainerID:"a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96", Pod:"calico-apiserver-5b6b6cb6bb-xlx2p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd1115e1084", MAC:"ea:9d:bd:4f:4d:42", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:07.565799 containerd[1979]: 2025-07-15 23:14:07.547 [INFO][5264] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96" Namespace="calico-apiserver" Pod="calico-apiserver-5b6b6cb6bb-xlx2p" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--xlx2p-eth0" Jul 15 23:14:07.645127 containerd[1979]: time="2025-07-15T23:14:07.644438584Z" level=info msg="connecting to shim a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96" address="unix:///run/containerd/s/a21ca7345081879de324cf553f9f08eb6b6200b8701b876e73a42e91e942d10f" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:14:07.723487 systemd-networkd[1808]: calid0efc4cf3e0: Link UP Jul 15 23:14:07.725955 systemd-networkd[1808]: calid0efc4cf3e0: Gained carrier Jul 15 23:14:07.760726 systemd[1]: Started cri-containerd-a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96.scope - libcontainer container a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96. 
Jul 15 23:14:07.786879 containerd[1979]: 2025-07-15 23:14:07.320 [INFO][5260] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--40-k8s-coredns--668d6bf9bc--94gb2-eth0 coredns-668d6bf9bc- kube-system d017922f-d4f0-435f-a351-9384dd832f62 823 0 2025-07-15 23:13:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-27-40 coredns-668d6bf9bc-94gb2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid0efc4cf3e0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827" Namespace="kube-system" Pod="coredns-668d6bf9bc-94gb2" WorkloadEndpoint="ip--172--31--27--40-k8s-coredns--668d6bf9bc--94gb2-" Jul 15 23:14:07.786879 containerd[1979]: 2025-07-15 23:14:07.321 [INFO][5260] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827" Namespace="kube-system" Pod="coredns-668d6bf9bc-94gb2" WorkloadEndpoint="ip--172--31--27--40-k8s-coredns--668d6bf9bc--94gb2-eth0" Jul 15 23:14:07.786879 containerd[1979]: 2025-07-15 23:14:07.426 [INFO][5289] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827" HandleID="k8s-pod-network.4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827" Workload="ip--172--31--27--40-k8s-coredns--668d6bf9bc--94gb2-eth0" Jul 15 23:14:07.786879 containerd[1979]: 2025-07-15 23:14:07.431 [INFO][5289] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827" HandleID="k8s-pod-network.4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827" 
Workload="ip--172--31--27--40-k8s-coredns--668d6bf9bc--94gb2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004de10), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-27-40", "pod":"coredns-668d6bf9bc-94gb2", "timestamp":"2025-07-15 23:14:07.426031934 +0000 UTC"}, Hostname:"ip-172-31-27-40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:14:07.786879 containerd[1979]: 2025-07-15 23:14:07.431 [INFO][5289] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:14:07.786879 containerd[1979]: 2025-07-15 23:14:07.502 [INFO][5289] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:14:07.786879 containerd[1979]: 2025-07-15 23:14:07.502 [INFO][5289] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-40' Jul 15 23:14:07.786879 containerd[1979]: 2025-07-15 23:14:07.557 [INFO][5289] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827" host="ip-172-31-27-40" Jul 15 23:14:07.786879 containerd[1979]: 2025-07-15 23:14:07.571 [INFO][5289] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-27-40" Jul 15 23:14:07.786879 containerd[1979]: 2025-07-15 23:14:07.592 [INFO][5289] ipam/ipam.go 511: Trying affinity for 192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:07.786879 containerd[1979]: 2025-07-15 23:14:07.603 [INFO][5289] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:07.786879 containerd[1979]: 2025-07-15 23:14:07.617 [INFO][5289] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:07.786879 containerd[1979]: 2025-07-15 23:14:07.618 [INFO][5289] ipam/ipam.go 1220: Attempting 
to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827" host="ip-172-31-27-40" Jul 15 23:14:07.786879 containerd[1979]: 2025-07-15 23:14:07.639 [INFO][5289] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827 Jul 15 23:14:07.786879 containerd[1979]: 2025-07-15 23:14:07.652 [INFO][5289] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827" host="ip-172-31-27-40" Jul 15 23:14:07.786879 containerd[1979]: 2025-07-15 23:14:07.680 [INFO][5289] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.70/26] block=192.168.79.64/26 handle="k8s-pod-network.4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827" host="ip-172-31-27-40" Jul 15 23:14:07.786879 containerd[1979]: 2025-07-15 23:14:07.681 [INFO][5289] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.70/26] handle="k8s-pod-network.4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827" host="ip-172-31-27-40" Jul 15 23:14:07.786879 containerd[1979]: 2025-07-15 23:14:07.682 [INFO][5289] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 23:14:07.786879 containerd[1979]: 2025-07-15 23:14:07.682 [INFO][5289] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.70/26] IPv6=[] ContainerID="4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827" HandleID="k8s-pod-network.4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827" Workload="ip--172--31--27--40-k8s-coredns--668d6bf9bc--94gb2-eth0" Jul 15 23:14:07.790257 containerd[1979]: 2025-07-15 23:14:07.703 [INFO][5260] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827" Namespace="kube-system" Pod="coredns-668d6bf9bc-94gb2" WorkloadEndpoint="ip--172--31--27--40-k8s-coredns--668d6bf9bc--94gb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--40-k8s-coredns--668d6bf9bc--94gb2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d017922f-d4f0-435f-a351-9384dd832f62", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 13, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-40", ContainerID:"", Pod:"coredns-668d6bf9bc-94gb2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid0efc4cf3e0", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:07.790257 containerd[1979]: 2025-07-15 23:14:07.705 [INFO][5260] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.70/32] ContainerID="4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827" Namespace="kube-system" Pod="coredns-668d6bf9bc-94gb2" WorkloadEndpoint="ip--172--31--27--40-k8s-coredns--668d6bf9bc--94gb2-eth0" Jul 15 23:14:07.790257 containerd[1979]: 2025-07-15 23:14:07.706 [INFO][5260] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0efc4cf3e0 ContainerID="4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827" Namespace="kube-system" Pod="coredns-668d6bf9bc-94gb2" WorkloadEndpoint="ip--172--31--27--40-k8s-coredns--668d6bf9bc--94gb2-eth0" Jul 15 23:14:07.790257 containerd[1979]: 2025-07-15 23:14:07.735 [INFO][5260] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827" Namespace="kube-system" Pod="coredns-668d6bf9bc-94gb2" WorkloadEndpoint="ip--172--31--27--40-k8s-coredns--668d6bf9bc--94gb2-eth0" Jul 15 23:14:07.790257 containerd[1979]: 2025-07-15 23:14:07.740 [INFO][5260] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827" Namespace="kube-system" Pod="coredns-668d6bf9bc-94gb2" WorkloadEndpoint="ip--172--31--27--40-k8s-coredns--668d6bf9bc--94gb2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--40-k8s-coredns--668d6bf9bc--94gb2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d017922f-d4f0-435f-a351-9384dd832f62", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 13, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-40", ContainerID:"4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827", Pod:"coredns-668d6bf9bc-94gb2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid0efc4cf3e0", MAC:"e6:35:67:72:1e:66", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:07.790257 containerd[1979]: 2025-07-15 23:14:07.779 [INFO][5260] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827" Namespace="kube-system" Pod="coredns-668d6bf9bc-94gb2" WorkloadEndpoint="ip--172--31--27--40-k8s-coredns--668d6bf9bc--94gb2-eth0" Jul 15 23:14:07.854002 containerd[1979]: time="2025-07-15T23:14:07.853907969Z" level=info msg="connecting to shim 4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827" address="unix:///run/containerd/s/981ebd4a542ee495432ed0f26a72190f7e13f1447913895f63fa86b386279266" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:14:07.931767 systemd[1]: Started cri-containerd-4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827.scope - libcontainer container 4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827. Jul 15 23:14:07.938670 containerd[1979]: time="2025-07-15T23:14:07.938550497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b6b6cb6bb-xlx2p,Uid:bdcee783-50df-4b4a-aa3b-c71fef356049,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96\"" Jul 15 23:14:08.036177 containerd[1979]: time="2025-07-15T23:14:08.036113401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-94gb2,Uid:d017922f-d4f0-435f-a351-9384dd832f62,Namespace:kube-system,Attempt:0,} returns sandbox id \"4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827\"" Jul 15 23:14:08.044703 containerd[1979]: time="2025-07-15T23:14:08.044570738Z" level=info msg="CreateContainer within sandbox \"4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:14:08.067227 containerd[1979]: time="2025-07-15T23:14:08.067160066Z" level=info msg="Container 609ae8a144e9fdfba8a7371634a945ccd26c31bf1fcddfd3753237e978d3ad76: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:14:08.079947 containerd[1979]: time="2025-07-15T23:14:08.079793678Z" level=info msg="CreateContainer 
within sandbox \"4166512bc95d913b5809a887e1ea5e8ba8e5175285d2abd0ca930574083bf827\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"609ae8a144e9fdfba8a7371634a945ccd26c31bf1fcddfd3753237e978d3ad76\"" Jul 15 23:14:08.081208 containerd[1979]: time="2025-07-15T23:14:08.081145022Z" level=info msg="StartContainer for \"609ae8a144e9fdfba8a7371634a945ccd26c31bf1fcddfd3753237e978d3ad76\"" Jul 15 23:14:08.083312 containerd[1979]: time="2025-07-15T23:14:08.083211278Z" level=info msg="connecting to shim 609ae8a144e9fdfba8a7371634a945ccd26c31bf1fcddfd3753237e978d3ad76" address="unix:///run/containerd/s/981ebd4a542ee495432ed0f26a72190f7e13f1447913895f63fa86b386279266" protocol=ttrpc version=3 Jul 15 23:14:08.118586 systemd[1]: Started cri-containerd-609ae8a144e9fdfba8a7371634a945ccd26c31bf1fcddfd3753237e978d3ad76.scope - libcontainer container 609ae8a144e9fdfba8a7371634a945ccd26c31bf1fcddfd3753237e978d3ad76. Jul 15 23:14:08.176609 containerd[1979]: time="2025-07-15T23:14:08.176548106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-bwq4x,Uid:113c6c0a-e718-45fb-a6c8-5faeddafbff6,Namespace:calico-system,Attempt:0,}" Jul 15 23:14:08.178379 containerd[1979]: time="2025-07-15T23:14:08.178061474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b6b6cb6bb-jqf7f,Uid:7fa3748b-a119-4d81-ba60-a82a1b8307c6,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:14:08.240441 containerd[1979]: time="2025-07-15T23:14:08.239813474Z" level=info msg="StartContainer for \"609ae8a144e9fdfba8a7371634a945ccd26c31bf1fcddfd3753237e978d3ad76\" returns successfully" Jul 15 23:14:08.614064 systemd-networkd[1808]: cali49d69d38431: Link UP Jul 15 23:14:08.616287 systemd-networkd[1808]: cali49d69d38431: Gained carrier Jul 15 23:14:08.667453 containerd[1979]: 2025-07-15 23:14:08.382 [INFO][5446] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ip--172--31--27--40-k8s-goldmane--768f4c5c69--bwq4x-eth0 goldmane-768f4c5c69- calico-system 113c6c0a-e718-45fb-a6c8-5faeddafbff6 825 0 2025-07-15 23:13:43 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-27-40 goldmane-768f4c5c69-bwq4x eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali49d69d38431 [] [] }} ContainerID="bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79" Namespace="calico-system" Pod="goldmane-768f4c5c69-bwq4x" WorkloadEndpoint="ip--172--31--27--40-k8s-goldmane--768f4c5c69--bwq4x-" Jul 15 23:14:08.667453 containerd[1979]: 2025-07-15 23:14:08.383 [INFO][5446] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79" Namespace="calico-system" Pod="goldmane-768f4c5c69-bwq4x" WorkloadEndpoint="ip--172--31--27--40-k8s-goldmane--768f4c5c69--bwq4x-eth0" Jul 15 23:14:08.667453 containerd[1979]: 2025-07-15 23:14:08.492 [INFO][5474] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79" HandleID="k8s-pod-network.bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79" Workload="ip--172--31--27--40-k8s-goldmane--768f4c5c69--bwq4x-eth0" Jul 15 23:14:08.667453 containerd[1979]: 2025-07-15 23:14:08.492 [INFO][5474] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79" HandleID="k8s-pod-network.bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79" Workload="ip--172--31--27--40-k8s-goldmane--768f4c5c69--bwq4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3020), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-40", 
"pod":"goldmane-768f4c5c69-bwq4x", "timestamp":"2025-07-15 23:14:08.49202002 +0000 UTC"}, Hostname:"ip-172-31-27-40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:14:08.667453 containerd[1979]: 2025-07-15 23:14:08.493 [INFO][5474] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:14:08.667453 containerd[1979]: 2025-07-15 23:14:08.493 [INFO][5474] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:14:08.667453 containerd[1979]: 2025-07-15 23:14:08.493 [INFO][5474] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-40' Jul 15 23:14:08.667453 containerd[1979]: 2025-07-15 23:14:08.531 [INFO][5474] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79" host="ip-172-31-27-40" Jul 15 23:14:08.667453 containerd[1979]: 2025-07-15 23:14:08.543 [INFO][5474] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-27-40" Jul 15 23:14:08.667453 containerd[1979]: 2025-07-15 23:14:08.559 [INFO][5474] ipam/ipam.go 511: Trying affinity for 192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:08.667453 containerd[1979]: 2025-07-15 23:14:08.565 [INFO][5474] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:08.667453 containerd[1979]: 2025-07-15 23:14:08.570 [INFO][5474] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:08.667453 containerd[1979]: 2025-07-15 23:14:08.570 [INFO][5474] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79" host="ip-172-31-27-40" Jul 15 23:14:08.667453 containerd[1979]: 2025-07-15 
23:14:08.574 [INFO][5474] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79 Jul 15 23:14:08.667453 containerd[1979]: 2025-07-15 23:14:08.585 [INFO][5474] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79" host="ip-172-31-27-40" Jul 15 23:14:08.667453 containerd[1979]: 2025-07-15 23:14:08.597 [INFO][5474] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.71/26] block=192.168.79.64/26 handle="k8s-pod-network.bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79" host="ip-172-31-27-40" Jul 15 23:14:08.667453 containerd[1979]: 2025-07-15 23:14:08.598 [INFO][5474] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.71/26] handle="k8s-pod-network.bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79" host="ip-172-31-27-40" Jul 15 23:14:08.667453 containerd[1979]: 2025-07-15 23:14:08.598 [INFO][5474] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 23:14:08.667453 containerd[1979]: 2025-07-15 23:14:08.598 [INFO][5474] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.71/26] IPv6=[] ContainerID="bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79" HandleID="k8s-pod-network.bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79" Workload="ip--172--31--27--40-k8s-goldmane--768f4c5c69--bwq4x-eth0" Jul 15 23:14:08.672906 containerd[1979]: 2025-07-15 23:14:08.606 [INFO][5446] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79" Namespace="calico-system" Pod="goldmane-768f4c5c69-bwq4x" WorkloadEndpoint="ip--172--31--27--40-k8s-goldmane--768f4c5c69--bwq4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--40-k8s-goldmane--768f4c5c69--bwq4x-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"113c6c0a-e718-45fb-a6c8-5faeddafbff6", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 13, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-40", ContainerID:"", Pod:"goldmane-768f4c5c69-bwq4x", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.79.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali49d69d38431", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:08.672906 containerd[1979]: 2025-07-15 23:14:08.607 [INFO][5446] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.71/32] ContainerID="bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79" Namespace="calico-system" Pod="goldmane-768f4c5c69-bwq4x" WorkloadEndpoint="ip--172--31--27--40-k8s-goldmane--768f4c5c69--bwq4x-eth0" Jul 15 23:14:08.672906 containerd[1979]: 2025-07-15 23:14:08.607 [INFO][5446] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49d69d38431 ContainerID="bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79" Namespace="calico-system" Pod="goldmane-768f4c5c69-bwq4x" WorkloadEndpoint="ip--172--31--27--40-k8s-goldmane--768f4c5c69--bwq4x-eth0" Jul 15 23:14:08.672906 containerd[1979]: 2025-07-15 23:14:08.618 [INFO][5446] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79" Namespace="calico-system" Pod="goldmane-768f4c5c69-bwq4x" WorkloadEndpoint="ip--172--31--27--40-k8s-goldmane--768f4c5c69--bwq4x-eth0" Jul 15 23:14:08.672906 containerd[1979]: 2025-07-15 23:14:08.619 [INFO][5446] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79" Namespace="calico-system" Pod="goldmane-768f4c5c69-bwq4x" WorkloadEndpoint="ip--172--31--27--40-k8s-goldmane--768f4c5c69--bwq4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--40-k8s-goldmane--768f4c5c69--bwq4x-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"113c6c0a-e718-45fb-a6c8-5faeddafbff6", ResourceVersion:"825", Generation:0, 
CreationTimestamp:time.Date(2025, time.July, 15, 23, 13, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-40", ContainerID:"bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79", Pod:"goldmane-768f4c5c69-bwq4x", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.79.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali49d69d38431", MAC:"62:80:5a:52:96:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:08.672906 containerd[1979]: 2025-07-15 23:14:08.646 [INFO][5446] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79" Namespace="calico-system" Pod="goldmane-768f4c5c69-bwq4x" WorkloadEndpoint="ip--172--31--27--40-k8s-goldmane--768f4c5c69--bwq4x-eth0" Jul 15 23:14:08.754989 containerd[1979]: time="2025-07-15T23:14:08.753833873Z" level=info msg="connecting to shim bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79" address="unix:///run/containerd/s/8dd137ef3ff5e4475bab2f968ad3ce9651bb65c8602ae235f008aaaa13c2328d" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:14:08.830184 kubelet[3286]: I0715 23:14:08.829861 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-94gb2" 
podStartSLOduration=52.829836581 podStartE2EDuration="52.829836581s" podCreationTimestamp="2025-07-15 23:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:14:08.829115177 +0000 UTC m=+57.920809017" watchObservedRunningTime="2025-07-15 23:14:08.829836581 +0000 UTC m=+57.921530397" Jul 15 23:14:08.883048 systemd-networkd[1808]: cali86eddb7f4c1: Link UP Jul 15 23:14:08.893864 systemd-networkd[1808]: cali86eddb7f4c1: Gained carrier Jul 15 23:14:08.899712 systemd[1]: Started cri-containerd-bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79.scope - libcontainer container bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79. Jul 15 23:14:08.964685 containerd[1979]: 2025-07-15 23:14:08.418 [INFO][5455] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--jqf7f-eth0 calico-apiserver-5b6b6cb6bb- calico-apiserver 7fa3748b-a119-4d81-ba60-a82a1b8307c6 821 0 2025-07-15 23:13:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b6b6cb6bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-27-40 calico-apiserver-5b6b6cb6bb-jqf7f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali86eddb7f4c1 [] [] }} ContainerID="e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835" Namespace="calico-apiserver" Pod="calico-apiserver-5b6b6cb6bb-jqf7f" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--jqf7f-" Jul 15 23:14:08.964685 containerd[1979]: 2025-07-15 23:14:08.418 [INFO][5455] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835" 
Namespace="calico-apiserver" Pod="calico-apiserver-5b6b6cb6bb-jqf7f" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--jqf7f-eth0" Jul 15 23:14:08.964685 containerd[1979]: 2025-07-15 23:14:08.531 [INFO][5480] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835" HandleID="k8s-pod-network.e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835" Workload="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--jqf7f-eth0" Jul 15 23:14:08.964685 containerd[1979]: 2025-07-15 23:14:08.532 [INFO][5480] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835" HandleID="k8s-pod-network.e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835" Workload="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--jqf7f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40000f3060), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-27-40", "pod":"calico-apiserver-5b6b6cb6bb-jqf7f", "timestamp":"2025-07-15 23:14:08.531390268 +0000 UTC"}, Hostname:"ip-172-31-27-40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:14:08.964685 containerd[1979]: 2025-07-15 23:14:08.533 [INFO][5480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:14:08.964685 containerd[1979]: 2025-07-15 23:14:08.598 [INFO][5480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:14:08.964685 containerd[1979]: 2025-07-15 23:14:08.598 [INFO][5480] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-40' Jul 15 23:14:08.964685 containerd[1979]: 2025-07-15 23:14:08.654 [INFO][5480] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835" host="ip-172-31-27-40" Jul 15 23:14:08.964685 containerd[1979]: 2025-07-15 23:14:08.685 [INFO][5480] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-27-40" Jul 15 23:14:08.964685 containerd[1979]: 2025-07-15 23:14:08.700 [INFO][5480] ipam/ipam.go 511: Trying affinity for 192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:08.964685 containerd[1979]: 2025-07-15 23:14:08.707 [INFO][5480] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:08.964685 containerd[1979]: 2025-07-15 23:14:08.715 [INFO][5480] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="ip-172-31-27-40" Jul 15 23:14:08.964685 containerd[1979]: 2025-07-15 23:14:08.735 [INFO][5480] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835" host="ip-172-31-27-40" Jul 15 23:14:08.964685 containerd[1979]: 2025-07-15 23:14:08.769 [INFO][5480] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835 Jul 15 23:14:08.964685 containerd[1979]: 2025-07-15 23:14:08.793 [INFO][5480] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835" host="ip-172-31-27-40" Jul 15 23:14:08.964685 containerd[1979]: 2025-07-15 23:14:08.842 [INFO][5480] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.72/26] block=192.168.79.64/26 
handle="k8s-pod-network.e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835" host="ip-172-31-27-40" Jul 15 23:14:08.964685 containerd[1979]: 2025-07-15 23:14:08.843 [INFO][5480] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.72/26] handle="k8s-pod-network.e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835" host="ip-172-31-27-40" Jul 15 23:14:08.964685 containerd[1979]: 2025-07-15 23:14:08.843 [INFO][5480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:14:08.964685 containerd[1979]: 2025-07-15 23:14:08.843 [INFO][5480] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.72/26] IPv6=[] ContainerID="e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835" HandleID="k8s-pod-network.e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835" Workload="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--jqf7f-eth0" Jul 15 23:14:08.966244 containerd[1979]: 2025-07-15 23:14:08.858 [INFO][5455] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835" Namespace="calico-apiserver" Pod="calico-apiserver-5b6b6cb6bb-jqf7f" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--jqf7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--jqf7f-eth0", GenerateName:"calico-apiserver-5b6b6cb6bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fa3748b-a119-4d81-ba60-a82a1b8307c6", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 13, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b6b6cb6bb", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-40", ContainerID:"", Pod:"calico-apiserver-5b6b6cb6bb-jqf7f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali86eddb7f4c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:08.966244 containerd[1979]: 2025-07-15 23:14:08.861 [INFO][5455] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.72/32] ContainerID="e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835" Namespace="calico-apiserver" Pod="calico-apiserver-5b6b6cb6bb-jqf7f" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--jqf7f-eth0" Jul 15 23:14:08.966244 containerd[1979]: 2025-07-15 23:14:08.862 [INFO][5455] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86eddb7f4c1 ContainerID="e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835" Namespace="calico-apiserver" Pod="calico-apiserver-5b6b6cb6bb-jqf7f" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--jqf7f-eth0" Jul 15 23:14:08.966244 containerd[1979]: 2025-07-15 23:14:08.909 [INFO][5455] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835" Namespace="calico-apiserver" Pod="calico-apiserver-5b6b6cb6bb-jqf7f" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--jqf7f-eth0" Jul 15 23:14:08.966244 containerd[1979]: 2025-07-15 23:14:08.910 
[INFO][5455] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835" Namespace="calico-apiserver" Pod="calico-apiserver-5b6b6cb6bb-jqf7f" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--jqf7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--jqf7f-eth0", GenerateName:"calico-apiserver-5b6b6cb6bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fa3748b-a119-4d81-ba60-a82a1b8307c6", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 13, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b6b6cb6bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-40", ContainerID:"e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835", Pod:"calico-apiserver-5b6b6cb6bb-jqf7f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali86eddb7f4c1", MAC:"62:6e:14:fb:0e:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:14:08.966244 containerd[1979]: 2025-07-15 23:14:08.956 [INFO][5455] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835" Namespace="calico-apiserver" Pod="calico-apiserver-5b6b6cb6bb-jqf7f" WorkloadEndpoint="ip--172--31--27--40-k8s-calico--apiserver--5b6b6cb6bb--jqf7f-eth0" Jul 15 23:14:09.056956 containerd[1979]: time="2025-07-15T23:14:09.056521731Z" level=info msg="connecting to shim e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835" address="unix:///run/containerd/s/bf23d098313e62bb035c6696cbab1de9df4e3aec75830a06bb4bbe4286afe5f2" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:14:09.121593 systemd-networkd[1808]: calid0efc4cf3e0: Gained IPv6LL Jul 15 23:14:09.174613 systemd[1]: Started cri-containerd-e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835.scope - libcontainer container e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835. Jul 15 23:14:09.188417 systemd-networkd[1808]: calibd1115e1084: Gained IPv6LL Jul 15 23:14:09.235928 containerd[1979]: time="2025-07-15T23:14:09.235858755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-bwq4x,Uid:113c6c0a-e718-45fb-a6c8-5faeddafbff6,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79\"" Jul 15 23:14:09.406136 containerd[1979]: time="2025-07-15T23:14:09.405910624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b6b6cb6bb-jqf7f,Uid:7fa3748b-a119-4d81-ba60-a82a1b8307c6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835\"" Jul 15 23:14:09.697585 systemd-networkd[1808]: cali49d69d38431: Gained IPv6LL Jul 15 23:14:10.594723 systemd-networkd[1808]: cali86eddb7f4c1: Gained IPv6LL Jul 15 23:14:11.006285 containerd[1979]: time="2025-07-15T23:14:11.006208876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:11.008158 containerd[1979]: time="2025-07-15T23:14:11.008099932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 15 23:14:11.011173 containerd[1979]: time="2025-07-15T23:14:11.011054716Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:11.015873 containerd[1979]: time="2025-07-15T23:14:11.015639172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:11.016829 containerd[1979]: time="2025-07-15T23:14:11.016678756Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 4.975759224s" Jul 15 23:14:11.016829 containerd[1979]: time="2025-07-15T23:14:11.016734712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 15 23:14:11.019670 containerd[1979]: time="2025-07-15T23:14:11.019556560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 23:14:11.040295 containerd[1979]: time="2025-07-15T23:14:11.038636812Z" level=info msg="CreateContainer within sandbox \"a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 23:14:11.067289 
containerd[1979]: time="2025-07-15T23:14:11.065820209Z" level=info msg="Container 50b8cc30f2d40aece3d573bdbbc1453fd1082cae6d2052a35c940d01dd9370e2: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:14:11.077173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1259411424.mount: Deactivated successfully. Jul 15 23:14:11.096284 containerd[1979]: time="2025-07-15T23:14:11.096204053Z" level=info msg="CreateContainer within sandbox \"a4ff97b647e832dd98a09b0025a5a295c29ebc44d24cef6bf109d5807f661eac\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"50b8cc30f2d40aece3d573bdbbc1453fd1082cae6d2052a35c940d01dd9370e2\"" Jul 15 23:14:11.097650 containerd[1979]: time="2025-07-15T23:14:11.097505837Z" level=info msg="StartContainer for \"50b8cc30f2d40aece3d573bdbbc1453fd1082cae6d2052a35c940d01dd9370e2\"" Jul 15 23:14:11.101753 containerd[1979]: time="2025-07-15T23:14:11.101629577Z" level=info msg="connecting to shim 50b8cc30f2d40aece3d573bdbbc1453fd1082cae6d2052a35c940d01dd9370e2" address="unix:///run/containerd/s/676e065a39ffa2180343b821fef62fb4834f1dac3719c366d55fdb657adfe9a7" protocol=ttrpc version=3 Jul 15 23:14:11.188573 systemd[1]: Started cri-containerd-50b8cc30f2d40aece3d573bdbbc1453fd1082cae6d2052a35c940d01dd9370e2.scope - libcontainer container 50b8cc30f2d40aece3d573bdbbc1453fd1082cae6d2052a35c940d01dd9370e2. 
Jul 15 23:14:11.280516 containerd[1979]: time="2025-07-15T23:14:11.280121382Z" level=info msg="StartContainer for \"50b8cc30f2d40aece3d573bdbbc1453fd1082cae6d2052a35c940d01dd9370e2\" returns successfully" Jul 15 23:14:11.911250 containerd[1979]: time="2025-07-15T23:14:11.911191173Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50b8cc30f2d40aece3d573bdbbc1453fd1082cae6d2052a35c940d01dd9370e2\" id:\"69e1a777708ce11440fd157a8b8a7dbd8325bacb30b7f4b1f28b36be5aa96d08\" pid:5661 exited_at:{seconds:1752621251 nanos:910014801}" Jul 15 23:14:11.941911 kubelet[3286]: I0715 23:14:11.941792 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59d8d4b544-wspv7" podStartSLOduration=24.962901425 podStartE2EDuration="29.941768193s" podCreationTimestamp="2025-07-15 23:13:42 +0000 UTC" firstStartedPulling="2025-07-15 23:14:06.04015416 +0000 UTC m=+55.131847976" lastFinishedPulling="2025-07-15 23:14:11.019020856 +0000 UTC m=+60.110714744" observedRunningTime="2025-07-15 23:14:11.83399912 +0000 UTC m=+60.925693020" watchObservedRunningTime="2025-07-15 23:14:11.941768193 +0000 UTC m=+61.033462009" Jul 15 23:14:12.678526 ntpd[1937]: Listen normally on 6 vxlan.calico 192.168.79.64:123 Jul 15 23:14:12.678648 ntpd[1937]: Listen normally on 7 cali552d2cceccf [fe80::ecee:eeff:feee:eeee%4]:123 Jul 15 23:14:12.678727 ntpd[1937]: Listen normally on 8 vxlan.calico [fe80::646d:6bff:fed5:a113%5]:123 Jul 15 23:14:12.678792 ntpd[1937]: Listen normally on 9 cali9d12322ba68 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 15 23:14:12.678963 ntpd[1937]: Listen normally on 10 cali4685294dd41 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 15 23:14:12.679038 ntpd[1937]: Listen normally on 11 calie78972ce905 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 15 23:14:12.679106 ntpd[1937]: Listen normally on 12 calibd1115e1084 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 15 23:14:12.679359 ntpd[1937]: Listen normally on 13 calid0efc4cf3e0 [fe80::ecee:eeff:feee:eeee%12]:123 Jul 15 23:14:12.679433 ntpd[1937]: Listen normally on 14 cali49d69d38431 [fe80::ecee:eeff:feee:eeee%13]:123 Jul 15 23:14:12.679507 ntpd[1937]: Listen normally on 15 cali86eddb7f4c1 [fe80::ecee:eeff:feee:eeee%14]:123 Jul 15 23:14:13.196524 containerd[1979]: time="2025-07-15T23:14:13.196443283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:13.199237 containerd[1979]: time="2025-07-15T23:14:13.199185475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 15 23:14:13.201767
containerd[1979]: time="2025-07-15T23:14:13.201717259Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:13.207170 containerd[1979]: time="2025-07-15T23:14:13.207098875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:13.209685 containerd[1979]: time="2025-07-15T23:14:13.209583139Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 2.189966255s" Jul 15 23:14:13.209685 containerd[1979]: time="2025-07-15T23:14:13.209634595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 15 23:14:13.211869 containerd[1979]: time="2025-07-15T23:14:13.211522543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 23:14:13.215902 containerd[1979]: time="2025-07-15T23:14:13.215762275Z" level=info msg="CreateContainer within sandbox \"99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 23:14:13.239213 containerd[1979]: time="2025-07-15T23:14:13.236415679Z" level=info msg="Container 7a9ccc868273a8d1dff457409386bad6b754cd6f02a99a22b2df25af359539aa: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:14:13.261107 containerd[1979]: time="2025-07-15T23:14:13.261051799Z" level=info msg="CreateContainer within sandbox 
\"99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7a9ccc868273a8d1dff457409386bad6b754cd6f02a99a22b2df25af359539aa\"" Jul 15 23:14:13.264061 containerd[1979]: time="2025-07-15T23:14:13.263973187Z" level=info msg="StartContainer for \"7a9ccc868273a8d1dff457409386bad6b754cd6f02a99a22b2df25af359539aa\"" Jul 15 23:14:13.271686 containerd[1979]: time="2025-07-15T23:14:13.271441519Z" level=info msg="connecting to shim 7a9ccc868273a8d1dff457409386bad6b754cd6f02a99a22b2df25af359539aa" address="unix:///run/containerd/s/95df0dfb02e283971031ea9b8effb6f2697b54466ba15e93572459cf438dd5f8" protocol=ttrpc version=3 Jul 15 23:14:13.328655 systemd[1]: Started cri-containerd-7a9ccc868273a8d1dff457409386bad6b754cd6f02a99a22b2df25af359539aa.scope - libcontainer container 7a9ccc868273a8d1dff457409386bad6b754cd6f02a99a22b2df25af359539aa. Jul 15 23:14:13.493732 containerd[1979]: time="2025-07-15T23:14:13.493656597Z" level=info msg="StartContainer for \"7a9ccc868273a8d1dff457409386bad6b754cd6f02a99a22b2df25af359539aa\" returns successfully" Jul 15 23:14:15.047093 systemd[1]: Started sshd@7-172.31.27.40:22-139.178.89.65:55290.service - OpenSSH per-connection server daemon (139.178.89.65:55290). Jul 15 23:14:15.332439 sshd[5715]: Accepted publickey for core from 139.178.89.65 port 55290 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4 Jul 15 23:14:15.337994 sshd-session[5715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:14:15.351789 systemd-logind[1944]: New session 8 of user core. Jul 15 23:14:15.359747 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 15 23:14:15.755759 sshd[5720]: Connection closed by 139.178.89.65 port 55290 Jul 15 23:14:15.756444 sshd-session[5715]: pam_unix(sshd:session): session closed for user core Jul 15 23:14:15.765141 systemd[1]: sshd@7-172.31.27.40:22-139.178.89.65:55290.service: Deactivated successfully. Jul 15 23:14:15.773358 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 23:14:15.777666 systemd-logind[1944]: Session 8 logged out. Waiting for processes to exit. Jul 15 23:14:15.782740 systemd-logind[1944]: Removed session 8. Jul 15 23:14:16.044386 containerd[1979]: time="2025-07-15T23:14:16.043945677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:16.047725 containerd[1979]: time="2025-07-15T23:14:16.047653737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 15 23:14:16.048609 containerd[1979]: time="2025-07-15T23:14:16.048566337Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:16.061614 containerd[1979]: time="2025-07-15T23:14:16.061555653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:16.064156 containerd[1979]: time="2025-07-15T23:14:16.063995589Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.852413874s" Jul 15 23:14:16.064156 containerd[1979]: 
time="2025-07-15T23:14:16.064063173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 23:14:16.069282 containerd[1979]: time="2025-07-15T23:14:16.069121617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 23:14:16.073346 containerd[1979]: time="2025-07-15T23:14:16.072997233Z" level=info msg="CreateContainer within sandbox \"a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 23:14:16.115408 containerd[1979]: time="2025-07-15T23:14:16.115355230Z" level=info msg="Container 8b18ab1d1b21561c56d5cf6f59977c34feb33800cd7ea1b00a353851c1159a11: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:14:16.147792 containerd[1979]: time="2025-07-15T23:14:16.147737362Z" level=info msg="CreateContainer within sandbox \"a422cfc38fbcb53e90934217353ee1bc702caefe06020abc08a557a65824fb96\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8b18ab1d1b21561c56d5cf6f59977c34feb33800cd7ea1b00a353851c1159a11\"" Jul 15 23:14:16.150848 containerd[1979]: time="2025-07-15T23:14:16.149168482Z" level=info msg="StartContainer for \"8b18ab1d1b21561c56d5cf6f59977c34feb33800cd7ea1b00a353851c1159a11\"" Jul 15 23:14:16.154354 containerd[1979]: time="2025-07-15T23:14:16.154298986Z" level=info msg="connecting to shim 8b18ab1d1b21561c56d5cf6f59977c34feb33800cd7ea1b00a353851c1159a11" address="unix:///run/containerd/s/a21ca7345081879de324cf553f9f08eb6b6200b8701b876e73a42e91e942d10f" protocol=ttrpc version=3 Jul 15 23:14:16.202593 systemd[1]: Started cri-containerd-8b18ab1d1b21561c56d5cf6f59977c34feb33800cd7ea1b00a353851c1159a11.scope - libcontainer container 8b18ab1d1b21561c56d5cf6f59977c34feb33800cd7ea1b00a353851c1159a11. 
Jul 15 23:14:16.284350 containerd[1979]: time="2025-07-15T23:14:16.284189986Z" level=info msg="StartContainer for \"8b18ab1d1b21561c56d5cf6f59977c34feb33800cd7ea1b00a353851c1159a11\" returns successfully" Jul 15 23:14:17.827104 kubelet[3286]: I0715 23:14:17.827055 3286 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:14:18.807450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3651680299.mount: Deactivated successfully. Jul 15 23:14:19.918539 containerd[1979]: time="2025-07-15T23:14:19.918408425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:19.921330 containerd[1979]: time="2025-07-15T23:14:19.920886101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 15 23:14:19.924672 containerd[1979]: time="2025-07-15T23:14:19.924524249Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:19.932859 containerd[1979]: time="2025-07-15T23:14:19.932758457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:19.936425 containerd[1979]: time="2025-07-15T23:14:19.936303497Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.86673734s" Jul 15 23:14:19.936425 containerd[1979]: time="2025-07-15T23:14:19.936370973Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 15 23:14:19.941787 containerd[1979]: time="2025-07-15T23:14:19.941725625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 23:14:19.945861 containerd[1979]: time="2025-07-15T23:14:19.945717485Z" level=info msg="CreateContainer within sandbox \"bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 23:14:19.966731 containerd[1979]: time="2025-07-15T23:14:19.966677969Z" level=info msg="Container f2c079d693e3062f629e650fa5a36d013b889a7f7eb276d09e2d1d21c8c385f9: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:14:19.984230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1760814465.mount: Deactivated successfully. Jul 15 23:14:20.008781 containerd[1979]: time="2025-07-15T23:14:20.007834489Z" level=info msg="CreateContainer within sandbox \"bc1263cbee5ba57434e01c45cead07afe013606cf272e1436f0efec4e36dbb79\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f2c079d693e3062f629e650fa5a36d013b889a7f7eb276d09e2d1d21c8c385f9\"" Jul 15 23:14:20.010669 containerd[1979]: time="2025-07-15T23:14:20.010610533Z" level=info msg="StartContainer for \"f2c079d693e3062f629e650fa5a36d013b889a7f7eb276d09e2d1d21c8c385f9\"" Jul 15 23:14:20.015094 containerd[1979]: time="2025-07-15T23:14:20.014897041Z" level=info msg="connecting to shim f2c079d693e3062f629e650fa5a36d013b889a7f7eb276d09e2d1d21c8c385f9" address="unix:///run/containerd/s/8dd137ef3ff5e4475bab2f968ad3ce9651bb65c8602ae235f008aaaa13c2328d" protocol=ttrpc version=3 Jul 15 23:14:20.067874 systemd[1]: Started cri-containerd-f2c079d693e3062f629e650fa5a36d013b889a7f7eb276d09e2d1d21c8c385f9.scope - libcontainer container f2c079d693e3062f629e650fa5a36d013b889a7f7eb276d09e2d1d21c8c385f9. 
Jul 15 23:14:20.208003 containerd[1979]: time="2025-07-15T23:14:20.206637602Z" level=info msg="StartContainer for \"f2c079d693e3062f629e650fa5a36d013b889a7f7eb276d09e2d1d21c8c385f9\" returns successfully" Jul 15 23:14:20.350967 containerd[1979]: time="2025-07-15T23:14:20.349708623Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:14:20.351870 containerd[1979]: time="2025-07-15T23:14:20.351818511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 23:14:20.360684 containerd[1979]: time="2025-07-15T23:14:20.360496395Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 418.490654ms" Jul 15 23:14:20.361732 containerd[1979]: time="2025-07-15T23:14:20.361679367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 15 23:14:20.363897 containerd[1979]: time="2025-07-15T23:14:20.363850047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 23:14:20.370072 containerd[1979]: time="2025-07-15T23:14:20.370021863Z" level=info msg="CreateContainer within sandbox \"e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 23:14:20.389666 containerd[1979]: time="2025-07-15T23:14:20.389613411Z" level=info msg="Container ca7c88a6e41b8599c9e01cde5533ad2b7014cb549c8adceda3e8c763c21b14fe: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:14:20.416214 
containerd[1979]: time="2025-07-15T23:14:20.416137971Z" level=info msg="CreateContainer within sandbox \"e67543301cb4744f4e5e1c2577ad9caf91f90096061382f108ba6c738e92d835\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ca7c88a6e41b8599c9e01cde5533ad2b7014cb549c8adceda3e8c763c21b14fe\"" Jul 15 23:14:20.420559 containerd[1979]: time="2025-07-15T23:14:20.420336519Z" level=info msg="StartContainer for \"ca7c88a6e41b8599c9e01cde5533ad2b7014cb549c8adceda3e8c763c21b14fe\"" Jul 15 23:14:20.426106 containerd[1979]: time="2025-07-15T23:14:20.426016851Z" level=info msg="connecting to shim ca7c88a6e41b8599c9e01cde5533ad2b7014cb549c8adceda3e8c763c21b14fe" address="unix:///run/containerd/s/bf23d098313e62bb035c6696cbab1de9df4e3aec75830a06bb4bbe4286afe5f2" protocol=ttrpc version=3 Jul 15 23:14:20.498710 systemd[1]: Started cri-containerd-ca7c88a6e41b8599c9e01cde5533ad2b7014cb549c8adceda3e8c763c21b14fe.scope - libcontainer container ca7c88a6e41b8599c9e01cde5533ad2b7014cb549c8adceda3e8c763c21b14fe. Jul 15 23:14:20.660865 containerd[1979]: time="2025-07-15T23:14:20.660786160Z" level=info msg="StartContainer for \"ca7c88a6e41b8599c9e01cde5533ad2b7014cb549c8adceda3e8c763c21b14fe\" returns successfully" Jul 15 23:14:20.796698 systemd[1]: Started sshd@8-172.31.27.40:22-139.178.89.65:50788.service - OpenSSH per-connection server daemon (139.178.89.65:50788). 
Jul 15 23:14:20.910498 kubelet[3286]: I0715 23:14:20.910160 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b6b6cb6bb-xlx2p" podStartSLOduration=39.794142848999996 podStartE2EDuration="47.909877385s" podCreationTimestamp="2025-07-15 23:13:33 +0000 UTC" firstStartedPulling="2025-07-15 23:14:07.951993245 +0000 UTC m=+57.043687061" lastFinishedPulling="2025-07-15 23:14:16.067727781 +0000 UTC m=+65.159421597" observedRunningTime="2025-07-15 23:14:16.864343705 +0000 UTC m=+65.956037545" watchObservedRunningTime="2025-07-15 23:14:20.909877385 +0000 UTC m=+70.001571225"
Jul 15 23:14:20.913644 kubelet[3286]: I0715 23:14:20.913023 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b6b6cb6bb-jqf7f" podStartSLOduration=35.960584454 podStartE2EDuration="46.912676469s" podCreationTimestamp="2025-07-15 23:13:34 +0000 UTC" firstStartedPulling="2025-07-15 23:14:09.411544684 +0000 UTC m=+58.503238488" lastFinishedPulling="2025-07-15 23:14:20.363636687 +0000 UTC m=+69.455330503" observedRunningTime="2025-07-15 23:14:20.894078701 +0000 UTC m=+69.985772601" watchObservedRunningTime="2025-07-15 23:14:20.912676469 +0000 UTC m=+70.004370297"
Jul 15 23:14:21.028718 sshd[5860]: Accepted publickey for core from 139.178.89.65 port 50788 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:14:21.032621 sshd-session[5860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:14:21.048490 systemd-logind[1944]: New session 9 of user core.
Jul 15 23:14:21.055642 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 15 23:14:21.200713 containerd[1979]: time="2025-07-15T23:14:21.200626083Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2c079d693e3062f629e650fa5a36d013b889a7f7eb276d09e2d1d21c8c385f9\" id:\"d1c00804d1d843793984359fe0dbc26afdc299f7ea9eff0e120606ab030a5e5b\" pid:5876 exit_status:1 exited_at:{seconds:1752621261 nanos:199960191}"
Jul 15 23:14:21.384108 sshd[5890]: Connection closed by 139.178.89.65 port 50788
Jul 15 23:14:21.384998 sshd-session[5860]: pam_unix(sshd:session): session closed for user core
Jul 15 23:14:21.397400 systemd[1]: sshd@8-172.31.27.40:22-139.178.89.65:50788.service: Deactivated successfully.
Jul 15 23:14:21.403130 systemd[1]: session-9.scope: Deactivated successfully.
Jul 15 23:14:21.406042 systemd-logind[1944]: Session 9 logged out. Waiting for processes to exit.
Jul 15 23:14:21.411171 systemd-logind[1944]: Removed session 9.
Jul 15 23:14:21.871976 kubelet[3286]: I0715 23:14:21.871782 3286 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 23:14:22.047696 containerd[1979]: time="2025-07-15T23:14:22.047509131Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2c079d693e3062f629e650fa5a36d013b889a7f7eb276d09e2d1d21c8c385f9\" id:\"922992e2aa7ba13a8e0e71a07ff8b3e5fb951012d1a1df69c56b6e90b7d00e59\" pid:5916 exit_status:1 exited_at:{seconds:1752621262 nanos:47084139}"
Jul 15 23:14:22.917810 containerd[1979]: time="2025-07-15T23:14:22.917633899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:22.920460 containerd[1979]: time="2025-07-15T23:14:22.920334199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Jul 15 23:14:22.924099 containerd[1979]: time="2025-07-15T23:14:22.923982331Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:22.931145 containerd[1979]: time="2025-07-15T23:14:22.930949675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:14:22.933627 containerd[1979]: time="2025-07-15T23:14:22.933253519Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 2.56915038s"
Jul 15 23:14:22.933627 containerd[1979]: time="2025-07-15T23:14:22.933526471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Jul 15 23:14:22.943603 containerd[1979]: time="2025-07-15T23:14:22.943468076Z" level=info msg="CreateContainer within sandbox \"99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 15 23:14:22.968680 containerd[1979]: time="2025-07-15T23:14:22.968612204Z" level=info msg="Container 9b3c4ffa7259dc2e547ffefc1fde6af81ab4056283219797e4aa2e7cc04b3f78: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:14:22.987912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2738759924.mount: Deactivated successfully.
Jul 15 23:14:23.001014 containerd[1979]: time="2025-07-15T23:14:23.000948112Z" level=info msg="CreateContainer within sandbox \"99a3e10e716c296b48b479b64ff4afaf1eee1bcb053d7901f98b8c2dd193956d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9b3c4ffa7259dc2e547ffefc1fde6af81ab4056283219797e4aa2e7cc04b3f78\""
Jul 15 23:14:23.004868 containerd[1979]: time="2025-07-15T23:14:23.004799944Z" level=info msg="StartContainer for \"9b3c4ffa7259dc2e547ffefc1fde6af81ab4056283219797e4aa2e7cc04b3f78\""
Jul 15 23:14:23.011508 containerd[1979]: time="2025-07-15T23:14:23.011338120Z" level=info msg="connecting to shim 9b3c4ffa7259dc2e547ffefc1fde6af81ab4056283219797e4aa2e7cc04b3f78" address="unix:///run/containerd/s/95df0dfb02e283971031ea9b8effb6f2697b54466ba15e93572459cf438dd5f8" protocol=ttrpc version=3
Jul 15 23:14:23.061691 systemd[1]: Started cri-containerd-9b3c4ffa7259dc2e547ffefc1fde6af81ab4056283219797e4aa2e7cc04b3f78.scope - libcontainer container 9b3c4ffa7259dc2e547ffefc1fde6af81ab4056283219797e4aa2e7cc04b3f78.
Jul 15 23:14:23.223049 containerd[1979]: time="2025-07-15T23:14:23.222778205Z" level=info msg="StartContainer for \"9b3c4ffa7259dc2e547ffefc1fde6af81ab4056283219797e4aa2e7cc04b3f78\" returns successfully"
Jul 15 23:14:23.463413 kubelet[3286]: I0715 23:14:23.463364 3286 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 15 23:14:23.463413 kubelet[3286]: I0715 23:14:23.463416 3286 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 15 23:14:23.920880 kubelet[3286]: I0715 23:14:23.919814 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-bwq4x" podStartSLOduration=30.218615226 podStartE2EDuration="40.919788536s" podCreationTimestamp="2025-07-15 23:13:43 +0000 UTC" firstStartedPulling="2025-07-15 23:14:09.238997031 +0000 UTC m=+58.330690847" lastFinishedPulling="2025-07-15 23:14:19.940170329 +0000 UTC m=+69.031864157" observedRunningTime="2025-07-15 23:14:20.970875798 +0000 UTC m=+70.062569638" watchObservedRunningTime="2025-07-15 23:14:23.919788536 +0000 UTC m=+73.011482352"
Jul 15 23:14:26.421155 systemd[1]: Started sshd@9-172.31.27.40:22-139.178.89.65:50794.service - OpenSSH per-connection server daemon (139.178.89.65:50794).
Jul 15 23:14:26.623424 sshd[5971]: Accepted publickey for core from 139.178.89.65 port 50794 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:14:26.626778 sshd-session[5971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:14:26.636410 systemd-logind[1944]: New session 10 of user core.
Jul 15 23:14:26.645538 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 15 23:14:26.923675 sshd[5973]: Connection closed by 139.178.89.65 port 50794
Jul 15 23:14:26.924188 sshd-session[5971]: pam_unix(sshd:session): session closed for user core
Jul 15 23:14:26.934519 systemd[1]: sshd@9-172.31.27.40:22-139.178.89.65:50794.service: Deactivated successfully.
Jul 15 23:14:26.942386 systemd[1]: session-10.scope: Deactivated successfully.
Jul 15 23:14:26.949409 systemd-logind[1944]: Session 10 logged out. Waiting for processes to exit.
Jul 15 23:14:26.969338 containerd[1979]: time="2025-07-15T23:14:26.968752488Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50b8cc30f2d40aece3d573bdbbc1453fd1082cae6d2052a35c940d01dd9370e2\" id:\"3cd9732a9ed4ba1e0b17edde60db98e33e347bfb2baf1e0eb898f4c8daa09af8\" pid:5996 exited_at:{seconds:1752621266 nanos:966077388}"
Jul 15 23:14:26.973483 systemd[1]: Started sshd@10-172.31.27.40:22-139.178.89.65:50808.service - OpenSSH per-connection server daemon (139.178.89.65:50808).
Jul 15 23:14:26.976513 systemd-logind[1944]: Removed session 10.
Jul 15 23:14:27.169836 sshd[6010]: Accepted publickey for core from 139.178.89.65 port 50808 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:14:27.172385 sshd-session[6010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:14:27.184665 systemd-logind[1944]: New session 11 of user core.
Jul 15 23:14:27.192585 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 15 23:14:27.553681 sshd[6012]: Connection closed by 139.178.89.65 port 50808
Jul 15 23:14:27.552868 sshd-session[6010]: pam_unix(sshd:session): session closed for user core
Jul 15 23:14:27.566219 systemd[1]: sshd@10-172.31.27.40:22-139.178.89.65:50808.service: Deactivated successfully.
Jul 15 23:14:27.572692 systemd[1]: session-11.scope: Deactivated successfully.
Jul 15 23:14:27.577811 systemd-logind[1944]: Session 11 logged out. Waiting for processes to exit.
Jul 15 23:14:27.601712 systemd[1]: Started sshd@11-172.31.27.40:22-139.178.89.65:50822.service - OpenSSH per-connection server daemon (139.178.89.65:50822).
Jul 15 23:14:27.609708 systemd-logind[1944]: Removed session 11.
Jul 15 23:14:27.797652 sshd[6022]: Accepted publickey for core from 139.178.89.65 port 50822 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:14:27.800211 sshd-session[6022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:14:27.808169 systemd-logind[1944]: New session 12 of user core.
Jul 15 23:14:27.818560 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 15 23:14:28.074813 sshd[6024]: Connection closed by 139.178.89.65 port 50822
Jul 15 23:14:28.075775 sshd-session[6022]: pam_unix(sshd:session): session closed for user core
Jul 15 23:14:28.083074 systemd-logind[1944]: Session 12 logged out. Waiting for processes to exit.
Jul 15 23:14:28.083251 systemd[1]: sshd@11-172.31.27.40:22-139.178.89.65:50822.service: Deactivated successfully.
Jul 15 23:14:28.087156 systemd[1]: session-12.scope: Deactivated successfully.
Jul 15 23:14:28.093355 systemd-logind[1944]: Removed session 12.
Jul 15 23:14:31.790609 containerd[1979]: time="2025-07-15T23:14:31.790546011Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fd049586726e91d092e6e4a2b7e5b5c6e816b9f53053be9bf6fd875ef30dfa1\" id:\"3c9d03faf5d316ac0d687418788ca7de957fa8ab27632402e03d549ca6f2d7d2\" pid:6048 exited_at:{seconds:1752621271 nanos:789992907}"
Jul 15 23:14:31.828742 kubelet[3286]: I0715 23:14:31.827720 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9g96k" podStartSLOduration=33.488862478 podStartE2EDuration="49.82769674s" podCreationTimestamp="2025-07-15 23:13:42 +0000 UTC" firstStartedPulling="2025-07-15 23:14:06.599681678 +0000 UTC m=+55.691375494" lastFinishedPulling="2025-07-15 23:14:22.93851594 +0000 UTC m=+72.030209756" observedRunningTime="2025-07-15 23:14:23.922710788 +0000 UTC m=+73.014404652" watchObservedRunningTime="2025-07-15 23:14:31.82769674 +0000 UTC m=+80.919390556"
Jul 15 23:14:33.115545 systemd[1]: Started sshd@12-172.31.27.40:22-139.178.89.65:42352.service - OpenSSH per-connection server daemon (139.178.89.65:42352).
Jul 15 23:14:33.310947 sshd[6060]: Accepted publickey for core from 139.178.89.65 port 42352 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:14:33.314639 sshd-session[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:14:33.324161 systemd-logind[1944]: New session 13 of user core.
Jul 15 23:14:33.331535 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 15 23:14:33.649954 sshd[6065]: Connection closed by 139.178.89.65 port 42352
Jul 15 23:14:33.651030 sshd-session[6060]: pam_unix(sshd:session): session closed for user core
Jul 15 23:14:33.658009 systemd[1]: sshd@12-172.31.27.40:22-139.178.89.65:42352.service: Deactivated successfully.
Jul 15 23:14:33.661799 systemd[1]: session-13.scope: Deactivated successfully.
Jul 15 23:14:33.666055 systemd-logind[1944]: Session 13 logged out. Waiting for processes to exit.
Jul 15 23:14:33.668751 systemd-logind[1944]: Removed session 13.
Jul 15 23:14:33.770931 kubelet[3286]: I0715 23:14:33.770866 3286 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 23:14:38.689029 systemd[1]: Started sshd@13-172.31.27.40:22-139.178.89.65:42362.service - OpenSSH per-connection server daemon (139.178.89.65:42362).
Jul 15 23:14:38.885394 sshd[6080]: Accepted publickey for core from 139.178.89.65 port 42362 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:14:38.888609 sshd-session[6080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:14:38.903017 systemd-logind[1944]: New session 14 of user core.
Jul 15 23:14:38.909608 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 15 23:14:39.269562 sshd[6082]: Connection closed by 139.178.89.65 port 42362
Jul 15 23:14:39.271674 sshd-session[6080]: pam_unix(sshd:session): session closed for user core
Jul 15 23:14:39.279956 systemd[1]: sshd@13-172.31.27.40:22-139.178.89.65:42362.service: Deactivated successfully.
Jul 15 23:14:39.288549 systemd[1]: session-14.scope: Deactivated successfully.
Jul 15 23:14:39.291649 systemd-logind[1944]: Session 14 logged out. Waiting for processes to exit.
Jul 15 23:14:39.301453 systemd-logind[1944]: Removed session 14.
Jul 15 23:14:41.847693 containerd[1979]: time="2025-07-15T23:14:41.847616353Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50b8cc30f2d40aece3d573bdbbc1453fd1082cae6d2052a35c940d01dd9370e2\" id:\"b30b63ce538440b9ca645686df6564687b8a4ab1091c6616ba94beb9ea0505a9\" pid:6105 exited_at:{seconds:1752621281 nanos:845281117}"
Jul 15 23:14:44.320360 systemd[1]: Started sshd@14-172.31.27.40:22-139.178.89.65:53274.service - OpenSSH per-connection server daemon (139.178.89.65:53274).
Jul 15 23:14:44.546410 sshd[6124]: Accepted publickey for core from 139.178.89.65 port 53274 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:14:44.549630 sshd-session[6124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:14:44.561679 systemd-logind[1944]: New session 15 of user core.
Jul 15 23:14:44.568629 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 15 23:14:44.891305 sshd[6126]: Connection closed by 139.178.89.65 port 53274
Jul 15 23:14:44.890584 sshd-session[6124]: pam_unix(sshd:session): session closed for user core
Jul 15 23:14:44.898849 systemd[1]: session-15.scope: Deactivated successfully.
Jul 15 23:14:44.902511 systemd[1]: sshd@14-172.31.27.40:22-139.178.89.65:53274.service: Deactivated successfully.
Jul 15 23:14:44.910399 systemd-logind[1944]: Session 15 logged out. Waiting for processes to exit.
Jul 15 23:14:44.916460 systemd-logind[1944]: Removed session 15.
Jul 15 23:14:47.947317 kubelet[3286]: I0715 23:14:47.946427 3286 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 23:14:49.934138 systemd[1]: Started sshd@15-172.31.27.40:22-139.178.89.65:51552.service - OpenSSH per-connection server daemon (139.178.89.65:51552).
Jul 15 23:14:50.165930 sshd[6145]: Accepted publickey for core from 139.178.89.65 port 51552 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:14:50.170739 sshd-session[6145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:14:50.187390 systemd-logind[1944]: New session 16 of user core.
Jul 15 23:14:50.193611 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 15 23:14:50.557369 sshd[6147]: Connection closed by 139.178.89.65 port 51552
Jul 15 23:14:50.558143 sshd-session[6145]: pam_unix(sshd:session): session closed for user core
Jul 15 23:14:50.565255 systemd[1]: sshd@15-172.31.27.40:22-139.178.89.65:51552.service: Deactivated successfully.
Jul 15 23:14:50.570754 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 23:14:50.576644 systemd-logind[1944]: Session 16 logged out. Waiting for processes to exit.
Jul 15 23:14:50.606499 systemd[1]: Started sshd@16-172.31.27.40:22-139.178.89.65:51560.service - OpenSSH per-connection server daemon (139.178.89.65:51560).
Jul 15 23:14:50.610476 systemd-logind[1944]: Removed session 16.
Jul 15 23:14:50.811516 sshd[6159]: Accepted publickey for core from 139.178.89.65 port 51560 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:14:50.815518 sshd-session[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:14:50.830419 systemd-logind[1944]: New session 17 of user core.
Jul 15 23:14:50.837570 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 15 23:14:51.597358 sshd[6161]: Connection closed by 139.178.89.65 port 51560
Jul 15 23:14:51.597929 sshd-session[6159]: pam_unix(sshd:session): session closed for user core
Jul 15 23:14:51.608669 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 23:14:51.612430 systemd[1]: sshd@16-172.31.27.40:22-139.178.89.65:51560.service: Deactivated successfully.
Jul 15 23:14:51.622647 systemd-logind[1944]: Session 17 logged out. Waiting for processes to exit.
Jul 15 23:14:51.650196 systemd-logind[1944]: Removed session 17.
Jul 15 23:14:51.654749 systemd[1]: Started sshd@17-172.31.27.40:22-139.178.89.65:51562.service - OpenSSH per-connection server daemon (139.178.89.65:51562).
Jul 15 23:14:51.871573 sshd[6171]: Accepted publickey for core from 139.178.89.65 port 51562 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:14:51.878189 sshd-session[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:14:51.897923 systemd-logind[1944]: New session 18 of user core.
Jul 15 23:14:51.907632 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 15 23:14:52.490972 containerd[1979]: time="2025-07-15T23:14:52.490865470Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2c079d693e3062f629e650fa5a36d013b889a7f7eb276d09e2d1d21c8c385f9\" id:\"3bb07a8e49d155a92971b2941966ff5fc7b52af5bf387ae1b247d7798539496d\" pid:6188 exited_at:{seconds:1752621292 nanos:490473550}"
Jul 15 23:14:53.642227 sshd[6179]: Connection closed by 139.178.89.65 port 51562
Jul 15 23:14:53.643531 sshd-session[6171]: pam_unix(sshd:session): session closed for user core
Jul 15 23:14:53.656149 systemd[1]: sshd@17-172.31.27.40:22-139.178.89.65:51562.service: Deactivated successfully.
Jul 15 23:14:53.666372 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 23:14:53.691743 systemd-logind[1944]: Session 18 logged out. Waiting for processes to exit.
Jul 15 23:14:53.697885 systemd[1]: Started sshd@18-172.31.27.40:22-139.178.89.65:51576.service - OpenSSH per-connection server daemon (139.178.89.65:51576).
Jul 15 23:14:53.702819 systemd-logind[1944]: Removed session 18.
Jul 15 23:14:53.939965 sshd[6219]: Accepted publickey for core from 139.178.89.65 port 51576 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:14:53.942069 sshd-session[6219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:14:53.957350 systemd-logind[1944]: New session 19 of user core.
Jul 15 23:14:53.964006 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 15 23:14:54.592415 sshd[6221]: Connection closed by 139.178.89.65 port 51576
Jul 15 23:14:54.592884 sshd-session[6219]: pam_unix(sshd:session): session closed for user core
Jul 15 23:14:54.600604 systemd[1]: sshd@18-172.31.27.40:22-139.178.89.65:51576.service: Deactivated successfully.
Jul 15 23:14:54.607771 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 23:14:54.611667 systemd-logind[1944]: Session 19 logged out. Waiting for processes to exit.
Jul 15 23:14:54.636698 systemd[1]: Started sshd@19-172.31.27.40:22-139.178.89.65:51578.service - OpenSSH per-connection server daemon (139.178.89.65:51578).
Jul 15 23:14:54.641433 systemd-logind[1944]: Removed session 19.
Jul 15 23:14:54.836877 sshd[6231]: Accepted publickey for core from 139.178.89.65 port 51578 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:14:54.842016 sshd-session[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:14:54.855620 systemd-logind[1944]: New session 20 of user core.
Jul 15 23:14:54.864338 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 15 23:14:55.196393 sshd[6233]: Connection closed by 139.178.89.65 port 51578
Jul 15 23:14:55.196115 sshd-session[6231]: pam_unix(sshd:session): session closed for user core
Jul 15 23:14:55.205289 systemd-logind[1944]: Session 20 logged out. Waiting for processes to exit.
Jul 15 23:14:55.207775 systemd[1]: sshd@19-172.31.27.40:22-139.178.89.65:51578.service: Deactivated successfully.
Jul 15 23:14:55.217608 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 23:14:55.220805 systemd-logind[1944]: Removed session 20.
Jul 15 23:15:00.241705 systemd[1]: Started sshd@20-172.31.27.40:22-139.178.89.65:40746.service - OpenSSH per-connection server daemon (139.178.89.65:40746).
Jul 15 23:15:00.459038 sshd[6247]: Accepted publickey for core from 139.178.89.65 port 40746 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:15:00.462077 sshd-session[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:15:00.471845 systemd-logind[1944]: New session 21 of user core.
Jul 15 23:15:00.481723 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 15 23:15:00.775238 sshd[6249]: Connection closed by 139.178.89.65 port 40746
Jul 15 23:15:00.776752 sshd-session[6247]: pam_unix(sshd:session): session closed for user core
Jul 15 23:15:00.787987 systemd[1]: sshd@20-172.31.27.40:22-139.178.89.65:40746.service: Deactivated successfully.
Jul 15 23:15:00.796503 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 23:15:00.800158 systemd-logind[1944]: Session 21 logged out. Waiting for processes to exit.
Jul 15 23:15:00.805786 systemd-logind[1944]: Removed session 21.
Jul 15 23:15:01.968719 containerd[1979]: time="2025-07-15T23:15:01.968651025Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2c079d693e3062f629e650fa5a36d013b889a7f7eb276d09e2d1d21c8c385f9\" id:\"c61d2b94c51a8a4f1ee3ac71d4b772467a87ba9f116865325aa2f185e25bfeb7\" pid:6274 exited_at:{seconds:1752621301 nanos:966153249}"
Jul 15 23:15:02.007022 containerd[1979]: time="2025-07-15T23:15:02.006948846Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fd049586726e91d092e6e4a2b7e5b5c6e816b9f53053be9bf6fd875ef30dfa1\" id:\"35d9c260b5ab7810e9fac90302e61f93d4628d96f4840ee38e329ad1094af2d5\" pid:6293 exited_at:{seconds:1752621302 nanos:6550482}"
Jul 15 23:15:05.828453 systemd[1]: Started sshd@21-172.31.27.40:22-139.178.89.65:40762.service - OpenSSH per-connection server daemon (139.178.89.65:40762).
Jul 15 23:15:06.056463 sshd[6312]: Accepted publickey for core from 139.178.89.65 port 40762 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:15:06.061807 sshd-session[6312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:15:06.079281 systemd-logind[1944]: New session 22 of user core.
Jul 15 23:15:06.084573 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 15 23:15:06.394368 sshd[6314]: Connection closed by 139.178.89.65 port 40762
Jul 15 23:15:06.395424 sshd-session[6312]: pam_unix(sshd:session): session closed for user core
Jul 15 23:15:06.405699 systemd[1]: sshd@21-172.31.27.40:22-139.178.89.65:40762.service: Deactivated successfully.
Jul 15 23:15:06.411719 systemd[1]: session-22.scope: Deactivated successfully.
Jul 15 23:15:06.425744 systemd-logind[1944]: Session 22 logged out. Waiting for processes to exit.
Jul 15 23:15:06.432087 systemd-logind[1944]: Removed session 22.
Jul 15 23:15:11.441703 systemd[1]: Started sshd@22-172.31.27.40:22-139.178.89.65:37832.service - OpenSSH per-connection server daemon (139.178.89.65:37832).
Jul 15 23:15:11.658378 sshd[6328]: Accepted publickey for core from 139.178.89.65 port 37832 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:15:11.660668 sshd-session[6328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:15:11.673376 systemd-logind[1944]: New session 23 of user core.
Jul 15 23:15:11.678652 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 15 23:15:11.882744 containerd[1979]: time="2025-07-15T23:15:11.882675163Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50b8cc30f2d40aece3d573bdbbc1453fd1082cae6d2052a35c940d01dd9370e2\" id:\"1764fe1ab5d1ed078c0f7e3905c6f39fbbdcda1de474cc42ba115ace6d5b6d1e\" pid:6350 exited_at:{seconds:1752621311 nanos:879550399}"
Jul 15 23:15:12.026810 sshd[6330]: Connection closed by 139.178.89.65 port 37832
Jul 15 23:15:12.028582 sshd-session[6328]: pam_unix(sshd:session): session closed for user core
Jul 15 23:15:12.036809 systemd[1]: sshd@22-172.31.27.40:22-139.178.89.65:37832.service: Deactivated successfully.
Jul 15 23:15:12.041119 systemd[1]: session-23.scope: Deactivated successfully.
Jul 15 23:15:12.046230 systemd-logind[1944]: Session 23 logged out. Waiting for processes to exit.
Jul 15 23:15:12.051856 systemd-logind[1944]: Removed session 23.
Jul 15 23:15:17.071241 systemd[1]: Started sshd@23-172.31.27.40:22-139.178.89.65:37848.service - OpenSSH per-connection server daemon (139.178.89.65:37848).
Jul 15 23:15:17.302250 sshd[6366]: Accepted publickey for core from 139.178.89.65 port 37848 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:15:17.308679 sshd-session[6366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:15:17.319763 systemd-logind[1944]: New session 24 of user core.
Jul 15 23:15:17.325562 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 15 23:15:17.636407 sshd[6368]: Connection closed by 139.178.89.65 port 37848
Jul 15 23:15:17.634964 sshd-session[6366]: pam_unix(sshd:session): session closed for user core
Jul 15 23:15:17.644595 systemd[1]: sshd@23-172.31.27.40:22-139.178.89.65:37848.service: Deactivated successfully.
Jul 15 23:15:17.650931 systemd[1]: session-24.scope: Deactivated successfully.
Jul 15 23:15:17.656533 systemd-logind[1944]: Session 24 logged out. Waiting for processes to exit.
Jul 15 23:15:17.661126 systemd-logind[1944]: Removed session 24.
Jul 15 23:15:22.045433 containerd[1979]: time="2025-07-15T23:15:22.045354661Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2c079d693e3062f629e650fa5a36d013b889a7f7eb276d09e2d1d21c8c385f9\" id:\"9c99a0b7abc126db15d70a48fed666385bddbb4fe77ab1818ff990cb51ce715c\" pid:6394 exited_at:{seconds:1752621322 nanos:43902049}"
Jul 15 23:15:22.678703 systemd[1]: Started sshd@24-172.31.27.40:22-139.178.89.65:48452.service - OpenSSH per-connection server daemon (139.178.89.65:48452).
Jul 15 23:15:22.890069 sshd[6405]: Accepted publickey for core from 139.178.89.65 port 48452 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:15:22.895331 sshd-session[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:15:22.912684 systemd-logind[1944]: New session 25 of user core.
Jul 15 23:15:22.915956 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 15 23:15:23.247420 sshd[6407]: Connection closed by 139.178.89.65 port 48452
Jul 15 23:15:23.247900 sshd-session[6405]: pam_unix(sshd:session): session closed for user core
Jul 15 23:15:23.257642 systemd[1]: sshd@24-172.31.27.40:22-139.178.89.65:48452.service: Deactivated successfully.
Jul 15 23:15:23.265462 systemd[1]: session-25.scope: Deactivated successfully.
Jul 15 23:15:23.269898 systemd-logind[1944]: Session 25 logged out. Waiting for processes to exit.
Jul 15 23:15:23.274480 systemd-logind[1944]: Removed session 25.
Jul 15 23:15:26.946564 containerd[1979]: time="2025-07-15T23:15:26.946384545Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50b8cc30f2d40aece3d573bdbbc1453fd1082cae6d2052a35c940d01dd9370e2\" id:\"ec287fd4f917910273133ed5304b5af818656d12c4e2b714f05aec24749fe09b\" pid:6435 exited_at:{seconds:1752621326 nanos:945955773}"
Jul 15 23:15:28.299189 systemd[1]: Started sshd@25-172.31.27.40:22-139.178.89.65:48456.service - OpenSSH per-connection server daemon (139.178.89.65:48456).
Jul 15 23:15:28.523510 sshd[6445]: Accepted publickey for core from 139.178.89.65 port 48456 ssh2: RSA SHA256:+evBTQk7qNmgF4EZAZmwrjyij5eL1gJYC/XPiwkQ/E4
Jul 15 23:15:28.527259 sshd-session[6445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:15:28.540191 systemd-logind[1944]: New session 26 of user core.
Jul 15 23:15:28.547195 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 15 23:15:28.826419 sshd[6447]: Connection closed by 139.178.89.65 port 48456
Jul 15 23:15:28.826130 sshd-session[6445]: pam_unix(sshd:session): session closed for user core
Jul 15 23:15:28.838151 systemd[1]: sshd@25-172.31.27.40:22-139.178.89.65:48456.service: Deactivated successfully.
Jul 15 23:15:28.847928 systemd[1]: session-26.scope: Deactivated successfully.
Jul 15 23:15:28.851702 systemd-logind[1944]: Session 26 logged out. Waiting for processes to exit.
Jul 15 23:15:28.857574 systemd-logind[1944]: Removed session 26.
Jul 15 23:15:31.965209 containerd[1979]: time="2025-07-15T23:15:31.963994994Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fd049586726e91d092e6e4a2b7e5b5c6e816b9f53053be9bf6fd875ef30dfa1\" id:\"d0c21e0887a95d109c7f4dcbbfa3714ef3a8ef2350ad40ce8c29486de277f6cc\" pid:6472 exited_at:{seconds:1752621331 nanos:963532166}"
Jul 15 23:15:32.886843 update_engine[1945]: I20250715 23:15:32.886471 1945 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jul 15 23:15:32.886843 update_engine[1945]: I20250715 23:15:32.886550 1945 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jul 15 23:15:32.887968 update_engine[1945]: I20250715 23:15:32.887914 1945 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jul 15 23:15:32.890420 update_engine[1945]: I20250715 23:15:32.890367 1945 omaha_request_params.cc:62] Current group set to alpha
Jul 15 23:15:32.890788 update_engine[1945]: I20250715 23:15:32.890553 1945 update_attempter.cc:499] Already updated boot flags. Skipping.
Jul 15 23:15:32.890788 update_engine[1945]: I20250715 23:15:32.890575 1945 update_attempter.cc:643] Scheduling an action processor start.
Jul 15 23:15:32.890788 update_engine[1945]: I20250715 23:15:32.890614 1945 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 15 23:15:32.890788 update_engine[1945]: I20250715 23:15:32.890677 1945 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jul 15 23:15:32.890998 update_engine[1945]: I20250715 23:15:32.890805 1945 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 15 23:15:32.890998 update_engine[1945]: I20250715 23:15:32.890830 1945 omaha_request_action.cc:272] Request:
Jul 15 23:15:32.890998 update_engine[1945]:
Jul 15 23:15:32.890998 update_engine[1945]:
Jul 15 23:15:32.890998 update_engine[1945]:
Jul 15 23:15:32.890998 update_engine[1945]:
Jul 15 23:15:32.890998 update_engine[1945]:
Jul 15 23:15:32.890998 update_engine[1945]:
Jul 15 23:15:32.890998 update_engine[1945]:
Jul 15 23:15:32.890998 update_engine[1945]:
Jul 15 23:15:32.890998 update_engine[1945]: I20250715 23:15:32.890846 1945 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 15 23:15:32.907901 update_engine[1945]: I20250715 23:15:32.906967 1945 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 15 23:15:32.907901 update_engine[1945]: I20250715 23:15:32.907789 1945 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 15 23:15:32.913799 locksmithd[2015]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jul 15 23:15:32.939507 update_engine[1945]: E20250715 23:15:32.939445 1945 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 15 23:15:32.939799 update_engine[1945]: I20250715 23:15:32.939765 1945 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 15 23:15:41.850711 containerd[1979]: time="2025-07-15T23:15:41.850618547Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50b8cc30f2d40aece3d573bdbbc1453fd1082cae6d2052a35c940d01dd9370e2\" id:\"38e0603658e799534d3a8a5f7df3baaa1e001722d19911bdfef23e6aa0f005dc\" pid:6520 exit_status:1 exited_at:{seconds:1752621341 nanos:849949259}"
Jul 15 23:15:42.891559 update_engine[1945]: I20250715 23:15:42.891309 1945 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 15 23:15:42.892167 update_engine[1945]: I20250715 23:15:42.891664 1945 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 15 23:15:42.892167 update_engine[1945]: I20250715 23:15:42.892023 1945 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 15 23:15:42.893152 update_engine[1945]: E20250715 23:15:42.893033 1945 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 15 23:15:42.893293 update_engine[1945]: I20250715 23:15:42.893133 1945 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jul 15 23:15:43.008200 systemd[1]: cri-containerd-577f74cb451c96167e966a4040cd5bf9f7f1779fae2913a61e8916b34c17588e.scope: Deactivated successfully.
Jul 15 23:15:43.012811 systemd[1]: cri-containerd-577f74cb451c96167e966a4040cd5bf9f7f1779fae2913a61e8916b34c17588e.scope: Consumed 30.029s CPU time, 114.3M memory peak, 944K read from disk.
Jul 15 23:15:43.017523 containerd[1979]: time="2025-07-15T23:15:43.017306049Z" level=info msg="received exit event container_id:\"577f74cb451c96167e966a4040cd5bf9f7f1779fae2913a61e8916b34c17588e\" id:\"577f74cb451c96167e966a4040cd5bf9f7f1779fae2913a61e8916b34c17588e\" pid:3867 exit_status:1 exited_at:{seconds:1752621343 nanos:16640061}"
Jul 15 23:15:43.018711 containerd[1979]: time="2025-07-15T23:15:43.018634737Z" level=info msg="TaskExit event in podsandbox handler container_id:\"577f74cb451c96167e966a4040cd5bf9f7f1779fae2913a61e8916b34c17588e\" id:\"577f74cb451c96167e966a4040cd5bf9f7f1779fae2913a61e8916b34c17588e\" pid:3867 exit_status:1 exited_at:{seconds:1752621343 nanos:16640061}"
Jul 15 23:15:43.058993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-577f74cb451c96167e966a4040cd5bf9f7f1779fae2913a61e8916b34c17588e-rootfs.mount: Deactivated successfully.
Jul 15 23:15:43.232425 kubelet[3286]: I0715 23:15:43.231316 3286 scope.go:117] "RemoveContainer" containerID="577f74cb451c96167e966a4040cd5bf9f7f1779fae2913a61e8916b34c17588e"
Jul 15 23:15:43.249418 containerd[1979]: time="2025-07-15T23:15:43.249342142Z" level=info msg="CreateContainer within sandbox \"2849a106320265e8801265fd772bffeb7e0b58403018ceb351be33a0c2a8511b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 15 23:15:43.268725 containerd[1979]: time="2025-07-15T23:15:43.268659239Z" level=info msg="Container 2d84d0ac89a79ff40764b4c5c8ba3e0cb5c03cbdbcdc7bcfc3d96e85adc990e7: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:15:43.281912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1910800934.mount: Deactivated successfully.
Jul 15 23:15:43.286438 containerd[1979]: time="2025-07-15T23:15:43.286317299Z" level=info msg="CreateContainer within sandbox \"2849a106320265e8801265fd772bffeb7e0b58403018ceb351be33a0c2a8511b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"2d84d0ac89a79ff40764b4c5c8ba3e0cb5c03cbdbcdc7bcfc3d96e85adc990e7\""
Jul 15 23:15:43.287800 containerd[1979]: time="2025-07-15T23:15:43.287401715Z" level=info msg="StartContainer for \"2d84d0ac89a79ff40764b4c5c8ba3e0cb5c03cbdbcdc7bcfc3d96e85adc990e7\""
Jul 15 23:15:43.289065 containerd[1979]: time="2025-07-15T23:15:43.288996971Z" level=info msg="connecting to shim 2d84d0ac89a79ff40764b4c5c8ba3e0cb5c03cbdbcdc7bcfc3d96e85adc990e7" address="unix:///run/containerd/s/b55300d2e855e414d39ecb4d5d510f30262132432a28687904c2f04f563b434e" protocol=ttrpc version=3
Jul 15 23:15:43.338560 systemd[1]: Started cri-containerd-2d84d0ac89a79ff40764b4c5c8ba3e0cb5c03cbdbcdc7bcfc3d96e85adc990e7.scope - libcontainer container 2d84d0ac89a79ff40764b4c5c8ba3e0cb5c03cbdbcdc7bcfc3d96e85adc990e7.
Jul 15 23:15:43.397374 containerd[1979]: time="2025-07-15T23:15:43.397300991Z" level=info msg="StartContainer for \"2d84d0ac89a79ff40764b4c5c8ba3e0cb5c03cbdbcdc7bcfc3d96e85adc990e7\" returns successfully"
Jul 15 23:15:43.683765 systemd[1]: cri-containerd-9ae1c14eb3e5096408729a547f9197eb000f323c29771a963ec805ddce7a8811.scope: Deactivated successfully.
Jul 15 23:15:43.684353 systemd[1]: cri-containerd-9ae1c14eb3e5096408729a547f9197eb000f323c29771a963ec805ddce7a8811.scope: Consumed 5.172s CPU time, 62.6M memory peak, 64K read from disk.
Jul 15 23:15:43.689043 containerd[1979]: time="2025-07-15T23:15:43.688088005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ae1c14eb3e5096408729a547f9197eb000f323c29771a963ec805ddce7a8811\" id:\"9ae1c14eb3e5096408729a547f9197eb000f323c29771a963ec805ddce7a8811\" pid:3102 exit_status:1 exited_at:{seconds:1752621343 nanos:686742589}"
Jul 15 23:15:43.689043 containerd[1979]: time="2025-07-15T23:15:43.688095937Z" level=info msg="received exit event container_id:\"9ae1c14eb3e5096408729a547f9197eb000f323c29771a963ec805ddce7a8811\" id:\"9ae1c14eb3e5096408729a547f9197eb000f323c29771a963ec805ddce7a8811\" pid:3102 exit_status:1 exited_at:{seconds:1752621343 nanos:686742589}"
Jul 15 23:15:43.748966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ae1c14eb3e5096408729a547f9197eb000f323c29771a963ec805ddce7a8811-rootfs.mount: Deactivated successfully.
Jul 15 23:15:44.235055 kubelet[3286]: I0715 23:15:44.234994 3286 scope.go:117] "RemoveContainer" containerID="9ae1c14eb3e5096408729a547f9197eb000f323c29771a963ec805ddce7a8811"
Jul 15 23:15:44.239333 containerd[1979]: time="2025-07-15T23:15:44.239190275Z" level=info msg="CreateContainer within sandbox \"290ef57c7cebce6a1d45a9e3cc90890a96f66988dc740d6bb66bf55651a32551\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 15 23:15:44.260680 containerd[1979]: time="2025-07-15T23:15:44.260611235Z" level=info msg="Container 250b041d376f10d3a1677347b61dd05b5eb65415418c6e10ade41a7abebc6885: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:15:44.280818 containerd[1979]: time="2025-07-15T23:15:44.280743504Z" level=info msg="CreateContainer within sandbox \"290ef57c7cebce6a1d45a9e3cc90890a96f66988dc740d6bb66bf55651a32551\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"250b041d376f10d3a1677347b61dd05b5eb65415418c6e10ade41a7abebc6885\""
Jul 15 23:15:44.281689 containerd[1979]: time="2025-07-15T23:15:44.281632368Z" level=info msg="StartContainer for \"250b041d376f10d3a1677347b61dd05b5eb65415418c6e10ade41a7abebc6885\""
Jul 15 23:15:44.283934 containerd[1979]: time="2025-07-15T23:15:44.283863576Z" level=info msg="connecting to shim 250b041d376f10d3a1677347b61dd05b5eb65415418c6e10ade41a7abebc6885" address="unix:///run/containerd/s/28d6bbddb1c9637289c6b5650e1adad98a3dd694241d8d42d1070de0581037e1" protocol=ttrpc version=3
Jul 15 23:15:44.330589 systemd[1]: Started cri-containerd-250b041d376f10d3a1677347b61dd05b5eb65415418c6e10ade41a7abebc6885.scope - libcontainer container 250b041d376f10d3a1677347b61dd05b5eb65415418c6e10ade41a7abebc6885.
Jul 15 23:15:44.393798 kubelet[3286]: E0715 23:15:44.393714 3286 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-40?timeout=10s\": context deadline exceeded"
Jul 15 23:15:44.419852 containerd[1979]: time="2025-07-15T23:15:44.419348856Z" level=info msg="StartContainer for \"250b041d376f10d3a1677347b61dd05b5eb65415418c6e10ade41a7abebc6885\" returns successfully"
Jul 15 23:15:47.768073 systemd[1]: cri-containerd-4acab7997997e5702dcbdae71d54e087e903764486852c46a00e6ec8cfabc093.scope: Deactivated successfully.
Jul 15 23:15:47.769453 systemd[1]: cri-containerd-4acab7997997e5702dcbdae71d54e087e903764486852c46a00e6ec8cfabc093.scope: Consumed 4.971s CPU time, 22.9M memory peak, 372K read from disk.
Jul 15 23:15:47.774786 containerd[1979]: time="2025-07-15T23:15:47.774561785Z" level=info msg="received exit event container_id:\"4acab7997997e5702dcbdae71d54e087e903764486852c46a00e6ec8cfabc093\" id:\"4acab7997997e5702dcbdae71d54e087e903764486852c46a00e6ec8cfabc093\" pid:3133 exit_status:1 exited_at:{seconds:1752621347 nanos:773455445}"
Jul 15 23:15:47.775335 containerd[1979]: time="2025-07-15T23:15:47.774992273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4acab7997997e5702dcbdae71d54e087e903764486852c46a00e6ec8cfabc093\" id:\"4acab7997997e5702dcbdae71d54e087e903764486852c46a00e6ec8cfabc093\" pid:3133 exit_status:1 exited_at:{seconds:1752621347 nanos:773455445}"
Jul 15 23:15:47.828481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4acab7997997e5702dcbdae71d54e087e903764486852c46a00e6ec8cfabc093-rootfs.mount: Deactivated successfully.
Jul 15 23:15:48.256835 kubelet[3286]: I0715 23:15:48.256793 3286 scope.go:117] "RemoveContainer" containerID="4acab7997997e5702dcbdae71d54e087e903764486852c46a00e6ec8cfabc093"
Jul 15 23:15:48.261007 containerd[1979]: time="2025-07-15T23:15:48.260918415Z" level=info msg="CreateContainer within sandbox \"a652227f22b41d1ccfa08da46f11a6016d160aade0bb8d13b71056aab4e25dbc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 15 23:15:48.279642 containerd[1979]: time="2025-07-15T23:15:48.279469455Z" level=info msg="Container f2d43acec0b0f2ed604eefb2d8bca98037a928946cbdaf4756d7c58aa60322af: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:15:48.298164 containerd[1979]: time="2025-07-15T23:15:48.298104231Z" level=info msg="CreateContainer within sandbox \"a652227f22b41d1ccfa08da46f11a6016d160aade0bb8d13b71056aab4e25dbc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f2d43acec0b0f2ed604eefb2d8bca98037a928946cbdaf4756d7c58aa60322af\""
Jul 15 23:15:48.299132 containerd[1979]: time="2025-07-15T23:15:48.299085399Z" level=info msg="StartContainer for \"f2d43acec0b0f2ed604eefb2d8bca98037a928946cbdaf4756d7c58aa60322af\""
Jul 15 23:15:48.301218 containerd[1979]: time="2025-07-15T23:15:48.301153756Z" level=info msg="connecting to shim f2d43acec0b0f2ed604eefb2d8bca98037a928946cbdaf4756d7c58aa60322af" address="unix:///run/containerd/s/8ffac4e307ec529794471b4c303479be398fdc82c69a20044b1c6fecbee5f0cb" protocol=ttrpc version=3
Jul 15 23:15:48.339592 systemd[1]: Started cri-containerd-f2d43acec0b0f2ed604eefb2d8bca98037a928946cbdaf4756d7c58aa60322af.scope - libcontainer container f2d43acec0b0f2ed604eefb2d8bca98037a928946cbdaf4756d7c58aa60322af.
Jul 15 23:15:48.415244 containerd[1979]: time="2025-07-15T23:15:48.415183264Z" level=info msg="StartContainer for \"f2d43acec0b0f2ed604eefb2d8bca98037a928946cbdaf4756d7c58aa60322af\" returns successfully"
Jul 15 23:15:52.003102 containerd[1979]: time="2025-07-15T23:15:52.002952906Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2c079d693e3062f629e650fa5a36d013b889a7f7eb276d09e2d1d21c8c385f9\" id:\"8903b9f34264abce759d02749d7d3ca3e335bc86e13c17706a9212dd7dc57b67\" pid:6673 exited_at:{seconds:1752621352 nanos:2408190}"
Jul 15 23:15:52.885329 update_engine[1945]: I20250715 23:15:52.884944 1945 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 15 23:15:52.886004 update_engine[1945]: I20250715 23:15:52.885360 1945 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 15 23:15:52.886004 update_engine[1945]: I20250715 23:15:52.885725 1945 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 15 23:15:52.886875 update_engine[1945]: E20250715 23:15:52.886813 1945 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 15 23:15:52.886962 update_engine[1945]: I20250715 23:15:52.886906 1945 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jul 15 23:15:54.394290 kubelet[3286]: E0715 23:15:54.394190 3286 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-40?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 15 23:15:54.842664 systemd[1]: cri-containerd-2d84d0ac89a79ff40764b4c5c8ba3e0cb5c03cbdbcdc7bcfc3d96e85adc990e7.scope: Deactivated successfully.
Jul 15 23:15:54.847814 containerd[1979]: time="2025-07-15T23:15:54.847751604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d84d0ac89a79ff40764b4c5c8ba3e0cb5c03cbdbcdc7bcfc3d96e85adc990e7\" id:\"2d84d0ac89a79ff40764b4c5c8ba3e0cb5c03cbdbcdc7bcfc3d96e85adc990e7\" pid:6553 exit_status:1 exited_at:{seconds:1752621354 nanos:847331976}"
Jul 15 23:15:54.848968 containerd[1979]: time="2025-07-15T23:15:54.847905720Z" level=info msg="received exit event container_id:\"2d84d0ac89a79ff40764b4c5c8ba3e0cb5c03cbdbcdc7bcfc3d96e85adc990e7\" id:\"2d84d0ac89a79ff40764b4c5c8ba3e0cb5c03cbdbcdc7bcfc3d96e85adc990e7\" pid:6553 exit_status:1 exited_at:{seconds:1752621354 nanos:847331976}"
Jul 15 23:15:54.889258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d84d0ac89a79ff40764b4c5c8ba3e0cb5c03cbdbcdc7bcfc3d96e85adc990e7-rootfs.mount: Deactivated successfully.
Jul 15 23:15:55.296760 kubelet[3286]: I0715 23:15:55.296700 3286 scope.go:117] "RemoveContainer" containerID="577f74cb451c96167e966a4040cd5bf9f7f1779fae2913a61e8916b34c17588e"
Jul 15 23:15:55.297301 kubelet[3286]: I0715 23:15:55.297237 3286 scope.go:117] "RemoveContainer" containerID="2d84d0ac89a79ff40764b4c5c8ba3e0cb5c03cbdbcdc7bcfc3d96e85adc990e7"
Jul 15 23:15:55.298299 kubelet[3286]: E0715 23:15:55.297570 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-2f85n_tigera-operator(40a38156-cce1-4cc4-a234-8d982eb5d3a7)\"" pod="tigera-operator/tigera-operator-747864d56d-2f85n" podUID="40a38156-cce1-4cc4-a234-8d982eb5d3a7"
Jul 15 23:15:55.300969 containerd[1979]: time="2025-07-15T23:15:55.300921490Z" level=info msg="RemoveContainer for \"577f74cb451c96167e966a4040cd5bf9f7f1779fae2913a61e8916b34c17588e\""
Jul 15 23:15:55.311682 containerd[1979]: time="2025-07-15T23:15:55.311597962Z" level=info msg="RemoveContainer for \"577f74cb451c96167e966a4040cd5bf9f7f1779fae2913a61e8916b34c17588e\" returns successfully"
Jul 15 23:16:01.769429 containerd[1979]: time="2025-07-15T23:16:01.769336914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2c079d693e3062f629e650fa5a36d013b889a7f7eb276d09e2d1d21c8c385f9\" id:\"64f93a2bfaedb56da4c68d03566d7297322537345995d1550809b84566c3c675\" pid:6708 exited_at:{seconds:1752621361 nanos:768671262}"
Jul 15 23:16:01.806697 containerd[1979]: time="2025-07-15T23:16:01.806592883Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fd049586726e91d092e6e4a2b7e5b5c6e816b9f53053be9bf6fd875ef30dfa1\" id:\"e793454f9d0bebed311dfb34569257061849c421a41bbec498fc29d42b7a3a31\" pid:6728 exited_at:{seconds:1752621361 nanos:806063419}"
Jul 15 23:16:02.889744 update_engine[1945]: I20250715 23:16:02.889612 1945 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 15 23:16:02.890992 update_engine[1945]: I20250715 23:16:02.890563 1945 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 15 23:16:02.890992 update_engine[1945]: I20250715 23:16:02.890933 1945 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 15 23:16:02.892185 update_engine[1945]: E20250715 23:16:02.892146 1945 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 15 23:16:02.892513 update_engine[1945]: I20250715 23:16:02.892416 1945 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 15 23:16:02.892513 update_engine[1945]: I20250715 23:16:02.892449 1945 omaha_request_action.cc:617] Omaha request response:
Jul 15 23:16:02.893973 update_engine[1945]: E20250715 23:16:02.893410 1945 omaha_request_action.cc:636] Omaha request network transfer failed.
Jul 15 23:16:02.893973 update_engine[1945]: I20250715 23:16:02.893471 1945 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 15 23:16:02.893973 update_engine[1945]: I20250715 23:16:02.893486 1945 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 15 23:16:02.893973 update_engine[1945]: I20250715 23:16:02.893501 1945 update_attempter.cc:306] Processing Done.
Jul 15 23:16:02.893973 update_engine[1945]: E20250715 23:16:02.893527 1945 update_attempter.cc:619] Update failed.
Jul 15 23:16:02.893973 update_engine[1945]: I20250715 23:16:02.893544 1945 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 15 23:16:02.893973 update_engine[1945]: I20250715 23:16:02.893558 1945 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 15 23:16:02.893973 update_engine[1945]: I20250715 23:16:02.893572 1945 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 15 23:16:02.893973 update_engine[1945]: I20250715 23:16:02.893687 1945 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 15 23:16:02.893973 update_engine[1945]: I20250715 23:16:02.893730 1945 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 15 23:16:02.893973 update_engine[1945]: I20250715 23:16:02.893746 1945 omaha_request_action.cc:272] Request:
Jul 15 23:16:02.893973 update_engine[1945]:
Jul 15 23:16:02.893973 update_engine[1945]:
Jul 15 23:16:02.893973 update_engine[1945]:
Jul 15 23:16:02.893973 update_engine[1945]:
Jul 15 23:16:02.893973 update_engine[1945]:
Jul 15 23:16:02.893973 update_engine[1945]:
Jul 15 23:16:02.893973 update_engine[1945]: I20250715 23:16:02.893762 1945 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 15 23:16:02.894823 locksmithd[2015]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 15 23:16:02.895311 update_engine[1945]: I20250715 23:16:02.894991 1945 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 15 23:16:02.895715 update_engine[1945]: I20250715 23:16:02.895645 1945 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 15 23:16:02.902397 update_engine[1945]: E20250715 23:16:02.902331 1945 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 15 23:16:02.902496 update_engine[1945]: I20250715 23:16:02.902424 1945 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 15 23:16:02.902496 update_engine[1945]: I20250715 23:16:02.902446 1945 omaha_request_action.cc:617] Omaha request response:
Jul 15 23:16:02.902496 update_engine[1945]: I20250715 23:16:02.902462 1945 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 15 23:16:02.902496 update_engine[1945]: I20250715 23:16:02.902477 1945 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 15 23:16:02.902671 update_engine[1945]: I20250715 23:16:02.902491 1945 update_attempter.cc:306] Processing Done.
Jul 15 23:16:02.902671 update_engine[1945]: I20250715 23:16:02.902508 1945 update_attempter.cc:310] Error event sent.
Jul 15 23:16:02.902671 update_engine[1945]: I20250715 23:16:02.902529 1945 update_check_scheduler.cc:74] Next update check in 49m58s
Jul 15 23:16:02.903080 locksmithd[2015]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0