Aug 19 00:11:57.088987 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Aug 19 00:11:57.089030 kernel: Linux version 6.12.41-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Mon Aug 18 22:15:14 -00 2025 Aug 19 00:11:57.089054 kernel: KASLR disabled due to lack of seed Aug 19 00:11:57.089070 kernel: efi: EFI v2.7 by EDK II Aug 19 00:11:57.089086 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598 Aug 19 00:11:57.089101 kernel: secureboot: Secure boot disabled Aug 19 00:11:57.089118 kernel: ACPI: Early table checksum verification disabled Aug 19 00:11:57.089133 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Aug 19 00:11:57.089148 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Aug 19 00:11:57.089163 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Aug 19 00:11:57.089179 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Aug 19 00:11:57.089198 kernel: ACPI: FACS 0x0000000078630000 000040 Aug 19 00:11:57.089213 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Aug 19 00:11:57.089229 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Aug 19 00:11:57.089247 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Aug 19 00:11:57.089263 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Aug 19 00:11:57.089283 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Aug 19 00:11:57.089299 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Aug 19 00:11:57.089315 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Aug 19 00:11:57.089373 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Aug 19 00:11:57.089392 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Aug 19 00:11:57.089409 kernel: printk: legacy bootconsole [uart0] enabled Aug 19 00:11:57.089426 kernel: ACPI: Use ACPI SPCR as default console: Yes Aug 19 00:11:57.089442 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Aug 19 00:11:57.089459 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff] Aug 19 00:11:57.089475 kernel: Zone ranges: Aug 19 00:11:57.089491 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Aug 19 00:11:57.089512 kernel: DMA32 empty Aug 19 00:11:57.089529 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Aug 19 00:11:57.089544 kernel: Device empty Aug 19 00:11:57.089560 kernel: Movable zone start for each node Aug 19 00:11:57.089576 kernel: Early memory node ranges Aug 19 00:11:57.089591 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Aug 19 00:11:57.089607 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Aug 19 00:11:57.089623 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Aug 19 00:11:57.089639 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Aug 19 00:11:57.089654 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Aug 19 00:11:57.089671 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Aug 19 00:11:57.089686 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Aug 19 00:11:57.089707 kernel: node 0: 
[mem 0x0000000400000000-0x00000004b5ffffff] Aug 19 00:11:57.089729 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Aug 19 00:11:57.089746 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Aug 19 00:11:57.089764 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Aug 19 00:11:57.089781 kernel: psci: probing for conduit method from ACPI. Aug 19 00:11:57.089801 kernel: psci: PSCIv1.0 detected in firmware. Aug 19 00:11:57.089818 kernel: psci: Using standard PSCI v0.2 function IDs Aug 19 00:11:57.089834 kernel: psci: Trusted OS migration not required Aug 19 00:11:57.089851 kernel: psci: SMC Calling Convention v1.1 Aug 19 00:11:57.089868 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Aug 19 00:11:57.089884 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Aug 19 00:11:57.089901 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Aug 19 00:11:57.089918 kernel: pcpu-alloc: [0] 0 [0] 1 Aug 19 00:11:57.089935 kernel: Detected PIPT I-cache on CPU0 Aug 19 00:11:57.089952 kernel: CPU features: detected: GIC system register CPU interface Aug 19 00:11:57.089968 kernel: CPU features: detected: Spectre-v2 Aug 19 00:11:57.089988 kernel: CPU features: detected: Spectre-v3a Aug 19 00:11:57.090005 kernel: CPU features: detected: Spectre-BHB Aug 19 00:11:57.090022 kernel: CPU features: detected: ARM erratum 1742098 Aug 19 00:11:57.090039 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Aug 19 00:11:57.090056 kernel: alternatives: applying boot alternatives Aug 19 00:11:57.090075 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a868ccde263e96e0a18737fdbf04ca04bbf30dfe23963f1ae3994966e8fc9468 Aug 19 00:11:57.090093 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 19 00:11:57.090110 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 19 00:11:57.090127 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 19 00:11:57.090144 kernel: Fallback order for Node 0: 0 Aug 19 00:11:57.090164 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Aug 19 00:11:57.090181 kernel: Policy zone: Normal Aug 19 00:11:57.090197 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 19 00:11:57.090214 kernel: software IO TLB: area num 2. Aug 19 00:11:57.090231 kernel: software IO TLB: mapped [mem 0x000000006c600000-0x0000000070600000] (64MB) Aug 19 00:11:57.090247 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 19 00:11:57.090264 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 19 00:11:57.090282 kernel: rcu: RCU event tracing is enabled. Aug 19 00:11:57.090299 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 19 00:11:57.090317 kernel: Trampoline variant of Tasks RCU enabled. Aug 19 00:11:57.090356 kernel: Tracing variant of Tasks RCU enabled. Aug 19 00:11:57.090374 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Aug 19 00:11:57.090396 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 19 00:11:57.090414 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 19 00:11:57.090431 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 19 00:11:57.090447 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Aug 19 00:11:57.090464 kernel: GICv3: 96 SPIs implemented Aug 19 00:11:57.090481 kernel: GICv3: 0 Extended SPIs implemented Aug 19 00:11:57.090497 kernel: Root IRQ handler: gic_handle_irq Aug 19 00:11:57.090514 kernel: GICv3: GICv3 features: 16 PPIs Aug 19 00:11:57.090530 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Aug 19 00:11:57.090547 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Aug 19 00:11:57.090564 kernel: ITS [mem 0x10080000-0x1009ffff] Aug 19 00:11:57.090581 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Aug 19 00:11:57.090601 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Aug 19 00:11:57.090618 kernel: GICv3: using LPI property table @0x0000000400110000 Aug 19 00:11:57.090635 kernel: ITS: Using hypervisor restricted LPI range [128] Aug 19 00:11:57.090652 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Aug 19 00:11:57.090669 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 19 00:11:57.090686 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Aug 19 00:11:57.090703 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Aug 19 00:11:57.090720 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Aug 19 00:11:57.090737 kernel: Console: colour dummy device 80x25 Aug 19 00:11:57.090754 kernel: printk: legacy console [tty1] enabled Aug 19 00:11:57.090772 kernel: ACPI: Core revision 20240827 Aug 19 00:11:57.090793 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Aug 19 00:11:57.090810 kernel: pid_max: default: 32768 minimum: 301 Aug 19 00:11:57.090827 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Aug 19 00:11:57.090844 kernel: landlock: Up and running. Aug 19 00:11:57.090861 kernel: SELinux: Initializing. Aug 19 00:11:57.090878 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 19 00:11:57.090895 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 19 00:11:57.090912 kernel: rcu: Hierarchical SRCU implementation. Aug 19 00:11:57.090930 kernel: rcu: Max phase no-delay instances is 400. Aug 19 00:11:57.090951 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Aug 19 00:11:57.090968 kernel: Remapping and enabling EFI services. Aug 19 00:11:57.090985 kernel: smp: Bringing up secondary CPUs ... Aug 19 00:11:57.091002 kernel: Detected PIPT I-cache on CPU1 Aug 19 00:11:57.091019 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Aug 19 00:11:57.091036 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Aug 19 00:11:57.091053 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Aug 19 00:11:57.091070 kernel: smp: Brought up 1 node, 2 CPUs Aug 19 00:11:57.091087 kernel: SMP: Total of 2 processors activated. 
Aug 19 00:11:57.091117 kernel: CPU: All CPU(s) started at EL1 Aug 19 00:11:57.091135 kernel: CPU features: detected: 32-bit EL0 Support Aug 19 00:11:57.091156 kernel: CPU features: detected: 32-bit EL1 Support Aug 19 00:11:57.091175 kernel: CPU features: detected: CRC32 instructions Aug 19 00:11:57.091192 kernel: alternatives: applying system-wide alternatives Aug 19 00:11:57.091211 kernel: Memory: 3797096K/4030464K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38912K init, 1038K bss, 212024K reserved, 16384K cma-reserved) Aug 19 00:11:57.091230 kernel: devtmpfs: initialized Aug 19 00:11:57.091251 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 19 00:11:57.091270 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 19 00:11:57.091288 kernel: 17056 pages in range for non-PLT usage Aug 19 00:11:57.091306 kernel: 508576 pages in range for PLT usage Aug 19 00:11:57.091339 kernel: pinctrl core: initialized pinctrl subsystem Aug 19 00:11:57.091386 kernel: SMBIOS 3.0.0 present. Aug 19 00:11:57.091405 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Aug 19 00:11:57.091424 kernel: DMI: Memory slots populated: 0/0 Aug 19 00:11:57.091442 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 19 00:11:57.091465 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Aug 19 00:11:57.091484 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Aug 19 00:11:57.091502 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Aug 19 00:11:57.091520 kernel: audit: initializing netlink subsys (disabled) Aug 19 00:11:57.091538 kernel: audit: type=2000 audit(0.227:1): state=initialized audit_enabled=0 res=1 Aug 19 00:11:57.091556 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 19 00:11:57.091574 kernel: cpuidle: using governor menu Aug 19 00:11:57.091592 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Aug 19 00:11:57.091609 kernel: ASID allocator initialised with 65536 entries Aug 19 00:11:57.091631 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 19 00:11:57.091649 kernel: Serial: AMBA PL011 UART driver Aug 19 00:11:57.091667 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 19 00:11:57.091685 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Aug 19 00:11:57.091703 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Aug 19 00:11:57.091720 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Aug 19 00:11:57.091738 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 19 00:11:57.091756 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Aug 19 00:11:57.091774 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Aug 19 00:11:57.091795 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Aug 19 00:11:57.091813 kernel: ACPI: Added _OSI(Module Device) Aug 19 00:11:57.091831 kernel: ACPI: Added _OSI(Processor Device) Aug 19 00:11:57.091849 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 19 00:11:57.091867 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 19 00:11:57.091885 kernel: ACPI: Interpreter enabled Aug 19 00:11:57.091903 kernel: ACPI: Using GIC for interrupt routing Aug 19 00:11:57.091921 kernel: ACPI: MCFG table detected, 1 entries Aug 19 00:11:57.091938 kernel: ACPI: CPU0 has been hot-added Aug 19 00:11:57.091960 kernel: ACPI: CPU1 has been hot-added Aug 19 00:11:57.091978 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Aug 19 00:11:57.092265 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 19 00:11:57.092546 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Aug 19 00:11:57.092746 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Aug 19 00:11:57.092933 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Aug 19 00:11:57.093118 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Aug 19 00:11:57.093149 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Aug 19 00:11:57.093168 kernel: acpiphp: Slot [1] registered Aug 19 00:11:57.093186 kernel: acpiphp: Slot [2] registered Aug 19 00:11:57.093203 kernel: acpiphp: Slot [3] registered Aug 19 00:11:57.093221 kernel: acpiphp: Slot [4] registered Aug 19 00:11:57.093239 kernel: acpiphp: Slot [5] registered Aug 19 00:11:57.093256 kernel: acpiphp: Slot [6] registered Aug 19 00:11:57.093274 kernel: acpiphp: Slot [7] registered Aug 19 00:11:57.093292 kernel: acpiphp: Slot [8] registered Aug 19 00:11:57.093310 kernel: acpiphp: Slot [9] registered Aug 19 00:11:57.093353 kernel: acpiphp: Slot [10] registered Aug 19 00:11:57.093373 kernel: acpiphp: Slot [11] registered Aug 19 00:11:57.093392 kernel: acpiphp: Slot [12] registered Aug 19 00:11:57.093410 kernel: acpiphp: Slot [13] registered Aug 19 00:11:57.093427 kernel: acpiphp: Slot [14] registered Aug 19 00:11:57.093445 kernel: acpiphp: Slot [15] registered Aug 19 00:11:57.093463 kernel: acpiphp: Slot [16] registered Aug 19 00:11:57.093481 kernel: acpiphp: Slot [17] registered Aug 19 00:11:57.093499 kernel: acpiphp: Slot [18] registered Aug 19 00:11:57.093521 kernel: acpiphp: Slot [19] registered Aug 19 00:11:57.093539 kernel: acpiphp: Slot [20] registered Aug 19 00:11:57.093557 kernel: acpiphp: Slot [21] registered Aug 19 
00:11:57.093574 kernel: acpiphp: Slot [22] registered Aug 19 00:11:57.093592 kernel: acpiphp: Slot [23] registered Aug 19 00:11:57.093610 kernel: acpiphp: Slot [24] registered Aug 19 00:11:57.093627 kernel: acpiphp: Slot [25] registered Aug 19 00:11:57.093645 kernel: acpiphp: Slot [26] registered Aug 19 00:11:57.093662 kernel: acpiphp: Slot [27] registered Aug 19 00:11:57.093680 kernel: acpiphp: Slot [28] registered Aug 19 00:11:57.093702 kernel: acpiphp: Slot [29] registered Aug 19 00:11:57.093720 kernel: acpiphp: Slot [30] registered Aug 19 00:11:57.093737 kernel: acpiphp: Slot [31] registered Aug 19 00:11:57.093755 kernel: PCI host bridge to bus 0000:00 Aug 19 00:11:57.093948 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Aug 19 00:11:57.094121 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Aug 19 00:11:57.094292 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Aug 19 00:11:57.094532 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Aug 19 00:11:57.094774 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Aug 19 00:11:57.094997 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Aug 19 00:11:57.095191 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Aug 19 00:11:57.095805 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Aug 19 00:11:57.096005 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Aug 19 00:11:57.096191 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 19 00:11:57.106376 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Aug 19 00:11:57.106604 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Aug 19 00:11:57.106790 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Aug 19 00:11:57.106973 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Aug 19 00:11:57.107157 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 19 00:11:57.107363 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned Aug 19 00:11:57.107563 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned Aug 19 00:11:57.107767 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned Aug 19 00:11:57.107954 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned Aug 19 00:11:57.108148 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned Aug 19 00:11:57.109100 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Aug 19 00:11:57.109513 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Aug 19 00:11:57.109693 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Aug 19 00:11:57.109719 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Aug 19 00:11:57.110420 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Aug 19 00:11:57.110442 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Aug 19 00:11:57.110461 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Aug 19 00:11:57.110480 kernel: iommu: Default domain type: Translated Aug 19 00:11:57.110498 kernel: iommu: DMA domain TLB invalidation policy: strict mode Aug 19 00:11:57.110517 kernel: efivars: Registered efivars operations Aug 19 00:11:57.110536 kernel: vgaarb: loaded Aug 19 00:11:57.110555 kernel: clocksource: Switched to clocksource arch_sys_counter 
Aug 19 00:11:57.110573 kernel: VFS: Disk quotas dquot_6.6.0 Aug 19 00:11:57.110602 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 19 00:11:57.110621 kernel: pnp: PnP ACPI init Aug 19 00:11:57.110924 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Aug 19 00:11:57.110961 kernel: pnp: PnP ACPI: found 1 devices Aug 19 00:11:57.110980 kernel: NET: Registered PF_INET protocol family Aug 19 00:11:57.110998 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 19 00:11:57.111017 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 19 00:11:57.111035 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 19 00:11:57.111061 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 19 00:11:57.111079 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 19 00:11:57.111097 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 19 00:11:57.111115 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 19 00:11:57.111133 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 19 00:11:57.111151 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 19 00:11:57.111169 kernel: PCI: CLS 0 bytes, default 64 Aug 19 00:11:57.111187 kernel: kvm [1]: HYP mode not available Aug 19 00:11:57.111205 kernel: Initialise system trusted keyrings Aug 19 00:11:57.111227 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 19 00:11:57.111245 kernel: Key type asymmetric registered Aug 19 00:11:57.111262 kernel: Asymmetric key parser 'x509' registered Aug 19 00:11:57.111280 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Aug 19 00:11:57.111298 kernel: io scheduler mq-deadline registered Aug 19 00:11:57.111316 kernel: io scheduler kyber registered Aug 19 00:11:57.112377 kernel: io scheduler bfq registered Aug 19 00:11:57.112624 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Aug 19 00:11:57.112656 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Aug 19 00:11:57.112675 kernel: ACPI: button: Power Button [PWRB] Aug 19 00:11:57.112694 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Aug 19 00:11:57.112712 kernel: ACPI: button: Sleep Button [SLPB] Aug 19 00:11:57.112730 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 19 00:11:57.112749 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Aug 19 00:11:57.112952 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Aug 19 00:11:57.112979 kernel: printk: legacy console [ttyS0] disabled Aug 19 00:11:57.112998 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Aug 19 00:11:57.113020 kernel: printk: legacy console [ttyS0] enabled Aug 19 00:11:57.113039 kernel: printk: legacy bootconsole [uart0] disabled Aug 19 00:11:57.113057 kernel: thunder_xcv, ver 1.0 Aug 19 00:11:57.113075 kernel: thunder_bgx, ver 1.0 Aug 19 00:11:57.113093 kernel: nicpf, ver 1.0 Aug 19 00:11:57.113110 kernel: nicvf, ver 1.0 Aug 19 00:11:57.114540 kernel: rtc-efi rtc-efi.0: registered as rtc0 Aug 19 00:11:57.114734 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-19T00:11:56 UTC (1755562316) Aug 19 00:11:57.114767 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 19 00:11:57.114786 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 
(0,80000003) counters available Aug 19 00:11:57.114805 kernel: NET: Registered PF_INET6 protocol family Aug 19 00:11:57.114823 kernel: watchdog: NMI not fully supported Aug 19 00:11:57.114842 kernel: watchdog: Hard watchdog permanently disabled Aug 19 00:11:57.114860 kernel: Segment Routing with IPv6 Aug 19 00:11:57.114878 kernel: In-situ OAM (IOAM) with IPv6 Aug 19 00:11:57.114896 kernel: NET: Registered PF_PACKET protocol family Aug 19 00:11:57.114914 kernel: Key type dns_resolver registered Aug 19 00:11:57.114936 kernel: registered taskstats version 1 Aug 19 00:11:57.114954 kernel: Loading compiled-in X.509 certificates Aug 19 00:11:57.114972 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.41-flatcar: becc5a61d1c5dcbcd174f4649c64b863031dbaa8' Aug 19 00:11:57.114990 kernel: Demotion targets for Node 0: null Aug 19 00:11:57.115008 kernel: Key type .fscrypt registered Aug 19 00:11:57.115025 kernel: Key type fscrypt-provisioning registered Aug 19 00:11:57.115043 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 19 00:11:57.115062 kernel: ima: Allocated hash algorithm: sha1 Aug 19 00:11:57.115080 kernel: ima: No architecture policies found Aug 19 00:11:57.115101 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Aug 19 00:11:57.115120 kernel: clk: Disabling unused clocks Aug 19 00:11:57.115138 kernel: PM: genpd: Disabling unused power domains Aug 19 00:11:57.115155 kernel: Warning: unable to open an initial console. Aug 19 00:11:57.115174 kernel: Freeing unused kernel memory: 38912K Aug 19 00:11:57.115191 kernel: Run /init as init process Aug 19 00:11:57.115209 kernel: with arguments: Aug 19 00:11:57.115227 kernel: /init Aug 19 00:11:57.115244 kernel: with environment: Aug 19 00:11:57.115262 kernel: HOME=/ Aug 19 00:11:57.115284 kernel: TERM=linux Aug 19 00:11:57.115301 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 19 00:11:57.123368 systemd[1]: Successfully made /usr/ read-only. Aug 19 00:11:57.123433 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 19 00:11:57.123456 systemd[1]: Detected virtualization amazon. Aug 19 00:11:57.123476 systemd[1]: Detected architecture arm64. Aug 19 00:11:57.123495 systemd[1]: Running in initrd. Aug 19 00:11:57.123524 systemd[1]: No hostname configured, using default hostname. Aug 19 00:11:57.123545 systemd[1]: Hostname set to . Aug 19 00:11:57.123564 systemd[1]: Initializing machine ID from VM UUID. Aug 19 00:11:57.123583 systemd[1]: Queued start job for default target initrd.target. Aug 19 00:11:57.123602 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 00:11:57.123622 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 00:11:57.123643 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 19 00:11:57.123663 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 19 00:11:57.123686 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 19 00:11:57.123708 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Aug 19 00:11:57.123730 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 19 00:11:57.123750 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 19 00:11:57.123770 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 00:11:57.123791 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 19 00:11:57.123811 systemd[1]: Reached target paths.target - Path Units. Aug 19 00:11:57.123836 systemd[1]: Reached target slices.target - Slice Units. Aug 19 00:11:57.123856 systemd[1]: Reached target swap.target - Swaps. Aug 19 00:11:57.123876 systemd[1]: Reached target timers.target - Timer Units. Aug 19 00:11:57.123898 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 19 00:11:57.123918 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 19 00:11:57.123938 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 19 00:11:57.123957 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 19 00:11:57.123977 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 19 00:11:57.124001 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 19 00:11:57.124022 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 00:11:57.124041 systemd[1]: Reached target sockets.target - Socket Units. Aug 19 00:11:57.124061 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 19 00:11:57.124081 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 19 00:11:57.124101 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 19 00:11:57.124121 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 19 00:11:57.124141 systemd[1]: Starting systemd-fsck-usr.service... Aug 19 00:11:57.124161 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 19 00:11:57.124184 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 19 00:11:57.124205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 00:11:57.124225 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 19 00:11:57.124345 systemd-journald[257]: Collecting audit messages is disabled. Aug 19 00:11:57.124429 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 00:11:57.124452 systemd[1]: Finished systemd-fsck-usr.service. Aug 19 00:11:57.124472 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 19 00:11:57.124493 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 19 00:11:57.124517 kernel: Bridge firewalling registered Aug 19 00:11:57.124553 systemd-journald[257]: Journal started Aug 19 00:11:57.124591 systemd-journald[257]: Runtime Journal (/run/log/journal/ec299dafadae99e1ba2e7b8385c23046) is 8M, max 75.3M, 67.3M free. Aug 19 00:11:57.081928 systemd-modules-load[259]: Inserted module 'overlay' Aug 19 00:11:57.130877 systemd[1]: Started systemd-journald.service - Journal Service. 
Aug 19 00:11:57.123823 systemd-modules-load[259]: Inserted module 'br_netfilter' Aug 19 00:11:57.135732 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 19 00:11:57.140061 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 00:11:57.142629 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 19 00:11:57.180538 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 00:11:57.192378 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 00:11:57.193172 systemd-tmpfiles[274]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 19 00:11:57.203244 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 19 00:11:57.213421 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 19 00:11:57.219791 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 00:11:57.241691 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 19 00:11:57.243130 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 19 00:11:57.283964 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 00:11:57.295685 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 19 00:11:57.303681 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 19 00:11:57.343289 systemd-resolved[287]: Positive Trust Anchors: Aug 19 00:11:57.343316 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 19 00:11:57.343398 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 19 00:11:57.370091 dracut-cmdline[301]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a868ccde263e96e0a18737fdbf04ca04bbf30dfe23963f1ae3994966e8fc9468 Aug 19 00:11:57.508362 kernel: SCSI subsystem initialized Aug 19 00:11:57.516360 kernel: Loading iSCSI transport class v2.0-870. Aug 19 00:11:57.529524 kernel: iscsi: registered transport (tcp) Aug 19 00:11:57.550486 kernel: iscsi: registered transport (qla4xxx) Aug 19 00:11:57.550560 kernel: QLogic iSCSI HBA Driver Aug 19 00:11:57.584513 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 19 00:11:57.622829 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Aug 19 00:11:57.634945 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 19 00:11:57.638361 kernel: random: crng init done Aug 19 00:11:57.638602 systemd-resolved[287]: Defaulting to hostname 'linux'. Aug 19 00:11:57.645602 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 19 00:11:57.656914 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 19 00:11:57.728900 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 19 00:11:57.736534 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 19 00:11:57.821374 kernel: raid6: neonx8 gen() 6490 MB/s Aug 19 00:11:57.838360 kernel: raid6: neonx4 gen() 6433 MB/s Aug 19 00:11:57.855359 kernel: raid6: neonx2 gen() 5343 MB/s Aug 19 00:11:57.872368 kernel: raid6: neonx1 gen() 3921 MB/s Aug 19 00:11:57.889375 kernel: raid6: int64x8 gen() 3634 MB/s Aug 19 00:11:57.906368 kernel: raid6: int64x4 gen() 3676 MB/s Aug 19 00:11:57.923374 kernel: raid6: int64x2 gen() 3549 MB/s Aug 19 00:11:57.941349 kernel: raid6: int64x1 gen() 2761 MB/s Aug 19 00:11:57.941394 kernel: raid6: using algorithm neonx8 gen() 6490 MB/s Aug 19 00:11:57.959337 kernel: raid6: .... xor() 4761 MB/s, rmw enabled Aug 19 00:11:57.959404 kernel: raid6: using neon recovery algorithm Aug 19 00:11:57.968055 kernel: xor: measuring software checksum speed Aug 19 00:11:57.968116 kernel: 8regs : 12936 MB/sec Aug 19 00:11:57.969317 kernel: 32regs : 12437 MB/sec Aug 19 00:11:57.971557 kernel: arm64_neon : 8778 MB/sec Aug 19 00:11:57.971592 kernel: xor: using function: 8regs (12936 MB/sec) Aug 19 00:11:58.061380 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 19 00:11:58.072442 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 19 00:11:58.079182 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 00:11:58.129461 systemd-udevd[508]: Using default interface naming scheme 'v255'. Aug 19 00:11:58.139458 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 00:11:58.154413 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 19 00:11:58.197791 dracut-pre-trigger[519]: rd.md=0: removing MD RAID activation Aug 19 00:11:58.245115 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 00:11:58.259577 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 19 00:11:58.401491 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 00:11:58.418608 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 19 00:11:58.575255 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Aug 19 00:11:58.575575 kernel: nvme nvme0: pci function 0000:00:04.0 Aug 19 00:11:58.581384 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Aug 19 00:11:58.585871 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Aug 19 00:11:58.593433 kernel: ena 0000:00:05.0: ENA device version: 0.10 Aug 19 00:11:58.593772 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Aug 19 00:11:58.592273 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 00:11:58.606590 kernel: nvme nvme0: 2/0/0 default/read/poll queues Aug 19 00:11:58.606865 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:78:48:2a:22:01 Aug 19 00:11:58.592544 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Aug 19 00:11:58.607289 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 00:11:58.619436 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 00:11:58.628019 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 19 00:11:58.634576 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 19 00:11:58.634612 kernel: GPT:9289727 != 16777215 Aug 19 00:11:58.634636 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 19 00:11:58.634660 kernel: GPT:9289727 != 16777215 Aug 19 00:11:58.637391 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 19 00:11:58.637449 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 19 00:11:58.642849 (udev-worker)[568]: Network interface NamePolicy= disabled on kernel command line. Aug 19 00:11:58.670918 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 00:11:58.693366 kernel: nvme nvme0: using unchecked data buffer Aug 19 00:11:58.813190 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Aug 19 00:11:58.873648 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Aug 19 00:11:58.879783 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 19 00:11:58.940121 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Aug 19 00:11:58.964731 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Aug 19 00:11:58.967808 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Aug 19 00:11:58.977198 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 00:11:58.980298 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 00:11:58.988410 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 19 00:11:58.994360 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 19 00:11:59.016658 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 19 00:11:59.041366 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 19 00:11:59.041443 disk-uuid[688]: Primary Header is updated. Aug 19 00:11:59.041443 disk-uuid[688]: Secondary Entries is updated. Aug 19 00:11:59.041443 disk-uuid[688]: Secondary Header is updated. Aug 19 00:11:59.052823 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 19 00:11:59.070437 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 19 00:12:00.078366 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 19 00:12:00.079850 disk-uuid[694]: The operation has completed successfully. Aug 19 00:12:00.265021 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 19 00:12:00.265238 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 19 00:12:00.355175 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 19 00:12:00.387731 sh[954]: Success Aug 19 00:12:00.420100 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
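The kernel messages above flag a GPT backup-header mismatch ("GPT:9289727 != 16777215", "Alternate GPT header not at the end of the disk"), which is expected when the EBS volume is larger than the original disk image; the disk-uuid entries then report the primary and secondary headers being updated. The following is a minimal Python sketch of where those two numbers come from, assuming 512-byte logical sectors and the /dev/nvme0n1 device named in this log; it is illustrative only and not part of the boot flow.

    # Sketch: compare the alternate (backup) GPT header LBA recorded in the
    # primary header against the actual last LBA of the disk. Assumes 512-byte
    # logical sectors and root access to /dev/nvme0n1 (device name from this log).
    import os
    import struct

    DEV = "/dev/nvme0n1"
    SECTOR = 512

    with open(DEV, "rb") as disk:
        disk.seek(SECTOR)                  # primary GPT header sits at LBA 1
        header = disk.read(92)             # standard 92-byte GPT header
        assert header[0:8] == b"EFI PART"  # GPT signature
        backup_lba = struct.unpack_from("<Q", header, 32)[0]  # AlternateLBA field
        disk.seek(0, os.SEEK_END)
        last_lba = disk.tell() // SECTOR - 1

    print(f"backup header recorded at LBA {backup_lba}, last LBA of disk {last_lba}")
    if backup_lba != last_lba:
        print("alternate GPT header is not at the end of the disk")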
Aug 19 00:12:00.420214 kernel: device-mapper: uevent: version 1.0.3 Aug 19 00:12:00.420259 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 19 00:12:00.434364 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Aug 19 00:12:00.537895 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 19 00:12:00.549688 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 19 00:12:00.556777 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 19 00:12:00.594239 kernel: BTRFS: device fsid 1e492084-d287-4a43-8dc6-ad086a072625 devid 1 transid 45 /dev/mapper/usr (254:0) scanned by mount (977) Aug 19 00:12:00.594301 kernel: BTRFS info (device dm-0): first mount of filesystem 1e492084-d287-4a43-8dc6-ad086a072625 Aug 19 00:12:00.594348 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 19 00:12:00.597154 kernel: BTRFS info (device dm-0): using free-space-tree Aug 19 00:12:00.773153 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 19 00:12:00.776053 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 19 00:12:00.779791 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 19 00:12:00.781073 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 19 00:12:00.793788 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 19 00:12:00.856379 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1012) Aug 19 00:12:00.862027 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33 Aug 19 00:12:00.862100 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Aug 19 00:12:00.863695 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Aug 19 00:12:00.878412 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33 Aug 19 00:12:00.880420 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 19 00:12:00.885432 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 19 00:12:00.985507 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 19 00:12:00.993948 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 19 00:12:01.075845 systemd-networkd[1147]: lo: Link UP Aug 19 00:12:01.077611 systemd-networkd[1147]: lo: Gained carrier Aug 19 00:12:01.081552 systemd-networkd[1147]: Enumeration completed Aug 19 00:12:01.081706 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 19 00:12:01.085663 systemd[1]: Reached target network.target - Network. Aug 19 00:12:01.088139 systemd-networkd[1147]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 00:12:01.088146 systemd-networkd[1147]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 19 00:12:01.102094 systemd-networkd[1147]: eth0: Link UP Aug 19 00:12:01.102107 systemd-networkd[1147]: eth0: Gained carrier Aug 19 00:12:01.102129 systemd-networkd[1147]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 19 00:12:01.122399 systemd-networkd[1147]: eth0: DHCPv4 address 172.31.18.236/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 19 00:12:01.538816 ignition[1065]: Ignition 2.21.0 Aug 19 00:12:01.538846 ignition[1065]: Stage: fetch-offline Aug 19 00:12:01.542723 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 00:12:01.539797 ignition[1065]: no configs at "/usr/lib/ignition/base.d" Aug 19 00:12:01.548936 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 19 00:12:01.539829 ignition[1065]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 19 00:12:01.540608 ignition[1065]: Ignition finished successfully Aug 19 00:12:01.595305 ignition[1160]: Ignition 2.21.0 Aug 19 00:12:01.596236 ignition[1160]: Stage: fetch Aug 19 00:12:01.596838 ignition[1160]: no configs at "/usr/lib/ignition/base.d" Aug 19 00:12:01.596862 ignition[1160]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 19 00:12:01.597136 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 19 00:12:01.610976 ignition[1160]: PUT result: OK Aug 19 00:12:01.619728 ignition[1160]: parsed url from cmdline: "" Aug 19 00:12:01.621044 ignition[1160]: no config URL provided Aug 19 00:12:01.621070 ignition[1160]: reading system config file "/usr/lib/ignition/user.ign" Aug 19 00:12:01.621103 ignition[1160]: no config at "/usr/lib/ignition/user.ign" Aug 19 00:12:01.621153 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 19 00:12:01.630506 ignition[1160]: PUT result: OK Aug 19 00:12:01.630659 ignition[1160]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Aug 19 00:12:01.634962 ignition[1160]: GET result: OK Aug 19 00:12:01.635137 ignition[1160]: parsing config with SHA512: c9a9bb4a1ebab980850f8078979edaae776031bf793720074d33b0fc618c55653acdae0c000fdf4d440c36849c1ce4083737919765df1cf1bfbba0d75cea0801 Aug 19 00:12:01.644477 unknown[1160]: fetched base config from "system" Aug 19 00:12:01.646567 unknown[1160]: fetched base config from "system" Aug 19 00:12:01.646586 unknown[1160]: fetched user config from "aws" Aug 19 00:12:01.649023 ignition[1160]: fetch: fetch complete Aug 19 00:12:01.649035 ignition[1160]: fetch: fetch passed Aug 19 00:12:01.649128 ignition[1160]: Ignition finished successfully Aug 19 00:12:01.656986 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 19 00:12:01.664284 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 19 00:12:01.711120 ignition[1166]: Ignition 2.21.0 Aug 19 00:12:01.711737 ignition[1166]: Stage: kargs Aug 19 00:12:01.712281 ignition[1166]: no configs at "/usr/lib/ignition/base.d" Aug 19 00:12:01.712305 ignition[1166]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 19 00:12:01.712488 ignition[1166]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 19 00:12:01.714505 ignition[1166]: PUT result: OK Aug 19 00:12:01.725506 ignition[1166]: kargs: kargs passed Aug 19 00:12:01.726663 ignition[1166]: Ignition finished successfully Aug 19 00:12:01.732378 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 19 00:12:01.738153 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
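The fetch stage above records Ignition's IMDSv2 round trip: a PUT to http://169.254.169.254/latest/api/token followed by a GET of /2019-10-01/user-data using the returned token. A minimal Python sketch of that flow is below; the endpoint paths are taken from the log lines, while the token TTL and timeout values are assumptions added for illustration.

    # Sketch of the IMDSv2 token + user-data fetch that the Ignition "fetch"
    # stage logs above. The 21600 s TTL and 5 s timeouts are illustrative
    # values, not taken from the log.
    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: PUT /latest/api/token to obtain a session token
    # (logged as 'PUT http://169.254.169.254/latest/api/token: attempt #1', 'PUT result: OK').
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    with urllib.request.urlopen(token_req, timeout=5) as resp:
        token = resp.read().decode()

    # Step 2: GET the user data with the token
    # (logged as 'GET http://169.254.169.254/2019-10-01/user-data: attempt #1', 'GET result: OK').
    data_req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(data_req, timeout=5) as resp:
        user_data = resp.read()

    print(f"{len(user_data)} bytes of user data fetched")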
Aug 19 00:12:01.774319 ignition[1172]: Ignition 2.21.0 Aug 19 00:12:01.774391 ignition[1172]: Stage: disks Aug 19 00:12:01.774949 ignition[1172]: no configs at "/usr/lib/ignition/base.d" Aug 19 00:12:01.774983 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 19 00:12:01.775132 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 19 00:12:01.796361 ignition[1172]: PUT result: OK Aug 19 00:12:01.802509 ignition[1172]: disks: disks passed Aug 19 00:12:01.802603 ignition[1172]: Ignition finished successfully Aug 19 00:12:01.812431 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 19 00:12:01.817067 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 19 00:12:01.821897 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 19 00:12:01.826909 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 19 00:12:01.829127 systemd[1]: Reached target sysinit.target - System Initialization. Aug 19 00:12:01.833258 systemd[1]: Reached target basic.target - Basic System. Aug 19 00:12:01.840436 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 19 00:12:01.906106 systemd-fsck[1180]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 19 00:12:01.912760 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 19 00:12:01.920480 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 19 00:12:02.055760 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 593a9299-85f8-44ab-a00f-cf95b7233713 r/w with ordered data mode. Quota mode: none. Aug 19 00:12:02.056968 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 19 00:12:02.068644 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 19 00:12:02.083320 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 19 00:12:02.087499 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 19 00:12:02.095971 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 19 00:12:02.096073 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 19 00:12:02.096122 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 00:12:02.126789 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 19 00:12:02.133134 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 19 00:12:02.147394 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1199) Aug 19 00:12:02.147461 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33 Aug 19 00:12:02.150434 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Aug 19 00:12:02.150490 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Aug 19 00:12:02.158978 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 19 00:12:02.502912 initrd-setup-root[1223]: cut: /sysroot/etc/passwd: No such file or directory Aug 19 00:12:02.524734 initrd-setup-root[1230]: cut: /sysroot/etc/group: No such file or directory Aug 19 00:12:02.533641 initrd-setup-root[1237]: cut: /sysroot/etc/shadow: No such file or directory Aug 19 00:12:02.542249 initrd-setup-root[1244]: cut: /sysroot/etc/gshadow: No such file or directory Aug 19 00:12:02.832665 systemd-networkd[1147]: eth0: Gained IPv6LL Aug 19 00:12:02.884627 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 19 00:12:02.891459 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 19 00:12:02.899136 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 19 00:12:02.931460 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 19 00:12:02.934443 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33 Aug 19 00:12:02.963427 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 19 00:12:02.980409 ignition[1312]: INFO : Ignition 2.21.0 Aug 19 00:12:02.980409 ignition[1312]: INFO : Stage: mount Aug 19 00:12:02.984015 ignition[1312]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 00:12:02.988561 ignition[1312]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 19 00:12:02.988561 ignition[1312]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 19 00:12:02.994492 ignition[1312]: INFO : PUT result: OK Aug 19 00:12:02.999986 ignition[1312]: INFO : mount: mount passed Aug 19 00:12:03.001852 ignition[1312]: INFO : Ignition finished successfully Aug 19 00:12:03.005107 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 19 00:12:03.011310 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 19 00:12:03.060355 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 19 00:12:03.103439 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1324) Aug 19 00:12:03.103505 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33 Aug 19 00:12:03.107062 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Aug 19 00:12:03.107110 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Aug 19 00:12:03.116494 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 19 00:12:03.162161 ignition[1341]: INFO : Ignition 2.21.0 Aug 19 00:12:03.162161 ignition[1341]: INFO : Stage: files Aug 19 00:12:03.165741 ignition[1341]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 00:12:03.165741 ignition[1341]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 19 00:12:03.165741 ignition[1341]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 19 00:12:03.180714 ignition[1341]: INFO : PUT result: OK Aug 19 00:12:03.187701 ignition[1341]: DEBUG : files: compiled without relabeling support, skipping Aug 19 00:12:03.190640 ignition[1341]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 19 00:12:03.190640 ignition[1341]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 19 00:12:03.227755 ignition[1341]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 19 00:12:03.232592 ignition[1341]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 19 00:12:03.236770 unknown[1341]: wrote ssh authorized keys file for user: core Aug 19 00:12:03.239345 ignition[1341]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 19 00:12:03.245378 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Aug 19 00:12:03.245378 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Aug 19 00:12:03.362229 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 19 00:12:07.017646 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Aug 19 00:12:07.022519 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 19 00:12:07.022519 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 19 00:12:07.022519 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 19 00:12:07.022519 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 19 00:12:07.022519 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 00:12:07.022519 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 00:12:07.044838 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 00:12:07.044838 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 00:12:07.056289 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 19 00:12:07.060463 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 19 00:12:07.064180 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Aug 19 00:12:07.064180 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Aug 19 00:12:07.064180 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Aug 19 00:12:07.064180 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Aug 19 00:12:07.844435 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 19 00:12:08.235399 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Aug 19 00:12:08.235399 ignition[1341]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 19 00:12:08.243299 ignition[1341]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 00:12:08.248252 ignition[1341]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 00:12:08.248252 ignition[1341]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 19 00:12:08.248252 ignition[1341]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 19 00:12:08.248252 ignition[1341]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 19 00:12:08.248252 ignition[1341]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 19 00:12:08.248252 ignition[1341]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 19 00:12:08.271905 ignition[1341]: INFO : files: files passed Aug 19 00:12:08.271905 ignition[1341]: INFO : Ignition finished successfully Aug 19 00:12:08.262870 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 19 00:12:08.276211 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 19 00:12:08.291721 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 19 00:12:08.305881 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 19 00:12:08.306062 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 19 00:12:08.329099 initrd-setup-root-after-ignition[1371]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 00:12:08.329099 initrd-setup-root-after-ignition[1371]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 19 00:12:08.337241 initrd-setup-root-after-ignition[1375]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 00:12:08.343374 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 00:12:08.346758 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 19 00:12:08.352571 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Aug 19 00:12:08.438515 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 19 00:12:08.438930 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 19 00:12:08.449447 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 19 00:12:08.451941 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 19 00:12:08.457030 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 19 00:12:08.462127 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 19 00:12:08.522495 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 00:12:08.531562 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 19 00:12:08.585200 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 19 00:12:08.590198 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 00:12:08.590664 systemd[1]: Stopped target timers.target - Timer Units. Aug 19 00:12:08.595561 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 19 00:12:08.595864 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 00:12:08.606640 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 19 00:12:08.609071 systemd[1]: Stopped target basic.target - Basic System. Aug 19 00:12:08.615105 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 19 00:12:08.618125 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 00:12:08.625905 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 19 00:12:08.630738 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 19 00:12:08.634488 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 19 00:12:08.643942 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 00:12:08.646761 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 19 00:12:08.649521 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 19 00:12:08.656251 systemd[1]: Stopped target swap.target - Swaps. Aug 19 00:12:08.659102 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 19 00:12:08.659437 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 19 00:12:08.667148 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 19 00:12:08.672066 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 00:12:08.676675 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 19 00:12:08.678595 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 00:12:08.681844 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 19 00:12:08.682148 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 19 00:12:08.691566 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 19 00:12:08.691880 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 00:12:08.701234 systemd[1]: ignition-files.service: Deactivated successfully. Aug 19 00:12:08.701569 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Aug 19 00:12:08.711969 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 19 00:12:08.723717 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 19 00:12:08.733231 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 19 00:12:08.733562 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 00:12:08.741950 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 19 00:12:08.742188 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 00:12:08.760755 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 19 00:12:08.763615 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 19 00:12:08.785785 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 19 00:12:08.788470 ignition[1395]: INFO : Ignition 2.21.0 Aug 19 00:12:08.788470 ignition[1395]: INFO : Stage: umount Aug 19 00:12:08.788470 ignition[1395]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 00:12:08.788470 ignition[1395]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 19 00:12:08.788470 ignition[1395]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 19 00:12:08.801643 ignition[1395]: INFO : PUT result: OK Aug 19 00:12:08.809022 ignition[1395]: INFO : umount: umount passed Aug 19 00:12:08.809022 ignition[1395]: INFO : Ignition finished successfully Aug 19 00:12:08.811092 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 19 00:12:08.813050 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 19 00:12:08.820613 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 19 00:12:08.820790 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 19 00:12:08.825057 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 19 00:12:08.825202 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 19 00:12:08.830031 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 19 00:12:08.830128 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 19 00:12:08.832690 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 19 00:12:08.832769 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 19 00:12:08.835354 systemd[1]: Stopped target network.target - Network. Aug 19 00:12:08.840931 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 19 00:12:08.841025 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 00:12:08.843083 systemd[1]: Stopped target paths.target - Path Units. Aug 19 00:12:08.849258 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 19 00:12:08.853309 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 00:12:08.856215 systemd[1]: Stopped target slices.target - Slice Units. Aug 19 00:12:08.858697 systemd[1]: Stopped target sockets.target - Socket Units. Aug 19 00:12:08.864343 systemd[1]: iscsid.socket: Deactivated successfully. Aug 19 00:12:08.864425 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 19 00:12:08.868619 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 19 00:12:08.868689 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 19 00:12:08.871487 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 19 00:12:08.871580 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Aug 19 00:12:08.875045 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 19 00:12:08.875174 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 19 00:12:08.882543 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 19 00:12:08.882631 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 19 00:12:08.887409 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 19 00:12:08.889875 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 19 00:12:08.909248 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 19 00:12:08.909490 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 19 00:12:08.927754 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 19 00:12:08.928212 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 19 00:12:08.928460 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 19 00:12:08.942877 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 19 00:12:08.944121 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 19 00:12:08.956463 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 19 00:12:08.956542 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 19 00:12:08.970053 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 19 00:12:08.976459 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 19 00:12:08.978829 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 19 00:12:08.985700 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 19 00:12:08.985817 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 19 00:12:08.991114 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 19 00:12:08.991206 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 19 00:12:08.998441 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 19 00:12:08.998557 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 00:12:09.008637 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 00:12:09.015903 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 19 00:12:09.016061 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 19 00:12:09.030898 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 19 00:12:09.033514 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 00:12:09.039859 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 19 00:12:09.040206 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 19 00:12:09.047984 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 19 00:12:09.048276 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 00:12:09.051818 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 19 00:12:09.051913 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 19 00:12:09.052877 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Aug 19 00:12:09.052953 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 19 00:12:09.065477 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 19 00:12:09.065927 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 19 00:12:09.076959 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 19 00:12:09.079818 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 19 00:12:09.079931 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 00:12:09.093044 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 19 00:12:09.093149 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 00:12:09.098753 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 19 00:12:09.098866 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 19 00:12:09.108901 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 19 00:12:09.109996 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 00:12:09.116517 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 00:12:09.118710 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 00:12:09.126919 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Aug 19 00:12:09.127036 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Aug 19 00:12:09.127116 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 19 00:12:09.127199 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 19 00:12:09.129010 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 19 00:12:09.131634 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 19 00:12:09.151275 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 19 00:12:09.151715 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 19 00:12:09.163105 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 19 00:12:09.167796 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 19 00:12:09.213593 systemd[1]: Switching root. Aug 19 00:12:09.273637 systemd-journald[257]: Journal stopped Aug 19 00:12:11.814736 systemd-journald[257]: Received SIGTERM from PID 1 (systemd). 
Aug 19 00:12:11.814858 kernel: SELinux: policy capability network_peer_controls=1 Aug 19 00:12:11.814899 kernel: SELinux: policy capability open_perms=1 Aug 19 00:12:11.814940 kernel: SELinux: policy capability extended_socket_class=1 Aug 19 00:12:11.814971 kernel: SELinux: policy capability always_check_network=0 Aug 19 00:12:11.814999 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 19 00:12:11.815028 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 19 00:12:11.815055 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 19 00:12:11.815083 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 19 00:12:11.815110 kernel: SELinux: policy capability userspace_initial_context=0 Aug 19 00:12:11.815138 kernel: audit: type=1403 audit(1755562329.802:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 19 00:12:11.815182 systemd[1]: Successfully loaded SELinux policy in 103.201ms. Aug 19 00:12:11.815226 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.800ms. Aug 19 00:12:11.815259 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 19 00:12:11.815290 systemd[1]: Detected virtualization amazon. Aug 19 00:12:11.815320 systemd[1]: Detected architecture arm64. Aug 19 00:12:11.815375 systemd[1]: Detected first boot. Aug 19 00:12:11.815406 systemd[1]: Initializing machine ID from VM UUID. Aug 19 00:12:11.815437 zram_generator::config[1439]: No configuration found. Aug 19 00:12:11.815488 kernel: NET: Registered PF_VSOCK protocol family Aug 19 00:12:11.815522 systemd[1]: Populated /etc with preset unit settings. Aug 19 00:12:11.815555 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 19 00:12:11.815585 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 19 00:12:11.815617 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 19 00:12:11.815645 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 19 00:12:11.815675 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 19 00:12:11.815705 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 19 00:12:11.815733 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 19 00:12:11.815764 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 19 00:12:11.815795 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 19 00:12:11.815826 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 19 00:12:11.815854 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 19 00:12:11.815884 systemd[1]: Created slice user.slice - User and Session Slice. Aug 19 00:12:11.815914 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 00:12:11.815942 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 00:12:11.815972 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
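The kernel lines above enumerate the policy capabilities compiled into the loaded SELinux policy. As a rough illustration, and assuming the conventional selinuxfs mount at /sys/fs/selinux, the same flags can be read back from the policy_capabilities directory once the system is up:

    import os

    # Minimal sketch: after the policy load logged above, the capability flags
    # (network_peer_controls, open_perms, ...) are exposed as one-byte files
    # under selinuxfs. Assumes selinuxfs is mounted at /sys/fs/selinux.
    capdir = "/sys/fs/selinux/policy_capabilities"
    if os.path.isdir(capdir):
        for name in sorted(os.listdir(capdir)):
            with open(os.path.join(capdir, name)) as f:
                print(f"{name}={f.read().strip()}")
    else:
        print("selinuxfs not mounted or SELinux disabled")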
Aug 19 00:12:11.816002 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 19 00:12:11.816035 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 19 00:12:11.816064 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 19 00:12:11.816091 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 19 00:12:11.816122 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 00:12:11.816153 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 19 00:12:11.816197 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 19 00:12:11.816228 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 19 00:12:11.816258 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 19 00:12:11.816291 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 19 00:12:11.818359 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 00:12:11.818406 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 19 00:12:11.818437 systemd[1]: Reached target slices.target - Slice Units. Aug 19 00:12:11.818470 systemd[1]: Reached target swap.target - Swaps. Aug 19 00:12:11.818499 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 19 00:12:11.818529 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 19 00:12:11.818556 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 19 00:12:11.818584 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 19 00:12:11.818618 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 19 00:12:11.818650 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 00:12:11.818678 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 19 00:12:11.818708 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 19 00:12:11.818738 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 19 00:12:11.818770 systemd[1]: Mounting media.mount - External Media Directory... Aug 19 00:12:11.818797 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 19 00:12:11.818825 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 19 00:12:11.818852 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 19 00:12:11.818885 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 19 00:12:11.818914 systemd[1]: Reached target machines.target - Containers. Aug 19 00:12:11.818944 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 19 00:12:11.818972 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 00:12:11.819000 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 19 00:12:11.819027 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 19 00:12:11.819055 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Aug 19 00:12:11.819083 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 19 00:12:11.819111 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 00:12:11.819144 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 19 00:12:11.819172 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 00:12:11.819201 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 19 00:12:11.819231 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 19 00:12:11.819261 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 19 00:12:11.819292 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 19 00:12:11.819320 systemd[1]: Stopped systemd-fsck-usr.service. Aug 19 00:12:11.819372 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 00:12:11.819407 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 19 00:12:11.819437 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 19 00:12:11.819465 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 19 00:12:11.819494 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 19 00:12:11.819522 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 19 00:12:11.819549 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 19 00:12:11.819581 systemd[1]: verity-setup.service: Deactivated successfully. Aug 19 00:12:11.819610 systemd[1]: Stopped verity-setup.service. Aug 19 00:12:11.819641 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 19 00:12:11.819672 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 19 00:12:11.819704 systemd[1]: Mounted media.mount - External Media Directory. Aug 19 00:12:11.819733 kernel: fuse: init (API version 7.41) Aug 19 00:12:11.819759 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 19 00:12:11.819789 kernel: loop: module loaded Aug 19 00:12:11.819815 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 19 00:12:11.819846 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 19 00:12:11.819876 kernel: ACPI: bus type drm_connector registered Aug 19 00:12:11.819902 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 19 00:12:11.819930 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 00:12:11.819962 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 19 00:12:11.819990 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 19 00:12:11.820018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 00:12:11.820045 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 00:12:11.820116 systemd-journald[1525]: Collecting audit messages is disabled. Aug 19 00:12:11.820185 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 19 00:12:11.820219 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Aug 19 00:12:11.820248 systemd-journald[1525]: Journal started Aug 19 00:12:11.820298 systemd-journald[1525]: Runtime Journal (/run/log/journal/ec299dafadae99e1ba2e7b8385c23046) is 8M, max 75.3M, 67.3M free. Aug 19 00:12:11.822906 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 00:12:11.173744 systemd[1]: Queued start job for default target multi-user.target. Aug 19 00:12:11.823463 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 00:12:11.200150 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Aug 19 00:12:11.200987 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 19 00:12:11.834348 systemd[1]: Started systemd-journald.service - Journal Service. Aug 19 00:12:11.842316 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 19 00:12:11.842822 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 19 00:12:11.848736 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 00:12:11.849110 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 00:12:11.857302 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 19 00:12:11.863831 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 00:12:11.870622 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 19 00:12:11.877208 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 19 00:12:11.904498 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 19 00:12:11.916583 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 19 00:12:11.924671 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 19 00:12:11.932743 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 19 00:12:11.932821 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 19 00:12:11.940758 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 19 00:12:11.954282 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 19 00:12:11.957400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 00:12:11.959475 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 19 00:12:11.971745 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 19 00:12:11.980773 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 19 00:12:11.984576 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 19 00:12:11.989943 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 19 00:12:11.996120 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 00:12:12.014481 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 19 00:12:12.028743 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
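The journald line above sizes the runtime journal in /run/log/journal at 8M used against a 75.3M cap with 67.3M free; the free figure is simply the cap minus current usage:

    # Quick consistency check on the journald figures just logged.
    used_mib, max_mib = 8.0, 75.3                # Runtime Journal in /run/log/journal
    print(f"free = {max_mib - used_mib:.1f}M")   # prints "free = 67.3M", matching the log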
Aug 19 00:12:12.036106 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 00:12:12.046716 systemd-journald[1525]: Time spent on flushing to /var/log/journal/ec299dafadae99e1ba2e7b8385c23046 is 88.438ms for 929 entries. Aug 19 00:12:12.046716 systemd-journald[1525]: System Journal (/var/log/journal/ec299dafadae99e1ba2e7b8385c23046) is 8M, max 195.6M, 187.6M free. Aug 19 00:12:12.151726 systemd-journald[1525]: Received client request to flush runtime journal. Aug 19 00:12:12.152523 kernel: loop0: detected capacity change from 0 to 119320 Aug 19 00:12:12.044485 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 19 00:12:12.055758 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 19 00:12:12.064728 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 19 00:12:12.075623 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 19 00:12:12.089756 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 19 00:12:12.142694 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 00:12:12.157411 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 19 00:12:12.169713 systemd-tmpfiles[1574]: ACLs are not supported, ignoring. Aug 19 00:12:12.169754 systemd-tmpfiles[1574]: ACLs are not supported, ignoring. Aug 19 00:12:12.179203 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 19 00:12:12.186866 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 19 00:12:12.196890 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 19 00:12:12.206286 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 19 00:12:12.271362 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 19 00:12:12.294416 kernel: loop1: detected capacity change from 0 to 211168 Aug 19 00:12:12.301814 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 19 00:12:12.306474 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 19 00:12:12.343268 systemd-tmpfiles[1594]: ACLs are not supported, ignoring. Aug 19 00:12:12.343313 systemd-tmpfiles[1594]: ACLs are not supported, ignoring. Aug 19 00:12:12.349999 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 00:12:12.414382 kernel: loop2: detected capacity change from 0 to 100608 Aug 19 00:12:12.540401 kernel: loop3: detected capacity change from 0 to 61256 Aug 19 00:12:12.581391 kernel: loop4: detected capacity change from 0 to 119320 Aug 19 00:12:12.605378 kernel: loop5: detected capacity change from 0 to 211168 Aug 19 00:12:12.638383 kernel: loop6: detected capacity change from 0 to 100608 Aug 19 00:12:12.657388 kernel: loop7: detected capacity change from 0 to 61256 Aug 19 00:12:12.676267 (sd-merge)[1600]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Aug 19 00:12:12.677252 (sd-merge)[1600]: Merged extensions into '/usr'. Aug 19 00:12:12.688485 systemd[1]: Reload requested from client PID 1573 ('systemd-sysext') (unit systemd-sysext.service)... Aug 19 00:12:12.688510 systemd[1]: Reloading... Aug 19 00:12:12.886488 zram_generator::config[1632]: No configuration found. Aug 19 00:12:13.301613 systemd[1]: Reloading finished in 612 ms. 
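The (sd-merge) lines show systemd-sysext overlaying four extension images onto /usr, including the kubernetes image whose /etc/extensions/kubernetes.raw symlink Ignition wrote earlier. A minimal sketch of the discovery step, assuming the documented default search directories, would be:

    import glob, os

    # Sketch of where systemd-sysext discovers images like the four merged
    # above ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami').
    # The directories below are the documented defaults; adjust if your build
    # uses additional hierarchy paths.
    search_paths = [
        "/etc/extensions",
        "/run/extensions",
        "/var/lib/extensions",
    ]
    for d in search_paths:
        for image in sorted(glob.glob(os.path.join(d, "*.raw"))):
            # Each *.raw (or a symlink to one, e.g. /etc/extensions/kubernetes.raw)
            # becomes another overlay layer merged into /usr.
            print(image)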
Aug 19 00:12:13.323419 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 19 00:12:13.327685 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 19 00:12:13.344522 systemd[1]: Starting ensure-sysext.service... Aug 19 00:12:13.350583 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 19 00:12:13.359711 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 00:12:13.403452 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 19 00:12:13.405611 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 19 00:12:13.407030 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 19 00:12:13.408660 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 19 00:12:13.409479 systemd[1]: Reload requested from client PID 1678 ('systemctl') (unit ensure-sysext.service)... Aug 19 00:12:13.409496 systemd[1]: Reloading... Aug 19 00:12:13.411669 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 19 00:12:13.412286 systemd-tmpfiles[1679]: ACLs are not supported, ignoring. Aug 19 00:12:13.412462 systemd-tmpfiles[1679]: ACLs are not supported, ignoring. Aug 19 00:12:13.431523 systemd-tmpfiles[1679]: Detected autofs mount point /boot during canonicalization of boot. Aug 19 00:12:13.431550 systemd-tmpfiles[1679]: Skipping /boot Aug 19 00:12:13.478083 systemd-tmpfiles[1679]: Detected autofs mount point /boot during canonicalization of boot. Aug 19 00:12:13.478117 systemd-tmpfiles[1679]: Skipping /boot Aug 19 00:12:13.533841 systemd-udevd[1680]: Using default interface naming scheme 'v255'. Aug 19 00:12:13.540544 zram_generator::config[1708]: No configuration found. Aug 19 00:12:13.600722 ldconfig[1568]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 19 00:12:13.905667 (udev-worker)[1727]: Network interface NamePolicy= disabled on kernel command line. Aug 19 00:12:14.149286 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 19 00:12:14.150562 systemd[1]: Reloading finished in 740 ms. Aug 19 00:12:14.165452 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 00:12:14.169005 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 19 00:12:14.172204 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 00:12:14.230702 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 00:12:14.238957 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 19 00:12:14.246052 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 19 00:12:14.255547 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 19 00:12:14.262839 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 19 00:12:14.271557 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
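The systemd-tmpfiles warnings above flag paths such as /var/lib/nfs/sm and /var/log/journal that are declared by more than one tmpfiles.d fragment; duplicates are ignored rather than treated as errors. A rough sketch of the same duplicate scan, ignoring quoting and specifier edge cases, might look like:

    import glob
    from collections import defaultdict

    # tmpfiles.d lines are "Type Path Mode User Group Age Argument"; collect
    # the path column from every fragment and report paths defined more than
    # once, mirroring the "Duplicate line for path ..." warnings above.
    seen = defaultdict(list)
    for conf in glob.glob("/usr/lib/tmpfiles.d/*.conf") + glob.glob("/etc/tmpfiles.d/*.conf"):
        with open(conf) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) >= 2:
                    seen[fields[1]].append(conf)
    for path, sources in seen.items():
        if len(sources) > 1:
            print(f"duplicate: {path} defined in {sources}")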
Aug 19 00:12:14.284582 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 00:12:14.288981 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 00:12:14.300625 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 00:12:14.307905 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 00:12:14.310623 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 00:12:14.310854 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 00:12:14.322930 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 00:12:14.323245 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 00:12:14.323462 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 00:12:14.343889 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 00:12:14.353409 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 19 00:12:14.358195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 00:12:14.358470 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 00:12:14.358812 systemd[1]: Reached target time-set.target - System Time Set. Aug 19 00:12:14.365469 systemd[1]: Finished ensure-sysext.service. Aug 19 00:12:14.372676 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 19 00:12:14.377215 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 00:12:14.378529 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 00:12:14.403235 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 19 00:12:14.465428 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 19 00:12:14.468781 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 19 00:12:14.489170 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 00:12:14.494434 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 00:12:14.498135 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 19 00:12:14.518091 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 19 00:12:14.526670 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Aug 19 00:12:14.529524 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 00:12:14.529882 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 00:12:14.534509 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 19 00:12:14.538997 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 19 00:12:14.541419 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 19 00:12:14.580561 augenrules[1895]: No rules Aug 19 00:12:14.585133 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 00:12:14.586290 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 00:12:14.607815 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 19 00:12:14.749042 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 00:12:14.863358 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Aug 19 00:12:14.868878 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 19 00:12:14.937862 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 19 00:12:14.961962 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 19 00:12:14.987582 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 00:12:15.089216 systemd-networkd[1830]: lo: Link UP Aug 19 00:12:15.089236 systemd-networkd[1830]: lo: Gained carrier Aug 19 00:12:15.091979 systemd-networkd[1830]: Enumeration completed Aug 19 00:12:15.092200 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 19 00:12:15.094964 systemd-networkd[1830]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 00:12:15.094973 systemd-networkd[1830]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 19 00:12:15.104680 systemd-resolved[1831]: Positive Trust Anchors: Aug 19 00:12:15.105171 systemd-resolved[1831]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 19 00:12:15.105724 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 19 00:12:15.105751 systemd-resolved[1831]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 19 00:12:15.131537 systemd-resolved[1831]: Defaulting to hostname 'linux'. Aug 19 00:12:15.137545 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 19 00:12:15.161309 systemd-networkd[1830]: eth0: Link UP Aug 19 00:12:15.161673 systemd-networkd[1830]: eth0: Gained carrier Aug 19 00:12:15.161723 systemd-networkd[1830]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 19 00:12:15.163605 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 19 00:12:15.166605 systemd[1]: Reached target network.target - Network. Aug 19 00:12:15.168572 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 19 00:12:15.176412 systemd[1]: Reached target sysinit.target - System Initialization. Aug 19 00:12:15.178871 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 19 00:12:15.181753 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 19 00:12:15.184850 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 19 00:12:15.187689 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 19 00:12:15.192433 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 19 00:12:15.194869 systemd-networkd[1830]: eth0: DHCPv4 address 172.31.18.236/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 19 00:12:15.200396 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 19 00:12:15.200474 systemd[1]: Reached target paths.target - Path Units. Aug 19 00:12:15.202498 systemd[1]: Reached target timers.target - Timer Units. Aug 19 00:12:15.206418 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 19 00:12:15.212858 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 19 00:12:15.221278 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 19 00:12:15.224474 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 19 00:12:15.228853 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 19 00:12:15.241567 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 19 00:12:15.245026 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 19 00:12:15.250398 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 19 00:12:15.253470 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 19 00:12:15.256695 systemd[1]: Reached target sockets.target - Socket Units. Aug 19 00:12:15.258947 systemd[1]: Reached target basic.target - Basic System. Aug 19 00:12:15.261401 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 19 00:12:15.261469 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 19 00:12:15.264226 systemd[1]: Starting containerd.service - containerd container runtime... Aug 19 00:12:15.271251 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 19 00:12:15.277633 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 19 00:12:15.285085 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 19 00:12:15.297429 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 19 00:12:15.304255 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
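The DHCPv4 lease logged above (172.31.18.236/20 with gateway 172.31.16.1) can be sanity-checked with the standard library: the /20 prefix spans 172.31.16.0 through 172.31.31.255, so the gateway sits at the first usable address of the same subnet.

    import ipaddress

    # Check that the leased address and the offered gateway share one /20.
    iface = ipaddress.ip_interface("172.31.18.236/20")
    net = iface.network
    print(net)                                          # 172.31.16.0/20
    print(net.network_address + 1)                      # 172.31.16.1 (the gateway)
    print(ipaddress.ip_address("172.31.16.1") in net)   # True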
Aug 19 00:12:15.306593 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 19 00:12:15.313257 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 19 00:12:15.320760 systemd[1]: Started ntpd.service - Network Time Service. Aug 19 00:12:15.325706 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 19 00:12:15.339928 systemd[1]: Starting setup-oem.service - Setup OEM... Aug 19 00:12:15.348895 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 19 00:12:15.369164 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 19 00:12:15.382363 jq[1968]: false Aug 19 00:12:15.394499 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 19 00:12:15.399054 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 19 00:12:15.399905 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 19 00:12:15.405745 systemd[1]: Starting update-engine.service - Update Engine... Aug 19 00:12:15.415970 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 19 00:12:15.437408 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 19 00:12:15.439826 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 19 00:12:15.441431 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 19 00:12:15.452279 jq[1980]: true Aug 19 00:12:15.498489 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 19 00:12:15.499504 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 19 00:12:15.530428 tar[1986]: linux-arm64/LICENSE Aug 19 00:12:15.530428 tar[1986]: linux-arm64/helm Aug 19 00:12:15.536208 extend-filesystems[1969]: Found /dev/nvme0n1p6 Aug 19 00:12:15.536621 (ntainerd)[1995]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 19 00:12:15.556551 ntpd[1971]: ntpd 4.2.8p17@1.4004-o Mon Aug 18 21:29:50 UTC 2025 (1): Starting Aug 19 00:12:15.559075 jq[1987]: true Aug 19 00:12:15.559390 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: ntpd 4.2.8p17@1.4004-o Mon Aug 18 21:29:50 UTC 2025 (1): Starting Aug 19 00:12:15.559390 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 19 00:12:15.559390 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: ---------------------------------------------------- Aug 19 00:12:15.559390 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: ntp-4 is maintained by Network Time Foundation, Aug 19 00:12:15.559390 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 19 00:12:15.559390 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: corporation. 
Support and training for ntp-4 are Aug 19 00:12:15.559390 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: available at https://www.nwtime.org/support Aug 19 00:12:15.559390 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: ---------------------------------------------------- Aug 19 00:12:15.556627 ntpd[1971]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 19 00:12:15.556647 ntpd[1971]: ---------------------------------------------------- Aug 19 00:12:15.556667 ntpd[1971]: ntp-4 is maintained by Network Time Foundation, Aug 19 00:12:15.556685 ntpd[1971]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 19 00:12:15.575538 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: proto: precision = 0.108 usec (-23) Aug 19 00:12:15.575538 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: basedate set to 2025-08-06 Aug 19 00:12:15.575538 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: gps base set to 2025-08-10 (week 2379) Aug 19 00:12:15.575701 extend-filesystems[1969]: Found /dev/nvme0n1p9 Aug 19 00:12:15.556702 ntpd[1971]: corporation. Support and training for ntp-4 are Aug 19 00:12:15.578045 extend-filesystems[1969]: Checking size of /dev/nvme0n1p9 Aug 19 00:12:15.556721 ntpd[1971]: available at https://www.nwtime.org/support Aug 19 00:12:15.556737 ntpd[1971]: ---------------------------------------------------- Aug 19 00:12:15.567170 ntpd[1971]: proto: precision = 0.108 usec (-23) Aug 19 00:12:15.572934 ntpd[1971]: basedate set to 2025-08-06 Aug 19 00:12:15.572966 ntpd[1971]: gps base set to 2025-08-10 (week 2379) Aug 19 00:12:15.583875 ntpd[1971]: Listen and drop on 0 v6wildcard [::]:123 Aug 19 00:12:15.585045 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: Listen and drop on 0 v6wildcard [::]:123 Aug 19 00:12:15.585045 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 19 00:12:15.585045 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: Listen normally on 2 lo 127.0.0.1:123 Aug 19 00:12:15.583968 ntpd[1971]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 19 00:12:15.584254 ntpd[1971]: Listen normally on 2 lo 127.0.0.1:123 Aug 19 00:12:15.584320 ntpd[1971]: Listen normally on 3 eth0 172.31.18.236:123 Aug 19 00:12:15.588877 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: Listen normally on 3 eth0 172.31.18.236:123 Aug 19 00:12:15.588877 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: Listen normally on 4 lo [::1]:123 Aug 19 00:12:15.588877 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: bind(21) AF_INET6 fe80::478:48ff:fe2a:2201%2#123 flags 0x11 failed: Cannot assign requested address Aug 19 00:12:15.588877 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: unable to create socket on eth0 (5) for fe80::478:48ff:fe2a:2201%2#123 Aug 19 00:12:15.588877 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: failed to init interface for address fe80::478:48ff:fe2a:2201%2 Aug 19 00:12:15.588877 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: Listening on routing socket on fd #21 for interface updates Aug 19 00:12:15.586580 ntpd[1971]: Listen normally on 4 lo [::1]:123 Aug 19 00:12:15.586666 ntpd[1971]: bind(21) AF_INET6 fe80::478:48ff:fe2a:2201%2#123 flags 0x11 failed: Cannot assign requested address Aug 19 00:12:15.586703 ntpd[1971]: unable to create socket on eth0 (5) for fe80::478:48ff:fe2a:2201%2#123 Aug 19 00:12:15.586729 ntpd[1971]: failed to init interface for address fe80::478:48ff:fe2a:2201%2 Aug 19 00:12:15.586785 ntpd[1971]: Listening on routing socket on fd #21 for interface updates Aug 19 00:12:15.620817 systemd[1]: motdgen.service: Deactivated successfully. Aug 19 00:12:15.621247 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
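ntpd's banner reports "precision = 0.108 usec (-23)": the measured clock-reading precision and, in parentheses, roughly its base-2 logarithm in seconds. The arithmetic checks out:

    import math

    # 0.108 microseconds is about 2^-23 seconds, which is the exponent ntpd
    # records alongside the measured value.
    precision_s = 0.108e-6
    print(math.log2(precision_s))      # about -23.1
    print(2 ** -23 * 1e6, "usec")      # about 0.119 usec, same order of magnitude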
Aug 19 00:12:15.632774 dbus-daemon[1966]: [system] SELinux support is enabled Aug 19 00:12:15.633298 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 19 00:12:15.644792 ntpd[1971]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 19 00:12:15.644853 ntpd[1971]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 19 00:12:15.645027 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 19 00:12:15.645027 ntpd[1971]: 19 Aug 00:12:15 ntpd[1971]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 19 00:12:15.647760 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 19 00:12:15.647824 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 19 00:12:15.650797 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 19 00:12:15.650849 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 19 00:12:15.666964 dbus-daemon[1966]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1830 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 19 00:12:15.671685 extend-filesystems[1969]: Resized partition /dev/nvme0n1p9 Aug 19 00:12:15.682932 dbus-daemon[1966]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 19 00:12:15.689789 coreos-metadata[1965]: Aug 19 00:12:15.689 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 19 00:12:15.692962 extend-filesystems[2023]: resize2fs 1.47.2 (1-Jan-2025) Aug 19 00:12:15.700517 coreos-metadata[1965]: Aug 19 00:12:15.700 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Aug 19 00:12:15.710458 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 19 00:12:15.717408 coreos-metadata[1965]: Aug 19 00:12:15.716 INFO Fetch successful Aug 19 00:12:15.717408 coreos-metadata[1965]: Aug 19 00:12:15.716 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Aug 19 00:12:15.718291 coreos-metadata[1965]: Aug 19 00:12:15.718 INFO Fetch successful Aug 19 00:12:15.718291 coreos-metadata[1965]: Aug 19 00:12:15.718 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Aug 19 00:12:15.721062 coreos-metadata[1965]: Aug 19 00:12:15.721 INFO Fetch successful Aug 19 00:12:15.721062 coreos-metadata[1965]: Aug 19 00:12:15.721 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Aug 19 00:12:15.728811 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
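coreos-metadata follows the IMDSv2 flow visible above (as Ignition did earlier): a PUT to the token endpoint, then GETs against the 2021-01-03 metadata paths with the token attached. A minimal sketch of the same exchange, using the standard EC2 header names and only runnable on the instance itself, is:

    import urllib.request

    # 169.254.169.254 is link-local, so this only works from inside the instance.
    IMDS = "http://169.254.169.254"

    # Step 1: request a session token (IMDSv2).
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(req, timeout=2).read().decode()

    # Step 2: fetch a few of the same paths coreos-metadata requests above.
    for path in ("meta-data/instance-id", "meta-data/instance-type", "meta-data/local-ipv4"):
        r = urllib.request.Request(
            f"{IMDS}/2021-01-03/{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        print(path, "=", urllib.request.urlopen(r, timeout=2).read().decode())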
Aug 19 00:12:15.740529 coreos-metadata[1965]: Aug 19 00:12:15.740 INFO Fetch successful Aug 19 00:12:15.740529 coreos-metadata[1965]: Aug 19 00:12:15.740 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Aug 19 00:12:15.744596 coreos-metadata[1965]: Aug 19 00:12:15.744 INFO Fetch failed with 404: resource not found Aug 19 00:12:15.744596 coreos-metadata[1965]: Aug 19 00:12:15.744 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Aug 19 00:12:15.753880 coreos-metadata[1965]: Aug 19 00:12:15.753 INFO Fetch successful Aug 19 00:12:15.753880 coreos-metadata[1965]: Aug 19 00:12:15.753 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Aug 19 00:12:15.756360 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Aug 19 00:12:15.761423 coreos-metadata[1965]: Aug 19 00:12:15.761 INFO Fetch successful Aug 19 00:12:15.761423 coreos-metadata[1965]: Aug 19 00:12:15.761 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Aug 19 00:12:15.766817 coreos-metadata[1965]: Aug 19 00:12:15.766 INFO Fetch successful Aug 19 00:12:15.766817 coreos-metadata[1965]: Aug 19 00:12:15.766 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Aug 19 00:12:15.771451 coreos-metadata[1965]: Aug 19 00:12:15.771 INFO Fetch successful Aug 19 00:12:15.771451 coreos-metadata[1965]: Aug 19 00:12:15.771 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Aug 19 00:12:15.774378 coreos-metadata[1965]: Aug 19 00:12:15.774 INFO Fetch successful Aug 19 00:12:15.796497 systemd[1]: Finished setup-oem.service - Setup OEM. Aug 19 00:12:15.818887 update_engine[1979]: I20250819 00:12:15.818455 1979 main.cc:92] Flatcar Update Engine starting Aug 19 00:12:15.837566 systemd[1]: Started update-engine.service - Update Engine. Aug 19 00:12:15.845461 update_engine[1979]: I20250819 00:12:15.841832 1979 update_check_scheduler.cc:74] Next update check in 8m7s Aug 19 00:12:15.887806 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 19 00:12:15.976364 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Aug 19 00:12:15.982370 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 19 00:12:15.985158 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 19 00:12:16.008174 extend-filesystems[2023]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Aug 19 00:12:16.008174 extend-filesystems[2023]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 19 00:12:16.008174 extend-filesystems[2023]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Aug 19 00:12:16.027362 extend-filesystems[1969]: Resized filesystem in /dev/nvme0n1p9 Aug 19 00:12:16.015166 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 19 00:12:16.016690 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 19 00:12:16.031568 bash[2051]: Updated "/home/core/.ssh/authorized_keys" Aug 19 00:12:16.037422 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 19 00:12:16.043194 systemd[1]: Starting sshkeys.service... 
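The online resize above grows the root filesystem on nvme0n1p9 from 553472 to 1489915 blocks of 4 KiB, roughly 2.11 GiB to 5.68 GiB:

    # Convert the block counts from the EXT4-fs and resize2fs messages to GiB.
    BLOCK = 4096
    for blocks in (553472, 1489915):
        print(blocks, "blocks =", round(blocks * BLOCK / 2**30, 2), "GiB")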
Aug 19 00:12:16.102320 systemd-logind[1978]: Watching system buttons on /dev/input/event0 (Power Button) Aug 19 00:12:16.103442 systemd-logind[1978]: Watching system buttons on /dev/input/event1 (Sleep Button) Aug 19 00:12:16.103913 systemd-logind[1978]: New seat seat0. Aug 19 00:12:16.107375 systemd[1]: Started systemd-logind.service - User Login Management. Aug 19 00:12:16.125975 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 19 00:12:16.134208 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 19 00:12:16.321363 containerd[1995]: time="2025-08-19T00:12:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 19 00:12:16.324362 containerd[1995]: time="2025-08-19T00:12:16.322941338Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Aug 19 00:12:16.442358 containerd[1995]: time="2025-08-19T00:12:16.441540819Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.668µs" Aug 19 00:12:16.442358 containerd[1995]: time="2025-08-19T00:12:16.441610071Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 19 00:12:16.442358 containerd[1995]: time="2025-08-19T00:12:16.441645159Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 19 00:12:16.442358 containerd[1995]: time="2025-08-19T00:12:16.441926067Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 19 00:12:16.442358 containerd[1995]: time="2025-08-19T00:12:16.441958695Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 19 00:12:16.442358 containerd[1995]: time="2025-08-19T00:12:16.442010619Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 00:12:16.442358 containerd[1995]: time="2025-08-19T00:12:16.442118559Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 00:12:16.442358 containerd[1995]: time="2025-08-19T00:12:16.442144731Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 00:12:16.448787 containerd[1995]: time="2025-08-19T00:12:16.448726143Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 00:12:16.457575 containerd[1995]: time="2025-08-19T00:12:16.452374815Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 00:12:16.457575 containerd[1995]: time="2025-08-19T00:12:16.452451387Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 00:12:16.457575 containerd[1995]: time="2025-08-19T00:12:16.452475495Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 19 
00:12:16.457575 containerd[1995]: time="2025-08-19T00:12:16.452696643Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 19 00:12:16.457575 containerd[1995]: time="2025-08-19T00:12:16.453114927Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 00:12:16.457575 containerd[1995]: time="2025-08-19T00:12:16.453183015Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 00:12:16.457575 containerd[1995]: time="2025-08-19T00:12:16.453208383Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 19 00:12:16.461521 containerd[1995]: time="2025-08-19T00:12:16.460388211Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 19 00:12:16.462247 containerd[1995]: time="2025-08-19T00:12:16.462188487Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 19 00:12:16.462436 containerd[1995]: time="2025-08-19T00:12:16.462394083Z" level=info msg="metadata content store policy set" policy=shared Aug 19 00:12:16.467251 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 19 00:12:16.471935 dbus-daemon[1966]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 19 00:12:16.474842 dbus-daemon[1966]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2026 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 19 00:12:16.487741 systemd[1]: Starting polkit.service - Authorization Manager... 
Aug 19 00:12:16.491161 containerd[1995]: time="2025-08-19T00:12:16.490359747Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 19 00:12:16.491161 containerd[1995]: time="2025-08-19T00:12:16.490591839Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 19 00:12:16.491161 containerd[1995]: time="2025-08-19T00:12:16.490630491Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 19 00:12:16.491161 containerd[1995]: time="2025-08-19T00:12:16.490685247Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 19 00:12:16.491161 containerd[1995]: time="2025-08-19T00:12:16.490718331Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 19 00:12:16.491161 containerd[1995]: time="2025-08-19T00:12:16.490773831Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 19 00:12:16.491161 containerd[1995]: time="2025-08-19T00:12:16.490808379Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 19 00:12:16.491161 containerd[1995]: time="2025-08-19T00:12:16.490865619Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 19 00:12:16.491161 containerd[1995]: time="2025-08-19T00:12:16.490906251Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 19 00:12:16.491161 containerd[1995]: time="2025-08-19T00:12:16.490966275Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 19 00:12:16.491161 containerd[1995]: time="2025-08-19T00:12:16.490994991Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 19 00:12:16.491161 containerd[1995]: time="2025-08-19T00:12:16.491054079Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 19 00:12:16.492359 containerd[1995]: time="2025-08-19T00:12:16.492286095Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 19 00:12:16.493524 containerd[1995]: time="2025-08-19T00:12:16.493429479Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 19 00:12:16.498210 containerd[1995]: time="2025-08-19T00:12:16.496698843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 19 00:12:16.498210 containerd[1995]: time="2025-08-19T00:12:16.496788939Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 19 00:12:16.498210 containerd[1995]: time="2025-08-19T00:12:16.496821399Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 19 00:12:16.498210 containerd[1995]: time="2025-08-19T00:12:16.496850331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 19 00:12:16.498210 containerd[1995]: time="2025-08-19T00:12:16.496892547Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 19 00:12:16.498210 containerd[1995]: time="2025-08-19T00:12:16.496920495Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 
19 00:12:16.498210 containerd[1995]: time="2025-08-19T00:12:16.496950051Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 19 00:12:16.498210 containerd[1995]: time="2025-08-19T00:12:16.496977315Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 19 00:12:16.498210 containerd[1995]: time="2025-08-19T00:12:16.497006955Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 19 00:12:16.508447 containerd[1995]: time="2025-08-19T00:12:16.497420919Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 19 00:12:16.508447 containerd[1995]: time="2025-08-19T00:12:16.500825079Z" level=info msg="Start snapshots syncer" Aug 19 00:12:16.508447 containerd[1995]: time="2025-08-19T00:12:16.500939463Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 19 00:12:16.511461 containerd[1995]: time="2025-08-19T00:12:16.510240543Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 19 00:12:16.514766 containerd[1995]: time="2025-08-19T00:12:16.514672551Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 19 00:12:16.522475 containerd[1995]: time="2025-08-19T00:12:16.522375135Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 19 00:12:16.522757 containerd[1995]: time="2025-08-19T00:12:16.522706791Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 19 00:12:16.522844 containerd[1995]: time="2025-08-19T00:12:16.522786603Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 19 00:12:16.522844 containerd[1995]: time="2025-08-19T00:12:16.522823299Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 19 00:12:16.522930 containerd[1995]: time="2025-08-19T00:12:16.522862719Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 19 00:12:16.522930 containerd[1995]: time="2025-08-19T00:12:16.522905787Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 19 00:12:16.523011 containerd[1995]: time="2025-08-19T00:12:16.522944331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 19 00:12:16.523011 containerd[1995]: time="2025-08-19T00:12:16.522976755Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 19 00:12:16.523100 containerd[1995]: time="2025-08-19T00:12:16.523056447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 19 00:12:16.523146 containerd[1995]: time="2025-08-19T00:12:16.523096695Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 19 00:12:16.523190 containerd[1995]: time="2025-08-19T00:12:16.523132299Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 19 00:12:16.523263 containerd[1995]: time="2025-08-19T00:12:16.523215903Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 00:12:16.523315 containerd[1995]: time="2025-08-19T00:12:16.523270407Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 00:12:16.523398 containerd[1995]: time="2025-08-19T00:12:16.523306395Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 00:12:16.544546 containerd[1995]: time="2025-08-19T00:12:16.540407667Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 00:12:16.544712 containerd[1995]: time="2025-08-19T00:12:16.544661631Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 19 00:12:16.544874 containerd[1995]: time="2025-08-19T00:12:16.544831155Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 19 00:12:16.545029 containerd[1995]: time="2025-08-19T00:12:16.544986627Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 19 00:12:16.545505 containerd[1995]: time="2025-08-19T00:12:16.545462331Z" level=info msg="runtime interface created" Aug 19 00:12:16.545505 containerd[1995]: time="2025-08-19T00:12:16.545495055Z" level=info msg="created NRI interface" Aug 19 00:12:16.546704 containerd[1995]: time="2025-08-19T00:12:16.546642219Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 19 00:12:16.547171 containerd[1995]: time="2025-08-19T00:12:16.547127139Z" level=info msg="Connect containerd service" Aug 19 00:12:16.550508 containerd[1995]: time="2025-08-19T00:12:16.550415727Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 
19 00:12:16.550749 coreos-metadata[2086]: Aug 19 00:12:16.550 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 19 00:12:16.555047 coreos-metadata[2086]: Aug 19 00:12:16.554 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Aug 19 00:12:16.556623 coreos-metadata[2086]: Aug 19 00:12:16.556 INFO Fetch successful Aug 19 00:12:16.557984 coreos-metadata[2086]: Aug 19 00:12:16.557 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Aug 19 00:12:16.558914 coreos-metadata[2086]: Aug 19 00:12:16.558 INFO Fetch successful Aug 19 00:12:16.559697 ntpd[1971]: bind(24) AF_INET6 fe80::478:48ff:fe2a:2201%2#123 flags 0x11 failed: Cannot assign requested address Aug 19 00:12:16.559758 ntpd[1971]: unable to create socket on eth0 (6) for fe80::478:48ff:fe2a:2201%2#123 Aug 19 00:12:16.560197 ntpd[1971]: 19 Aug 00:12:16 ntpd[1971]: bind(24) AF_INET6 fe80::478:48ff:fe2a:2201%2#123 flags 0x11 failed: Cannot assign requested address Aug 19 00:12:16.560197 ntpd[1971]: 19 Aug 00:12:16 ntpd[1971]: unable to create socket on eth0 (6) for fe80::478:48ff:fe2a:2201%2#123 Aug 19 00:12:16.560197 ntpd[1971]: 19 Aug 00:12:16 ntpd[1971]: failed to init interface for address fe80::478:48ff:fe2a:2201%2 Aug 19 00:12:16.559785 ntpd[1971]: failed to init interface for address fe80::478:48ff:fe2a:2201%2 Aug 19 00:12:16.563503 containerd[1995]: time="2025-08-19T00:12:16.563409975Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 19 00:12:16.574228 unknown[2086]: wrote ssh authorized keys file for user: core Aug 19 00:12:16.772757 update-ssh-keys[2149]: Updated "/home/core/.ssh/authorized_keys" Aug 19 00:12:16.775656 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 19 00:12:16.787348 systemd[1]: Finished sshkeys.service. Aug 19 00:12:16.849481 systemd-networkd[1830]: eth0: Gained IPv6LL Aug 19 00:12:16.858977 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 19 00:12:16.862483 systemd[1]: Reached target network-online.target - Network is Online. Aug 19 00:12:16.869927 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Aug 19 00:12:16.876995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:12:16.882502 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 19 00:12:17.116425 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Aug 19 00:12:17.130459 containerd[1995]: time="2025-08-19T00:12:17.130159250Z" level=info msg="Start subscribing containerd event" Aug 19 00:12:17.130459 containerd[1995]: time="2025-08-19T00:12:17.130273142Z" level=info msg="Start recovering state" Aug 19 00:12:17.130459 containerd[1995]: time="2025-08-19T00:12:17.130441358Z" level=info msg="Start event monitor" Aug 19 00:12:17.130639 containerd[1995]: time="2025-08-19T00:12:17.130467074Z" level=info msg="Start cni network conf syncer for default" Aug 19 00:12:17.130639 containerd[1995]: time="2025-08-19T00:12:17.130484942Z" level=info msg="Start streaming server" Aug 19 00:12:17.130639 containerd[1995]: time="2025-08-19T00:12:17.130505270Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 19 00:12:17.130639 containerd[1995]: time="2025-08-19T00:12:17.130522286Z" level=info msg="runtime interface starting up..." Aug 19 00:12:17.130639 containerd[1995]: time="2025-08-19T00:12:17.130536878Z" level=info msg="starting plugins..." Aug 19 00:12:17.130639 containerd[1995]: time="2025-08-19T00:12:17.130562774Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 19 00:12:17.131597 locksmithd[2046]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 19 00:12:17.132831 containerd[1995]: time="2025-08-19T00:12:17.132783014Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 19 00:12:17.133053 containerd[1995]: time="2025-08-19T00:12:17.133026638Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 19 00:12:17.134605 systemd[1]: Started containerd.service - containerd container runtime. Aug 19 00:12:17.138468 containerd[1995]: time="2025-08-19T00:12:17.134657270Z" level=info msg="containerd successfully booted in 0.816413s" Aug 19 00:12:17.181948 polkitd[2133]: Started polkitd version 126 Aug 19 00:12:17.186519 amazon-ssm-agent[2175]: Initializing new seelog logger Aug 19 00:12:17.187394 amazon-ssm-agent[2175]: New Seelog Logger Creation Complete Aug 19 00:12:17.187607 amazon-ssm-agent[2175]: 2025/08/19 00:12:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 19 00:12:17.187684 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 19 00:12:17.188421 amazon-ssm-agent[2175]: 2025/08/19 00:12:17 processing appconfig overrides Aug 19 00:12:17.189065 amazon-ssm-agent[2175]: 2025/08/19 00:12:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 19 00:12:17.190453 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 19 00:12:17.190453 amazon-ssm-agent[2175]: 2025/08/19 00:12:17 processing appconfig overrides Aug 19 00:12:17.190453 amazon-ssm-agent[2175]: 2025/08/19 00:12:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 19 00:12:17.190453 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 19 00:12:17.190453 amazon-ssm-agent[2175]: 2025/08/19 00:12:17 processing appconfig overrides Aug 19 00:12:17.191182 amazon-ssm-agent[2175]: 2025-08-19 00:12:17.1889 INFO Proxy environment variables: Aug 19 00:12:17.195835 amazon-ssm-agent[2175]: 2025/08/19 00:12:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 19 00:12:17.195988 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Aug 19 00:12:17.196618 amazon-ssm-agent[2175]: 2025/08/19 00:12:17 processing appconfig overrides Aug 19 00:12:17.223043 polkitd[2133]: Loading rules from directory /etc/polkit-1/rules.d Aug 19 00:12:17.225723 polkitd[2133]: Loading rules from directory /run/polkit-1/rules.d Aug 19 00:12:17.225830 polkitd[2133]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 19 00:12:17.227745 polkitd[2133]: Loading rules from directory /usr/local/share/polkit-1/rules.d Aug 19 00:12:17.227835 polkitd[2133]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 19 00:12:17.227923 polkitd[2133]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 19 00:12:17.231659 polkitd[2133]: Finished loading, compiling and executing 2 rules Aug 19 00:12:17.232100 systemd[1]: Started polkit.service - Authorization Manager. Aug 19 00:12:17.242512 dbus-daemon[1966]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 19 00:12:17.246849 polkitd[2133]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 19 00:12:17.293024 amazon-ssm-agent[2175]: 2025-08-19 00:12:17.1889 INFO https_proxy: Aug 19 00:12:17.307689 systemd-resolved[1831]: System hostname changed to 'ip-172-31-18-236'. Aug 19 00:12:17.307697 systemd-hostnamed[2026]: Hostname set to (transient) Aug 19 00:12:17.392920 amazon-ssm-agent[2175]: 2025-08-19 00:12:17.1889 INFO http_proxy: Aug 19 00:12:17.490253 amazon-ssm-agent[2175]: 2025-08-19 00:12:17.1889 INFO no_proxy: Aug 19 00:12:17.530984 tar[1986]: linux-arm64/README.md Aug 19 00:12:17.574523 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 19 00:12:17.590347 amazon-ssm-agent[2175]: 2025-08-19 00:12:17.1893 INFO Checking if agent identity type OnPrem can be assumed Aug 19 00:12:17.688281 amazon-ssm-agent[2175]: 2025-08-19 00:12:17.1894 INFO Checking if agent identity type EC2 can be assumed Aug 19 00:12:17.787488 amazon-ssm-agent[2175]: 2025-08-19 00:12:17.3745 INFO Agent will take identity from EC2 Aug 19 00:12:17.887512 amazon-ssm-agent[2175]: 2025-08-19 00:12:17.3812 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Aug 19 00:12:17.986370 amazon-ssm-agent[2175]: 2025-08-19 00:12:17.3818 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Aug 19 00:12:18.085436 amazon-ssm-agent[2175]: 2025-08-19 00:12:17.3818 INFO [amazon-ssm-agent] Starting Core Agent Aug 19 00:12:18.185724 amazon-ssm-agent[2175]: 2025-08-19 00:12:17.3818 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Aug 19 00:12:18.287468 amazon-ssm-agent[2175]: 2025-08-19 00:12:17.3818 INFO [Registrar] Starting registrar module Aug 19 00:12:18.390363 amazon-ssm-agent[2175]: 2025-08-19 00:12:17.3850 INFO [EC2Identity] Checking disk for registration info Aug 19 00:12:18.492345 amazon-ssm-agent[2175]: 2025-08-19 00:12:17.3850 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Aug 19 00:12:18.590935 amazon-ssm-agent[2175]: 2025-08-19 00:12:17.3851 INFO [EC2Identity] Generating registration keypair Aug 19 00:12:18.600903 amazon-ssm-agent[2175]: 2025/08/19 00:12:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 19 00:12:18.600903 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Aug 19 00:12:18.601050 amazon-ssm-agent[2175]: 2025/08/19 00:12:18 processing appconfig overrides Aug 19 00:12:18.639169 amazon-ssm-agent[2175]: 2025-08-19 00:12:18.5554 INFO [EC2Identity] Checking write access before registering Aug 19 00:12:18.639169 amazon-ssm-agent[2175]: 2025-08-19 00:12:18.5561 INFO [EC2Identity] Registering EC2 instance with Systems Manager Aug 19 00:12:18.639314 amazon-ssm-agent[2175]: 2025-08-19 00:12:18.6006 INFO [EC2Identity] EC2 registration was successful. Aug 19 00:12:18.639314 amazon-ssm-agent[2175]: 2025-08-19 00:12:18.6006 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Aug 19 00:12:18.639314 amazon-ssm-agent[2175]: 2025-08-19 00:12:18.6007 INFO [CredentialRefresher] credentialRefresher has started Aug 19 00:12:18.639314 amazon-ssm-agent[2175]: 2025-08-19 00:12:18.6008 INFO [CredentialRefresher] Starting credentials refresher loop Aug 19 00:12:18.639314 amazon-ssm-agent[2175]: 2025-08-19 00:12:18.6386 INFO EC2RoleProvider Successfully connected with instance profile role credentials Aug 19 00:12:18.639547 amazon-ssm-agent[2175]: 2025-08-19 00:12:18.6390 INFO [CredentialRefresher] Credentials ready Aug 19 00:12:18.691838 amazon-ssm-agent[2175]: 2025-08-19 00:12:18.6394 INFO [CredentialRefresher] Next credential rotation will be in 29.9999875958 minutes Aug 19 00:12:18.809318 sshd_keygen[2011]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 19 00:12:18.853429 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 19 00:12:18.860932 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 19 00:12:18.868375 systemd[1]: Started sshd@0-172.31.18.236:22-147.75.109.163:47828.service - OpenSSH per-connection server daemon (147.75.109.163:47828). Aug 19 00:12:18.885194 systemd[1]: issuegen.service: Deactivated successfully. Aug 19 00:12:18.888414 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 19 00:12:18.893784 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 19 00:12:18.933650 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 19 00:12:18.947545 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 19 00:12:18.956678 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 19 00:12:18.959698 systemd[1]: Reached target getty.target - Login Prompts. Aug 19 00:12:19.169735 sshd[2221]: Accepted publickey for core from 147.75.109.163 port 47828 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:12:19.174391 sshd-session[2221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:12:19.203426 systemd-logind[1978]: New session 1 of user core. Aug 19 00:12:19.206141 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 19 00:12:19.210742 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 19 00:12:19.255388 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 19 00:12:19.264826 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 19 00:12:19.284611 (systemd)[2233]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 19 00:12:19.289280 systemd-logind[1978]: New session c1 of user core. 
Aug 19 00:12:19.557923 ntpd[1971]: Listen normally on 7 eth0 [fe80::478:48ff:fe2a:2201%2]:123 Aug 19 00:12:19.559382 ntpd[1971]: 19 Aug 00:12:19 ntpd[1971]: Listen normally on 7 eth0 [fe80::478:48ff:fe2a:2201%2]:123 Aug 19 00:12:19.566581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:12:19.577365 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 19 00:12:19.589202 (kubelet)[2244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 00:12:19.600921 systemd[2233]: Queued start job for default target default.target. Aug 19 00:12:19.612671 systemd[2233]: Created slice app.slice - User Application Slice. Aug 19 00:12:19.612733 systemd[2233]: Reached target paths.target - Paths. Aug 19 00:12:19.612816 systemd[2233]: Reached target timers.target - Timers. Aug 19 00:12:19.615238 systemd[2233]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 19 00:12:19.635871 systemd[2233]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 19 00:12:19.636772 systemd[2233]: Reached target sockets.target - Sockets. Aug 19 00:12:19.636892 systemd[2233]: Reached target basic.target - Basic System. Aug 19 00:12:19.636977 systemd[2233]: Reached target default.target - Main User Target. Aug 19 00:12:19.637038 systemd[2233]: Startup finished in 333ms. Aug 19 00:12:19.637056 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 19 00:12:19.648295 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 19 00:12:19.651125 systemd[1]: Startup finished in 3.616s (kernel) + 13.074s (initrd) + 9.951s (userspace) = 26.641s. Aug 19 00:12:19.699857 amazon-ssm-agent[2175]: 2025-08-19 00:12:19.6991 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Aug 19 00:12:19.803122 amazon-ssm-agent[2175]: 2025-08-19 00:12:19.7074 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2254) started Aug 19 00:12:19.816930 systemd[1]: Started sshd@1-172.31.18.236:22-147.75.109.163:60846.service - OpenSSH per-connection server daemon (147.75.109.163:60846). Aug 19 00:12:19.903120 amazon-ssm-agent[2175]: 2025-08-19 00:12:19.7074 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Aug 19 00:12:20.045157 sshd[2261]: Accepted publickey for core from 147.75.109.163 port 60846 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:12:20.047797 sshd-session[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:12:20.056693 systemd-logind[1978]: New session 2 of user core. Aug 19 00:12:20.065618 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 19 00:12:20.193451 sshd[2275]: Connection closed by 147.75.109.163 port 60846 Aug 19 00:12:20.194509 sshd-session[2261]: pam_unix(sshd:session): session closed for user core Aug 19 00:12:20.203601 systemd[1]: sshd@1-172.31.18.236:22-147.75.109.163:60846.service: Deactivated successfully. Aug 19 00:12:20.208709 systemd[1]: session-2.scope: Deactivated successfully. Aug 19 00:12:20.210630 systemd-logind[1978]: Session 2 logged out. Waiting for processes to exit. Aug 19 00:12:20.214294 systemd-logind[1978]: Removed session 2. Aug 19 00:12:20.228675 systemd[1]: Started sshd@2-172.31.18.236:22-147.75.109.163:60850.service - OpenSSH per-connection server daemon (147.75.109.163:60850). 
Aug 19 00:12:20.432591 sshd[2281]: Accepted publickey for core from 147.75.109.163 port 60850 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:12:20.435043 sshd-session[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:12:20.446417 systemd-logind[1978]: New session 3 of user core. Aug 19 00:12:20.457619 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 19 00:12:20.593128 sshd[2284]: Connection closed by 147.75.109.163 port 60850 Aug 19 00:12:20.578612 sshd-session[2281]: pam_unix(sshd:session): session closed for user core Aug 19 00:12:20.615073 systemd[1]: sshd@2-172.31.18.236:22-147.75.109.163:60850.service: Deactivated successfully. Aug 19 00:12:20.618852 systemd[1]: session-3.scope: Deactivated successfully. Aug 19 00:12:20.621641 systemd-logind[1978]: Session 3 logged out. Waiting for processes to exit. Aug 19 00:12:20.626579 systemd[1]: Started sshd@3-172.31.18.236:22-147.75.109.163:60862.service - OpenSSH per-connection server daemon (147.75.109.163:60862). Aug 19 00:12:20.630012 systemd-logind[1978]: Removed session 3. Aug 19 00:12:20.849917 sshd[2290]: Accepted publickey for core from 147.75.109.163 port 60862 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:12:20.852947 sshd-session[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:12:20.861786 systemd-logind[1978]: New session 4 of user core. Aug 19 00:12:20.872629 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 19 00:12:20.988411 kubelet[2244]: E0819 00:12:20.988303 2244 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 00:12:20.992875 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 00:12:20.993385 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 00:12:20.995449 systemd[1]: kubelet.service: Consumed 1.468s CPU time, 260.9M memory peak. Aug 19 00:12:21.003960 sshd[2293]: Connection closed by 147.75.109.163 port 60862 Aug 19 00:12:21.004638 sshd-session[2290]: pam_unix(sshd:session): session closed for user core Aug 19 00:12:21.010479 systemd[1]: sshd@3-172.31.18.236:22-147.75.109.163:60862.service: Deactivated successfully. Aug 19 00:12:21.014974 systemd[1]: session-4.scope: Deactivated successfully. Aug 19 00:12:21.020997 systemd-logind[1978]: Session 4 logged out. Waiting for processes to exit. Aug 19 00:12:21.022810 systemd-logind[1978]: Removed session 4. Aug 19 00:12:21.040659 systemd[1]: Started sshd@4-172.31.18.236:22-147.75.109.163:60878.service - OpenSSH per-connection server daemon (147.75.109.163:60878). Aug 19 00:12:21.230101 sshd[2300]: Accepted publickey for core from 147.75.109.163 port 60878 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:12:21.232594 sshd-session[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:12:21.240668 systemd-logind[1978]: New session 5 of user core. Aug 19 00:12:21.249550 systemd[1]: Started session-5.scope - Session 5 of User core. 
Aug 19 00:12:21.397127 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 19 00:12:21.397798 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 00:12:21.414243 sudo[2304]: pam_unix(sudo:session): session closed for user root Aug 19 00:12:21.437415 sshd[2303]: Connection closed by 147.75.109.163 port 60878 Aug 19 00:12:21.438466 sshd-session[2300]: pam_unix(sshd:session): session closed for user core Aug 19 00:12:21.445927 systemd-logind[1978]: Session 5 logged out. Waiting for processes to exit. Aug 19 00:12:21.447289 systemd[1]: sshd@4-172.31.18.236:22-147.75.109.163:60878.service: Deactivated successfully. Aug 19 00:12:21.450317 systemd[1]: session-5.scope: Deactivated successfully. Aug 19 00:12:21.453842 systemd-logind[1978]: Removed session 5. Aug 19 00:12:21.476558 systemd[1]: Started sshd@5-172.31.18.236:22-147.75.109.163:60884.service - OpenSSH per-connection server daemon (147.75.109.163:60884). Aug 19 00:12:21.671110 sshd[2310]: Accepted publickey for core from 147.75.109.163 port 60884 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:12:21.674054 sshd-session[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:12:21.681209 systemd-logind[1978]: New session 6 of user core. Aug 19 00:12:21.697581 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 19 00:12:21.800714 sudo[2315]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 19 00:12:21.801650 sudo[2315]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 00:12:21.810498 sudo[2315]: pam_unix(sudo:session): session closed for user root Aug 19 00:12:21.819781 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 19 00:12:21.820945 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 00:12:21.837155 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 00:12:21.896868 augenrules[2337]: No rules Aug 19 00:12:21.899179 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 00:12:21.899758 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 00:12:21.902249 sudo[2314]: pam_unix(sudo:session): session closed for user root Aug 19 00:12:21.925426 sshd[2313]: Connection closed by 147.75.109.163 port 60884 Aug 19 00:12:21.926758 sshd-session[2310]: pam_unix(sshd:session): session closed for user core Aug 19 00:12:21.933848 systemd[1]: sshd@5-172.31.18.236:22-147.75.109.163:60884.service: Deactivated successfully. Aug 19 00:12:21.937802 systemd[1]: session-6.scope: Deactivated successfully. Aug 19 00:12:21.940765 systemd-logind[1978]: Session 6 logged out. Waiting for processes to exit. Aug 19 00:12:21.943019 systemd-logind[1978]: Removed session 6. Aug 19 00:12:21.960168 systemd[1]: Started sshd@6-172.31.18.236:22-147.75.109.163:60888.service - OpenSSH per-connection server daemon (147.75.109.163:60888). Aug 19 00:12:22.160252 sshd[2346]: Accepted publickey for core from 147.75.109.163 port 60888 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:12:22.163206 sshd-session[2346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:12:22.172455 systemd-logind[1978]: New session 7 of user core. Aug 19 00:12:22.181628 systemd[1]: Started session-7.scope - Session 7 of User core. 
Aug 19 00:12:22.286478 sudo[2350]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 19 00:12:22.287087 sudo[2350]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 00:12:22.119698 systemd-resolved[1831]: Clock change detected. Flushing caches. Aug 19 00:12:22.132021 systemd-journald[1525]: Time jumped backwards, rotating. Aug 19 00:12:22.519248 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 19 00:12:22.532888 (dockerd)[2368]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 19 00:12:23.091773 dockerd[2368]: time="2025-08-19T00:12:23.091670268Z" level=info msg="Starting up" Aug 19 00:12:23.093625 dockerd[2368]: time="2025-08-19T00:12:23.093554064Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 19 00:12:23.113324 dockerd[2368]: time="2025-08-19T00:12:23.113260680Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Aug 19 00:12:23.139324 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2023504649-merged.mount: Deactivated successfully. Aug 19 00:12:23.206594 dockerd[2368]: time="2025-08-19T00:12:23.206349841Z" level=info msg="Loading containers: start." Aug 19 00:12:23.234175 kernel: Initializing XFRM netlink socket Aug 19 00:12:23.584365 (udev-worker)[2390]: Network interface NamePolicy= disabled on kernel command line. Aug 19 00:12:23.657093 systemd-networkd[1830]: docker0: Link UP Aug 19 00:12:23.667848 dockerd[2368]: time="2025-08-19T00:12:23.667683999Z" level=info msg="Loading containers: done." Aug 19 00:12:23.696493 dockerd[2368]: time="2025-08-19T00:12:23.696414087Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 19 00:12:23.696632 dockerd[2368]: time="2025-08-19T00:12:23.696538791Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Aug 19 00:12:23.696727 dockerd[2368]: time="2025-08-19T00:12:23.696684927Z" level=info msg="Initializing buildkit" Aug 19 00:12:23.747993 dockerd[2368]: time="2025-08-19T00:12:23.747931648Z" level=info msg="Completed buildkit initialization" Aug 19 00:12:23.763265 dockerd[2368]: time="2025-08-19T00:12:23.763209016Z" level=info msg="Daemon has completed initialization" Aug 19 00:12:23.764205 dockerd[2368]: time="2025-08-19T00:12:23.763465012Z" level=info msg="API listen on /run/docker.sock" Aug 19 00:12:23.763557 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 19 00:12:24.130761 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3535340431-merged.mount: Deactivated successfully. Aug 19 00:12:25.021609 containerd[1995]: time="2025-08-19T00:12:25.021536510Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Aug 19 00:12:25.685288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount519400795.mount: Deactivated successfully. 
Aug 19 00:12:27.085227 containerd[1995]: time="2025-08-19T00:12:27.085117144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:27.088150 containerd[1995]: time="2025-08-19T00:12:27.088024912Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352613" Aug 19 00:12:27.090782 containerd[1995]: time="2025-08-19T00:12:27.090697480Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:27.097854 containerd[1995]: time="2025-08-19T00:12:27.097758796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:27.099917 containerd[1995]: time="2025-08-19T00:12:27.099605908Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 2.077989922s" Aug 19 00:12:27.099917 containerd[1995]: time="2025-08-19T00:12:27.099674680Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\"" Aug 19 00:12:27.102488 containerd[1995]: time="2025-08-19T00:12:27.102318328Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Aug 19 00:12:28.445510 containerd[1995]: time="2025-08-19T00:12:28.444250207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:28.446086 containerd[1995]: time="2025-08-19T00:12:28.445920955Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536977" Aug 19 00:12:28.447422 containerd[1995]: time="2025-08-19T00:12:28.447348115Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:28.451471 containerd[1995]: time="2025-08-19T00:12:28.451376371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:28.454173 containerd[1995]: time="2025-08-19T00:12:28.453297487Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.350817231s" Aug 19 00:12:28.454173 containerd[1995]: time="2025-08-19T00:12:28.453354871Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\"" Aug 19 00:12:28.455063 
containerd[1995]: time="2025-08-19T00:12:28.455016895Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Aug 19 00:12:29.670505 containerd[1995]: time="2025-08-19T00:12:29.670423389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:29.672224 containerd[1995]: time="2025-08-19T00:12:29.672164505Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292014" Aug 19 00:12:29.673589 containerd[1995]: time="2025-08-19T00:12:29.673470453Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:29.679106 containerd[1995]: time="2025-08-19T00:12:29.679000053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:29.681225 containerd[1995]: time="2025-08-19T00:12:29.680775033Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.22570157s" Aug 19 00:12:29.681225 containerd[1995]: time="2025-08-19T00:12:29.680836233Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\"" Aug 19 00:12:29.681664 containerd[1995]: time="2025-08-19T00:12:29.681611193Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Aug 19 00:12:30.806006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 19 00:12:30.810472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:12:31.002049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2867934566.mount: Deactivated successfully. Aug 19 00:12:31.199400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:12:31.209862 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 00:12:31.304478 kubelet[2659]: E0819 00:12:31.304339 2659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 00:12:31.313815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 00:12:31.314181 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 00:12:31.314998 systemd[1]: kubelet.service: Consumed 330ms CPU time, 106.9M memory peak. 
Aug 19 00:12:31.711397 containerd[1995]: time="2025-08-19T00:12:31.711305207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:31.712995 containerd[1995]: time="2025-08-19T00:12:31.712926839Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199959" Aug 19 00:12:31.714618 containerd[1995]: time="2025-08-19T00:12:31.714544967Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:31.717778 containerd[1995]: time="2025-08-19T00:12:31.717701831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:31.719773 containerd[1995]: time="2025-08-19T00:12:31.719037155Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 2.037367138s" Aug 19 00:12:31.719773 containerd[1995]: time="2025-08-19T00:12:31.719091527Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\"" Aug 19 00:12:31.720050 containerd[1995]: time="2025-08-19T00:12:31.719971475Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 19 00:12:32.260708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2721085067.mount: Deactivated successfully. 
Aug 19 00:12:33.971537 containerd[1995]: time="2025-08-19T00:12:33.971216750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:33.988746 containerd[1995]: time="2025-08-19T00:12:33.988654406Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Aug 19 00:12:33.998693 containerd[1995]: time="2025-08-19T00:12:33.998586099Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:34.013278 containerd[1995]: time="2025-08-19T00:12:34.013189319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:34.015479 containerd[1995]: time="2025-08-19T00:12:34.015298847Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.295235296s" Aug 19 00:12:34.015479 containerd[1995]: time="2025-08-19T00:12:34.015353627Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Aug 19 00:12:34.017815 containerd[1995]: time="2025-08-19T00:12:34.017718887Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 19 00:12:34.480106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1759766641.mount: Deactivated successfully. 
Aug 19 00:12:34.486609 containerd[1995]: time="2025-08-19T00:12:34.486519793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 00:12:34.488426 containerd[1995]: time="2025-08-19T00:12:34.488363653Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Aug 19 00:12:34.489467 containerd[1995]: time="2025-08-19T00:12:34.489409849Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 00:12:34.494115 containerd[1995]: time="2025-08-19T00:12:34.494036377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 00:12:34.495582 containerd[1995]: time="2025-08-19T00:12:34.495379729Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 477.386234ms" Aug 19 00:12:34.495582 containerd[1995]: time="2025-08-19T00:12:34.495433849Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 19 00:12:34.496474 containerd[1995]: time="2025-08-19T00:12:34.496009105Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Aug 19 00:12:34.956566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774119085.mount: Deactivated successfully. 
Aug 19 00:12:37.003298 containerd[1995]: time="2025-08-19T00:12:37.003233341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:37.005113 containerd[1995]: time="2025-08-19T00:12:37.005048257Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465295" Aug 19 00:12:37.005956 containerd[1995]: time="2025-08-19T00:12:37.005860573Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:37.010892 containerd[1995]: time="2025-08-19T00:12:37.010842157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:12:37.013226 containerd[1995]: time="2025-08-19T00:12:37.013029205Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.516976036s" Aug 19 00:12:37.013226 containerd[1995]: time="2025-08-19T00:12:37.013079737Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Aug 19 00:12:41.564762 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 19 00:12:41.569491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:12:42.016378 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:12:42.030607 (kubelet)[2807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 00:12:42.108155 kubelet[2807]: E0819 00:12:42.107191 2807 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 00:12:42.111676 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 00:12:42.111998 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 00:12:42.113242 systemd[1]: kubelet.service: Consumed 289ms CPU time, 106.8M memory peak. Aug 19 00:12:46.901228 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 19 00:12:47.489050 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:12:47.489462 systemd[1]: kubelet.service: Consumed 289ms CPU time, 106.8M memory peak. Aug 19 00:12:47.493736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:12:47.551363 systemd[1]: Reload requested from client PID 2824 ('systemctl') (unit session-7.scope)... Aug 19 00:12:47.551400 systemd[1]: Reloading... Aug 19 00:12:47.757170 zram_generator::config[2871]: No configuration found. Aug 19 00:12:48.211732 systemd[1]: Reloading finished in 659 ms. 
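As a rough sanity check on the pull timings containerd reports above, pairing each "Pulled image ... size ... in ..." size with its duration gives effective rates in the tens of MiB/s; the etcd layer set, for example, works out to roughly 26-27 MiB/s. A small sketch using numbers copied from the log (nothing here is measured independently):

```python
# Back-of-the-envelope download rates for the image pulls logged above.
# (size_bytes, seconds) pairs are taken verbatim from the "Pulled image" lines.
pulls = {
    "kube-apiserver:v1.33.4":          (27349413, 2.077989922),
    "kube-controller-manager:v1.33.4": (25093155, 1.350817231),
    "kube-scheduler:v1.33.4":          (19848210, 1.225701570),
    "kube-proxy:v1.33.4":              (28198978, 2.037367138),
    "coredns:v1.12.0":                 (19148915, 2.295235296),
    "etcd:3.5.21-0":                   (70026017, 2.516976036),
}

for image, (size_bytes, seconds) in pulls.items():
    rate_mib = size_bytes / seconds / (1024 * 1024)
    print(f"{image:<34} {rate_mib:6.1f} MiB/s")
```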
Aug 19 00:12:48.247594 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 19 00:12:48.247773 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 19 00:12:48.249188 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:12:48.249282 systemd[1]: kubelet.service: Consumed 187ms CPU time, 89.3M memory peak. Aug 19 00:12:48.255800 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:12:48.993884 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:12:49.009757 (kubelet)[2928]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 00:12:49.082059 kubelet[2928]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 00:12:49.082059 kubelet[2928]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 19 00:12:49.082059 kubelet[2928]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 00:12:49.082617 kubelet[2928]: I0819 00:12:49.082258 2928 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 00:12:50.963657 kubelet[2928]: I0819 00:12:50.963591 2928 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 19 00:12:50.963657 kubelet[2928]: I0819 00:12:50.963638 2928 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 00:12:50.964303 kubelet[2928]: I0819 00:12:50.964003 2928 server.go:956] "Client rotation is on, will bootstrap in background" Aug 19 00:12:51.022952 kubelet[2928]: E0819 00:12:51.022368 2928 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.18.236:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.236:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 19 00:12:51.024415 kubelet[2928]: I0819 00:12:51.024356 2928 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 00:12:51.041419 kubelet[2928]: I0819 00:12:51.041376 2928 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 00:12:51.047554 kubelet[2928]: I0819 00:12:51.047499 2928 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 19 00:12:51.048198 kubelet[2928]: I0819 00:12:51.048118 2928 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 00:12:51.048463 kubelet[2928]: I0819 00:12:51.048199 2928 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-236","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 00:12:51.048621 kubelet[2928]: I0819 00:12:51.048605 2928 topology_manager.go:138] "Creating topology manager with none policy" Aug 19 00:12:51.048682 kubelet[2928]: I0819 00:12:51.048627 2928 container_manager_linux.go:303] "Creating device plugin manager" Aug 19 00:12:51.050523 kubelet[2928]: I0819 00:12:51.050475 2928 state_mem.go:36] "Initialized new in-memory state store" Aug 19 00:12:51.057057 kubelet[2928]: I0819 00:12:51.056852 2928 kubelet.go:480] "Attempting to sync node with API server" Aug 19 00:12:51.057057 kubelet[2928]: I0819 00:12:51.056902 2928 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 00:12:51.057057 kubelet[2928]: I0819 00:12:51.056949 2928 kubelet.go:386] "Adding apiserver pod source" Aug 19 00:12:51.057057 kubelet[2928]: I0819 00:12:51.056976 2928 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 00:12:51.061792 kubelet[2928]: I0819 00:12:51.061254 2928 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 00:12:51.062741 kubelet[2928]: I0819 00:12:51.062707 2928 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 19 00:12:51.063072 kubelet[2928]: W0819 00:12:51.063051 2928 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Aug 19 00:12:51.068919 kubelet[2928]: I0819 00:12:51.068890 2928 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 19 00:12:51.069111 kubelet[2928]: I0819 00:12:51.069092 2928 server.go:1289] "Started kubelet" Aug 19 00:12:51.069533 kubelet[2928]: E0819 00:12:51.069496 2928 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.18.236:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-236&limit=500&resourceVersion=0\": dial tcp 172.31.18.236:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 19 00:12:51.077421 kubelet[2928]: E0819 00:12:51.077361 2928 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.18.236:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.236:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 19 00:12:51.079600 kubelet[2928]: E0819 00:12:51.077474 2928 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.236:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.236:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-236.185d02acd49d05af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-236,UID:ip-172-31-18-236,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-236,},FirstTimestamp:2025-08-19 00:12:51.069052335 +0000 UTC m=+2.051880371,LastTimestamp:2025-08-19 00:12:51.069052335 +0000 UTC m=+2.051880371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-236,}" Aug 19 00:12:51.082759 kubelet[2928]: I0819 00:12:51.081623 2928 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 00:12:51.084404 kubelet[2928]: I0819 00:12:51.084332 2928 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 00:12:51.088712 kubelet[2928]: I0819 00:12:51.088664 2928 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 00:12:51.089628 kubelet[2928]: I0819 00:12:51.089573 2928 server.go:317] "Adding debug handlers to kubelet server" Aug 19 00:12:51.093440 kubelet[2928]: I0819 00:12:51.093333 2928 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 19 00:12:51.093810 kubelet[2928]: I0819 00:12:51.093771 2928 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 00:12:51.094651 kubelet[2928]: I0819 00:12:51.094622 2928 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 19 00:12:51.096485 kubelet[2928]: E0819 00:12:51.095559 2928 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-236\" not found" Aug 19 00:12:51.098701 kubelet[2928]: I0819 00:12:51.098658 2928 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 19 00:12:51.098936 kubelet[2928]: I0819 00:12:51.098915 2928 reconciler.go:26] "Reconciler: start to sync state" Aug 19 00:12:51.099930 kubelet[2928]: E0819 00:12:51.099884 2928 reflector.go:200] "Failed 
to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.18.236:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.236:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 19 00:12:51.100288 kubelet[2928]: E0819 00:12:51.100244 2928 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-236?timeout=10s\": dial tcp 172.31.18.236:6443: connect: connection refused" interval="200ms" Aug 19 00:12:51.105169 kubelet[2928]: I0819 00:12:51.105112 2928 factory.go:223] Registration of the systemd container factory successfully Aug 19 00:12:51.106085 kubelet[2928]: I0819 00:12:51.106044 2928 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 00:12:51.108489 kubelet[2928]: I0819 00:12:51.108456 2928 factory.go:223] Registration of the containerd container factory successfully Aug 19 00:12:51.130383 kubelet[2928]: I0819 00:12:51.130298 2928 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 19 00:12:51.132625 kubelet[2928]: I0819 00:12:51.132561 2928 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 19 00:12:51.132625 kubelet[2928]: I0819 00:12:51.132609 2928 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 19 00:12:51.132792 kubelet[2928]: I0819 00:12:51.132647 2928 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 19 00:12:51.132792 kubelet[2928]: I0819 00:12:51.132663 2928 kubelet.go:2436] "Starting kubelet main sync loop" Aug 19 00:12:51.132792 kubelet[2928]: E0819 00:12:51.132729 2928 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 00:12:51.137880 kubelet[2928]: E0819 00:12:51.137365 2928 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 00:12:51.138047 kubelet[2928]: E0819 00:12:51.137996 2928 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.18.236:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.236:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 19 00:12:51.153398 kubelet[2928]: I0819 00:12:51.153352 2928 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 19 00:12:51.153398 kubelet[2928]: I0819 00:12:51.153387 2928 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 19 00:12:51.153664 kubelet[2928]: I0819 00:12:51.153420 2928 state_mem.go:36] "Initialized new in-memory state store" Aug 19 00:12:51.158475 kubelet[2928]: I0819 00:12:51.158426 2928 policy_none.go:49] "None policy: Start" Aug 19 00:12:51.158475 kubelet[2928]: I0819 00:12:51.158470 2928 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 19 00:12:51.158629 kubelet[2928]: I0819 00:12:51.158495 2928 state_mem.go:35] "Initializing new in-memory state store" Aug 19 00:12:51.170313 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 19 00:12:51.187109 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 19 00:12:51.193576 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 19 00:12:51.196169 kubelet[2928]: E0819 00:12:51.196050 2928 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-236\" not found" Aug 19 00:12:51.202084 kubelet[2928]: E0819 00:12:51.202048 2928 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 19 00:12:51.202526 kubelet[2928]: I0819 00:12:51.202505 2928 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 00:12:51.202675 kubelet[2928]: I0819 00:12:51.202625 2928 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 00:12:51.203756 kubelet[2928]: I0819 00:12:51.203453 2928 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 00:12:51.205195 kubelet[2928]: E0819 00:12:51.204981 2928 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 19 00:12:51.205195 kubelet[2928]: E0819 00:12:51.205048 2928 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-236\" not found" Aug 19 00:12:51.256364 systemd[1]: Created slice kubepods-burstable-pod3aa0cefd5f8154698d70a9a1b628a899.slice - libcontainer container kubepods-burstable-pod3aa0cefd5f8154698d70a9a1b628a899.slice. Aug 19 00:12:51.274505 kubelet[2928]: E0819 00:12:51.273898 2928 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-236\" not found" node="ip-172-31-18-236" Aug 19 00:12:51.278723 systemd[1]: Created slice kubepods-burstable-pod7414e0d11a476ae5f4b21944a7ad131c.slice - libcontainer container kubepods-burstable-pod7414e0d11a476ae5f4b21944a7ad131c.slice. 
Aug 19 00:12:51.290859 kubelet[2928]: E0819 00:12:51.290803 2928 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-236\" not found" node="ip-172-31-18-236" Aug 19 00:12:51.294273 systemd[1]: Created slice kubepods-burstable-pod4899a8bf67bddfe44f7b065cd5bac808.slice - libcontainer container kubepods-burstable-pod4899a8bf67bddfe44f7b065cd5bac808.slice. Aug 19 00:12:51.300177 kubelet[2928]: E0819 00:12:51.299928 2928 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-236\" not found" node="ip-172-31-18-236" Aug 19 00:12:51.300952 kubelet[2928]: E0819 00:12:51.300883 2928 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-236?timeout=10s\": dial tcp 172.31.18.236:6443: connect: connection refused" interval="400ms" Aug 19 00:12:51.305462 kubelet[2928]: I0819 00:12:51.305409 2928 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-236" Aug 19 00:12:51.306158 kubelet[2928]: E0819 00:12:51.306092 2928 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.236:6443/api/v1/nodes\": dial tcp 172.31.18.236:6443: connect: connection refused" node="ip-172-31-18-236" Aug 19 00:12:51.399877 kubelet[2928]: I0819 00:12:51.399798 2928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7414e0d11a476ae5f4b21944a7ad131c-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-236\" (UID: \"7414e0d11a476ae5f4b21944a7ad131c\") " pod="kube-system/kube-controller-manager-ip-172-31-18-236" Aug 19 00:12:51.399987 kubelet[2928]: I0819 00:12:51.399939 2928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7414e0d11a476ae5f4b21944a7ad131c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-236\" (UID: \"7414e0d11a476ae5f4b21944a7ad131c\") " pod="kube-system/kube-controller-manager-ip-172-31-18-236" Aug 19 00:12:51.400040 kubelet[2928]: I0819 00:12:51.400010 2928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7414e0d11a476ae5f4b21944a7ad131c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-236\" (UID: \"7414e0d11a476ae5f4b21944a7ad131c\") " pod="kube-system/kube-controller-manager-ip-172-31-18-236" Aug 19 00:12:51.400114 kubelet[2928]: I0819 00:12:51.400070 2928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7414e0d11a476ae5f4b21944a7ad131c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-236\" (UID: \"7414e0d11a476ae5f4b21944a7ad131c\") " pod="kube-system/kube-controller-manager-ip-172-31-18-236" Aug 19 00:12:51.400212 kubelet[2928]: I0819 00:12:51.400115 2928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4899a8bf67bddfe44f7b065cd5bac808-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-236\" (UID: \"4899a8bf67bddfe44f7b065cd5bac808\") " pod="kube-system/kube-scheduler-ip-172-31-18-236" Aug 19 00:12:51.400212 kubelet[2928]: I0819 00:12:51.400193 2928 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3aa0cefd5f8154698d70a9a1b628a899-ca-certs\") pod \"kube-apiserver-ip-172-31-18-236\" (UID: \"3aa0cefd5f8154698d70a9a1b628a899\") " pod="kube-system/kube-apiserver-ip-172-31-18-236" Aug 19 00:12:51.400311 kubelet[2928]: I0819 00:12:51.400259 2928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3aa0cefd5f8154698d70a9a1b628a899-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-236\" (UID: \"3aa0cefd5f8154698d70a9a1b628a899\") " pod="kube-system/kube-apiserver-ip-172-31-18-236" Aug 19 00:12:51.400311 kubelet[2928]: I0819 00:12:51.400301 2928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3aa0cefd5f8154698d70a9a1b628a899-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-236\" (UID: \"3aa0cefd5f8154698d70a9a1b628a899\") " pod="kube-system/kube-apiserver-ip-172-31-18-236" Aug 19 00:12:51.400407 kubelet[2928]: I0819 00:12:51.400363 2928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7414e0d11a476ae5f4b21944a7ad131c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-236\" (UID: \"7414e0d11a476ae5f4b21944a7ad131c\") " pod="kube-system/kube-controller-manager-ip-172-31-18-236" Aug 19 00:12:51.508976 kubelet[2928]: I0819 00:12:51.508713 2928 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-236" Aug 19 00:12:51.509818 kubelet[2928]: E0819 00:12:51.509748 2928 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.236:6443/api/v1/nodes\": dial tcp 172.31.18.236:6443: connect: connection refused" node="ip-172-31-18-236" Aug 19 00:12:51.575539 containerd[1995]: time="2025-08-19T00:12:51.575478582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-236,Uid:3aa0cefd5f8154698d70a9a1b628a899,Namespace:kube-system,Attempt:0,}" Aug 19 00:12:51.592356 containerd[1995]: time="2025-08-19T00:12:51.592093530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-236,Uid:7414e0d11a476ae5f4b21944a7ad131c,Namespace:kube-system,Attempt:0,}" Aug 19 00:12:51.602390 containerd[1995]: time="2025-08-19T00:12:51.602341938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-236,Uid:4899a8bf67bddfe44f7b065cd5bac808,Namespace:kube-system,Attempt:0,}" Aug 19 00:12:51.629355 containerd[1995]: time="2025-08-19T00:12:51.629289330Z" level=info msg="connecting to shim e250ac44b45ad69b7d6595315e7f14cd62a02196d1a5df8afa1c2b67b57a15c8" address="unix:///run/containerd/s/6c9292358e43cf9fd1c1dc9a23c47a43c1a8352fa59e203364902690990cf693" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:12:51.691474 systemd[1]: Started cri-containerd-e250ac44b45ad69b7d6595315e7f14cd62a02196d1a5df8afa1c2b67b57a15c8.scope - libcontainer container e250ac44b45ad69b7d6595315e7f14cd62a02196d1a5df8afa1c2b67b57a15c8. 
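The three RunPodSandbox calls above are the control-plane static pods the kubelet picked up from the static pod path it logged earlier (/etc/kubernetes/manifests). A short sketch for inspecting them from the node, assuming crictl is configured as before:

    # manifests on disk, and the sandboxes/containers created from them
    ls /etc/kubernetes/manifests/
    crictl pods
    crictl ps -a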
Aug 19 00:12:51.702101 kubelet[2928]: E0819 00:12:51.702028 2928 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-236?timeout=10s\": dial tcp 172.31.18.236:6443: connect: connection refused" interval="800ms" Aug 19 00:12:51.712348 containerd[1995]: time="2025-08-19T00:12:51.712076034Z" level=info msg="connecting to shim 44a67042b2cbf4e6748d9191b8df3275b9b6da4b4bca90e587948689bcb2efee" address="unix:///run/containerd/s/4f1343c0e9040ce84c1d77dc745f0362199fef1ff4a9070a98b94b0317231be0" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:12:51.721286 containerd[1995]: time="2025-08-19T00:12:51.721211563Z" level=info msg="connecting to shim 58184dd2d1da799efdff78e65e8e04d54b3ce9b43c6aad9d27103e86a7ac13cb" address="unix:///run/containerd/s/f3cf8201308767a7237f2a6b86f57d1c5ee7bf672e99b2fe9e230ebc8c9c661e" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:12:51.793453 systemd[1]: Started cri-containerd-44a67042b2cbf4e6748d9191b8df3275b9b6da4b4bca90e587948689bcb2efee.scope - libcontainer container 44a67042b2cbf4e6748d9191b8df3275b9b6da4b4bca90e587948689bcb2efee. Aug 19 00:12:51.805188 systemd[1]: Started cri-containerd-58184dd2d1da799efdff78e65e8e04d54b3ce9b43c6aad9d27103e86a7ac13cb.scope - libcontainer container 58184dd2d1da799efdff78e65e8e04d54b3ce9b43c6aad9d27103e86a7ac13cb. Aug 19 00:12:51.836714 containerd[1995]: time="2025-08-19T00:12:51.836640415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-236,Uid:3aa0cefd5f8154698d70a9a1b628a899,Namespace:kube-system,Attempt:0,} returns sandbox id \"e250ac44b45ad69b7d6595315e7f14cd62a02196d1a5df8afa1c2b67b57a15c8\"" Aug 19 00:12:51.848458 containerd[1995]: time="2025-08-19T00:12:51.848390203Z" level=info msg="CreateContainer within sandbox \"e250ac44b45ad69b7d6595315e7f14cd62a02196d1a5df8afa1c2b67b57a15c8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 19 00:12:51.879154 containerd[1995]: time="2025-08-19T00:12:51.878019619Z" level=info msg="Container e0e766212c63c0f41956ff8d9c529a7b20b42608b5eb6fec81de25a1980a6a02: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:12:51.903027 containerd[1995]: time="2025-08-19T00:12:51.902954623Z" level=info msg="CreateContainer within sandbox \"e250ac44b45ad69b7d6595315e7f14cd62a02196d1a5df8afa1c2b67b57a15c8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e0e766212c63c0f41956ff8d9c529a7b20b42608b5eb6fec81de25a1980a6a02\"" Aug 19 00:12:51.905971 containerd[1995]: time="2025-08-19T00:12:51.905642275Z" level=info msg="StartContainer for \"e0e766212c63c0f41956ff8d9c529a7b20b42608b5eb6fec81de25a1980a6a02\"" Aug 19 00:12:51.906148 kubelet[2928]: E0819 00:12:51.905841 2928 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.18.236:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.236:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 19 00:12:51.909570 containerd[1995]: time="2025-08-19T00:12:51.909457807Z" level=info msg="connecting to shim e0e766212c63c0f41956ff8d9c529a7b20b42608b5eb6fec81de25a1980a6a02" address="unix:///run/containerd/s/6c9292358e43cf9fd1c1dc9a23c47a43c1a8352fa59e203364902690990cf693" protocol=ttrpc version=3 Aug 19 00:12:51.915185 kubelet[2928]: I0819 00:12:51.914898 2928 kubelet_node_status.go:75] "Attempting to 
register node" node="ip-172-31-18-236" Aug 19 00:12:51.916502 kubelet[2928]: E0819 00:12:51.916421 2928 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.236:6443/api/v1/nodes\": dial tcp 172.31.18.236:6443: connect: connection refused" node="ip-172-31-18-236" Aug 19 00:12:51.938673 containerd[1995]: time="2025-08-19T00:12:51.938617340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-236,Uid:7414e0d11a476ae5f4b21944a7ad131c,Namespace:kube-system,Attempt:0,} returns sandbox id \"44a67042b2cbf4e6748d9191b8df3275b9b6da4b4bca90e587948689bcb2efee\"" Aug 19 00:12:51.952249 containerd[1995]: time="2025-08-19T00:12:51.952160120Z" level=info msg="CreateContainer within sandbox \"44a67042b2cbf4e6748d9191b8df3275b9b6da4b4bca90e587948689bcb2efee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 19 00:12:51.974346 containerd[1995]: time="2025-08-19T00:12:51.974188472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-236,Uid:4899a8bf67bddfe44f7b065cd5bac808,Namespace:kube-system,Attempt:0,} returns sandbox id \"58184dd2d1da799efdff78e65e8e04d54b3ce9b43c6aad9d27103e86a7ac13cb\"" Aug 19 00:12:51.975495 systemd[1]: Started cri-containerd-e0e766212c63c0f41956ff8d9c529a7b20b42608b5eb6fec81de25a1980a6a02.scope - libcontainer container e0e766212c63c0f41956ff8d9c529a7b20b42608b5eb6fec81de25a1980a6a02. Aug 19 00:12:51.986941 containerd[1995]: time="2025-08-19T00:12:51.986782028Z" level=info msg="Container 5435f9a185dd562f4c2184ad16223b2b629832f6ec66d26519a3274d7a9d813c: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:12:52.001980 containerd[1995]: time="2025-08-19T00:12:52.001916308Z" level=info msg="CreateContainer within sandbox \"58184dd2d1da799efdff78e65e8e04d54b3ce9b43c6aad9d27103e86a7ac13cb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 19 00:12:52.012789 containerd[1995]: time="2025-08-19T00:12:52.012716800Z" level=info msg="CreateContainer within sandbox \"44a67042b2cbf4e6748d9191b8df3275b9b6da4b4bca90e587948689bcb2efee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5435f9a185dd562f4c2184ad16223b2b629832f6ec66d26519a3274d7a9d813c\"" Aug 19 00:12:52.014286 containerd[1995]: time="2025-08-19T00:12:52.014091148Z" level=info msg="StartContainer for \"5435f9a185dd562f4c2184ad16223b2b629832f6ec66d26519a3274d7a9d813c\"" Aug 19 00:12:52.019301 containerd[1995]: time="2025-08-19T00:12:52.017098540Z" level=info msg="connecting to shim 5435f9a185dd562f4c2184ad16223b2b629832f6ec66d26519a3274d7a9d813c" address="unix:///run/containerd/s/4f1343c0e9040ce84c1d77dc745f0362199fef1ff4a9070a98b94b0317231be0" protocol=ttrpc version=3 Aug 19 00:12:52.027504 containerd[1995]: time="2025-08-19T00:12:52.027402448Z" level=info msg="Container 579244556ed57e969ce985bbba1c07267d307f594d3d4af0082e7eff22d553d2: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:12:52.060747 systemd[1]: Started cri-containerd-5435f9a185dd562f4c2184ad16223b2b629832f6ec66d26519a3274d7a9d813c.scope - libcontainer container 5435f9a185dd562f4c2184ad16223b2b629832f6ec66d26519a3274d7a9d813c. 
Aug 19 00:12:52.066108 containerd[1995]: time="2025-08-19T00:12:52.066053848Z" level=info msg="CreateContainer within sandbox \"58184dd2d1da799efdff78e65e8e04d54b3ce9b43c6aad9d27103e86a7ac13cb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"579244556ed57e969ce985bbba1c07267d307f594d3d4af0082e7eff22d553d2\"" Aug 19 00:12:52.070045 containerd[1995]: time="2025-08-19T00:12:52.069860188Z" level=info msg="StartContainer for \"579244556ed57e969ce985bbba1c07267d307f594d3d4af0082e7eff22d553d2\"" Aug 19 00:12:52.076058 containerd[1995]: time="2025-08-19T00:12:52.075969364Z" level=info msg="connecting to shim 579244556ed57e969ce985bbba1c07267d307f594d3d4af0082e7eff22d553d2" address="unix:///run/containerd/s/f3cf8201308767a7237f2a6b86f57d1c5ee7bf672e99b2fe9e230ebc8c9c661e" protocol=ttrpc version=3 Aug 19 00:12:52.097559 kubelet[2928]: E0819 00:12:52.097459 2928 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.18.236:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.236:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 19 00:12:52.146447 systemd[1]: Started cri-containerd-579244556ed57e969ce985bbba1c07267d307f594d3d4af0082e7eff22d553d2.scope - libcontainer container 579244556ed57e969ce985bbba1c07267d307f594d3d4af0082e7eff22d553d2. Aug 19 00:12:52.151639 containerd[1995]: time="2025-08-19T00:12:52.151576865Z" level=info msg="StartContainer for \"e0e766212c63c0f41956ff8d9c529a7b20b42608b5eb6fec81de25a1980a6a02\" returns successfully" Aug 19 00:12:52.191968 kubelet[2928]: E0819 00:12:52.191930 2928 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-236\" not found" node="ip-172-31-18-236" Aug 19 00:12:52.206355 containerd[1995]: time="2025-08-19T00:12:52.206290085Z" level=info msg="StartContainer for \"5435f9a185dd562f4c2184ad16223b2b629832f6ec66d26519a3274d7a9d813c\" returns successfully" Aug 19 00:12:52.366296 containerd[1995]: time="2025-08-19T00:12:52.365984322Z" level=info msg="StartContainer for \"579244556ed57e969ce985bbba1c07267d307f594d3d4af0082e7eff22d553d2\" returns successfully" Aug 19 00:12:52.719786 kubelet[2928]: I0819 00:12:52.719745 2928 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-236" Aug 19 00:12:53.198892 kubelet[2928]: E0819 00:12:53.198838 2928 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-236\" not found" node="ip-172-31-18-236" Aug 19 00:12:53.205160 kubelet[2928]: E0819 00:12:53.204925 2928 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-236\" not found" node="ip-172-31-18-236" Aug 19 00:12:53.205310 kubelet[2928]: E0819 00:12:53.205285 2928 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-236\" not found" node="ip-172-31-18-236" Aug 19 00:12:54.210223 kubelet[2928]: E0819 00:12:54.209455 2928 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-236\" not found" node="ip-172-31-18-236" Aug 19 00:12:54.210223 kubelet[2928]: E0819 00:12:54.209963 2928 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ip-172-31-18-236\" not found" node="ip-172-31-18-236" Aug 19 00:12:55.210958 kubelet[2928]: E0819 00:12:55.210891 2928 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-236\" not found" node="ip-172-31-18-236" Aug 19 00:12:55.704187 kubelet[2928]: E0819 00:12:55.704112 2928 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-236\" not found" node="ip-172-31-18-236" Aug 19 00:12:55.766362 kubelet[2928]: I0819 00:12:55.766296 2928 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-236" Aug 19 00:12:55.798377 kubelet[2928]: I0819 00:12:55.796228 2928 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-236" Aug 19 00:12:55.839914 kubelet[2928]: E0819 00:12:55.839867 2928 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-236\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-18-236" Aug 19 00:12:55.841245 kubelet[2928]: I0819 00:12:55.841204 2928 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-236" Aug 19 00:12:55.854261 kubelet[2928]: E0819 00:12:55.854211 2928 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-236\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-18-236" Aug 19 00:12:55.854454 kubelet[2928]: I0819 00:12:55.854432 2928 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-236" Aug 19 00:12:55.861058 kubelet[2928]: E0819 00:12:55.861003 2928 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-236\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-18-236" Aug 19 00:12:56.077561 kubelet[2928]: I0819 00:12:56.076785 2928 apiserver.go:52] "Watching apiserver" Aug 19 00:12:56.099616 kubelet[2928]: I0819 00:12:56.099494 2928 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 19 00:12:57.670293 kubelet[2928]: I0819 00:12:57.670235 2928 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-236" Aug 19 00:12:57.691841 kubelet[2928]: I0819 00:12:57.691780 2928 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-236" Aug 19 00:12:57.995078 systemd[1]: Reload requested from client PID 3212 ('systemctl') (unit session-7.scope)... Aug 19 00:12:57.995101 systemd[1]: Reloading... Aug 19 00:12:58.187203 zram_generator::config[3260]: No configuration found. Aug 19 00:12:58.678639 systemd[1]: Reloading finished in 682 ms. Aug 19 00:12:58.731305 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:12:58.749761 systemd[1]: kubelet.service: Deactivated successfully. Aug 19 00:12:58.750400 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:12:58.750564 systemd[1]: kubelet.service: Consumed 2.797s CPU time, 126.5M memory peak. Aug 19 00:12:58.755328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:12:59.102967 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 19 00:12:59.122709 (kubelet)[3316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 00:12:59.222150 kubelet[3316]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 00:12:59.222742 kubelet[3316]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 19 00:12:59.222830 kubelet[3316]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 00:12:59.223186 kubelet[3316]: I0819 00:12:59.223082 3316 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 00:12:59.240182 kubelet[3316]: I0819 00:12:59.239889 3316 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 19 00:12:59.240182 kubelet[3316]: I0819 00:12:59.239952 3316 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 00:12:59.241201 kubelet[3316]: I0819 00:12:59.241041 3316 server.go:956] "Client rotation is on, will bootstrap in background" Aug 19 00:12:59.244503 kubelet[3316]: I0819 00:12:59.244463 3316 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Aug 19 00:12:59.250677 kubelet[3316]: I0819 00:12:59.250012 3316 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 00:12:59.268382 kubelet[3316]: I0819 00:12:59.268325 3316 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 00:12:59.278669 kubelet[3316]: I0819 00:12:59.277896 3316 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 19 00:12:59.278669 kubelet[3316]: I0819 00:12:59.278313 3316 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 00:12:59.278851 kubelet[3316]: I0819 00:12:59.278352 3316 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-236","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 00:12:59.278851 kubelet[3316]: I0819 00:12:59.278792 3316 topology_manager.go:138] "Creating topology manager with none policy" Aug 19 00:12:59.278851 kubelet[3316]: I0819 00:12:59.278813 3316 container_manager_linux.go:303] "Creating device plugin manager" Aug 19 00:12:59.279076 kubelet[3316]: I0819 00:12:59.278882 3316 state_mem.go:36] "Initialized new in-memory state store" Aug 19 00:12:59.279174 kubelet[3316]: I0819 00:12:59.279115 3316 kubelet.go:480] "Attempting to sync node with API server" Aug 19 00:12:59.279241 kubelet[3316]: I0819 00:12:59.279198 3316 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 00:12:59.279241 kubelet[3316]: I0819 00:12:59.279251 3316 kubelet.go:386] "Adding apiserver pod source" Aug 19 00:12:59.279241 kubelet[3316]: I0819 00:12:59.279282 3316 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 00:12:59.290873 kubelet[3316]: I0819 00:12:59.290409 3316 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 00:12:59.293703 kubelet[3316]: I0819 00:12:59.292816 3316 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 19 00:12:59.303284 kubelet[3316]: I0819 00:12:59.302458 3316 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 19 00:12:59.303284 kubelet[3316]: I0819 00:12:59.302529 3316 server.go:1289] "Started kubelet" Aug 19 00:12:59.308566 kubelet[3316]: I0819 00:12:59.308519 3316 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 00:12:59.317528 kubelet[3316]: I0819 
00:12:59.317460 3316 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 00:12:59.319346 kubelet[3316]: I0819 00:12:59.319311 3316 server.go:317] "Adding debug handlers to kubelet server" Aug 19 00:12:59.327106 kubelet[3316]: I0819 00:12:59.326005 3316 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 19 00:12:59.327106 kubelet[3316]: E0819 00:12:59.326460 3316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-236\" not found" Aug 19 00:12:59.331201 kubelet[3316]: I0819 00:12:59.330027 3316 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 19 00:12:59.331201 kubelet[3316]: I0819 00:12:59.330294 3316 reconciler.go:26] "Reconciler: start to sync state" Aug 19 00:12:59.334196 kubelet[3316]: I0819 00:12:59.332072 3316 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 19 00:12:59.336209 kubelet[3316]: I0819 00:12:59.334627 3316 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 00:12:59.337914 kubelet[3316]: I0819 00:12:59.337667 3316 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 00:12:59.364350 kubelet[3316]: I0819 00:12:59.363612 3316 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 19 00:12:59.370537 kubelet[3316]: I0819 00:12:59.370499 3316 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 19 00:12:59.370686 kubelet[3316]: I0819 00:12:59.370668 3316 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 19 00:12:59.370816 kubelet[3316]: I0819 00:12:59.370797 3316 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 19 00:12:59.371189 kubelet[3316]: I0819 00:12:59.370885 3316 kubelet.go:2436] "Starting kubelet main sync loop" Aug 19 00:12:59.371189 kubelet[3316]: E0819 00:12:59.370951 3316 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 00:12:59.376038 kubelet[3316]: I0819 00:12:59.375994 3316 factory.go:223] Registration of the containerd container factory successfully Aug 19 00:12:59.376038 kubelet[3316]: I0819 00:12:59.376031 3316 factory.go:223] Registration of the systemd container factory successfully Aug 19 00:12:59.376360 kubelet[3316]: I0819 00:12:59.376260 3316 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 00:12:59.399866 kubelet[3316]: E0819 00:12:59.399820 3316 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 00:12:59.471711 kubelet[3316]: E0819 00:12:59.471661 3316 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 19 00:12:59.491370 kubelet[3316]: I0819 00:12:59.491313 3316 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 19 00:12:59.491370 kubelet[3316]: I0819 00:12:59.491347 3316 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 19 00:12:59.491548 kubelet[3316]: I0819 00:12:59.491384 3316 state_mem.go:36] "Initialized new in-memory state store" Aug 19 00:12:59.491676 kubelet[3316]: I0819 00:12:59.491605 3316 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 19 00:12:59.491676 kubelet[3316]: I0819 00:12:59.491636 3316 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 19 00:12:59.491676 kubelet[3316]: I0819 00:12:59.491674 3316 policy_none.go:49] "None policy: Start" Aug 19 00:12:59.491814 kubelet[3316]: I0819 00:12:59.491702 3316 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 19 00:12:59.491814 kubelet[3316]: I0819 00:12:59.491723 3316 state_mem.go:35] "Initializing new in-memory state store" Aug 19 00:12:59.491913 kubelet[3316]: I0819 00:12:59.491893 3316 state_mem.go:75] "Updated machine memory state" Aug 19 00:12:59.502425 kubelet[3316]: E0819 00:12:59.502377 3316 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 19 00:12:59.504121 kubelet[3316]: I0819 00:12:59.504079 3316 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 00:12:59.504877 kubelet[3316]: I0819 00:12:59.504382 3316 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 00:12:59.505922 kubelet[3316]: I0819 00:12:59.505878 3316 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 00:12:59.510840 kubelet[3316]: E0819 00:12:59.509744 3316 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 19 00:12:59.625529 kubelet[3316]: I0819 00:12:59.625064 3316 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-236" Aug 19 00:12:59.641511 kubelet[3316]: I0819 00:12:59.641453 3316 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-18-236" Aug 19 00:12:59.641646 kubelet[3316]: I0819 00:12:59.641592 3316 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-236" Aug 19 00:12:59.673575 kubelet[3316]: I0819 00:12:59.672941 3316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-236" Aug 19 00:12:59.673869 kubelet[3316]: I0819 00:12:59.673824 3316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-236" Aug 19 00:12:59.675870 kubelet[3316]: I0819 00:12:59.675354 3316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-236" Aug 19 00:12:59.692065 kubelet[3316]: E0819 00:12:59.691826 3316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-236\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-18-236" Aug 19 00:12:59.692208 kubelet[3316]: E0819 00:12:59.692110 3316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-236\" already exists" pod="kube-system/kube-scheduler-ip-172-31-18-236" Aug 19 00:12:59.733167 kubelet[3316]: I0819 00:12:59.732064 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4899a8bf67bddfe44f7b065cd5bac808-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-236\" (UID: \"4899a8bf67bddfe44f7b065cd5bac808\") " pod="kube-system/kube-scheduler-ip-172-31-18-236" Aug 19 00:12:59.733167 kubelet[3316]: I0819 00:12:59.732372 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3aa0cefd5f8154698d70a9a1b628a899-ca-certs\") pod \"kube-apiserver-ip-172-31-18-236\" (UID: \"3aa0cefd5f8154698d70a9a1b628a899\") " pod="kube-system/kube-apiserver-ip-172-31-18-236" Aug 19 00:12:59.733167 kubelet[3316]: I0819 00:12:59.732515 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3aa0cefd5f8154698d70a9a1b628a899-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-236\" (UID: \"3aa0cefd5f8154698d70a9a1b628a899\") " pod="kube-system/kube-apiserver-ip-172-31-18-236" Aug 19 00:12:59.733415 kubelet[3316]: I0819 00:12:59.733288 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7414e0d11a476ae5f4b21944a7ad131c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-236\" (UID: \"7414e0d11a476ae5f4b21944a7ad131c\") " pod="kube-system/kube-controller-manager-ip-172-31-18-236" Aug 19 00:12:59.733660 kubelet[3316]: I0819 00:12:59.733477 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3aa0cefd5f8154698d70a9a1b628a899-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-236\" (UID: \"3aa0cefd5f8154698d70a9a1b628a899\") " pod="kube-system/kube-apiserver-ip-172-31-18-236" Aug 19 00:12:59.733733 kubelet[3316]: I0819 
00:12:59.733618 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7414e0d11a476ae5f4b21944a7ad131c-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-236\" (UID: \"7414e0d11a476ae5f4b21944a7ad131c\") " pod="kube-system/kube-controller-manager-ip-172-31-18-236" Aug 19 00:12:59.733901 kubelet[3316]: I0819 00:12:59.733801 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7414e0d11a476ae5f4b21944a7ad131c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-236\" (UID: \"7414e0d11a476ae5f4b21944a7ad131c\") " pod="kube-system/kube-controller-manager-ip-172-31-18-236" Aug 19 00:12:59.734015 kubelet[3316]: I0819 00:12:59.733969 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7414e0d11a476ae5f4b21944a7ad131c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-236\" (UID: \"7414e0d11a476ae5f4b21944a7ad131c\") " pod="kube-system/kube-controller-manager-ip-172-31-18-236" Aug 19 00:12:59.734829 kubelet[3316]: I0819 00:12:59.734391 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7414e0d11a476ae5f4b21944a7ad131c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-236\" (UID: \"7414e0d11a476ae5f4b21944a7ad131c\") " pod="kube-system/kube-controller-manager-ip-172-31-18-236" Aug 19 00:13:00.287501 kubelet[3316]: I0819 00:13:00.285509 3316 apiserver.go:52] "Watching apiserver" Aug 19 00:13:00.331218 kubelet[3316]: I0819 00:13:00.331123 3316 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 19 00:13:00.442624 kubelet[3316]: I0819 00:13:00.442479 3316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-236" Aug 19 00:13:00.466546 kubelet[3316]: E0819 00:13:00.466244 3316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-236\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-236" Aug 19 00:13:00.510368 kubelet[3316]: I0819 00:13:00.510281 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-236" podStartSLOduration=3.510261614 podStartE2EDuration="3.510261614s" podCreationTimestamp="2025-08-19 00:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:13:00.510225674 +0000 UTC m=+1.376694488" watchObservedRunningTime="2025-08-19 00:13:00.510261614 +0000 UTC m=+1.376730404" Aug 19 00:13:00.532791 kubelet[3316]: I0819 00:13:00.532708 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-236" podStartSLOduration=3.532605626 podStartE2EDuration="3.532605626s" podCreationTimestamp="2025-08-19 00:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:13:00.532414862 +0000 UTC m=+1.398883664" watchObservedRunningTime="2025-08-19 00:13:00.532605626 +0000 UTC m=+1.399074428" Aug 19 00:13:00.572351 update_engine[1979]: I20250819 00:13:00.572178 1979 update_attempter.cc:509] 
Updating boot flags... Aug 19 00:13:00.578539 kubelet[3316]: I0819 00:13:00.578435 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-236" podStartSLOduration=1.578414883 podStartE2EDuration="1.578414883s" podCreationTimestamp="2025-08-19 00:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:13:00.554285474 +0000 UTC m=+1.420754288" watchObservedRunningTime="2025-08-19 00:13:00.578414883 +0000 UTC m=+1.444883673" Aug 19 00:13:04.510546 kubelet[3316]: I0819 00:13:04.510506 3316 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 19 00:13:04.511786 containerd[1995]: time="2025-08-19T00:13:04.511734978Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 19 00:13:04.512952 kubelet[3316]: I0819 00:13:04.512871 3316 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 19 00:13:05.482313 systemd[1]: Created slice kubepods-besteffort-pod6b352638_95be_4614_bca5_9577a22dd10c.slice - libcontainer container kubepods-besteffort-pod6b352638_95be_4614_bca5_9577a22dd10c.slice. Aug 19 00:13:05.572764 kubelet[3316]: I0819 00:13:05.572351 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6b352638-95be-4614-bca5-9577a22dd10c-kube-proxy\") pod \"kube-proxy-xblmh\" (UID: \"6b352638-95be-4614-bca5-9577a22dd10c\") " pod="kube-system/kube-proxy-xblmh" Aug 19 00:13:05.573693 kubelet[3316]: I0819 00:13:05.573525 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b352638-95be-4614-bca5-9577a22dd10c-xtables-lock\") pod \"kube-proxy-xblmh\" (UID: \"6b352638-95be-4614-bca5-9577a22dd10c\") " pod="kube-system/kube-proxy-xblmh" Aug 19 00:13:05.573693 kubelet[3316]: I0819 00:13:05.573588 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b352638-95be-4614-bca5-9577a22dd10c-lib-modules\") pod \"kube-proxy-xblmh\" (UID: \"6b352638-95be-4614-bca5-9577a22dd10c\") " pod="kube-system/kube-proxy-xblmh" Aug 19 00:13:05.573693 kubelet[3316]: I0819 00:13:05.573625 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdv5b\" (UniqueName: \"kubernetes.io/projected/6b352638-95be-4614-bca5-9577a22dd10c-kube-api-access-wdv5b\") pod \"kube-proxy-xblmh\" (UID: \"6b352638-95be-4614-bca5-9577a22dd10c\") " pod="kube-system/kube-proxy-xblmh" Aug 19 00:13:05.736223 systemd[1]: Created slice kubepods-besteffort-pod38e8c82b_53e8_451a_9699_a12325927c91.slice - libcontainer container kubepods-besteffort-pod38e8c82b_53e8_451a_9699_a12325927c91.slice. 
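Alongside the boot-flag update, the kubelet has just applied the Pod CIDR assigned to this node (192.168.0.0/24) and is setting up the kube-proxy pod's volumes. Once the admin kubeconfig is usable (path assumed here to be the kubeadm default /etc/kubernetes/admin.conf), the assignment can be read back from the Node object:

    # confirm the CIDR referenced by the runtime config message above
    kubectl --kubeconfig /etc/kubernetes/admin.conf \
      get node ip-172-31-18-236 -o jsonpath='{.spec.podCIDR}{"\n"}'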
Aug 19 00:13:05.774418 kubelet[3316]: I0819 00:13:05.774260 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc84t\" (UniqueName: \"kubernetes.io/projected/38e8c82b-53e8-451a-9699-a12325927c91-kube-api-access-qc84t\") pod \"tigera-operator-747864d56d-ktx2k\" (UID: \"38e8c82b-53e8-451a-9699-a12325927c91\") " pod="tigera-operator/tigera-operator-747864d56d-ktx2k" Aug 19 00:13:05.774729 kubelet[3316]: I0819 00:13:05.774335 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/38e8c82b-53e8-451a-9699-a12325927c91-var-lib-calico\") pod \"tigera-operator-747864d56d-ktx2k\" (UID: \"38e8c82b-53e8-451a-9699-a12325927c91\") " pod="tigera-operator/tigera-operator-747864d56d-ktx2k" Aug 19 00:13:05.797751 containerd[1995]: time="2025-08-19T00:13:05.797678432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xblmh,Uid:6b352638-95be-4614-bca5-9577a22dd10c,Namespace:kube-system,Attempt:0,}" Aug 19 00:13:05.841495 containerd[1995]: time="2025-08-19T00:13:05.841308105Z" level=info msg="connecting to shim de2cdb1f3cb6604881d0eca886f3a55f6532b43431a560a8c05cded1805cb9b3" address="unix:///run/containerd/s/ecc7cee6aa28e12676a501c8afb9c029436304880f998c2353c143e845bd0e82" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:05.885709 systemd[1]: Started cri-containerd-de2cdb1f3cb6604881d0eca886f3a55f6532b43431a560a8c05cded1805cb9b3.scope - libcontainer container de2cdb1f3cb6604881d0eca886f3a55f6532b43431a560a8c05cded1805cb9b3. Aug 19 00:13:05.948574 containerd[1995]: time="2025-08-19T00:13:05.948512961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xblmh,Uid:6b352638-95be-4614-bca5-9577a22dd10c,Namespace:kube-system,Attempt:0,} returns sandbox id \"de2cdb1f3cb6604881d0eca886f3a55f6532b43431a560a8c05cded1805cb9b3\"" Aug 19 00:13:05.958268 containerd[1995]: time="2025-08-19T00:13:05.958203081Z" level=info msg="CreateContainer within sandbox \"de2cdb1f3cb6604881d0eca886f3a55f6532b43431a560a8c05cded1805cb9b3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 19 00:13:05.978286 containerd[1995]: time="2025-08-19T00:13:05.978214389Z" level=info msg="Container b9bb289d17e526b7b11ec55db9e4231b498d983bfb76d614204a39709f91d26c: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:13:05.995633 containerd[1995]: time="2025-08-19T00:13:05.995369217Z" level=info msg="CreateContainer within sandbox \"de2cdb1f3cb6604881d0eca886f3a55f6532b43431a560a8c05cded1805cb9b3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b9bb289d17e526b7b11ec55db9e4231b498d983bfb76d614204a39709f91d26c\"" Aug 19 00:13:05.996940 containerd[1995]: time="2025-08-19T00:13:05.996899973Z" level=info msg="StartContainer for \"b9bb289d17e526b7b11ec55db9e4231b498d983bfb76d614204a39709f91d26c\"" Aug 19 00:13:06.000422 containerd[1995]: time="2025-08-19T00:13:06.000302129Z" level=info msg="connecting to shim b9bb289d17e526b7b11ec55db9e4231b498d983bfb76d614204a39709f91d26c" address="unix:///run/containerd/s/ecc7cee6aa28e12676a501c8afb9c029436304880f998c2353c143e845bd0e82" protocol=ttrpc version=3 Aug 19 00:13:06.039448 systemd[1]: Started cri-containerd-b9bb289d17e526b7b11ec55db9e4231b498d983bfb76d614204a39709f91d26c.scope - libcontainer container b9bb289d17e526b7b11ec55db9e4231b498d983bfb76d614204a39709f91d26c. 
Aug 19 00:13:06.046305 containerd[1995]: time="2025-08-19T00:13:06.046232190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-ktx2k,Uid:38e8c82b-53e8-451a-9699-a12325927c91,Namespace:tigera-operator,Attempt:0,}" Aug 19 00:13:06.100312 containerd[1995]: time="2025-08-19T00:13:06.099883278Z" level=info msg="connecting to shim 8a4db5525e9c1edfed08c07261dee603db9c37999226ab17bc7342049816fa95" address="unix:///run/containerd/s/b1493f5007efce826089642b618f1bea557183814686d2b0adfeb1f8aab8e70f" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:06.150468 containerd[1995]: time="2025-08-19T00:13:06.150298830Z" level=info msg="StartContainer for \"b9bb289d17e526b7b11ec55db9e4231b498d983bfb76d614204a39709f91d26c\" returns successfully" Aug 19 00:13:06.157840 systemd[1]: Started cri-containerd-8a4db5525e9c1edfed08c07261dee603db9c37999226ab17bc7342049816fa95.scope - libcontainer container 8a4db5525e9c1edfed08c07261dee603db9c37999226ab17bc7342049816fa95. Aug 19 00:13:06.246250 containerd[1995]: time="2025-08-19T00:13:06.245835919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-ktx2k,Uid:38e8c82b-53e8-451a-9699-a12325927c91,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8a4db5525e9c1edfed08c07261dee603db9c37999226ab17bc7342049816fa95\"" Aug 19 00:13:06.253221 containerd[1995]: time="2025-08-19T00:13:06.253089895Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 19 00:13:06.490519 kubelet[3316]: I0819 00:13:06.490423 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xblmh" podStartSLOduration=1.490400804 podStartE2EDuration="1.490400804s" podCreationTimestamp="2025-08-19 00:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:13:06.489877088 +0000 UTC m=+7.356345890" watchObservedRunningTime="2025-08-19 00:13:06.490400804 +0000 UTC m=+7.356869582" Aug 19 00:13:07.668220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2760688915.mount: Deactivated successfully. 
Aug 19 00:13:08.732440 containerd[1995]: time="2025-08-19T00:13:08.731575475Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:08.733667 containerd[1995]: time="2025-08-19T00:13:08.733433351Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Aug 19 00:13:08.736081 containerd[1995]: time="2025-08-19T00:13:08.736010243Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:08.742587 containerd[1995]: time="2025-08-19T00:13:08.742478099Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:08.744661 containerd[1995]: time="2025-08-19T00:13:08.744173531Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.49083592s" Aug 19 00:13:08.744661 containerd[1995]: time="2025-08-19T00:13:08.744228971Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Aug 19 00:13:08.753401 containerd[1995]: time="2025-08-19T00:13:08.753339587Z" level=info msg="CreateContainer within sandbox \"8a4db5525e9c1edfed08c07261dee603db9c37999226ab17bc7342049816fa95\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 19 00:13:08.773938 containerd[1995]: time="2025-08-19T00:13:08.773871899Z" level=info msg="Container 0443c470bbfdcf4d45dd89bb440c437fc96f9c50eb7a88d80412116633e34ac2: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:13:08.778386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1729888870.mount: Deactivated successfully. Aug 19 00:13:08.791086 containerd[1995]: time="2025-08-19T00:13:08.791027003Z" level=info msg="CreateContainer within sandbox \"8a4db5525e9c1edfed08c07261dee603db9c37999226ab17bc7342049816fa95\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0443c470bbfdcf4d45dd89bb440c437fc96f9c50eb7a88d80412116633e34ac2\"" Aug 19 00:13:08.792203 containerd[1995]: time="2025-08-19T00:13:08.792053075Z" level=info msg="StartContainer for \"0443c470bbfdcf4d45dd89bb440c437fc96f9c50eb7a88d80412116633e34ac2\"" Aug 19 00:13:08.795474 containerd[1995]: time="2025-08-19T00:13:08.795414767Z" level=info msg="connecting to shim 0443c470bbfdcf4d45dd89bb440c437fc96f9c50eb7a88d80412116633e34ac2" address="unix:///run/containerd/s/b1493f5007efce826089642b618f1bea557183814686d2b0adfeb1f8aab8e70f" protocol=ttrpc version=3 Aug 19 00:13:08.833432 systemd[1]: Started cri-containerd-0443c470bbfdcf4d45dd89bb440c437fc96f9c50eb7a88d80412116633e34ac2.scope - libcontainer container 0443c470bbfdcf4d45dd89bb440c437fc96f9c50eb7a88d80412116633e34ac2. 
Aug 19 00:13:08.898709 containerd[1995]: time="2025-08-19T00:13:08.898651704Z" level=info msg="StartContainer for \"0443c470bbfdcf4d45dd89bb440c437fc96f9c50eb7a88d80412116633e34ac2\" returns successfully" Aug 19 00:13:11.366887 kubelet[3316]: I0819 00:13:11.366775 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-ktx2k" podStartSLOduration=3.87106878 podStartE2EDuration="6.366754152s" podCreationTimestamp="2025-08-19 00:13:05 +0000 UTC" firstStartedPulling="2025-08-19 00:13:06.251059471 +0000 UTC m=+7.117528249" lastFinishedPulling="2025-08-19 00:13:08.746744831 +0000 UTC m=+9.613213621" observedRunningTime="2025-08-19 00:13:09.503888867 +0000 UTC m=+10.370357669" watchObservedRunningTime="2025-08-19 00:13:11.366754152 +0000 UTC m=+12.233222966" Aug 19 00:13:16.026435 sudo[2350]: pam_unix(sudo:session): session closed for user root Aug 19 00:13:16.049902 sshd[2349]: Connection closed by 147.75.109.163 port 60888 Aug 19 00:13:16.050797 sshd-session[2346]: pam_unix(sshd:session): session closed for user core Aug 19 00:13:16.063867 systemd[1]: sshd@6-172.31.18.236:22-147.75.109.163:60888.service: Deactivated successfully. Aug 19 00:13:16.076073 systemd[1]: session-7.scope: Deactivated successfully. Aug 19 00:13:16.078142 systemd[1]: session-7.scope: Consumed 14.233s CPU time, 223.9M memory peak. Aug 19 00:13:16.090595 systemd-logind[1978]: Session 7 logged out. Waiting for processes to exit. Aug 19 00:13:16.093235 systemd-logind[1978]: Removed session 7. Aug 19 00:13:29.969991 systemd[1]: Created slice kubepods-besteffort-pod3f98a138_afbd_46f1_bfa2_e0c3ee5b2118.slice - libcontainer container kubepods-besteffort-pod3f98a138_afbd_46f1_bfa2_e0c3ee5b2118.slice. Aug 19 00:13:30.050332 kubelet[3316]: I0819 00:13:30.050274 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f98a138-afbd-46f1-bfa2-e0c3ee5b2118-tigera-ca-bundle\") pod \"calico-typha-646d96cd5b-4nz62\" (UID: \"3f98a138-afbd-46f1-bfa2-e0c3ee5b2118\") " pod="calico-system/calico-typha-646d96cd5b-4nz62" Aug 19 00:13:30.051087 kubelet[3316]: I0819 00:13:30.051048 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3f98a138-afbd-46f1-bfa2-e0c3ee5b2118-typha-certs\") pod \"calico-typha-646d96cd5b-4nz62\" (UID: \"3f98a138-afbd-46f1-bfa2-e0c3ee5b2118\") " pod="calico-system/calico-typha-646d96cd5b-4nz62" Aug 19 00:13:30.051308 kubelet[3316]: I0819 00:13:30.051276 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxrlk\" (UniqueName: \"kubernetes.io/projected/3f98a138-afbd-46f1-bfa2-e0c3ee5b2118-kube-api-access-gxrlk\") pod \"calico-typha-646d96cd5b-4nz62\" (UID: \"3f98a138-afbd-46f1-bfa2-e0c3ee5b2118\") " pod="calico-system/calico-typha-646d96cd5b-4nz62" Aug 19 00:13:30.279312 containerd[1995]: time="2025-08-19T00:13:30.278446518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-646d96cd5b-4nz62,Uid:3f98a138-afbd-46f1-bfa2-e0c3ee5b2118,Namespace:calico-system,Attempt:0,}" Aug 19 00:13:30.334153 containerd[1995]: time="2025-08-19T00:13:30.333884190Z" level=info msg="connecting to shim 82d9744a3a573a7e7554b596e5ae0c48eee6fd83a9ca12943572fd4c37e39028" address="unix:///run/containerd/s/a2ab0dd1517695efe55047490c43befce5d3313fafdfa4a0cc42020a5fd0923e" namespace=k8s.io protocol=ttrpc 
version=3 Aug 19 00:13:30.388816 systemd[1]: Started cri-containerd-82d9744a3a573a7e7554b596e5ae0c48eee6fd83a9ca12943572fd4c37e39028.scope - libcontainer container 82d9744a3a573a7e7554b596e5ae0c48eee6fd83a9ca12943572fd4c37e39028. Aug 19 00:13:30.485504 systemd[1]: Created slice kubepods-besteffort-pod1f075f6c_1bac_4710_821d_c0ccd8e63540.slice - libcontainer container kubepods-besteffort-pod1f075f6c_1bac_4710_821d_c0ccd8e63540.slice. Aug 19 00:13:30.556335 kubelet[3316]: I0819 00:13:30.556191 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1f075f6c-1bac-4710-821d-c0ccd8e63540-flexvol-driver-host\") pod \"calico-node-z4bjt\" (UID: \"1f075f6c-1bac-4710-821d-c0ccd8e63540\") " pod="calico-system/calico-node-z4bjt" Aug 19 00:13:30.557613 kubelet[3316]: I0819 00:13:30.557556 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1f075f6c-1bac-4710-821d-c0ccd8e63540-var-lib-calico\") pod \"calico-node-z4bjt\" (UID: \"1f075f6c-1bac-4710-821d-c0ccd8e63540\") " pod="calico-system/calico-node-z4bjt" Aug 19 00:13:30.557757 kubelet[3316]: I0819 00:13:30.557721 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1f075f6c-1bac-4710-821d-c0ccd8e63540-cni-bin-dir\") pod \"calico-node-z4bjt\" (UID: \"1f075f6c-1bac-4710-821d-c0ccd8e63540\") " pod="calico-system/calico-node-z4bjt" Aug 19 00:13:30.558259 kubelet[3316]: I0819 00:13:30.558197 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1f075f6c-1bac-4710-821d-c0ccd8e63540-node-certs\") pod \"calico-node-z4bjt\" (UID: \"1f075f6c-1bac-4710-821d-c0ccd8e63540\") " pod="calico-system/calico-node-z4bjt" Aug 19 00:13:30.559297 kubelet[3316]: I0819 00:13:30.558281 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1f075f6c-1bac-4710-821d-c0ccd8e63540-policysync\") pod \"calico-node-z4bjt\" (UID: \"1f075f6c-1bac-4710-821d-c0ccd8e63540\") " pod="calico-system/calico-node-z4bjt" Aug 19 00:13:30.559297 kubelet[3316]: I0819 00:13:30.558487 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1f075f6c-1bac-4710-821d-c0ccd8e63540-cni-net-dir\") pod \"calico-node-z4bjt\" (UID: \"1f075f6c-1bac-4710-821d-c0ccd8e63540\") " pod="calico-system/calico-node-z4bjt" Aug 19 00:13:30.559297 kubelet[3316]: I0819 00:13:30.558542 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1f075f6c-1bac-4710-821d-c0ccd8e63540-cni-log-dir\") pod \"calico-node-z4bjt\" (UID: \"1f075f6c-1bac-4710-821d-c0ccd8e63540\") " pod="calico-system/calico-node-z4bjt" Aug 19 00:13:30.559297 kubelet[3316]: I0819 00:13:30.558578 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f075f6c-1bac-4710-821d-c0ccd8e63540-tigera-ca-bundle\") pod \"calico-node-z4bjt\" (UID: \"1f075f6c-1bac-4710-821d-c0ccd8e63540\") " pod="calico-system/calico-node-z4bjt" Aug 19 00:13:30.559297 kubelet[3316]: I0819 
00:13:30.558612 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f075f6c-1bac-4710-821d-c0ccd8e63540-xtables-lock\") pod \"calico-node-z4bjt\" (UID: \"1f075f6c-1bac-4710-821d-c0ccd8e63540\") " pod="calico-system/calico-node-z4bjt" Aug 19 00:13:30.560477 kubelet[3316]: I0819 00:13:30.558659 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f075f6c-1bac-4710-821d-c0ccd8e63540-lib-modules\") pod \"calico-node-z4bjt\" (UID: \"1f075f6c-1bac-4710-821d-c0ccd8e63540\") " pod="calico-system/calico-node-z4bjt" Aug 19 00:13:30.560477 kubelet[3316]: I0819 00:13:30.558697 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1f075f6c-1bac-4710-821d-c0ccd8e63540-var-run-calico\") pod \"calico-node-z4bjt\" (UID: \"1f075f6c-1bac-4710-821d-c0ccd8e63540\") " pod="calico-system/calico-node-z4bjt" Aug 19 00:13:30.560477 kubelet[3316]: I0819 00:13:30.558731 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tvb4\" (UniqueName: \"kubernetes.io/projected/1f075f6c-1bac-4710-821d-c0ccd8e63540-kube-api-access-9tvb4\") pod \"calico-node-z4bjt\" (UID: \"1f075f6c-1bac-4710-821d-c0ccd8e63540\") " pod="calico-system/calico-node-z4bjt" Aug 19 00:13:30.601014 containerd[1995]: time="2025-08-19T00:13:30.600898508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-646d96cd5b-4nz62,Uid:3f98a138-afbd-46f1-bfa2-e0c3ee5b2118,Namespace:calico-system,Attempt:0,} returns sandbox id \"82d9744a3a573a7e7554b596e5ae0c48eee6fd83a9ca12943572fd4c37e39028\"" Aug 19 00:13:30.608542 containerd[1995]: time="2025-08-19T00:13:30.608475620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 19 00:13:30.670783 kubelet[3316]: E0819 00:13:30.670193 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.670783 kubelet[3316]: W0819 00:13:30.670258 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.670783 kubelet[3316]: E0819 00:13:30.670294 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.680160 kubelet[3316]: E0819 00:13:30.680025 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.680620 kubelet[3316]: W0819 00:13:30.680428 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.680620 kubelet[3316]: E0819 00:13:30.680473 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:30.715484 kubelet[3316]: E0819 00:13:30.715430 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.715646 kubelet[3316]: W0819 00:13:30.715489 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.715646 kubelet[3316]: E0819 00:13:30.715523 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.785540 kubelet[3316]: E0819 00:13:30.785227 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wwgr9" podUID="3a3c5b61-646c-40bb-911b-c7ba6082008f" Aug 19 00:13:30.796365 containerd[1995]: time="2025-08-19T00:13:30.796288557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z4bjt,Uid:1f075f6c-1bac-4710-821d-c0ccd8e63540,Namespace:calico-system,Attempt:0,}" Aug 19 00:13:30.802283 kubelet[3316]: I0819 00:13:30.802221 3316 status_manager.go:895] "Failed to get status for pod" podUID="3a3c5b61-646c-40bb-911b-c7ba6082008f" pod="calico-system/csi-node-driver-wwgr9" err="pods \"csi-node-driver-wwgr9\" is forbidden: User \"system:node:ip-172-31-18-236\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-18-236' and this object" Aug 19 00:13:30.822698 kubelet[3316]: E0819 00:13:30.820599 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.822698 kubelet[3316]: W0819 00:13:30.820643 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.822698 kubelet[3316]: E0819 00:13:30.820707 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.822698 kubelet[3316]: E0819 00:13:30.822467 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.822698 kubelet[3316]: W0819 00:13:30.822518 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.822698 kubelet[3316]: E0819 00:13:30.822617 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:30.823985 kubelet[3316]: E0819 00:13:30.823105 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.823985 kubelet[3316]: W0819 00:13:30.823145 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.823985 kubelet[3316]: E0819 00:13:30.823175 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.823985 kubelet[3316]: E0819 00:13:30.823907 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.823985 kubelet[3316]: W0819 00:13:30.823966 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.824567 kubelet[3316]: E0819 00:13:30.824000 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.825537 kubelet[3316]: E0819 00:13:30.825482 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.825771 kubelet[3316]: W0819 00:13:30.825522 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.825845 kubelet[3316]: E0819 00:13:30.825776 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.828261 kubelet[3316]: E0819 00:13:30.828203 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.828261 kubelet[3316]: W0819 00:13:30.828245 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.828451 kubelet[3316]: E0819 00:13:30.828279 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.830849 kubelet[3316]: E0819 00:13:30.830795 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.830966 kubelet[3316]: W0819 00:13:30.830836 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.830966 kubelet[3316]: E0819 00:13:30.830897 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:30.832515 kubelet[3316]: E0819 00:13:30.832454 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.832515 kubelet[3316]: W0819 00:13:30.832504 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.832692 kubelet[3316]: E0819 00:13:30.832538 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.835542 kubelet[3316]: E0819 00:13:30.835487 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.835542 kubelet[3316]: W0819 00:13:30.835527 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.835730 kubelet[3316]: E0819 00:13:30.835560 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.836600 kubelet[3316]: E0819 00:13:30.836544 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.836600 kubelet[3316]: W0819 00:13:30.836585 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.836766 kubelet[3316]: E0819 00:13:30.836616 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.841324 kubelet[3316]: E0819 00:13:30.841250 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.841324 kubelet[3316]: W0819 00:13:30.841309 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.841512 kubelet[3316]: E0819 00:13:30.841345 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.843170 kubelet[3316]: E0819 00:13:30.841742 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.843170 kubelet[3316]: W0819 00:13:30.841773 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.843170 kubelet[3316]: E0819 00:13:30.841798 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:30.843170 kubelet[3316]: E0819 00:13:30.842822 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.843170 kubelet[3316]: W0819 00:13:30.842851 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.843170 kubelet[3316]: E0819 00:13:30.842882 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.843695 kubelet[3316]: E0819 00:13:30.843650 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.843695 kubelet[3316]: W0819 00:13:30.843689 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.843950 kubelet[3316]: E0819 00:13:30.843874 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.845347 kubelet[3316]: E0819 00:13:30.844915 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.845347 kubelet[3316]: W0819 00:13:30.844954 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.845347 kubelet[3316]: E0819 00:13:30.844987 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.846726 kubelet[3316]: E0819 00:13:30.845997 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.846726 kubelet[3316]: W0819 00:13:30.846023 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.846726 kubelet[3316]: E0819 00:13:30.846051 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.846726 kubelet[3316]: E0819 00:13:30.846709 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.846936 kubelet[3316]: W0819 00:13:30.846732 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.846936 kubelet[3316]: E0819 00:13:30.846759 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:30.847341 kubelet[3316]: E0819 00:13:30.847048 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.847341 kubelet[3316]: W0819 00:13:30.847076 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.847341 kubelet[3316]: E0819 00:13:30.847098 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.847521 kubelet[3316]: E0819 00:13:30.847405 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.847521 kubelet[3316]: W0819 00:13:30.847425 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.847521 kubelet[3316]: E0819 00:13:30.847445 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.847881 kubelet[3316]: E0819 00:13:30.847864 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.847938 kubelet[3316]: W0819 00:13:30.847883 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.847938 kubelet[3316]: E0819 00:13:30.847905 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.863820 containerd[1995]: time="2025-08-19T00:13:30.863735865Z" level=info msg="connecting to shim 05e32d2d5d6594a5c362e597bf78d6d57ec7f7a5ef8dfed9a0425c8f01b4ad2a" address="unix:///run/containerd/s/1a6f8c19ca71b334a4397b14a98afeff649b4356ecfc6b05bcf81689830e3c4d" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:30.866055 kubelet[3316]: E0819 00:13:30.865923 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.866055 kubelet[3316]: W0819 00:13:30.865964 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.866055 kubelet[3316]: E0819 00:13:30.865998 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:30.866055 kubelet[3316]: I0819 00:13:30.866053 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3a3c5b61-646c-40bb-911b-c7ba6082008f-socket-dir\") pod \"csi-node-driver-wwgr9\" (UID: \"3a3c5b61-646c-40bb-911b-c7ba6082008f\") " pod="calico-system/csi-node-driver-wwgr9" Aug 19 00:13:30.868162 kubelet[3316]: E0819 00:13:30.868097 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.868322 kubelet[3316]: W0819 00:13:30.868190 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.868322 kubelet[3316]: E0819 00:13:30.868227 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.868322 kubelet[3316]: I0819 00:13:30.868286 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt7qx\" (UniqueName: \"kubernetes.io/projected/3a3c5b61-646c-40bb-911b-c7ba6082008f-kube-api-access-jt7qx\") pod \"csi-node-driver-wwgr9\" (UID: \"3a3c5b61-646c-40bb-911b-c7ba6082008f\") " pod="calico-system/csi-node-driver-wwgr9" Aug 19 00:13:30.869004 kubelet[3316]: E0819 00:13:30.868952 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.869004 kubelet[3316]: W0819 00:13:30.868988 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.869156 kubelet[3316]: E0819 00:13:30.869020 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.869742 kubelet[3316]: I0819 00:13:30.869686 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a3c5b61-646c-40bb-911b-c7ba6082008f-kubelet-dir\") pod \"csi-node-driver-wwgr9\" (UID: \"3a3c5b61-646c-40bb-911b-c7ba6082008f\") " pod="calico-system/csi-node-driver-wwgr9" Aug 19 00:13:30.870436 kubelet[3316]: E0819 00:13:30.870382 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.870554 kubelet[3316]: W0819 00:13:30.870420 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.870554 kubelet[3316]: E0819 00:13:30.870474 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:30.872309 kubelet[3316]: E0819 00:13:30.872254 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.872309 kubelet[3316]: W0819 00:13:30.872294 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.872521 kubelet[3316]: E0819 00:13:30.872328 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.873650 kubelet[3316]: E0819 00:13:30.873604 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.873650 kubelet[3316]: W0819 00:13:30.873643 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.873802 kubelet[3316]: E0819 00:13:30.873676 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.874037 kubelet[3316]: I0819 00:13:30.873926 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3a3c5b61-646c-40bb-911b-c7ba6082008f-varrun\") pod \"csi-node-driver-wwgr9\" (UID: \"3a3c5b61-646c-40bb-911b-c7ba6082008f\") " pod="calico-system/csi-node-driver-wwgr9" Aug 19 00:13:30.875692 kubelet[3316]: E0819 00:13:30.875624 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.875692 kubelet[3316]: W0819 00:13:30.875665 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.875870 kubelet[3316]: E0819 00:13:30.875745 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.876298 kubelet[3316]: E0819 00:13:30.876247 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.876298 kubelet[3316]: W0819 00:13:30.876289 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.876455 kubelet[3316]: E0819 00:13:30.876318 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:30.877059 kubelet[3316]: E0819 00:13:30.876984 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.877059 kubelet[3316]: W0819 00:13:30.877021 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.877059 kubelet[3316]: E0819 00:13:30.877055 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.878462 kubelet[3316]: E0819 00:13:30.878414 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.878462 kubelet[3316]: W0819 00:13:30.878452 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.878462 kubelet[3316]: E0819 00:13:30.878484 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.881241 kubelet[3316]: E0819 00:13:30.879120 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.881241 kubelet[3316]: W0819 00:13:30.881241 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.881448 kubelet[3316]: E0819 00:13:30.881308 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.881857 kubelet[3316]: E0819 00:13:30.881821 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.881857 kubelet[3316]: W0819 00:13:30.881850 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.882374 kubelet[3316]: E0819 00:13:30.881874 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:30.882374 kubelet[3316]: I0819 00:13:30.881931 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3a3c5b61-646c-40bb-911b-c7ba6082008f-registration-dir\") pod \"csi-node-driver-wwgr9\" (UID: \"3a3c5b61-646c-40bb-911b-c7ba6082008f\") " pod="calico-system/csi-node-driver-wwgr9" Aug 19 00:13:30.883268 kubelet[3316]: E0819 00:13:30.882495 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.883268 kubelet[3316]: W0819 00:13:30.882521 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.883268 kubelet[3316]: E0819 00:13:30.882548 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.883775 kubelet[3316]: E0819 00:13:30.883729 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.883775 kubelet[3316]: W0819 00:13:30.883768 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.883909 kubelet[3316]: E0819 00:13:30.883803 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.884582 kubelet[3316]: E0819 00:13:30.884517 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.884582 kubelet[3316]: W0819 00:13:30.884563 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.884932 kubelet[3316]: E0819 00:13:30.884592 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.961487 systemd[1]: Started cri-containerd-05e32d2d5d6594a5c362e597bf78d6d57ec7f7a5ef8dfed9a0425c8f01b4ad2a.scope - libcontainer container 05e32d2d5d6594a5c362e597bf78d6d57ec7f7a5ef8dfed9a0425c8f01b4ad2a. Aug 19 00:13:30.985514 kubelet[3316]: E0819 00:13:30.985447 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.985514 kubelet[3316]: W0819 00:13:30.985487 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.985693 kubelet[3316]: E0819 00:13:30.985545 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:30.986559 kubelet[3316]: E0819 00:13:30.986510 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.986559 kubelet[3316]: W0819 00:13:30.986547 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.986736 kubelet[3316]: E0819 00:13:30.986603 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.987658 kubelet[3316]: E0819 00:13:30.987488 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.987658 kubelet[3316]: W0819 00:13:30.987648 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.987806 kubelet[3316]: E0819 00:13:30.987682 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.988914 kubelet[3316]: E0819 00:13:30.988848 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.989027 kubelet[3316]: W0819 00:13:30.988888 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.989114 kubelet[3316]: E0819 00:13:30.989046 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.989875 kubelet[3316]: E0819 00:13:30.989827 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.989875 kubelet[3316]: W0819 00:13:30.989864 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.990079 kubelet[3316]: E0819 00:13:30.989896 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.991266 kubelet[3316]: E0819 00:13:30.991120 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.991266 kubelet[3316]: W0819 00:13:30.991260 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.991479 kubelet[3316]: E0819 00:13:30.991442 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:30.993201 kubelet[3316]: E0819 00:13:30.993109 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.993201 kubelet[3316]: W0819 00:13:30.993178 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.993520 kubelet[3316]: E0819 00:13:30.993212 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.994158 kubelet[3316]: E0819 00:13:30.994099 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.994364 kubelet[3316]: W0819 00:13:30.994320 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.994468 kubelet[3316]: E0819 00:13:30.994374 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.995054 kubelet[3316]: E0819 00:13:30.995010 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.995054 kubelet[3316]: W0819 00:13:30.995045 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.995320 kubelet[3316]: E0819 00:13:30.995077 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.996242 kubelet[3316]: E0819 00:13:30.996191 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.996242 kubelet[3316]: W0819 00:13:30.996231 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.996440 kubelet[3316]: E0819 00:13:30.996264 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:30.997592 kubelet[3316]: E0819 00:13:30.997549 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.997592 kubelet[3316]: W0819 00:13:30.997586 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.997706 kubelet[3316]: E0819 00:13:30.997619 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:30.998086 kubelet[3316]: E0819 00:13:30.998046 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:30.998086 kubelet[3316]: W0819 00:13:30.998077 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:30.998326 kubelet[3316]: E0819 00:13:30.998102 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.000171 kubelet[3316]: E0819 00:13:30.998955 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.000171 kubelet[3316]: W0819 00:13:30.998987 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.000171 kubelet[3316]: E0819 00:13:30.999018 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.000786 kubelet[3316]: E0819 00:13:31.000754 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.001122 kubelet[3316]: W0819 00:13:31.000893 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.001122 kubelet[3316]: E0819 00:13:31.000932 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.001797 kubelet[3316]: E0819 00:13:31.001668 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.002032 kubelet[3316]: W0819 00:13:31.002003 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.003022 kubelet[3316]: E0819 00:13:31.002253 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.003968 kubelet[3316]: E0819 00:13:31.003724 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.003968 kubelet[3316]: W0819 00:13:31.003755 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.003968 kubelet[3316]: E0819 00:13:31.003784 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:31.005937 kubelet[3316]: E0819 00:13:31.005319 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.006145 kubelet[3316]: W0819 00:13:31.006092 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.006287 kubelet[3316]: E0819 00:13:31.006263 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.007495 kubelet[3316]: E0819 00:13:31.006904 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.007495 kubelet[3316]: W0819 00:13:31.006936 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.007495 kubelet[3316]: E0819 00:13:31.006971 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.008537 kubelet[3316]: E0819 00:13:31.008501 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.008705 kubelet[3316]: W0819 00:13:31.008678 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.008978 kubelet[3316]: E0819 00:13:31.008951 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.010185 kubelet[3316]: E0819 00:13:31.009811 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.010185 kubelet[3316]: W0819 00:13:31.009844 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.010185 kubelet[3316]: E0819 00:13:31.009875 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.012298 kubelet[3316]: E0819 00:13:31.012230 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.012298 kubelet[3316]: W0819 00:13:31.012282 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.012463 kubelet[3316]: E0819 00:13:31.012316 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:31.015370 kubelet[3316]: E0819 00:13:31.015310 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.015370 kubelet[3316]: W0819 00:13:31.015353 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.015571 kubelet[3316]: E0819 00:13:31.015405 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.016947 kubelet[3316]: E0819 00:13:31.016893 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.016947 kubelet[3316]: W0819 00:13:31.016933 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.017204 kubelet[3316]: E0819 00:13:31.016966 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.019810 kubelet[3316]: E0819 00:13:31.019744 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.019810 kubelet[3316]: W0819 00:13:31.019786 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.020573 kubelet[3316]: E0819 00:13:31.019820 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.020912 kubelet[3316]: E0819 00:13:31.020801 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.020912 kubelet[3316]: W0819 00:13:31.020837 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.020912 kubelet[3316]: E0819 00:13:31.020869 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:31.048369 kubelet[3316]: E0819 00:13:31.048227 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:31.048369 kubelet[3316]: W0819 00:13:31.048288 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:31.048369 kubelet[3316]: E0819 00:13:31.048319 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:31.166711 containerd[1995]: time="2025-08-19T00:13:31.166657074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z4bjt,Uid:1f075f6c-1bac-4710-821d-c0ccd8e63540,Namespace:calico-system,Attempt:0,} returns sandbox id \"05e32d2d5d6594a5c362e597bf78d6d57ec7f7a5ef8dfed9a0425c8f01b4ad2a\"" Aug 19 00:13:31.996396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1354460945.mount: Deactivated successfully. Aug 19 00:13:32.376076 kubelet[3316]: E0819 00:13:32.375789 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wwgr9" podUID="3a3c5b61-646c-40bb-911b-c7ba6082008f" Aug 19 00:13:33.473628 containerd[1995]: time="2025-08-19T00:13:33.473574214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:33.477301 containerd[1995]: time="2025-08-19T00:13:33.477056614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Aug 19 00:13:33.477864 containerd[1995]: time="2025-08-19T00:13:33.477595630Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:33.485188 containerd[1995]: time="2025-08-19T00:13:33.484817446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:33.488227 containerd[1995]: time="2025-08-19T00:13:33.488174098Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.879504018s" Aug 19 00:13:33.488505 containerd[1995]: time="2025-08-19T00:13:33.488363914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Aug 19 00:13:33.492465 containerd[1995]: time="2025-08-19T00:13:33.492387070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 19 00:13:33.524662 containerd[1995]: time="2025-08-19T00:13:33.524590714Z" level=info msg="CreateContainer within sandbox \"82d9744a3a573a7e7554b596e5ae0c48eee6fd83a9ca12943572fd4c37e39028\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 19 00:13:33.540233 containerd[1995]: time="2025-08-19T00:13:33.538840270Z" level=info msg="Container 2c88c364129ecbcf43e51dab93b9896b9f4c03714242c7510a6e1758f175bb10: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:13:33.546917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount848671535.mount: Deactivated successfully. 
Aug 19 00:13:33.559878 containerd[1995]: time="2025-08-19T00:13:33.559800778Z" level=info msg="CreateContainer within sandbox \"82d9744a3a573a7e7554b596e5ae0c48eee6fd83a9ca12943572fd4c37e39028\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2c88c364129ecbcf43e51dab93b9896b9f4c03714242c7510a6e1758f175bb10\"" Aug 19 00:13:33.561371 containerd[1995]: time="2025-08-19T00:13:33.561310114Z" level=info msg="StartContainer for \"2c88c364129ecbcf43e51dab93b9896b9f4c03714242c7510a6e1758f175bb10\"" Aug 19 00:13:33.564010 containerd[1995]: time="2025-08-19T00:13:33.563906806Z" level=info msg="connecting to shim 2c88c364129ecbcf43e51dab93b9896b9f4c03714242c7510a6e1758f175bb10" address="unix:///run/containerd/s/a2ab0dd1517695efe55047490c43befce5d3313fafdfa4a0cc42020a5fd0923e" protocol=ttrpc version=3 Aug 19 00:13:33.610723 systemd[1]: Started cri-containerd-2c88c364129ecbcf43e51dab93b9896b9f4c03714242c7510a6e1758f175bb10.scope - libcontainer container 2c88c364129ecbcf43e51dab93b9896b9f4c03714242c7510a6e1758f175bb10. Aug 19 00:13:33.698310 containerd[1995]: time="2025-08-19T00:13:33.698264927Z" level=info msg="StartContainer for \"2c88c364129ecbcf43e51dab93b9896b9f4c03714242c7510a6e1758f175bb10\" returns successfully" Aug 19 00:13:34.371803 kubelet[3316]: E0819 00:13:34.371722 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wwgr9" podUID="3a3c5b61-646c-40bb-911b-c7ba6082008f" Aug 19 00:13:34.686766 kubelet[3316]: E0819 00:13:34.686637 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.686766 kubelet[3316]: W0819 00:13:34.686694 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.686766 kubelet[3316]: E0819 00:13:34.686727 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.687621 kubelet[3316]: E0819 00:13:34.687389 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.687621 kubelet[3316]: W0819 00:13:34.687412 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.687621 kubelet[3316]: E0819 00:13:34.687475 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:34.688101 kubelet[3316]: E0819 00:13:34.688072 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.688269 kubelet[3316]: W0819 00:13:34.688100 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.688269 kubelet[3316]: E0819 00:13:34.688146 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.690246 kubelet[3316]: E0819 00:13:34.690197 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.690338 kubelet[3316]: W0819 00:13:34.690260 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.690338 kubelet[3316]: E0819 00:13:34.690293 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.691809 kubelet[3316]: E0819 00:13:34.691749 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.691809 kubelet[3316]: W0819 00:13:34.691797 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.691984 kubelet[3316]: E0819 00:13:34.691829 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.692396 kubelet[3316]: E0819 00:13:34.692357 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.692396 kubelet[3316]: W0819 00:13:34.692390 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.692716 kubelet[3316]: E0819 00:13:34.692416 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.693678 kubelet[3316]: E0819 00:13:34.693623 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.693678 kubelet[3316]: W0819 00:13:34.693673 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.694108 kubelet[3316]: E0819 00:13:34.693707 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:34.694343 kubelet[3316]: E0819 00:13:34.694302 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.694343 kubelet[3316]: W0819 00:13:34.694325 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.694466 kubelet[3316]: E0819 00:13:34.694355 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.696155 kubelet[3316]: E0819 00:13:34.695841 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.696155 kubelet[3316]: W0819 00:13:34.695879 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.696155 kubelet[3316]: E0819 00:13:34.695910 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.696775 kubelet[3316]: I0819 00:13:34.696485 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-646d96cd5b-4nz62" podStartSLOduration=2.813171526 podStartE2EDuration="5.696462312s" podCreationTimestamp="2025-08-19 00:13:29 +0000 UTC" firstStartedPulling="2025-08-19 00:13:30.606482264 +0000 UTC m=+31.472951054" lastFinishedPulling="2025-08-19 00:13:33.48977305 +0000 UTC m=+34.356241840" observedRunningTime="2025-08-19 00:13:34.666481092 +0000 UTC m=+35.532949966" watchObservedRunningTime="2025-08-19 00:13:34.696462312 +0000 UTC m=+35.562931258" Aug 19 00:13:34.697058 kubelet[3316]: E0819 00:13:34.696896 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.697613 kubelet[3316]: W0819 00:13:34.697053 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.697613 kubelet[3316]: E0819 00:13:34.697084 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.698556 kubelet[3316]: E0819 00:13:34.698507 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.698556 kubelet[3316]: W0819 00:13:34.698545 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.698746 kubelet[3316]: E0819 00:13:34.698577 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:34.699503 kubelet[3316]: E0819 00:13:34.699313 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.699503 kubelet[3316]: W0819 00:13:34.699494 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.699692 kubelet[3316]: E0819 00:13:34.699624 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.702503 kubelet[3316]: E0819 00:13:34.702435 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.702503 kubelet[3316]: W0819 00:13:34.702490 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.702684 kubelet[3316]: E0819 00:13:34.702524 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.704300 kubelet[3316]: E0819 00:13:34.704113 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.704598 kubelet[3316]: W0819 00:13:34.704449 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.704598 kubelet[3316]: E0819 00:13:34.704488 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.705678 kubelet[3316]: E0819 00:13:34.705626 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.705678 kubelet[3316]: W0819 00:13:34.705663 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.706312 kubelet[3316]: E0819 00:13:34.705695 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.739308 kubelet[3316]: E0819 00:13:34.739227 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.739308 kubelet[3316]: W0819 00:13:34.739295 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.739501 kubelet[3316]: E0819 00:13:34.739357 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:34.741007 kubelet[3316]: E0819 00:13:34.740934 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.741007 kubelet[3316]: W0819 00:13:34.740994 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.741561 kubelet[3316]: E0819 00:13:34.741028 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.743088 kubelet[3316]: E0819 00:13:34.742956 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.743088 kubelet[3316]: W0819 00:13:34.742995 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.743357 kubelet[3316]: E0819 00:13:34.743243 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.744466 kubelet[3316]: E0819 00:13:34.744416 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.744466 kubelet[3316]: W0819 00:13:34.744454 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.744466 kubelet[3316]: E0819 00:13:34.744487 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.746655 kubelet[3316]: E0819 00:13:34.746603 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.746655 kubelet[3316]: W0819 00:13:34.746642 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.747400 kubelet[3316]: E0819 00:13:34.746676 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.748508 kubelet[3316]: E0819 00:13:34.748413 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.748508 kubelet[3316]: W0819 00:13:34.748451 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.748508 kubelet[3316]: E0819 00:13:34.748485 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:34.750776 kubelet[3316]: E0819 00:13:34.750722 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.750776 kubelet[3316]: W0819 00:13:34.750761 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.752412 kubelet[3316]: E0819 00:13:34.750795 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.752814 kubelet[3316]: E0819 00:13:34.752701 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.752814 kubelet[3316]: W0819 00:13:34.752739 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.752814 kubelet[3316]: E0819 00:13:34.752773 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.754415 kubelet[3316]: E0819 00:13:34.754297 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.754415 kubelet[3316]: W0819 00:13:34.754336 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.754415 kubelet[3316]: E0819 00:13:34.754370 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.757787 kubelet[3316]: E0819 00:13:34.757717 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.757787 kubelet[3316]: W0819 00:13:34.757755 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.757787 kubelet[3316]: E0819 00:13:34.757788 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.758643 kubelet[3316]: E0819 00:13:34.758600 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.758643 kubelet[3316]: W0819 00:13:34.758635 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.759484 kubelet[3316]: E0819 00:13:34.758666 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:34.760389 kubelet[3316]: E0819 00:13:34.759674 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.760389 kubelet[3316]: W0819 00:13:34.759702 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.760389 kubelet[3316]: E0819 00:13:34.759734 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.762277 kubelet[3316]: E0819 00:13:34.760993 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.762277 kubelet[3316]: W0819 00:13:34.761020 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.762277 kubelet[3316]: E0819 00:13:34.761049 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.762277 kubelet[3316]: E0819 00:13:34.762037 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.762277 kubelet[3316]: W0819 00:13:34.762062 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.763532 kubelet[3316]: E0819 00:13:34.762534 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.765692 kubelet[3316]: E0819 00:13:34.764848 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.765692 kubelet[3316]: W0819 00:13:34.764888 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.765692 kubelet[3316]: E0819 00:13:34.764921 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.767071 kubelet[3316]: E0819 00:13:34.766386 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.767071 kubelet[3316]: W0819 00:13:34.766426 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.767071 kubelet[3316]: E0819 00:13:34.766458 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 00:13:34.769076 kubelet[3316]: E0819 00:13:34.769029 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.770083 kubelet[3316]: W0819 00:13:34.769565 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.770083 kubelet[3316]: E0819 00:13:34.769811 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.772084 kubelet[3316]: E0819 00:13:34.771486 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 00:13:34.772084 kubelet[3316]: W0819 00:13:34.771526 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 00:13:34.772084 kubelet[3316]: E0819 00:13:34.771560 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 00:13:34.981848 containerd[1995]: time="2025-08-19T00:13:34.981680773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:34.985611 containerd[1995]: time="2025-08-19T00:13:34.985196281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Aug 19 00:13:34.990160 containerd[1995]: time="2025-08-19T00:13:34.990078589Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:34.992422 containerd[1995]: time="2025-08-19T00:13:34.992348377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:34.993643 containerd[1995]: time="2025-08-19T00:13:34.993418813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.500747931s" Aug 19 00:13:34.993643 containerd[1995]: time="2025-08-19T00:13:34.993477721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 19 00:13:35.002767 containerd[1995]: time="2025-08-19T00:13:35.002641654Z" level=info msg="CreateContainer within sandbox \"05e32d2d5d6594a5c362e597bf78d6d57ec7f7a5ef8dfed9a0425c8f01b4ad2a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 19 00:13:35.024451 containerd[1995]: time="2025-08-19T00:13:35.024379894Z" level=info msg="Container ee1bbd9a8bf2b114ed4f9033fcf40eb2229e7471b6ff6c25501ff4b1e0d7bfc6: 
CDI devices from CRI Config.CDIDevices: []" Aug 19 00:13:35.048363 containerd[1995]: time="2025-08-19T00:13:35.048281770Z" level=info msg="CreateContainer within sandbox \"05e32d2d5d6594a5c362e597bf78d6d57ec7f7a5ef8dfed9a0425c8f01b4ad2a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ee1bbd9a8bf2b114ed4f9033fcf40eb2229e7471b6ff6c25501ff4b1e0d7bfc6\"" Aug 19 00:13:35.049649 containerd[1995]: time="2025-08-19T00:13:35.049402450Z" level=info msg="StartContainer for \"ee1bbd9a8bf2b114ed4f9033fcf40eb2229e7471b6ff6c25501ff4b1e0d7bfc6\"" Aug 19 00:13:35.053876 containerd[1995]: time="2025-08-19T00:13:35.053820994Z" level=info msg="connecting to shim ee1bbd9a8bf2b114ed4f9033fcf40eb2229e7471b6ff6c25501ff4b1e0d7bfc6" address="unix:///run/containerd/s/1a6f8c19ca71b334a4397b14a98afeff649b4356ecfc6b05bcf81689830e3c4d" protocol=ttrpc version=3 Aug 19 00:13:35.093436 systemd[1]: Started cri-containerd-ee1bbd9a8bf2b114ed4f9033fcf40eb2229e7471b6ff6c25501ff4b1e0d7bfc6.scope - libcontainer container ee1bbd9a8bf2b114ed4f9033fcf40eb2229e7471b6ff6c25501ff4b1e0d7bfc6. Aug 19 00:13:35.178628 containerd[1995]: time="2025-08-19T00:13:35.178566238Z" level=info msg="StartContainer for \"ee1bbd9a8bf2b114ed4f9033fcf40eb2229e7471b6ff6c25501ff4b1e0d7bfc6\" returns successfully" Aug 19 00:13:35.221952 systemd[1]: cri-containerd-ee1bbd9a8bf2b114ed4f9033fcf40eb2229e7471b6ff6c25501ff4b1e0d7bfc6.scope: Deactivated successfully. Aug 19 00:13:35.236505 containerd[1995]: time="2025-08-19T00:13:35.236299811Z" level=info msg="received exit event container_id:\"ee1bbd9a8bf2b114ed4f9033fcf40eb2229e7471b6ff6c25501ff4b1e0d7bfc6\" id:\"ee1bbd9a8bf2b114ed4f9033fcf40eb2229e7471b6ff6c25501ff4b1e0d7bfc6\" pid:4095 exited_at:{seconds:1755562415 nanos:235560227}" Aug 19 00:13:35.238841 containerd[1995]: time="2025-08-19T00:13:35.238779311Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee1bbd9a8bf2b114ed4f9033fcf40eb2229e7471b6ff6c25501ff4b1e0d7bfc6\" id:\"ee1bbd9a8bf2b114ed4f9033fcf40eb2229e7471b6ff6c25501ff4b1e0d7bfc6\" pid:4095 exited_at:{seconds:1755562415 nanos:235560227}" Aug 19 00:13:35.317494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee1bbd9a8bf2b114ed4f9033fcf40eb2229e7471b6ff6c25501ff4b1e0d7bfc6-rootfs.mount: Deactivated successfully. 
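The flexvol-driver container above (from calico/pod2daemon-flexvol) is the init step that drops the uds binary into the kubelet's FlexVolume plugin directory, which is why it starts, runs briefly, and exits. A small check sketch follows, assuming the plugin path seen in the earlier errors; it is a hypothetical helper, not part of this node's tooling.

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Path taken from the driver-call errors earlier in this log.
        const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
        info, err := os.Stat(driver)
        if err != nil {
            fmt.Println("driver not installed yet:", err)
            return
        }
        // Any execute bit set is enough for the kubelet to invoke the driver.
        fmt.Printf("driver present, mode %v, executable: %v\n", info.Mode(), info.Mode()&0111 != 0)
    }
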
Aug 19 00:13:36.372679 kubelet[3316]: E0819 00:13:36.372551 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wwgr9" podUID="3a3c5b61-646c-40bb-911b-c7ba6082008f" Aug 19 00:13:36.627947 containerd[1995]: time="2025-08-19T00:13:36.627787142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 19 00:13:38.371865 kubelet[3316]: E0819 00:13:38.371737 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wwgr9" podUID="3a3c5b61-646c-40bb-911b-c7ba6082008f" Aug 19 00:13:39.750851 containerd[1995]: time="2025-08-19T00:13:39.750713969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:39.752529 containerd[1995]: time="2025-08-19T00:13:39.751959161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Aug 19 00:13:39.754750 containerd[1995]: time="2025-08-19T00:13:39.754643849Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:39.759868 containerd[1995]: time="2025-08-19T00:13:39.759781445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:39.761713 containerd[1995]: time="2025-08-19T00:13:39.761197157Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.133213179s" Aug 19 00:13:39.761713 containerd[1995]: time="2025-08-19T00:13:39.761277773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 19 00:13:39.769517 containerd[1995]: time="2025-08-19T00:13:39.769468901Z" level=info msg="CreateContainer within sandbox \"05e32d2d5d6594a5c362e597bf78d6d57ec7f7a5ef8dfed9a0425c8f01b4ad2a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 19 00:13:39.787705 containerd[1995]: time="2025-08-19T00:13:39.787624325Z" level=info msg="Container cbeff50378f1bd7eec140e9b9fe4080f0cb8fe505c712e2515d79c505caee531: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:13:39.793294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1809502644.mount: Deactivated successfully. 
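The csi-node-driver pod keeps failing with "cni plugin not initialized" because containerd's CRI plugin has no network configuration yet; the install-cni container being created above is what eventually writes one under /etc/cni/net.d. The sketch below prints an illustrative conflist of the shape Calico typically installs; the file name and exact fields are assumptions, not read from this node.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Shape of the conflist Calico's install-cni usually writes; values here
        // are illustrative, not taken from this node's /etc/cni/net.d.
        conflist := map[string]any{
            "name":       "k8s-pod-network",
            "cniVersion": "0.3.1",
            "plugins": []map[string]any{
                {"type": "calico", "datastore_type": "kubernetes", "ipam": map[string]any{"type": "calico-ipam"}},
                {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
            },
        }
        out, _ := json.MarshalIndent(conflist, "", "  ")
        fmt.Println(string(out)) // e.g. written as /etc/cni/net.d/10-calico.conflist
    }
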
Aug 19 00:13:39.809377 containerd[1995]: time="2025-08-19T00:13:39.809255681Z" level=info msg="CreateContainer within sandbox \"05e32d2d5d6594a5c362e597bf78d6d57ec7f7a5ef8dfed9a0425c8f01b4ad2a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cbeff50378f1bd7eec140e9b9fe4080f0cb8fe505c712e2515d79c505caee531\"" Aug 19 00:13:39.811303 containerd[1995]: time="2025-08-19T00:13:39.810309341Z" level=info msg="StartContainer for \"cbeff50378f1bd7eec140e9b9fe4080f0cb8fe505c712e2515d79c505caee531\"" Aug 19 00:13:39.813780 containerd[1995]: time="2025-08-19T00:13:39.813727661Z" level=info msg="connecting to shim cbeff50378f1bd7eec140e9b9fe4080f0cb8fe505c712e2515d79c505caee531" address="unix:///run/containerd/s/1a6f8c19ca71b334a4397b14a98afeff649b4356ecfc6b05bcf81689830e3c4d" protocol=ttrpc version=3 Aug 19 00:13:39.859435 systemd[1]: Started cri-containerd-cbeff50378f1bd7eec140e9b9fe4080f0cb8fe505c712e2515d79c505caee531.scope - libcontainer container cbeff50378f1bd7eec140e9b9fe4080f0cb8fe505c712e2515d79c505caee531. Aug 19 00:13:39.938560 containerd[1995]: time="2025-08-19T00:13:39.938504958Z" level=info msg="StartContainer for \"cbeff50378f1bd7eec140e9b9fe4080f0cb8fe505c712e2515d79c505caee531\" returns successfully" Aug 19 00:13:40.371583 kubelet[3316]: E0819 00:13:40.371501 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wwgr9" podUID="3a3c5b61-646c-40bb-911b-c7ba6082008f" Aug 19 00:13:41.299824 systemd[1]: cri-containerd-cbeff50378f1bd7eec140e9b9fe4080f0cb8fe505c712e2515d79c505caee531.scope: Deactivated successfully. Aug 19 00:13:41.301078 systemd[1]: cri-containerd-cbeff50378f1bd7eec140e9b9fe4080f0cb8fe505c712e2515d79c505caee531.scope: Consumed 947ms CPU time, 185.6M memory peak, 165.8M written to disk. Aug 19 00:13:41.305188 containerd[1995]: time="2025-08-19T00:13:41.305039369Z" level=info msg="received exit event container_id:\"cbeff50378f1bd7eec140e9b9fe4080f0cb8fe505c712e2515d79c505caee531\" id:\"cbeff50378f1bd7eec140e9b9fe4080f0cb8fe505c712e2515d79c505caee531\" pid:4157 exited_at:{seconds:1755562421 nanos:303970181}" Aug 19 00:13:41.306189 containerd[1995]: time="2025-08-19T00:13:41.305927873Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbeff50378f1bd7eec140e9b9fe4080f0cb8fe505c712e2515d79c505caee531\" id:\"cbeff50378f1bd7eec140e9b9fe4080f0cb8fe505c712e2515d79c505caee531\" pid:4157 exited_at:{seconds:1755562421 nanos:303970181}" Aug 19 00:13:41.345374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbeff50378f1bd7eec140e9b9fe4080f0cb8fe505c712e2515d79c505caee531-rootfs.mount: Deactivated successfully. Aug 19 00:13:41.357012 kubelet[3316]: I0819 00:13:41.356940 3316 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 19 00:13:41.475496 systemd[1]: Created slice kubepods-burstable-pod3a6c1d9e_4bf2_45ff_bf17_eea50440b077.slice - libcontainer container kubepods-burstable-pod3a6c1d9e_4bf2_45ff_bf17_eea50440b077.slice. 
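Once install-cni has run, the kubelet reports the node as ready and starts admitting the pending pods, which is where the "Created slice" lines come from. Those slice names follow the systemd cgroup driver convention, QoS class plus the pod UID with dashes replaced by underscores, as this small sketch reproduces for the coredns pod UID seen above.

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice builds the systemd slice name the kubelet uses for a pod cgroup.
    func podSlice(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // UID of coredns-674b8bbfcf-442jd as reported above.
        fmt.Println(podSlice("burstable", "3a6c1d9e-4bf2-45ff-bf17-eea50440b077"))
        // Output: kubepods-burstable-pod3a6c1d9e_4bf2_45ff_bf17_eea50440b077.slice
    }
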
Aug 19 00:13:41.496273 kubelet[3316]: I0819 00:13:41.494941 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a6c1d9e-4bf2-45ff-bf17-eea50440b077-config-volume\") pod \"coredns-674b8bbfcf-442jd\" (UID: \"3a6c1d9e-4bf2-45ff-bf17-eea50440b077\") " pod="kube-system/coredns-674b8bbfcf-442jd" Aug 19 00:13:41.496116 systemd[1]: Created slice kubepods-burstable-pod489c6088_4872_4e61_a179_8fa73fba0368.slice - libcontainer container kubepods-burstable-pod489c6088_4872_4e61_a179_8fa73fba0368.slice. Aug 19 00:13:41.514046 kubelet[3316]: I0819 00:13:41.500163 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2xgt\" (UniqueName: \"kubernetes.io/projected/489c6088-4872-4e61-a179-8fa73fba0368-kube-api-access-t2xgt\") pod \"coredns-674b8bbfcf-47789\" (UID: \"489c6088-4872-4e61-a179-8fa73fba0368\") " pod="kube-system/coredns-674b8bbfcf-47789" Aug 19 00:13:41.514046 kubelet[3316]: I0819 00:13:41.500237 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/489c6088-4872-4e61-a179-8fa73fba0368-config-volume\") pod \"coredns-674b8bbfcf-47789\" (UID: \"489c6088-4872-4e61-a179-8fa73fba0368\") " pod="kube-system/coredns-674b8bbfcf-47789" Aug 19 00:13:41.514046 kubelet[3316]: I0819 00:13:41.500287 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pkt5\" (UniqueName: \"kubernetes.io/projected/3a6c1d9e-4bf2-45ff-bf17-eea50440b077-kube-api-access-4pkt5\") pod \"coredns-674b8bbfcf-442jd\" (UID: \"3a6c1d9e-4bf2-45ff-bf17-eea50440b077\") " pod="kube-system/coredns-674b8bbfcf-442jd" Aug 19 00:13:41.523666 systemd[1]: Created slice kubepods-besteffort-pod2abcaa93_c27e_43ad_9578_8f41027d690a.slice - libcontainer container kubepods-besteffort-pod2abcaa93_c27e_43ad_9578_8f41027d690a.slice. Aug 19 00:13:41.570499 systemd[1]: Created slice kubepods-besteffort-pod8705aabd_8208_4698_a3bb_69c333eecff1.slice - libcontainer container kubepods-besteffort-pod8705aabd_8208_4698_a3bb_69c333eecff1.slice. Aug 19 00:13:41.581010 systemd[1]: Created slice kubepods-besteffort-podcc70ae68_1506_43fd_a635_d4aade097cbf.slice - libcontainer container kubepods-besteffort-podcc70ae68_1506_43fd_a635_d4aade097cbf.slice. 
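The reconciler_common lines track per-volume bookkeeping for the coredns pods before their containers can start. On disk, the kubelet materialises those volumes under the pod's directory in /var/lib/kubelet/pods; the sketch below assumes the kubelet's usual volume plugin path layout and shows where the config-volume and kube-api-access volumes named above would land.

    package main

    import (
        "fmt"
        "path/filepath"
    )

    // volumePath builds the on-disk location the kubelet uses for a pod volume.
    func volumePath(podUID, plugin, volume string) string {
        return filepath.Join("/var/lib/kubelet/pods", podUID, "volumes", plugin, volume)
    }

    func main() {
        uid := "3a6c1d9e-4bf2-45ff-bf17-eea50440b077" // coredns-674b8bbfcf-442jd
        fmt.Println(volumePath(uid, "kubernetes.io~configmap", "config-volume"))
        fmt.Println(volumePath(uid, "kubernetes.io~projected", "kube-api-access-4pkt5"))
    }
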
Aug 19 00:13:41.601091 kubelet[3316]: I0819 00:13:41.601016 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6c04bb33-7fc2-4348-831d-39a850510dc6-goldmane-key-pair\") pod \"goldmane-768f4c5c69-q52jp\" (UID: \"6c04bb33-7fc2-4348-831d-39a850510dc6\") " pod="calico-system/goldmane-768f4c5c69-q52jp" Aug 19 00:13:41.601091 kubelet[3316]: I0819 00:13:41.601087 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2abcaa93-c27e-43ad-9578-8f41027d690a-tigera-ca-bundle\") pod \"calico-kube-controllers-669988b64f-smmx4\" (UID: \"2abcaa93-c27e-43ad-9578-8f41027d690a\") " pod="calico-system/calico-kube-controllers-669988b64f-smmx4" Aug 19 00:13:41.603189 kubelet[3316]: I0819 00:13:41.602251 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c04bb33-7fc2-4348-831d-39a850510dc6-config\") pod \"goldmane-768f4c5c69-q52jp\" (UID: \"6c04bb33-7fc2-4348-831d-39a850510dc6\") " pod="calico-system/goldmane-768f4c5c69-q52jp" Aug 19 00:13:41.603189 kubelet[3316]: I0819 00:13:41.602322 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c04bb33-7fc2-4348-831d-39a850510dc6-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-q52jp\" (UID: \"6c04bb33-7fc2-4348-831d-39a850510dc6\") " pod="calico-system/goldmane-768f4c5c69-q52jp" Aug 19 00:13:41.603189 kubelet[3316]: I0819 00:13:41.602407 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8705aabd-8208-4698-a3bb-69c333eecff1-calico-apiserver-certs\") pod \"calico-apiserver-7865f59cf-fwl99\" (UID: \"8705aabd-8208-4698-a3bb-69c333eecff1\") " pod="calico-apiserver/calico-apiserver-7865f59cf-fwl99" Aug 19 00:13:41.603189 kubelet[3316]: I0819 00:13:41.602444 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9q4w\" (UniqueName: \"kubernetes.io/projected/cc70ae68-1506-43fd-a635-d4aade097cbf-kube-api-access-h9q4w\") pod \"calico-apiserver-7865f59cf-x4xb2\" (UID: \"cc70ae68-1506-43fd-a635-d4aade097cbf\") " pod="calico-apiserver/calico-apiserver-7865f59cf-x4xb2" Aug 19 00:13:41.603189 kubelet[3316]: I0819 00:13:41.602506 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm2cz\" (UniqueName: \"kubernetes.io/projected/8705aabd-8208-4698-a3bb-69c333eecff1-kube-api-access-zm2cz\") pod \"calico-apiserver-7865f59cf-fwl99\" (UID: \"8705aabd-8208-4698-a3bb-69c333eecff1\") " pod="calico-apiserver/calico-apiserver-7865f59cf-fwl99" Aug 19 00:13:41.603565 kubelet[3316]: I0819 00:13:41.602573 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcjhf\" (UniqueName: \"kubernetes.io/projected/6c04bb33-7fc2-4348-831d-39a850510dc6-kube-api-access-zcjhf\") pod \"goldmane-768f4c5c69-q52jp\" (UID: \"6c04bb33-7fc2-4348-831d-39a850510dc6\") " pod="calico-system/goldmane-768f4c5c69-q52jp" Aug 19 00:13:41.603565 kubelet[3316]: I0819 00:13:41.602615 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzxqw\" 
(UniqueName: \"kubernetes.io/projected/2abcaa93-c27e-43ad-9578-8f41027d690a-kube-api-access-wzxqw\") pod \"calico-kube-controllers-669988b64f-smmx4\" (UID: \"2abcaa93-c27e-43ad-9578-8f41027d690a\") " pod="calico-system/calico-kube-controllers-669988b64f-smmx4" Aug 19 00:13:41.603565 kubelet[3316]: I0819 00:13:41.602656 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cc70ae68-1506-43fd-a635-d4aade097cbf-calico-apiserver-certs\") pod \"calico-apiserver-7865f59cf-x4xb2\" (UID: \"cc70ae68-1506-43fd-a635-d4aade097cbf\") " pod="calico-apiserver/calico-apiserver-7865f59cf-x4xb2" Aug 19 00:13:41.615066 systemd[1]: Created slice kubepods-besteffort-poda6823718_d417_4ba3_ae8c_c7f4d8d33bd6.slice - libcontainer container kubepods-besteffort-poda6823718_d417_4ba3_ae8c_c7f4d8d33bd6.slice. Aug 19 00:13:41.636895 systemd[1]: Created slice kubepods-besteffort-pod6c04bb33_7fc2_4348_831d_39a850510dc6.slice - libcontainer container kubepods-besteffort-pod6c04bb33_7fc2_4348_831d_39a850510dc6.slice. Aug 19 00:13:41.688020 containerd[1995]: time="2025-08-19T00:13:41.687910627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 19 00:13:41.704169 kubelet[3316]: I0819 00:13:41.703918 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6823718-d417-4ba3-ae8c-c7f4d8d33bd6-whisker-ca-bundle\") pod \"whisker-6cb6c68854-sthf5\" (UID: \"a6823718-d417-4ba3-ae8c-c7f4d8d33bd6\") " pod="calico-system/whisker-6cb6c68854-sthf5" Aug 19 00:13:41.704169 kubelet[3316]: I0819 00:13:41.704018 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzwbk\" (UniqueName: \"kubernetes.io/projected/a6823718-d417-4ba3-ae8c-c7f4d8d33bd6-kube-api-access-bzwbk\") pod \"whisker-6cb6c68854-sthf5\" (UID: \"a6823718-d417-4ba3-ae8c-c7f4d8d33bd6\") " pod="calico-system/whisker-6cb6c68854-sthf5" Aug 19 00:13:41.705670 kubelet[3316]: I0819 00:13:41.705624 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a6823718-d417-4ba3-ae8c-c7f4d8d33bd6-whisker-backend-key-pair\") pod \"whisker-6cb6c68854-sthf5\" (UID: \"a6823718-d417-4ba3-ae8c-c7f4d8d33bd6\") " pod="calico-system/whisker-6cb6c68854-sthf5" Aug 19 00:13:41.828887 containerd[1995]: time="2025-08-19T00:13:41.828492907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-47789,Uid:489c6088-4872-4e61-a179-8fa73fba0368,Namespace:kube-system,Attempt:0,}" Aug 19 00:13:41.832174 containerd[1995]: time="2025-08-19T00:13:41.831935611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669988b64f-smmx4,Uid:2abcaa93-c27e-43ad-9578-8f41027d690a,Namespace:calico-system,Attempt:0,}" Aug 19 00:13:41.833502 containerd[1995]: time="2025-08-19T00:13:41.833235463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-442jd,Uid:3a6c1d9e-4bf2-45ff-bf17-eea50440b077,Namespace:kube-system,Attempt:0,}" Aug 19 00:13:41.893154 containerd[1995]: time="2025-08-19T00:13:41.892950680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7865f59cf-fwl99,Uid:8705aabd-8208-4698-a3bb-69c333eecff1,Namespace:calico-apiserver,Attempt:0,}" Aug 19 00:13:41.894984 containerd[1995]: 
time="2025-08-19T00:13:41.894905480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7865f59cf-x4xb2,Uid:cc70ae68-1506-43fd-a635-d4aade097cbf,Namespace:calico-apiserver,Attempt:0,}" Aug 19 00:13:41.930422 containerd[1995]: time="2025-08-19T00:13:41.930201008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cb6c68854-sthf5,Uid:a6823718-d417-4ba3-ae8c-c7f4d8d33bd6,Namespace:calico-system,Attempt:0,}" Aug 19 00:13:41.953565 containerd[1995]: time="2025-08-19T00:13:41.953498504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-q52jp,Uid:6c04bb33-7fc2-4348-831d-39a850510dc6,Namespace:calico-system,Attempt:0,}" Aug 19 00:13:42.133681 containerd[1995]: time="2025-08-19T00:13:42.133520081Z" level=error msg="Failed to destroy network for sandbox \"58e877b8856467191d3cd579b4b5fbb0c75a4f382bb1ce36ed07d29502a52ec9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.138887 containerd[1995]: time="2025-08-19T00:13:42.138779561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-47789,Uid:489c6088-4872-4e61-a179-8fa73fba0368,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"58e877b8856467191d3cd579b4b5fbb0c75a4f382bb1ce36ed07d29502a52ec9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.139547 kubelet[3316]: E0819 00:13:42.139331 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58e877b8856467191d3cd579b4b5fbb0c75a4f382bb1ce36ed07d29502a52ec9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.139680 kubelet[3316]: E0819 00:13:42.139572 3316 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58e877b8856467191d3cd579b4b5fbb0c75a4f382bb1ce36ed07d29502a52ec9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-47789" Aug 19 00:13:42.139680 kubelet[3316]: E0819 00:13:42.139610 3316 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58e877b8856467191d3cd579b4b5fbb0c75a4f382bb1ce36ed07d29502a52ec9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-47789" Aug 19 00:13:42.139808 kubelet[3316]: E0819 00:13:42.139700 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-47789_kube-system(489c6088-4872-4e61-a179-8fa73fba0368)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-47789_kube-system(489c6088-4872-4e61-a179-8fa73fba0368)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"58e877b8856467191d3cd579b4b5fbb0c75a4f382bb1ce36ed07d29502a52ec9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-47789" podUID="489c6088-4872-4e61-a179-8fa73fba0368" Aug 19 00:13:42.226566 containerd[1995]: time="2025-08-19T00:13:42.226435061Z" level=error msg="Failed to destroy network for sandbox \"da7167a148cc85bcf4614d49fa3217fbcab3201baef5ab9e4d7d2e8bcab2712b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.230397 containerd[1995]: time="2025-08-19T00:13:42.228881777Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669988b64f-smmx4,Uid:2abcaa93-c27e-43ad-9578-8f41027d690a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"da7167a148cc85bcf4614d49fa3217fbcab3201baef5ab9e4d7d2e8bcab2712b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.232496 kubelet[3316]: E0819 00:13:42.232419 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da7167a148cc85bcf4614d49fa3217fbcab3201baef5ab9e4d7d2e8bcab2712b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.232662 kubelet[3316]: E0819 00:13:42.232512 3316 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da7167a148cc85bcf4614d49fa3217fbcab3201baef5ab9e4d7d2e8bcab2712b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-669988b64f-smmx4" Aug 19 00:13:42.232662 kubelet[3316]: E0819 00:13:42.232549 3316 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da7167a148cc85bcf4614d49fa3217fbcab3201baef5ab9e4d7d2e8bcab2712b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-669988b64f-smmx4" Aug 19 00:13:42.232662 kubelet[3316]: E0819 00:13:42.232624 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-669988b64f-smmx4_calico-system(2abcaa93-c27e-43ad-9578-8f41027d690a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-669988b64f-smmx4_calico-system(2abcaa93-c27e-43ad-9578-8f41027d690a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da7167a148cc85bcf4614d49fa3217fbcab3201baef5ab9e4d7d2e8bcab2712b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-669988b64f-smmx4" podUID="2abcaa93-c27e-43ad-9578-8f41027d690a" Aug 19 00:13:42.243474 containerd[1995]: time="2025-08-19T00:13:42.243377621Z" level=error msg="Failed to destroy network for sandbox \"d5ea444224987cf94acd189f0db8696b455a79e0d7363124af1eebacd83ad8c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.247072 containerd[1995]: time="2025-08-19T00:13:42.246982877Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-442jd,Uid:3a6c1d9e-4bf2-45ff-bf17-eea50440b077,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5ea444224987cf94acd189f0db8696b455a79e0d7363124af1eebacd83ad8c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.247949 kubelet[3316]: E0819 00:13:42.247745 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5ea444224987cf94acd189f0db8696b455a79e0d7363124af1eebacd83ad8c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.247949 kubelet[3316]: E0819 00:13:42.247821 3316 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5ea444224987cf94acd189f0db8696b455a79e0d7363124af1eebacd83ad8c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-442jd" Aug 19 00:13:42.247949 kubelet[3316]: E0819 00:13:42.247859 3316 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5ea444224987cf94acd189f0db8696b455a79e0d7363124af1eebacd83ad8c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-442jd" Aug 19 00:13:42.249909 kubelet[3316]: E0819 00:13:42.249392 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-442jd_kube-system(3a6c1d9e-4bf2-45ff-bf17-eea50440b077)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-442jd_kube-system(3a6c1d9e-4bf2-45ff-bf17-eea50440b077)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5ea444224987cf94acd189f0db8696b455a79e0d7363124af1eebacd83ad8c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-442jd" podUID="3a6c1d9e-4bf2-45ff-bf17-eea50440b077" Aug 19 00:13:42.251400 containerd[1995]: time="2025-08-19T00:13:42.251298630Z" level=error msg="Failed to destroy network for sandbox \"af912921e320aab4e83b97249f82ac877041eacd4607da20fa3d96d8d8d0048b\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.253398 containerd[1995]: time="2025-08-19T00:13:42.253308810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7865f59cf-x4xb2,Uid:cc70ae68-1506-43fd-a635-d4aade097cbf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af912921e320aab4e83b97249f82ac877041eacd4607da20fa3d96d8d8d0048b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.255004 kubelet[3316]: E0819 00:13:42.254915 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af912921e320aab4e83b97249f82ac877041eacd4607da20fa3d96d8d8d0048b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.255362 kubelet[3316]: E0819 00:13:42.254999 3316 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af912921e320aab4e83b97249f82ac877041eacd4607da20fa3d96d8d8d0048b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7865f59cf-x4xb2" Aug 19 00:13:42.255362 kubelet[3316]: E0819 00:13:42.255094 3316 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af912921e320aab4e83b97249f82ac877041eacd4607da20fa3d96d8d8d0048b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7865f59cf-x4xb2" Aug 19 00:13:42.255362 kubelet[3316]: E0819 00:13:42.255313 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7865f59cf-x4xb2_calico-apiserver(cc70ae68-1506-43fd-a635-d4aade097cbf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7865f59cf-x4xb2_calico-apiserver(cc70ae68-1506-43fd-a635-d4aade097cbf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af912921e320aab4e83b97249f82ac877041eacd4607da20fa3d96d8d8d0048b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7865f59cf-x4xb2" podUID="cc70ae68-1506-43fd-a635-d4aade097cbf" Aug 19 00:13:42.290715 containerd[1995]: time="2025-08-19T00:13:42.290476578Z" level=error msg="Failed to destroy network for sandbox \"4f59df5964b714dc7586056b4b3484d73f6eeb6d85ff6bec1e3b3a000f99d5a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.293034 containerd[1995]: time="2025-08-19T00:13:42.292664502Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7865f59cf-fwl99,Uid:8705aabd-8208-4698-a3bb-69c333eecff1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f59df5964b714dc7586056b4b3484d73f6eeb6d85ff6bec1e3b3a000f99d5a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.293656 kubelet[3316]: E0819 00:13:42.293452 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f59df5964b714dc7586056b4b3484d73f6eeb6d85ff6bec1e3b3a000f99d5a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.293656 kubelet[3316]: E0819 00:13:42.293630 3316 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f59df5964b714dc7586056b4b3484d73f6eeb6d85ff6bec1e3b3a000f99d5a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7865f59cf-fwl99" Aug 19 00:13:42.296171 kubelet[3316]: E0819 00:13:42.293665 3316 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f59df5964b714dc7586056b4b3484d73f6eeb6d85ff6bec1e3b3a000f99d5a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7865f59cf-fwl99" Aug 19 00:13:42.296171 kubelet[3316]: E0819 00:13:42.293754 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7865f59cf-fwl99_calico-apiserver(8705aabd-8208-4698-a3bb-69c333eecff1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7865f59cf-fwl99_calico-apiserver(8705aabd-8208-4698-a3bb-69c333eecff1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f59df5964b714dc7586056b4b3484d73f6eeb6d85ff6bec1e3b3a000f99d5a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7865f59cf-fwl99" podUID="8705aabd-8208-4698-a3bb-69c333eecff1" Aug 19 00:13:42.305098 containerd[1995]: time="2025-08-19T00:13:42.305034054Z" level=error msg="Failed to destroy network for sandbox \"839c27874f53fe4e871cf754fd0680eadc9cf28616ec9f8d7b3df934de692ae8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.306662 containerd[1995]: time="2025-08-19T00:13:42.306570966Z" level=error msg="Failed to destroy network for sandbox \"716643be747e29bed5e141e7a82728dc4fa049f24ef0f4d2dc478f5129785e85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 
00:13:42.307393 containerd[1995]: time="2025-08-19T00:13:42.307324782Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-q52jp,Uid:6c04bb33-7fc2-4348-831d-39a850510dc6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"839c27874f53fe4e871cf754fd0680eadc9cf28616ec9f8d7b3df934de692ae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.307866 kubelet[3316]: E0819 00:13:42.307807 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"839c27874f53fe4e871cf754fd0680eadc9cf28616ec9f8d7b3df934de692ae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.307968 kubelet[3316]: E0819 00:13:42.307940 3316 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"839c27874f53fe4e871cf754fd0680eadc9cf28616ec9f8d7b3df934de692ae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-q52jp" Aug 19 00:13:42.308033 kubelet[3316]: E0819 00:13:42.307979 3316 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"839c27874f53fe4e871cf754fd0680eadc9cf28616ec9f8d7b3df934de692ae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-q52jp" Aug 19 00:13:42.308200 kubelet[3316]: E0819 00:13:42.308072 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-q52jp_calico-system(6c04bb33-7fc2-4348-831d-39a850510dc6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-q52jp_calico-system(6c04bb33-7fc2-4348-831d-39a850510dc6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"839c27874f53fe4e871cf754fd0680eadc9cf28616ec9f8d7b3df934de692ae8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-q52jp" podUID="6c04bb33-7fc2-4348-831d-39a850510dc6" Aug 19 00:13:42.308680 containerd[1995]: time="2025-08-19T00:13:42.308574522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cb6c68854-sthf5,Uid:a6823718-d417-4ba3-ae8c-c7f4d8d33bd6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"716643be747e29bed5e141e7a82728dc4fa049f24ef0f4d2dc478f5129785e85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.309577 kubelet[3316]: E0819 00:13:42.309279 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"716643be747e29bed5e141e7a82728dc4fa049f24ef0f4d2dc478f5129785e85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.309577 kubelet[3316]: E0819 00:13:42.309407 3316 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"716643be747e29bed5e141e7a82728dc4fa049f24ef0f4d2dc478f5129785e85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cb6c68854-sthf5" Aug 19 00:13:42.309577 kubelet[3316]: E0819 00:13:42.309445 3316 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"716643be747e29bed5e141e7a82728dc4fa049f24ef0f4d2dc478f5129785e85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cb6c68854-sthf5" Aug 19 00:13:42.309893 kubelet[3316]: E0819 00:13:42.309558 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6cb6c68854-sthf5_calico-system(a6823718-d417-4ba3-ae8c-c7f4d8d33bd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6cb6c68854-sthf5_calico-system(a6823718-d417-4ba3-ae8c-c7f4d8d33bd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"716643be747e29bed5e141e7a82728dc4fa049f24ef0f4d2dc478f5129785e85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6cb6c68854-sthf5" podUID="a6823718-d417-4ba3-ae8c-c7f4d8d33bd6" Aug 19 00:13:42.383068 systemd[1]: Created slice kubepods-besteffort-pod3a3c5b61_646c_40bb_911b_c7ba6082008f.slice - libcontainer container kubepods-besteffort-pod3a3c5b61_646c_40bb_911b_c7ba6082008f.slice. 
Aug 19 00:13:42.388559 containerd[1995]: time="2025-08-19T00:13:42.388480266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wwgr9,Uid:3a3c5b61-646c-40bb-911b-c7ba6082008f,Namespace:calico-system,Attempt:0,}" Aug 19 00:13:42.481154 containerd[1995]: time="2025-08-19T00:13:42.481069663Z" level=error msg="Failed to destroy network for sandbox \"d8ac465901c4e743df0674e12fcbe8c50a8d0bf16472e9709b11f53b192eb8ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.484955 containerd[1995]: time="2025-08-19T00:13:42.484833511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wwgr9,Uid:3a3c5b61-646c-40bb-911b-c7ba6082008f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8ac465901c4e743df0674e12fcbe8c50a8d0bf16472e9709b11f53b192eb8ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.485871 kubelet[3316]: E0819 00:13:42.485480 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8ac465901c4e743df0674e12fcbe8c50a8d0bf16472e9709b11f53b192eb8ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 00:13:42.485871 kubelet[3316]: E0819 00:13:42.485564 3316 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8ac465901c4e743df0674e12fcbe8c50a8d0bf16472e9709b11f53b192eb8ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wwgr9" Aug 19 00:13:42.485871 kubelet[3316]: E0819 00:13:42.485598 3316 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8ac465901c4e743df0674e12fcbe8c50a8d0bf16472e9709b11f53b192eb8ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wwgr9" Aug 19 00:13:42.487248 kubelet[3316]: E0819 00:13:42.485884 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wwgr9_calico-system(3a3c5b61-646c-40bb-911b-c7ba6082008f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wwgr9_calico-system(3a3c5b61-646c-40bb-911b-c7ba6082008f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8ac465901c4e743df0674e12fcbe8c50a8d0bf16472e9709b11f53b192eb8ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wwgr9" podUID="3a3c5b61-646c-40bb-911b-c7ba6082008f" Aug 19 00:13:42.489102 systemd[1]: run-netns-cni\x2d66d84475\x2daf17\x2dfaf8\x2d2e84\x2dc9f8d53aacce.mount: Deactivated successfully. 
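Every RunPodSandbox failure above reports the same underlying condition: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file that only appears once the calico/node container is running and has mounted /var/lib/calico/ from the host, exactly as the error text says. Until the calico/node image pull completes further down, every pod that needs a Calico-managed network fails sandbox creation; the same sandboxes (calico-kube-controllers, calico-apiserver, and the rest) are run again successfully later in the log. A minimal sketch of the check the error message itself suggests, assuming it is run directly on the affected node (the helper name is illustrative, not part of Calico):

import os
import sys

NODENAME_FILE = "/var/lib/calico/nodename"

def check_calico_nodename() -> int:
    # The CNI plugin reads this file to learn which Calico node object it
    # belongs to; the sandbox errors above are raised when this stat fails.
    if os.path.isfile(NODENAME_FILE):
        with open(NODENAME_FILE) as f:
            print(f"calico nodename present: {f.read().strip()}")
        return 0
    print(f"{NODENAME_FILE} missing - calico/node is not running yet or has "
          "not mounted /var/lib/calico/ from the host", file=sys.stderr)
    return 1

if __name__ == "__main__":
    sys.exit(check_calico_nodename())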
Aug 19 00:13:49.980750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3098203104.mount: Deactivated successfully. Aug 19 00:13:50.055299 containerd[1995]: time="2025-08-19T00:13:50.055092192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:50.056830 containerd[1995]: time="2025-08-19T00:13:50.056615028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Aug 19 00:13:50.058148 containerd[1995]: time="2025-08-19T00:13:50.058020180Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:50.061499 containerd[1995]: time="2025-08-19T00:13:50.061421640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:50.062722 containerd[1995]: time="2025-08-19T00:13:50.062539164Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 8.374566293s" Aug 19 00:13:50.062722 containerd[1995]: time="2025-08-19T00:13:50.062596200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 19 00:13:50.107282 containerd[1995]: time="2025-08-19T00:13:50.107176549Z" level=info msg="CreateContainer within sandbox \"05e32d2d5d6594a5c362e597bf78d6d57ec7f7a5ef8dfed9a0425c8f01b4ad2a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 19 00:13:50.123231 containerd[1995]: time="2025-08-19T00:13:50.122461009Z" level=info msg="Container f26552eafe1d9c8b58774e3366de8d0aa77041e41c192ccd327d56e19ffce3e7: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:13:50.146299 containerd[1995]: time="2025-08-19T00:13:50.146247217Z" level=info msg="CreateContainer within sandbox \"05e32d2d5d6594a5c362e597bf78d6d57ec7f7a5ef8dfed9a0425c8f01b4ad2a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f26552eafe1d9c8b58774e3366de8d0aa77041e41c192ccd327d56e19ffce3e7\"" Aug 19 00:13:50.149417 containerd[1995]: time="2025-08-19T00:13:50.149370685Z" level=info msg="StartContainer for \"f26552eafe1d9c8b58774e3366de8d0aa77041e41c192ccd327d56e19ffce3e7\"" Aug 19 00:13:50.152978 containerd[1995]: time="2025-08-19T00:13:50.152914369Z" level=info msg="connecting to shim f26552eafe1d9c8b58774e3366de8d0aa77041e41c192ccd327d56e19ffce3e7" address="unix:///run/containerd/s/1a6f8c19ca71b334a4397b14a98afeff649b4356ecfc6b05bcf81689830e3c4d" protocol=ttrpc version=3 Aug 19 00:13:50.189446 systemd[1]: Started cri-containerd-f26552eafe1d9c8b58774e3366de8d0aa77041e41c192ccd327d56e19ffce3e7.scope - libcontainer container f26552eafe1d9c8b58774e3366de8d0aa77041e41c192ccd327d56e19ffce3e7. Aug 19 00:13:50.287973 containerd[1995]: time="2025-08-19T00:13:50.287799289Z" level=info msg="StartContainer for \"f26552eafe1d9c8b58774e3366de8d0aa77041e41c192ccd327d56e19ffce3e7\" returns successfully" Aug 19 00:13:50.547495 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Aug 19 00:13:50.547622 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 19 00:13:50.813167 kubelet[3316]: I0819 00:13:50.812902 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z4bjt" podStartSLOduration=1.920447638 podStartE2EDuration="20.812829736s" podCreationTimestamp="2025-08-19 00:13:30 +0000 UTC" firstStartedPulling="2025-08-19 00:13:31.171952662 +0000 UTC m=+32.038421452" lastFinishedPulling="2025-08-19 00:13:50.06433476 +0000 UTC m=+50.930803550" observedRunningTime="2025-08-19 00:13:50.80758672 +0000 UTC m=+51.674055570" watchObservedRunningTime="2025-08-19 00:13:50.812829736 +0000 UTC m=+51.679298514" Aug 19 00:13:50.883521 kubelet[3316]: I0819 00:13:50.883454 3316 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a6823718-d417-4ba3-ae8c-c7f4d8d33bd6-whisker-backend-key-pair\") pod \"a6823718-d417-4ba3-ae8c-c7f4d8d33bd6\" (UID: \"a6823718-d417-4ba3-ae8c-c7f4d8d33bd6\") " Aug 19 00:13:50.883679 kubelet[3316]: I0819 00:13:50.883530 3316 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzwbk\" (UniqueName: \"kubernetes.io/projected/a6823718-d417-4ba3-ae8c-c7f4d8d33bd6-kube-api-access-bzwbk\") pod \"a6823718-d417-4ba3-ae8c-c7f4d8d33bd6\" (UID: \"a6823718-d417-4ba3-ae8c-c7f4d8d33bd6\") " Aug 19 00:13:50.883679 kubelet[3316]: I0819 00:13:50.883572 3316 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6823718-d417-4ba3-ae8c-c7f4d8d33bd6-whisker-ca-bundle\") pod \"a6823718-d417-4ba3-ae8c-c7f4d8d33bd6\" (UID: \"a6823718-d417-4ba3-ae8c-c7f4d8d33bd6\") " Aug 19 00:13:50.885185 kubelet[3316]: I0819 00:13:50.884254 3316 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6823718-d417-4ba3-ae8c-c7f4d8d33bd6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a6823718-d417-4ba3-ae8c-c7f4d8d33bd6" (UID: "a6823718-d417-4ba3-ae8c-c7f4d8d33bd6"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 19 00:13:50.900612 kubelet[3316]: I0819 00:13:50.900504 3316 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6823718-d417-4ba3-ae8c-c7f4d8d33bd6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a6823718-d417-4ba3-ae8c-c7f4d8d33bd6" (UID: "a6823718-d417-4ba3-ae8c-c7f4d8d33bd6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 19 00:13:50.903616 kubelet[3316]: I0819 00:13:50.903542 3316 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6823718-d417-4ba3-ae8c-c7f4d8d33bd6-kube-api-access-bzwbk" (OuterVolumeSpecName: "kube-api-access-bzwbk") pod "a6823718-d417-4ba3-ae8c-c7f4d8d33bd6" (UID: "a6823718-d417-4ba3-ae8c-c7f4d8d33bd6"). InnerVolumeSpecName "kube-api-access-bzwbk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 19 00:13:50.982315 systemd[1]: var-lib-kubelet-pods-a6823718\x2dd417\x2d4ba3\x2dae8c\x2dc7f4d8d33bd6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbzwbk.mount: Deactivated successfully. 
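The pod_startup_latency_tracker entry above for calico-node-z4bjt is internally consistent: the E2E duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration works out to that figure minus the image-pull window (firstStartedPulling to lastFinishedPulling). A quick cross-check using only the timestamps reported in the log, truncated to microseconds (the script and its variable names are just illustrative):

from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"
created    = datetime.strptime("2025-08-19 00:13:30.000000", FMT)  # podCreationTimestamp
pull_start = datetime.strptime("2025-08-19 00:13:31.171952", FMT)  # firstStartedPulling
pull_done  = datetime.strptime("2025-08-19 00:13:50.064334", FMT)  # lastFinishedPulling
running    = datetime.strptime("2025-08-19 00:13:50.812829", FMT)  # watchObservedRunningTime

e2e = (running - created).total_seconds()
slo = e2e - (pull_done - pull_start).total_seconds()
print(f"podStartE2EDuration ~ {e2e:.3f}s, podStartSLOduration ~ {slo:.3f}s")
# podStartE2EDuration ~ 20.813s, podStartSLOduration ~ 1.920s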
Aug 19 00:13:50.982919 systemd[1]: var-lib-kubelet-pods-a6823718\x2dd417\x2d4ba3\x2dae8c\x2dc7f4d8d33bd6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 19 00:13:50.984970 kubelet[3316]: I0819 00:13:50.984944 3316 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a6823718-d417-4ba3-ae8c-c7f4d8d33bd6-whisker-backend-key-pair\") on node \"ip-172-31-18-236\" DevicePath \"\"" Aug 19 00:13:50.985041 kubelet[3316]: I0819 00:13:50.984987 3316 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bzwbk\" (UniqueName: \"kubernetes.io/projected/a6823718-d417-4ba3-ae8c-c7f4d8d33bd6-kube-api-access-bzwbk\") on node \"ip-172-31-18-236\" DevicePath \"\"" Aug 19 00:13:50.985041 kubelet[3316]: I0819 00:13:50.985012 3316 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6823718-d417-4ba3-ae8c-c7f4d8d33bd6-whisker-ca-bundle\") on node \"ip-172-31-18-236\" DevicePath \"\"" Aug 19 00:13:51.135197 containerd[1995]: time="2025-08-19T00:13:51.134965742Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f26552eafe1d9c8b58774e3366de8d0aa77041e41c192ccd327d56e19ffce3e7\" id:\"4890e77e176905529489ee8f85222b0cf8b0b1944e6000ec4b6723183e87cba3\" pid:4471 exit_status:1 exited_at:{seconds:1755562431 nanos:133687406}" Aug 19 00:13:51.388110 systemd[1]: Removed slice kubepods-besteffort-poda6823718_d417_4ba3_ae8c_c7f4d8d33bd6.slice - libcontainer container kubepods-besteffort-poda6823718_d417_4ba3_ae8c_c7f4d8d33bd6.slice. Aug 19 00:13:51.883026 systemd[1]: Created slice kubepods-besteffort-podfb53a5aa_059a_46b6_8f63_ddbddfdb512e.slice - libcontainer container kubepods-besteffort-podfb53a5aa_059a_46b6_8f63_ddbddfdb512e.slice. 
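The long mount-unit names systemd reports just above (var-lib-kubelet-pods-...\x2d...\x7eprojected-...) are kubelet volume paths run through systemd's unit-name escaping: path separators become "-", and characters such as "-" and "~" are hex-escaped as \x2d and \x7e. A simplified sketch of that mapping, sufficient to reproduce the kube-api-access-bzwbk unit seen here (the real rules also cover leading dots and other edge cases):

# Simplified model of systemd path escaping for the mount units above.
ALLOWED = set("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_.")

def escape_path(path: str) -> str:
    parts = path.strip("/").split("/")
    escaped = ("".join(c if c in ALLOWED else f"\\x{ord(c):02x}" for c in part)
               for part in parts)
    return "-".join(escaped)

path = ("/var/lib/kubelet/pods/a6823718-d417-4ba3-ae8c-c7f4d8d33bd6"
        "/volumes/kubernetes.io~projected/kube-api-access-bzwbk")
print(escape_path(path) + ".mount")
# var-lib-kubelet-pods-a6823718\x2dd417\x2d4ba3\x2dae8c\x2dc7f4d8d33bd6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbzwbk.mount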
Aug 19 00:13:51.993178 kubelet[3316]: I0819 00:13:51.992519 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb53a5aa-059a-46b6-8f63-ddbddfdb512e-whisker-ca-bundle\") pod \"whisker-ff848f574-mbbcm\" (UID: \"fb53a5aa-059a-46b6-8f63-ddbddfdb512e\") " pod="calico-system/whisker-ff848f574-mbbcm" Aug 19 00:13:51.993178 kubelet[3316]: I0819 00:13:51.992614 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fb53a5aa-059a-46b6-8f63-ddbddfdb512e-whisker-backend-key-pair\") pod \"whisker-ff848f574-mbbcm\" (UID: \"fb53a5aa-059a-46b6-8f63-ddbddfdb512e\") " pod="calico-system/whisker-ff848f574-mbbcm" Aug 19 00:13:51.993178 kubelet[3316]: I0819 00:13:51.992665 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd4gz\" (UniqueName: \"kubernetes.io/projected/fb53a5aa-059a-46b6-8f63-ddbddfdb512e-kube-api-access-qd4gz\") pod \"whisker-ff848f574-mbbcm\" (UID: \"fb53a5aa-059a-46b6-8f63-ddbddfdb512e\") " pod="calico-system/whisker-ff848f574-mbbcm" Aug 19 00:13:52.042111 containerd[1995]: time="2025-08-19T00:13:52.042049946Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f26552eafe1d9c8b58774e3366de8d0aa77041e41c192ccd327d56e19ffce3e7\" id:\"5505019d1f85685b8344650e5927f62765f7f5670c34e307e5a20bb05b2ccdfe\" pid:4517 exit_status:1 exited_at:{seconds:1755562432 nanos:41063090}" Aug 19 00:13:52.192272 containerd[1995]: time="2025-08-19T00:13:52.192182019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ff848f574-mbbcm,Uid:fb53a5aa-059a-46b6-8f63-ddbddfdb512e,Namespace:calico-system,Attempt:0,}" Aug 19 00:13:53.377120 kubelet[3316]: I0819 00:13:53.377012 3316 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6823718-d417-4ba3-ae8c-c7f4d8d33bd6" path="/var/lib/kubelet/pods/a6823718-d417-4ba3-ae8c-c7f4d8d33bd6/volumes" Aug 19 00:13:53.604390 systemd-networkd[1830]: vxlan.calico: Link UP Aug 19 00:13:53.604412 systemd-networkd[1830]: vxlan.calico: Gained carrier Aug 19 00:13:53.614529 (udev-worker)[4450]: Network interface NamePolicy= disabled on kernel command line. Aug 19 00:13:53.663658 (udev-worker)[4452]: Network interface NamePolicy= disabled on kernel command line. 
Aug 19 00:13:53.997553 systemd-networkd[1830]: cali7fe5745c9fa: Link UP Aug 19 00:13:54.003305 systemd-networkd[1830]: cali7fe5745c9fa: Gained carrier Aug 19 00:13:54.117729 containerd[1995]: 2025-08-19 00:13:52.253 [INFO][4532] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 19 00:13:54.117729 containerd[1995]: 2025-08-19 00:13:53.782 [INFO][4532] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--236-k8s-whisker--ff848f574--mbbcm-eth0 whisker-ff848f574- calico-system fb53a5aa-059a-46b6-8f63-ddbddfdb512e 949 0 2025-08-19 00:13:51 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:ff848f574 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-18-236 whisker-ff848f574-mbbcm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7fe5745c9fa [] [] }} ContainerID="1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93" Namespace="calico-system" Pod="whisker-ff848f574-mbbcm" WorkloadEndpoint="ip--172--31--18--236-k8s-whisker--ff848f574--mbbcm-" Aug 19 00:13:54.117729 containerd[1995]: 2025-08-19 00:13:53.782 [INFO][4532] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93" Namespace="calico-system" Pod="whisker-ff848f574-mbbcm" WorkloadEndpoint="ip--172--31--18--236-k8s-whisker--ff848f574--mbbcm-eth0" Aug 19 00:13:54.117729 containerd[1995]: 2025-08-19 00:13:53.870 [INFO][4700] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93" HandleID="k8s-pod-network.1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93" Workload="ip--172--31--18--236-k8s-whisker--ff848f574--mbbcm-eth0" Aug 19 00:13:54.119621 containerd[1995]: 2025-08-19 00:13:53.871 [INFO][4700] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93" HandleID="k8s-pod-network.1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93" Workload="ip--172--31--18--236-k8s-whisker--ff848f574--mbbcm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000394170), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-236", "pod":"whisker-ff848f574-mbbcm", "timestamp":"2025-08-19 00:13:53.870683743 +0000 UTC"}, Hostname:"ip-172-31-18-236", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:13:54.119621 containerd[1995]: 2025-08-19 00:13:53.871 [INFO][4700] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:13:54.119621 containerd[1995]: 2025-08-19 00:13:53.871 [INFO][4700] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 00:13:54.119621 containerd[1995]: 2025-08-19 00:13:53.871 [INFO][4700] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-236' Aug 19 00:13:54.119621 containerd[1995]: 2025-08-19 00:13:53.887 [INFO][4700] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93" host="ip-172-31-18-236" Aug 19 00:13:54.119621 containerd[1995]: 2025-08-19 00:13:53.899 [INFO][4700] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-236" Aug 19 00:13:54.119621 containerd[1995]: 2025-08-19 00:13:53.915 [INFO][4700] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:54.119621 containerd[1995]: 2025-08-19 00:13:53.918 [INFO][4700] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:54.119621 containerd[1995]: 2025-08-19 00:13:53.923 [INFO][4700] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:54.120056 containerd[1995]: 2025-08-19 00:13:53.923 [INFO][4700] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93" host="ip-172-31-18-236" Aug 19 00:13:54.120056 containerd[1995]: 2025-08-19 00:13:53.927 [INFO][4700] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93 Aug 19 00:13:54.120056 containerd[1995]: 2025-08-19 00:13:53.935 [INFO][4700] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93" host="ip-172-31-18-236" Aug 19 00:13:54.120056 containerd[1995]: 2025-08-19 00:13:53.945 [INFO][4700] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.65/26] block=192.168.99.64/26 handle="k8s-pod-network.1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93" host="ip-172-31-18-236" Aug 19 00:13:54.120056 containerd[1995]: 2025-08-19 00:13:53.945 [INFO][4700] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.65/26] handle="k8s-pod-network.1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93" host="ip-172-31-18-236" Aug 19 00:13:54.120056 containerd[1995]: 2025-08-19 00:13:53.945 [INFO][4700] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
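The IPAM walk above shows the allocator taking the host-wide lock, confirming this node's affinity to the 192.168.99.64/26 block, and claiming 192.168.99.65 for the whisker pod; the apiserver and kube-controllers sandboxes further down receive .66 and .67 from the same block. A quick sanity check of those figures with the standard-library ipaddress module:

import ipaddress

block = ipaddress.ip_network("192.168.99.64/26")   # the block this node holds an affinity for
assigned = [ipaddress.ip_address(f"192.168.99.{h}") for h in (65, 66, 67)]

print(f"{block} covers {block[0]}..{block[-1]} ({block.num_addresses} addresses)")
print(all(ip in block for ip in assigned))
# 192.168.99.64/26 covers 192.168.99.64..192.168.99.127 (64 addresses)
# True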
Aug 19 00:13:54.120056 containerd[1995]: 2025-08-19 00:13:53.945 [INFO][4700] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.65/26] IPv6=[] ContainerID="1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93" HandleID="k8s-pod-network.1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93" Workload="ip--172--31--18--236-k8s-whisker--ff848f574--mbbcm-eth0" Aug 19 00:13:54.120562 containerd[1995]: 2025-08-19 00:13:53.952 [INFO][4532] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93" Namespace="calico-system" Pod="whisker-ff848f574-mbbcm" WorkloadEndpoint="ip--172--31--18--236-k8s-whisker--ff848f574--mbbcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--236-k8s-whisker--ff848f574--mbbcm-eth0", GenerateName:"whisker-ff848f574-", Namespace:"calico-system", SelfLink:"", UID:"fb53a5aa-059a-46b6-8f63-ddbddfdb512e", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"ff848f574", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-236", ContainerID:"", Pod:"whisker-ff848f574-mbbcm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.99.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7fe5745c9fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:54.120562 containerd[1995]: 2025-08-19 00:13:53.952 [INFO][4532] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.65/32] ContainerID="1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93" Namespace="calico-system" Pod="whisker-ff848f574-mbbcm" WorkloadEndpoint="ip--172--31--18--236-k8s-whisker--ff848f574--mbbcm-eth0" Aug 19 00:13:54.121565 containerd[1995]: 2025-08-19 00:13:53.953 [INFO][4532] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7fe5745c9fa ContainerID="1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93" Namespace="calico-system" Pod="whisker-ff848f574-mbbcm" WorkloadEndpoint="ip--172--31--18--236-k8s-whisker--ff848f574--mbbcm-eth0" Aug 19 00:13:54.121565 containerd[1995]: 2025-08-19 00:13:54.015 [INFO][4532] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93" Namespace="calico-system" Pod="whisker-ff848f574-mbbcm" WorkloadEndpoint="ip--172--31--18--236-k8s-whisker--ff848f574--mbbcm-eth0" Aug 19 00:13:54.121671 containerd[1995]: 2025-08-19 00:13:54.017 [INFO][4532] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93" Namespace="calico-system" Pod="whisker-ff848f574-mbbcm" 
WorkloadEndpoint="ip--172--31--18--236-k8s-whisker--ff848f574--mbbcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--236-k8s-whisker--ff848f574--mbbcm-eth0", GenerateName:"whisker-ff848f574-", Namespace:"calico-system", SelfLink:"", UID:"fb53a5aa-059a-46b6-8f63-ddbddfdb512e", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"ff848f574", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-236", ContainerID:"1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93", Pod:"whisker-ff848f574-mbbcm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.99.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7fe5745c9fa", MAC:"52:e0:5b:04:31:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:54.121816 containerd[1995]: 2025-08-19 00:13:54.109 [INFO][4532] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93" Namespace="calico-system" Pod="whisker-ff848f574-mbbcm" WorkloadEndpoint="ip--172--31--18--236-k8s-whisker--ff848f574--mbbcm-eth0" Aug 19 00:13:54.161467 containerd[1995]: time="2025-08-19T00:13:54.161375309Z" level=info msg="connecting to shim 1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93" address="unix:///run/containerd/s/62d7948720163647b2aab5d819274c34f2035c78dadb1641a4b8d14c19d27183" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:54.220637 systemd[1]: Started cri-containerd-1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93.scope - libcontainer container 1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93. 
Aug 19 00:13:54.318842 containerd[1995]: time="2025-08-19T00:13:54.318684005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ff848f574-mbbcm,Uid:fb53a5aa-059a-46b6-8f63-ddbddfdb512e,Namespace:calico-system,Attempt:0,} returns sandbox id \"1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93\"" Aug 19 00:13:54.323413 containerd[1995]: time="2025-08-19T00:13:54.323344061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 19 00:13:54.373229 containerd[1995]: time="2025-08-19T00:13:54.373156950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669988b64f-smmx4,Uid:2abcaa93-c27e-43ad-9578-8f41027d690a,Namespace:calico-system,Attempt:0,}" Aug 19 00:13:54.373798 containerd[1995]: time="2025-08-19T00:13:54.373696470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7865f59cf-fwl99,Uid:8705aabd-8208-4698-a3bb-69c333eecff1,Namespace:calico-apiserver,Attempt:0,}" Aug 19 00:13:54.653752 systemd-networkd[1830]: calic4af48c1f49: Link UP Aug 19 00:13:54.659363 systemd-networkd[1830]: calic4af48c1f49: Gained carrier Aug 19 00:13:54.699233 containerd[1995]: 2025-08-19 00:13:54.490 [INFO][4797] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--fwl99-eth0 calico-apiserver-7865f59cf- calico-apiserver 8705aabd-8208-4698-a3bb-69c333eecff1 878 0 2025-08-19 00:13:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7865f59cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-236 calico-apiserver-7865f59cf-fwl99 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic4af48c1f49 [] [] }} ContainerID="bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903" Namespace="calico-apiserver" Pod="calico-apiserver-7865f59cf-fwl99" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--fwl99-" Aug 19 00:13:54.699233 containerd[1995]: 2025-08-19 00:13:54.490 [INFO][4797] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903" Namespace="calico-apiserver" Pod="calico-apiserver-7865f59cf-fwl99" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--fwl99-eth0" Aug 19 00:13:54.699233 containerd[1995]: 2025-08-19 00:13:54.563 [INFO][4819] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903" HandleID="k8s-pod-network.bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903" Workload="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--fwl99-eth0" Aug 19 00:13:54.699695 containerd[1995]: 2025-08-19 00:13:54.563 [INFO][4819] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903" HandleID="k8s-pod-network.bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903" Workload="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--fwl99-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000335670), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-236", "pod":"calico-apiserver-7865f59cf-fwl99", "timestamp":"2025-08-19 00:13:54.563077231 +0000 UTC"}, 
Hostname:"ip-172-31-18-236", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:13:54.699695 containerd[1995]: 2025-08-19 00:13:54.563 [INFO][4819] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:13:54.699695 containerd[1995]: 2025-08-19 00:13:54.564 [INFO][4819] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 19 00:13:54.699695 containerd[1995]: 2025-08-19 00:13:54.564 [INFO][4819] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-236' Aug 19 00:13:54.699695 containerd[1995]: 2025-08-19 00:13:54.578 [INFO][4819] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903" host="ip-172-31-18-236" Aug 19 00:13:54.699695 containerd[1995]: 2025-08-19 00:13:54.585 [INFO][4819] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-236" Aug 19 00:13:54.699695 containerd[1995]: 2025-08-19 00:13:54.592 [INFO][4819] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:54.699695 containerd[1995]: 2025-08-19 00:13:54.595 [INFO][4819] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:54.699695 containerd[1995]: 2025-08-19 00:13:54.600 [INFO][4819] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:54.700120 containerd[1995]: 2025-08-19 00:13:54.600 [INFO][4819] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903" host="ip-172-31-18-236" Aug 19 00:13:54.700120 containerd[1995]: 2025-08-19 00:13:54.620 [INFO][4819] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903 Aug 19 00:13:54.700120 containerd[1995]: 2025-08-19 00:13:54.633 [INFO][4819] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903" host="ip-172-31-18-236" Aug 19 00:13:54.700120 containerd[1995]: 2025-08-19 00:13:54.645 [INFO][4819] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.66/26] block=192.168.99.64/26 handle="k8s-pod-network.bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903" host="ip-172-31-18-236" Aug 19 00:13:54.700120 containerd[1995]: 2025-08-19 00:13:54.645 [INFO][4819] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.66/26] handle="k8s-pod-network.bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903" host="ip-172-31-18-236" Aug 19 00:13:54.700120 containerd[1995]: 2025-08-19 00:13:54.645 [INFO][4819] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:13:54.700120 containerd[1995]: 2025-08-19 00:13:54.646 [INFO][4819] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.66/26] IPv6=[] ContainerID="bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903" HandleID="k8s-pod-network.bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903" Workload="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--fwl99-eth0" Aug 19 00:13:54.701841 containerd[1995]: 2025-08-19 00:13:54.650 [INFO][4797] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903" Namespace="calico-apiserver" Pod="calico-apiserver-7865f59cf-fwl99" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--fwl99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--fwl99-eth0", GenerateName:"calico-apiserver-7865f59cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"8705aabd-8208-4698-a3bb-69c333eecff1", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7865f59cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-236", ContainerID:"", Pod:"calico-apiserver-7865f59cf-fwl99", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic4af48c1f49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:54.702011 containerd[1995]: 2025-08-19 00:13:54.650 [INFO][4797] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.66/32] ContainerID="bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903" Namespace="calico-apiserver" Pod="calico-apiserver-7865f59cf-fwl99" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--fwl99-eth0" Aug 19 00:13:54.702011 containerd[1995]: 2025-08-19 00:13:54.650 [INFO][4797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4af48c1f49 ContainerID="bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903" Namespace="calico-apiserver" Pod="calico-apiserver-7865f59cf-fwl99" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--fwl99-eth0" Aug 19 00:13:54.702011 containerd[1995]: 2025-08-19 00:13:54.655 [INFO][4797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903" Namespace="calico-apiserver" Pod="calico-apiserver-7865f59cf-fwl99" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--fwl99-eth0" Aug 19 00:13:54.702692 containerd[1995]: 2025-08-19 00:13:54.656 [INFO][4797] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903" Namespace="calico-apiserver" Pod="calico-apiserver-7865f59cf-fwl99" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--fwl99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--fwl99-eth0", GenerateName:"calico-apiserver-7865f59cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"8705aabd-8208-4698-a3bb-69c333eecff1", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7865f59cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-236", ContainerID:"bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903", Pod:"calico-apiserver-7865f59cf-fwl99", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic4af48c1f49", MAC:"76:a6:f6:b9:90:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:54.703314 containerd[1995]: 2025-08-19 00:13:54.685 [INFO][4797] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903" Namespace="calico-apiserver" Pod="calico-apiserver-7865f59cf-fwl99" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--fwl99-eth0" Aug 19 00:13:54.765210 containerd[1995]: time="2025-08-19T00:13:54.764511644Z" level=info msg="connecting to shim bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903" address="unix:///run/containerd/s/750ae207b66dbe009fe616b9e8ab48992f8290cf5cc5f529b880fd16cd49db3b" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:54.808432 systemd-networkd[1830]: calieb15e1617bd: Link UP Aug 19 00:13:54.811526 systemd-networkd[1830]: calieb15e1617bd: Gained carrier Aug 19 00:13:54.853985 systemd[1]: Started cri-containerd-bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903.scope - libcontainer container bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903. 
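The Workload and WorkloadEndpoint identifiers recurring in these CNI entries (ip--172--31--18--236-k8s-whisker--ff848f574--mbbcm-eth0 and the calico-apiserver equivalent) follow a visible pattern: node name, "k8s", pod name, and interface joined with single dashes, with any dash inside the node or pod name doubled. A small helper that reproduces the names seen in this log; the pattern is inferred from these entries rather than taken from Calico's source:

def workload_endpoint_name(node: str, pod: str, iface: str = "eth0") -> str:
    def esc(s: str) -> str:
        # Double existing dashes so they survive being re-joined with single dashes.
        return s.replace("-", "--")
    return f"{esc(node)}-k8s-{esc(pod)}-{iface}"

print(workload_endpoint_name("ip-172-31-18-236", "whisker-ff848f574-mbbcm"))
print(workload_endpoint_name("ip-172-31-18-236", "calico-apiserver-7865f59cf-fwl99"))
# ip--172--31--18--236-k8s-whisker--ff848f574--mbbcm-eth0
# ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--fwl99-eth0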
Aug 19 00:13:54.866039 containerd[1995]: 2025-08-19 00:13:54.507 [INFO][4794] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--236-k8s-calico--kube--controllers--669988b64f--smmx4-eth0 calico-kube-controllers-669988b64f- calico-system 2abcaa93-c27e-43ad-9578-8f41027d690a 876 0 2025-08-19 00:13:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:669988b64f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-236 calico-kube-controllers-669988b64f-smmx4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calieb15e1617bd [] [] }} ContainerID="1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35" Namespace="calico-system" Pod="calico-kube-controllers-669988b64f-smmx4" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--kube--controllers--669988b64f--smmx4-" Aug 19 00:13:54.866039 containerd[1995]: 2025-08-19 00:13:54.508 [INFO][4794] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35" Namespace="calico-system" Pod="calico-kube-controllers-669988b64f-smmx4" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--kube--controllers--669988b64f--smmx4-eth0" Aug 19 00:13:54.866039 containerd[1995]: 2025-08-19 00:13:54.573 [INFO][4826] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35" HandleID="k8s-pod-network.1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35" Workload="ip--172--31--18--236-k8s-calico--kube--controllers--669988b64f--smmx4-eth0" Aug 19 00:13:54.866594 containerd[1995]: 2025-08-19 00:13:54.574 [INFO][4826] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35" HandleID="k8s-pod-network.1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35" Workload="ip--172--31--18--236-k8s-calico--kube--controllers--669988b64f--smmx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cbb00), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-236", "pod":"calico-kube-controllers-669988b64f-smmx4", "timestamp":"2025-08-19 00:13:54.573887107 +0000 UTC"}, Hostname:"ip-172-31-18-236", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:13:54.866594 containerd[1995]: 2025-08-19 00:13:54.575 [INFO][4826] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:13:54.866594 containerd[1995]: 2025-08-19 00:13:54.646 [INFO][4826] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 00:13:54.866594 containerd[1995]: 2025-08-19 00:13:54.646 [INFO][4826] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-236' Aug 19 00:13:54.866594 containerd[1995]: 2025-08-19 00:13:54.687 [INFO][4826] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35" host="ip-172-31-18-236" Aug 19 00:13:54.866594 containerd[1995]: 2025-08-19 00:13:54.704 [INFO][4826] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-236" Aug 19 00:13:54.866594 containerd[1995]: 2025-08-19 00:13:54.715 [INFO][4826] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:54.866594 containerd[1995]: 2025-08-19 00:13:54.719 [INFO][4826] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:54.866594 containerd[1995]: 2025-08-19 00:13:54.725 [INFO][4826] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:54.867305 containerd[1995]: 2025-08-19 00:13:54.725 [INFO][4826] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35" host="ip-172-31-18-236" Aug 19 00:13:54.867305 containerd[1995]: 2025-08-19 00:13:54.730 [INFO][4826] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35 Aug 19 00:13:54.867305 containerd[1995]: 2025-08-19 00:13:54.750 [INFO][4826] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35" host="ip-172-31-18-236" Aug 19 00:13:54.867305 containerd[1995]: 2025-08-19 00:13:54.775 [INFO][4826] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.67/26] block=192.168.99.64/26 handle="k8s-pod-network.1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35" host="ip-172-31-18-236" Aug 19 00:13:54.867305 containerd[1995]: 2025-08-19 00:13:54.775 [INFO][4826] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.67/26] handle="k8s-pod-network.1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35" host="ip-172-31-18-236" Aug 19 00:13:54.867305 containerd[1995]: 2025-08-19 00:13:54.776 [INFO][4826] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:13:54.867305 containerd[1995]: 2025-08-19 00:13:54.776 [INFO][4826] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.67/26] IPv6=[] ContainerID="1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35" HandleID="k8s-pod-network.1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35" Workload="ip--172--31--18--236-k8s-calico--kube--controllers--669988b64f--smmx4-eth0" Aug 19 00:13:54.868474 containerd[1995]: 2025-08-19 00:13:54.793 [INFO][4794] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35" Namespace="calico-system" Pod="calico-kube-controllers-669988b64f-smmx4" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--kube--controllers--669988b64f--smmx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--236-k8s-calico--kube--controllers--669988b64f--smmx4-eth0", GenerateName:"calico-kube-controllers-669988b64f-", Namespace:"calico-system", SelfLink:"", UID:"2abcaa93-c27e-43ad-9578-8f41027d690a", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"669988b64f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-236", ContainerID:"", Pod:"calico-kube-controllers-669988b64f-smmx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calieb15e1617bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:54.868680 containerd[1995]: 2025-08-19 00:13:54.796 [INFO][4794] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.67/32] ContainerID="1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35" Namespace="calico-system" Pod="calico-kube-controllers-669988b64f-smmx4" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--kube--controllers--669988b64f--smmx4-eth0" Aug 19 00:13:54.868680 containerd[1995]: 2025-08-19 00:13:54.796 [INFO][4794] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb15e1617bd ContainerID="1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35" Namespace="calico-system" Pod="calico-kube-controllers-669988b64f-smmx4" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--kube--controllers--669988b64f--smmx4-eth0" Aug 19 00:13:54.868680 containerd[1995]: 2025-08-19 00:13:54.813 [INFO][4794] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35" Namespace="calico-system" Pod="calico-kube-controllers-669988b64f-smmx4" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--kube--controllers--669988b64f--smmx4-eth0" Aug 19 00:13:54.869324 containerd[1995]: 
2025-08-19 00:13:54.819 [INFO][4794] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35" Namespace="calico-system" Pod="calico-kube-controllers-669988b64f-smmx4" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--kube--controllers--669988b64f--smmx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--236-k8s-calico--kube--controllers--669988b64f--smmx4-eth0", GenerateName:"calico-kube-controllers-669988b64f-", Namespace:"calico-system", SelfLink:"", UID:"2abcaa93-c27e-43ad-9578-8f41027d690a", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"669988b64f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-236", ContainerID:"1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35", Pod:"calico-kube-controllers-669988b64f-smmx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calieb15e1617bd", MAC:"5e:df:0c:53:d9:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:54.869662 containerd[1995]: 2025-08-19 00:13:54.849 [INFO][4794] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35" Namespace="calico-system" Pod="calico-kube-controllers-669988b64f-smmx4" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--kube--controllers--669988b64f--smmx4-eth0" Aug 19 00:13:54.907232 containerd[1995]: time="2025-08-19T00:13:54.907045964Z" level=info msg="connecting to shim 1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35" address="unix:///run/containerd/s/050b0bfeb4cb30491424f2b10865d2830e0ec2e4d18404bba7c0be1fae637d37" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:54.978930 systemd[1]: Started cri-containerd-1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35.scope - libcontainer container 1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35. 
Aug 19 00:13:55.071991 containerd[1995]: time="2025-08-19T00:13:55.071800013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7865f59cf-fwl99,Uid:8705aabd-8208-4698-a3bb-69c333eecff1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903\"" Aug 19 00:13:55.169491 containerd[1995]: time="2025-08-19T00:13:55.168668646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669988b64f-smmx4,Uid:2abcaa93-c27e-43ad-9578-8f41027d690a,Namespace:calico-system,Attempt:0,} returns sandbox id \"1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35\"" Aug 19 00:13:55.227322 systemd-networkd[1830]: vxlan.calico: Gained IPv6LL Aug 19 00:13:55.374620 containerd[1995]: time="2025-08-19T00:13:55.374120851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7865f59cf-x4xb2,Uid:cc70ae68-1506-43fd-a635-d4aade097cbf,Namespace:calico-apiserver,Attempt:0,}" Aug 19 00:13:55.374916 containerd[1995]: time="2025-08-19T00:13:55.374390431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wwgr9,Uid:3a3c5b61-646c-40bb-911b-c7ba6082008f,Namespace:calico-system,Attempt:0,}" Aug 19 00:13:55.378384 containerd[1995]: time="2025-08-19T00:13:55.377184319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-442jd,Uid:3a6c1d9e-4bf2-45ff-bf17-eea50440b077,Namespace:kube-system,Attempt:0,}" Aug 19 00:13:55.890579 systemd[1]: Started sshd@7-172.31.18.236:22-147.75.109.163:43250.service - OpenSSH per-connection server daemon (147.75.109.163:43250). Aug 19 00:13:55.994406 systemd-networkd[1830]: cali7fe5745c9fa: Gained IPv6LL Aug 19 00:13:56.122412 systemd-networkd[1830]: calida01b0477eb: Link UP Aug 19 00:13:56.123961 systemd-networkd[1830]: calida01b0477eb: Gained carrier Aug 19 00:13:56.158000 sshd[5005]: Accepted publickey for core from 147.75.109.163 port 43250 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:13:56.163767 sshd-session[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:13:56.190535 systemd-logind[1978]: New session 8 of user core. Aug 19 00:13:56.194498 systemd[1]: Started session-8.scope - Session 8 of User core. 
Aug 19 00:13:56.195230 containerd[1995]: 2025-08-19 00:13:55.815 [INFO][4966] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--236-k8s-csi--node--driver--wwgr9-eth0 csi-node-driver- calico-system 3a3c5b61-646c-40bb-911b-c7ba6082008f 823 0 2025-08-19 00:13:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-236 csi-node-driver-wwgr9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calida01b0477eb [] [] }} ContainerID="9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee" Namespace="calico-system" Pod="csi-node-driver-wwgr9" WorkloadEndpoint="ip--172--31--18--236-k8s-csi--node--driver--wwgr9-" Aug 19 00:13:56.195230 containerd[1995]: 2025-08-19 00:13:55.816 [INFO][4966] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee" Namespace="calico-system" Pod="csi-node-driver-wwgr9" WorkloadEndpoint="ip--172--31--18--236-k8s-csi--node--driver--wwgr9-eth0" Aug 19 00:13:56.195230 containerd[1995]: 2025-08-19 00:13:55.962 [INFO][4989] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee" HandleID="k8s-pod-network.9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee" Workload="ip--172--31--18--236-k8s-csi--node--driver--wwgr9-eth0" Aug 19 00:13:56.195872 containerd[1995]: 2025-08-19 00:13:55.971 [INFO][4989] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee" HandleID="k8s-pod-network.9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee" Workload="ip--172--31--18--236-k8s-csi--node--driver--wwgr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004daa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-236", "pod":"csi-node-driver-wwgr9", "timestamp":"2025-08-19 00:13:55.962533606 +0000 UTC"}, Hostname:"ip-172-31-18-236", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:13:56.195872 containerd[1995]: 2025-08-19 00:13:55.973 [INFO][4989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:13:56.195872 containerd[1995]: 2025-08-19 00:13:55.973 [INFO][4989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 00:13:56.195872 containerd[1995]: 2025-08-19 00:13:55.973 [INFO][4989] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-236' Aug 19 00:13:56.195872 containerd[1995]: 2025-08-19 00:13:56.012 [INFO][4989] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee" host="ip-172-31-18-236" Aug 19 00:13:56.195872 containerd[1995]: 2025-08-19 00:13:56.026 [INFO][4989] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-236" Aug 19 00:13:56.195872 containerd[1995]: 2025-08-19 00:13:56.044 [INFO][4989] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:56.195872 containerd[1995]: 2025-08-19 00:13:56.048 [INFO][4989] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:56.195872 containerd[1995]: 2025-08-19 00:13:56.057 [INFO][4989] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:56.196985 containerd[1995]: 2025-08-19 00:13:56.057 [INFO][4989] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee" host="ip-172-31-18-236" Aug 19 00:13:56.196985 containerd[1995]: 2025-08-19 00:13:56.060 [INFO][4989] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee Aug 19 00:13:56.196985 containerd[1995]: 2025-08-19 00:13:56.071 [INFO][4989] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee" host="ip-172-31-18-236" Aug 19 00:13:56.196985 containerd[1995]: 2025-08-19 00:13:56.094 [INFO][4989] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.68/26] block=192.168.99.64/26 handle="k8s-pod-network.9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee" host="ip-172-31-18-236" Aug 19 00:13:56.196985 containerd[1995]: 2025-08-19 00:13:56.094 [INFO][4989] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.68/26] handle="k8s-pod-network.9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee" host="ip-172-31-18-236" Aug 19 00:13:56.196985 containerd[1995]: 2025-08-19 00:13:56.095 [INFO][4989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:13:56.196985 containerd[1995]: 2025-08-19 00:13:56.095 [INFO][4989] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.68/26] IPv6=[] ContainerID="9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee" HandleID="k8s-pod-network.9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee" Workload="ip--172--31--18--236-k8s-csi--node--driver--wwgr9-eth0" Aug 19 00:13:56.197436 containerd[1995]: 2025-08-19 00:13:56.111 [INFO][4966] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee" Namespace="calico-system" Pod="csi-node-driver-wwgr9" WorkloadEndpoint="ip--172--31--18--236-k8s-csi--node--driver--wwgr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--236-k8s-csi--node--driver--wwgr9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3a3c5b61-646c-40bb-911b-c7ba6082008f", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-236", ContainerID:"", Pod:"csi-node-driver-wwgr9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calida01b0477eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:56.197570 containerd[1995]: 2025-08-19 00:13:56.111 [INFO][4966] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.68/32] ContainerID="9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee" Namespace="calico-system" Pod="csi-node-driver-wwgr9" WorkloadEndpoint="ip--172--31--18--236-k8s-csi--node--driver--wwgr9-eth0" Aug 19 00:13:56.197570 containerd[1995]: 2025-08-19 00:13:56.111 [INFO][4966] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida01b0477eb ContainerID="9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee" Namespace="calico-system" Pod="csi-node-driver-wwgr9" WorkloadEndpoint="ip--172--31--18--236-k8s-csi--node--driver--wwgr9-eth0" Aug 19 00:13:56.197570 containerd[1995]: 2025-08-19 00:13:56.125 [INFO][4966] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee" Namespace="calico-system" Pod="csi-node-driver-wwgr9" WorkloadEndpoint="ip--172--31--18--236-k8s-csi--node--driver--wwgr9-eth0" Aug 19 00:13:56.197700 containerd[1995]: 2025-08-19 00:13:56.127 [INFO][4966] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee" 
Namespace="calico-system" Pod="csi-node-driver-wwgr9" WorkloadEndpoint="ip--172--31--18--236-k8s-csi--node--driver--wwgr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--236-k8s-csi--node--driver--wwgr9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3a3c5b61-646c-40bb-911b-c7ba6082008f", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-236", ContainerID:"9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee", Pod:"csi-node-driver-wwgr9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calida01b0477eb", MAC:"c6:93:08:64:4b:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:56.197805 containerd[1995]: 2025-08-19 00:13:56.178 [INFO][4966] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee" Namespace="calico-system" Pod="csi-node-driver-wwgr9" WorkloadEndpoint="ip--172--31--18--236-k8s-csi--node--driver--wwgr9-eth0" Aug 19 00:13:56.288675 containerd[1995]: time="2025-08-19T00:13:56.288584767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 19 00:13:56.289368 containerd[1995]: time="2025-08-19T00:13:56.289309675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:56.294311 containerd[1995]: time="2025-08-19T00:13:56.294159607Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:56.317037 containerd[1995]: time="2025-08-19T00:13:56.315565123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:56.320246 containerd[1995]: time="2025-08-19T00:13:56.320158699Z" level=info msg="connecting to shim 9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee" address="unix:///run/containerd/s/98824f45c44b6ecde2dd22268cebbdc505d03fb78cd813a1cc3de0c58be13ee0" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:56.324245 containerd[1995]: time="2025-08-19T00:13:56.324084619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id 
\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 2.00066683s" Aug 19 00:13:56.325039 containerd[1995]: time="2025-08-19T00:13:56.324721951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 19 00:13:56.337960 containerd[1995]: time="2025-08-19T00:13:56.337693555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 19 00:13:56.358820 containerd[1995]: time="2025-08-19T00:13:56.358743704Z" level=info msg="CreateContainer within sandbox \"1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 19 00:13:56.375496 containerd[1995]: time="2025-08-19T00:13:56.375416960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-q52jp,Uid:6c04bb33-7fc2-4348-831d-39a850510dc6,Namespace:calico-system,Attempt:0,}" Aug 19 00:13:56.442442 systemd-networkd[1830]: calic4af48c1f49: Gained IPv6LL Aug 19 00:13:56.452483 systemd-networkd[1830]: calif28f2b69375: Link UP Aug 19 00:13:56.453843 systemd-networkd[1830]: calif28f2b69375: Gained carrier Aug 19 00:13:56.464865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1409554773.mount: Deactivated successfully. Aug 19 00:13:56.465422 containerd[1995]: time="2025-08-19T00:13:56.465349784Z" level=info msg="Container 15aac8641ef2196441d547c60fc9b21b882c86cedcad24194bed54efb7739a01: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:13:56.543216 containerd[1995]: time="2025-08-19T00:13:56.542739573Z" level=info msg="CreateContainer within sandbox \"1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"15aac8641ef2196441d547c60fc9b21b882c86cedcad24194bed54efb7739a01\"" Aug 19 00:13:56.545240 containerd[1995]: 2025-08-19 00:13:55.836 [INFO][4943] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--x4xb2-eth0 calico-apiserver-7865f59cf- calico-apiserver cc70ae68-1506-43fd-a635-d4aade097cbf 879 0 2025-08-19 00:13:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7865f59cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-236 calico-apiserver-7865f59cf-x4xb2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif28f2b69375 [] [] }} ContainerID="7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50" Namespace="calico-apiserver" Pod="calico-apiserver-7865f59cf-x4xb2" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--x4xb2-" Aug 19 00:13:56.545240 containerd[1995]: 2025-08-19 00:13:55.836 [INFO][4943] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50" Namespace="calico-apiserver" Pod="calico-apiserver-7865f59cf-x4xb2" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--x4xb2-eth0" Aug 19 00:13:56.545240 containerd[1995]: 2025-08-19 00:13:56.049 [INFO][5001] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50" HandleID="k8s-pod-network.7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50" Workload="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--x4xb2-eth0" Aug 19 00:13:56.545572 containerd[1995]: 2025-08-19 00:13:56.051 [INFO][5001] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50" HandleID="k8s-pod-network.7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50" Workload="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--x4xb2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034f250), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-236", "pod":"calico-apiserver-7865f59cf-x4xb2", "timestamp":"2025-08-19 00:13:56.049832574 +0000 UTC"}, Hostname:"ip-172-31-18-236", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:13:56.545572 containerd[1995]: 2025-08-19 00:13:56.052 [INFO][5001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:13:56.545572 containerd[1995]: 2025-08-19 00:13:56.095 [INFO][5001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 19 00:13:56.545572 containerd[1995]: 2025-08-19 00:13:56.095 [INFO][5001] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-236' Aug 19 00:13:56.545572 containerd[1995]: 2025-08-19 00:13:56.149 [INFO][5001] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50" host="ip-172-31-18-236" Aug 19 00:13:56.545572 containerd[1995]: 2025-08-19 00:13:56.163 [INFO][5001] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-236" Aug 19 00:13:56.545572 containerd[1995]: 2025-08-19 00:13:56.217 [INFO][5001] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:56.545572 containerd[1995]: 2025-08-19 00:13:56.258 [INFO][5001] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:56.545572 containerd[1995]: 2025-08-19 00:13:56.265 [INFO][5001] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:56.546009 containerd[1995]: 2025-08-19 00:13:56.266 [INFO][5001] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50" host="ip-172-31-18-236" Aug 19 00:13:56.546009 containerd[1995]: 2025-08-19 00:13:56.272 [INFO][5001] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50 Aug 19 00:13:56.546009 containerd[1995]: 2025-08-19 00:13:56.288 [INFO][5001] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50" host="ip-172-31-18-236" Aug 19 00:13:56.546009 containerd[1995]: 2025-08-19 00:13:56.346 [INFO][5001] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.69/26] block=192.168.99.64/26 handle="k8s-pod-network.7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50" 
host="ip-172-31-18-236" Aug 19 00:13:56.546009 containerd[1995]: 2025-08-19 00:13:56.346 [INFO][5001] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.69/26] handle="k8s-pod-network.7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50" host="ip-172-31-18-236" Aug 19 00:13:56.546009 containerd[1995]: 2025-08-19 00:13:56.347 [INFO][5001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 19 00:13:56.546009 containerd[1995]: 2025-08-19 00:13:56.347 [INFO][5001] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.69/26] IPv6=[] ContainerID="7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50" HandleID="k8s-pod-network.7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50" Workload="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--x4xb2-eth0" Aug 19 00:13:56.549560 containerd[1995]: 2025-08-19 00:13:56.387 [INFO][4943] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50" Namespace="calico-apiserver" Pod="calico-apiserver-7865f59cf-x4xb2" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--x4xb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--x4xb2-eth0", GenerateName:"calico-apiserver-7865f59cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc70ae68-1506-43fd-a635-d4aade097cbf", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7865f59cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-236", ContainerID:"", Pod:"calico-apiserver-7865f59cf-x4xb2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif28f2b69375", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:56.549711 containerd[1995]: 2025-08-19 00:13:56.391 [INFO][4943] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.69/32] ContainerID="7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50" Namespace="calico-apiserver" Pod="calico-apiserver-7865f59cf-x4xb2" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--x4xb2-eth0" Aug 19 00:13:56.549711 containerd[1995]: 2025-08-19 00:13:56.392 [INFO][4943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif28f2b69375 ContainerID="7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50" Namespace="calico-apiserver" Pod="calico-apiserver-7865f59cf-x4xb2" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--x4xb2-eth0" Aug 19 00:13:56.549711 containerd[1995]: 2025-08-19 00:13:56.463 [INFO][4943] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50" Namespace="calico-apiserver" Pod="calico-apiserver-7865f59cf-x4xb2" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--x4xb2-eth0" Aug 19 00:13:56.549841 containerd[1995]: 2025-08-19 00:13:56.471 [INFO][4943] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50" Namespace="calico-apiserver" Pod="calico-apiserver-7865f59cf-x4xb2" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--x4xb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--x4xb2-eth0", GenerateName:"calico-apiserver-7865f59cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc70ae68-1506-43fd-a635-d4aade097cbf", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7865f59cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-236", ContainerID:"7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50", Pod:"calico-apiserver-7865f59cf-x4xb2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif28f2b69375", MAC:"2e:ac:ad:a5:50:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:56.549956 containerd[1995]: 2025-08-19 00:13:56.521 [INFO][4943] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50" Namespace="calico-apiserver" Pod="calico-apiserver-7865f59cf-x4xb2" WorkloadEndpoint="ip--172--31--18--236-k8s-calico--apiserver--7865f59cf--x4xb2-eth0" Aug 19 00:13:56.549956 containerd[1995]: time="2025-08-19T00:13:56.545243973Z" level=info msg="StartContainer for \"15aac8641ef2196441d547c60fc9b21b882c86cedcad24194bed54efb7739a01\"" Aug 19 00:13:56.555165 containerd[1995]: time="2025-08-19T00:13:56.554472861Z" level=info msg="connecting to shim 15aac8641ef2196441d547c60fc9b21b882c86cedcad24194bed54efb7739a01" address="unix:///run/containerd/s/62d7948720163647b2aab5d819274c34f2035c78dadb1641a4b8d14c19d27183" protocol=ttrpc version=3 Aug 19 00:13:56.570381 systemd-networkd[1830]: calieb15e1617bd: Gained IPv6LL Aug 19 00:13:56.645940 systemd[1]: Started cri-containerd-9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee.scope - libcontainer container 9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee. 
Aug 19 00:13:56.724257 sshd[5021]: Connection closed by 147.75.109.163 port 43250 Aug 19 00:13:56.722518 sshd-session[5005]: pam_unix(sshd:session): session closed for user core Aug 19 00:13:56.744827 systemd[1]: sshd@7-172.31.18.236:22-147.75.109.163:43250.service: Deactivated successfully. Aug 19 00:13:56.751390 systemd[1]: session-8.scope: Deactivated successfully. Aug 19 00:13:56.760613 systemd-logind[1978]: Session 8 logged out. Waiting for processes to exit. Aug 19 00:13:56.765307 systemd-logind[1978]: Removed session 8. Aug 19 00:13:56.770035 systemd-networkd[1830]: cali02236fee974: Link UP Aug 19 00:13:56.775051 systemd-networkd[1830]: cali02236fee974: Gained carrier Aug 19 00:13:56.797412 containerd[1995]: time="2025-08-19T00:13:56.797187478Z" level=info msg="connecting to shim 7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50" address="unix:///run/containerd/s/47a4f6e4f3468c5b9379a046d495446062540db6d510a232ea39862f0ee812a6" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:56.849028 systemd[1]: Started cri-containerd-15aac8641ef2196441d547c60fc9b21b882c86cedcad24194bed54efb7739a01.scope - libcontainer container 15aac8641ef2196441d547c60fc9b21b882c86cedcad24194bed54efb7739a01. Aug 19 00:13:56.876233 containerd[1995]: 2025-08-19 00:13:55.804 [INFO][4949] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--236-k8s-coredns--674b8bbfcf--442jd-eth0 coredns-674b8bbfcf- kube-system 3a6c1d9e-4bf2-45ff-bf17-eea50440b077 875 0 2025-08-19 00:13:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-236 coredns-674b8bbfcf-442jd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali02236fee974 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f" Namespace="kube-system" Pod="coredns-674b8bbfcf-442jd" WorkloadEndpoint="ip--172--31--18--236-k8s-coredns--674b8bbfcf--442jd-" Aug 19 00:13:56.876233 containerd[1995]: 2025-08-19 00:13:55.810 [INFO][4949] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f" Namespace="kube-system" Pod="coredns-674b8bbfcf-442jd" WorkloadEndpoint="ip--172--31--18--236-k8s-coredns--674b8bbfcf--442jd-eth0" Aug 19 00:13:56.876233 containerd[1995]: 2025-08-19 00:13:56.072 [INFO][4992] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f" HandleID="k8s-pod-network.7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f" Workload="ip--172--31--18--236-k8s-coredns--674b8bbfcf--442jd-eth0" Aug 19 00:13:56.877395 containerd[1995]: 2025-08-19 00:13:56.072 [INFO][4992] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f" HandleID="k8s-pod-network.7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f" Workload="ip--172--31--18--236-k8s-coredns--674b8bbfcf--442jd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000411110), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-236", "pod":"coredns-674b8bbfcf-442jd", "timestamp":"2025-08-19 00:13:56.072508578 +0000 UTC"}, Hostname:"ip-172-31-18-236", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:13:56.877395 containerd[1995]: 2025-08-19 00:13:56.072 [INFO][4992] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:13:56.877395 containerd[1995]: 2025-08-19 00:13:56.347 [INFO][4992] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 19 00:13:56.877395 containerd[1995]: 2025-08-19 00:13:56.347 [INFO][4992] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-236' Aug 19 00:13:56.877395 containerd[1995]: 2025-08-19 00:13:56.485 [INFO][4992] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f" host="ip-172-31-18-236" Aug 19 00:13:56.877395 containerd[1995]: 2025-08-19 00:13:56.534 [INFO][4992] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-236" Aug 19 00:13:56.877395 containerd[1995]: 2025-08-19 00:13:56.583 [INFO][4992] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:56.877395 containerd[1995]: 2025-08-19 00:13:56.589 [INFO][4992] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:56.877395 containerd[1995]: 2025-08-19 00:13:56.609 [INFO][4992] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:56.877958 containerd[1995]: 2025-08-19 00:13:56.610 [INFO][4992] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f" host="ip-172-31-18-236" Aug 19 00:13:56.877958 containerd[1995]: 2025-08-19 00:13:56.625 [INFO][4992] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f Aug 19 00:13:56.877958 containerd[1995]: 2025-08-19 00:13:56.667 [INFO][4992] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f" host="ip-172-31-18-236" Aug 19 00:13:56.877958 containerd[1995]: 2025-08-19 00:13:56.700 [INFO][4992] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.70/26] block=192.168.99.64/26 handle="k8s-pod-network.7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f" host="ip-172-31-18-236" Aug 19 00:13:56.877958 containerd[1995]: 2025-08-19 00:13:56.700 [INFO][4992] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.70/26] handle="k8s-pod-network.7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f" host="ip-172-31-18-236" Aug 19 00:13:56.877958 containerd[1995]: 2025-08-19 00:13:56.700 [INFO][4992] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:13:56.877958 containerd[1995]: 2025-08-19 00:13:56.700 [INFO][4992] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.70/26] IPv6=[] ContainerID="7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f" HandleID="k8s-pod-network.7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f" Workload="ip--172--31--18--236-k8s-coredns--674b8bbfcf--442jd-eth0" Aug 19 00:13:56.879468 containerd[1995]: 2025-08-19 00:13:56.711 [INFO][4949] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f" Namespace="kube-system" Pod="coredns-674b8bbfcf-442jd" WorkloadEndpoint="ip--172--31--18--236-k8s-coredns--674b8bbfcf--442jd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--236-k8s-coredns--674b8bbfcf--442jd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3a6c1d9e-4bf2-45ff-bf17-eea50440b077", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-236", ContainerID:"", Pod:"coredns-674b8bbfcf-442jd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02236fee974", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:56.879468 containerd[1995]: 2025-08-19 00:13:56.711 [INFO][4949] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.70/32] ContainerID="7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f" Namespace="kube-system" Pod="coredns-674b8bbfcf-442jd" WorkloadEndpoint="ip--172--31--18--236-k8s-coredns--674b8bbfcf--442jd-eth0" Aug 19 00:13:56.879468 containerd[1995]: 2025-08-19 00:13:56.711 [INFO][4949] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02236fee974 ContainerID="7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f" Namespace="kube-system" Pod="coredns-674b8bbfcf-442jd" WorkloadEndpoint="ip--172--31--18--236-k8s-coredns--674b8bbfcf--442jd-eth0" Aug 19 00:13:56.879468 containerd[1995]: 2025-08-19 00:13:56.777 [INFO][4949] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f" Namespace="kube-system" Pod="coredns-674b8bbfcf-442jd" 
WorkloadEndpoint="ip--172--31--18--236-k8s-coredns--674b8bbfcf--442jd-eth0" Aug 19 00:13:56.879468 containerd[1995]: 2025-08-19 00:13:56.817 [INFO][4949] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f" Namespace="kube-system" Pod="coredns-674b8bbfcf-442jd" WorkloadEndpoint="ip--172--31--18--236-k8s-coredns--674b8bbfcf--442jd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--236-k8s-coredns--674b8bbfcf--442jd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3a6c1d9e-4bf2-45ff-bf17-eea50440b077", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-236", ContainerID:"7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f", Pod:"coredns-674b8bbfcf-442jd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02236fee974", MAC:"9a:3c:c0:5d:e6:fd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:56.879468 containerd[1995]: 2025-08-19 00:13:56.852 [INFO][4949] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f" Namespace="kube-system" Pod="coredns-674b8bbfcf-442jd" WorkloadEndpoint="ip--172--31--18--236-k8s-coredns--674b8bbfcf--442jd-eth0" Aug 19 00:13:56.931478 systemd[1]: Started cri-containerd-7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50.scope - libcontainer container 7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50. 
Aug 19 00:13:57.000969 containerd[1995]: time="2025-08-19T00:13:57.000270727Z" level=info msg="connecting to shim 7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f" address="unix:///run/containerd/s/d034c1d69ac0d3a159ccc4e1f184bbe7ecc7aa4efc5f781caf7b6ca8563a5c86" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:57.123824 containerd[1995]: time="2025-08-19T00:13:57.122976223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wwgr9,Uid:3a3c5b61-646c-40bb-911b-c7ba6082008f,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee\"" Aug 19 00:13:57.189418 systemd[1]: Started cri-containerd-7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f.scope - libcontainer container 7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f. Aug 19 00:13:57.375568 containerd[1995]: time="2025-08-19T00:13:57.375468333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-47789,Uid:489c6088-4872-4e61-a179-8fa73fba0368,Namespace:kube-system,Attempt:0,}" Aug 19 00:13:57.402552 systemd-networkd[1830]: calida01b0477eb: Gained IPv6LL Aug 19 00:13:57.441057 containerd[1995]: time="2025-08-19T00:13:57.440881089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7865f59cf-x4xb2,Uid:cc70ae68-1506-43fd-a635-d4aade097cbf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50\"" Aug 19 00:13:57.461585 systemd-networkd[1830]: calidf8fc768581: Link UP Aug 19 00:13:57.463974 systemd-networkd[1830]: calidf8fc768581: Gained carrier Aug 19 00:13:57.532752 containerd[1995]: 2025-08-19 00:13:56.917 [INFO][5052] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--236-k8s-goldmane--768f4c5c69--q52jp-eth0 goldmane-768f4c5c69- calico-system 6c04bb33-7fc2-4348-831d-39a850510dc6 882 0 2025-08-19 00:13:29 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-18-236 goldmane-768f4c5c69-q52jp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidf8fc768581 [] [] }} ContainerID="2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f" Namespace="calico-system" Pod="goldmane-768f4c5c69-q52jp" WorkloadEndpoint="ip--172--31--18--236-k8s-goldmane--768f4c5c69--q52jp-" Aug 19 00:13:57.532752 containerd[1995]: 2025-08-19 00:13:56.920 [INFO][5052] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f" Namespace="calico-system" Pod="goldmane-768f4c5c69-q52jp" WorkloadEndpoint="ip--172--31--18--236-k8s-goldmane--768f4c5c69--q52jp-eth0" Aug 19 00:13:57.532752 containerd[1995]: 2025-08-19 00:13:57.180 [INFO][5157] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f" HandleID="k8s-pod-network.2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f" Workload="ip--172--31--18--236-k8s-goldmane--768f4c5c69--q52jp-eth0" Aug 19 00:13:57.532752 containerd[1995]: 2025-08-19 00:13:57.182 [INFO][5157] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f" 
HandleID="k8s-pod-network.2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f" Workload="ip--172--31--18--236-k8s-goldmane--768f4c5c69--q52jp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000102700), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-236", "pod":"goldmane-768f4c5c69-q52jp", "timestamp":"2025-08-19 00:13:57.18035486 +0000 UTC"}, Hostname:"ip-172-31-18-236", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:13:57.532752 containerd[1995]: 2025-08-19 00:13:57.182 [INFO][5157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:13:57.532752 containerd[1995]: 2025-08-19 00:13:57.182 [INFO][5157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 19 00:13:57.532752 containerd[1995]: 2025-08-19 00:13:57.183 [INFO][5157] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-236' Aug 19 00:13:57.532752 containerd[1995]: 2025-08-19 00:13:57.214 [INFO][5157] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f" host="ip-172-31-18-236" Aug 19 00:13:57.532752 containerd[1995]: 2025-08-19 00:13:57.230 [INFO][5157] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-236" Aug 19 00:13:57.532752 containerd[1995]: 2025-08-19 00:13:57.250 [INFO][5157] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:57.532752 containerd[1995]: 2025-08-19 00:13:57.290 [INFO][5157] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:57.532752 containerd[1995]: 2025-08-19 00:13:57.326 [INFO][5157] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:57.532752 containerd[1995]: 2025-08-19 00:13:57.327 [INFO][5157] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f" host="ip-172-31-18-236" Aug 19 00:13:57.532752 containerd[1995]: 2025-08-19 00:13:57.338 [INFO][5157] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f Aug 19 00:13:57.532752 containerd[1995]: 2025-08-19 00:13:57.373 [INFO][5157] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f" host="ip-172-31-18-236" Aug 19 00:13:57.532752 containerd[1995]: 2025-08-19 00:13:57.422 [INFO][5157] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.71/26] block=192.168.99.64/26 handle="k8s-pod-network.2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f" host="ip-172-31-18-236" Aug 19 00:13:57.532752 containerd[1995]: 2025-08-19 00:13:57.423 [INFO][5157] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.71/26] handle="k8s-pod-network.2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f" host="ip-172-31-18-236" Aug 19 00:13:57.532752 containerd[1995]: 2025-08-19 00:13:57.423 [INFO][5157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:13:57.532752 containerd[1995]: 2025-08-19 00:13:57.424 [INFO][5157] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.71/26] IPv6=[] ContainerID="2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f" HandleID="k8s-pod-network.2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f" Workload="ip--172--31--18--236-k8s-goldmane--768f4c5c69--q52jp-eth0" Aug 19 00:13:57.535898 containerd[1995]: 2025-08-19 00:13:57.437 [INFO][5052] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f" Namespace="calico-system" Pod="goldmane-768f4c5c69-q52jp" WorkloadEndpoint="ip--172--31--18--236-k8s-goldmane--768f4c5c69--q52jp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--236-k8s-goldmane--768f4c5c69--q52jp-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"6c04bb33-7fc2-4348-831d-39a850510dc6", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-236", ContainerID:"", Pod:"goldmane-768f4c5c69-q52jp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidf8fc768581", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:57.535898 containerd[1995]: 2025-08-19 00:13:57.440 [INFO][5052] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.71/32] ContainerID="2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f" Namespace="calico-system" Pod="goldmane-768f4c5c69-q52jp" WorkloadEndpoint="ip--172--31--18--236-k8s-goldmane--768f4c5c69--q52jp-eth0" Aug 19 00:13:57.535898 containerd[1995]: 2025-08-19 00:13:57.440 [INFO][5052] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf8fc768581 ContainerID="2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f" Namespace="calico-system" Pod="goldmane-768f4c5c69-q52jp" WorkloadEndpoint="ip--172--31--18--236-k8s-goldmane--768f4c5c69--q52jp-eth0" Aug 19 00:13:57.535898 containerd[1995]: 2025-08-19 00:13:57.470 [INFO][5052] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f" Namespace="calico-system" Pod="goldmane-768f4c5c69-q52jp" WorkloadEndpoint="ip--172--31--18--236-k8s-goldmane--768f4c5c69--q52jp-eth0" Aug 19 00:13:57.535898 containerd[1995]: 2025-08-19 00:13:57.477 [INFO][5052] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f" Namespace="calico-system" Pod="goldmane-768f4c5c69-q52jp" 
WorkloadEndpoint="ip--172--31--18--236-k8s-goldmane--768f4c5c69--q52jp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--236-k8s-goldmane--768f4c5c69--q52jp-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"6c04bb33-7fc2-4348-831d-39a850510dc6", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-236", ContainerID:"2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f", Pod:"goldmane-768f4c5c69-q52jp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidf8fc768581", MAC:"f2:30:41:76:49:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:57.535898 containerd[1995]: 2025-08-19 00:13:57.522 [INFO][5052] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f" Namespace="calico-system" Pod="goldmane-768f4c5c69-q52jp" WorkloadEndpoint="ip--172--31--18--236-k8s-goldmane--768f4c5c69--q52jp-eth0" Aug 19 00:13:57.651859 containerd[1995]: time="2025-08-19T00:13:57.651398062Z" level=info msg="connecting to shim 2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f" address="unix:///run/containerd/s/f3687f25fa151c22c6b81adf60ceeb94adafed85b7411cdc7d1f0b200d2922e1" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:57.705354 containerd[1995]: time="2025-08-19T00:13:57.704752978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-442jd,Uid:3a6c1d9e-4bf2-45ff-bf17-eea50440b077,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f\"" Aug 19 00:13:57.715689 containerd[1995]: time="2025-08-19T00:13:57.715424494Z" level=info msg="StartContainer for \"15aac8641ef2196441d547c60fc9b21b882c86cedcad24194bed54efb7739a01\" returns successfully" Aug 19 00:13:57.726906 containerd[1995]: time="2025-08-19T00:13:57.726687514Z" level=info msg="CreateContainer within sandbox \"7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 00:13:57.792091 containerd[1995]: time="2025-08-19T00:13:57.791417579Z" level=info msg="Container 5ec4e204f12c39d75b374e68566cd854d938a492b3932a84c9a81b7d9e5066eb: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:13:57.821485 containerd[1995]: time="2025-08-19T00:13:57.821411135Z" level=info msg="CreateContainer within sandbox \"7f746e461dc8310aaaab0e316e08994bfaac9b1e8d327ce74c70c63b3f48499f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"5ec4e204f12c39d75b374e68566cd854d938a492b3932a84c9a81b7d9e5066eb\"" Aug 19 00:13:57.824444 containerd[1995]: time="2025-08-19T00:13:57.824020415Z" level=info msg="StartContainer for \"5ec4e204f12c39d75b374e68566cd854d938a492b3932a84c9a81b7d9e5066eb\"" Aug 19 00:13:57.842611 containerd[1995]: time="2025-08-19T00:13:57.842534747Z" level=info msg="connecting to shim 5ec4e204f12c39d75b374e68566cd854d938a492b3932a84c9a81b7d9e5066eb" address="unix:///run/containerd/s/d034c1d69ac0d3a159ccc4e1f184bbe7ecc7aa4efc5f781caf7b6ca8563a5c86" protocol=ttrpc version=3 Aug 19 00:13:57.851027 systemd[1]: Started cri-containerd-2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f.scope - libcontainer container 2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f. Aug 19 00:13:58.008434 systemd[1]: Started cri-containerd-5ec4e204f12c39d75b374e68566cd854d938a492b3932a84c9a81b7d9e5066eb.scope - libcontainer container 5ec4e204f12c39d75b374e68566cd854d938a492b3932a84c9a81b7d9e5066eb. Aug 19 00:13:58.149444 containerd[1995]: time="2025-08-19T00:13:58.147324932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-q52jp,Uid:6c04bb33-7fc2-4348-831d-39a850510dc6,Namespace:calico-system,Attempt:0,} returns sandbox id \"2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f\"" Aug 19 00:13:58.162925 systemd-networkd[1830]: cali5f28823d414: Link UP Aug 19 00:13:58.165983 systemd-networkd[1830]: cali5f28823d414: Gained carrier Aug 19 00:13:58.219916 containerd[1995]: time="2025-08-19T00:13:58.219853737Z" level=info msg="StartContainer for \"5ec4e204f12c39d75b374e68566cd854d938a492b3932a84c9a81b7d9e5066eb\" returns successfully" Aug 19 00:13:58.221184 containerd[1995]: 2025-08-19 00:13:57.660 [INFO][5224] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--236-k8s-coredns--674b8bbfcf--47789-eth0 coredns-674b8bbfcf- kube-system 489c6088-4872-4e61-a179-8fa73fba0368 877 0 2025-08-19 00:13:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-236 coredns-674b8bbfcf-47789 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5f28823d414 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf" Namespace="kube-system" Pod="coredns-674b8bbfcf-47789" WorkloadEndpoint="ip--172--31--18--236-k8s-coredns--674b8bbfcf--47789-" Aug 19 00:13:58.221184 containerd[1995]: 2025-08-19 00:13:57.664 [INFO][5224] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf" Namespace="kube-system" Pod="coredns-674b8bbfcf-47789" WorkloadEndpoint="ip--172--31--18--236-k8s-coredns--674b8bbfcf--47789-eth0" Aug 19 00:13:58.221184 containerd[1995]: 2025-08-19 00:13:57.953 [INFO][5283] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf" HandleID="k8s-pod-network.b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf" Workload="ip--172--31--18--236-k8s-coredns--674b8bbfcf--47789-eth0" Aug 19 00:13:58.221184 containerd[1995]: 2025-08-19 00:13:57.954 [INFO][5283] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf" 
HandleID="k8s-pod-network.b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf" Workload="ip--172--31--18--236-k8s-coredns--674b8bbfcf--47789-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000321490), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-236", "pod":"coredns-674b8bbfcf-47789", "timestamp":"2025-08-19 00:13:57.950903795 +0000 UTC"}, Hostname:"ip-172-31-18-236", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 00:13:58.221184 containerd[1995]: 2025-08-19 00:13:57.954 [INFO][5283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 00:13:58.221184 containerd[1995]: 2025-08-19 00:13:57.955 [INFO][5283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 19 00:13:58.221184 containerd[1995]: 2025-08-19 00:13:57.955 [INFO][5283] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-236' Aug 19 00:13:58.221184 containerd[1995]: 2025-08-19 00:13:57.989 [INFO][5283] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf" host="ip-172-31-18-236" Aug 19 00:13:58.221184 containerd[1995]: 2025-08-19 00:13:58.007 [INFO][5283] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-236" Aug 19 00:13:58.221184 containerd[1995]: 2025-08-19 00:13:58.037 [INFO][5283] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:58.221184 containerd[1995]: 2025-08-19 00:13:58.046 [INFO][5283] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:58.221184 containerd[1995]: 2025-08-19 00:13:58.055 [INFO][5283] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-18-236" Aug 19 00:13:58.221184 containerd[1995]: 2025-08-19 00:13:58.056 [INFO][5283] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf" host="ip-172-31-18-236" Aug 19 00:13:58.221184 containerd[1995]: 2025-08-19 00:13:58.062 [INFO][5283] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf Aug 19 00:13:58.221184 containerd[1995]: 2025-08-19 00:13:58.084 [INFO][5283] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf" host="ip-172-31-18-236" Aug 19 00:13:58.221184 containerd[1995]: 2025-08-19 00:13:58.119 [INFO][5283] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.72/26] block=192.168.99.64/26 handle="k8s-pod-network.b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf" host="ip-172-31-18-236" Aug 19 00:13:58.221184 containerd[1995]: 2025-08-19 00:13:58.120 [INFO][5283] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.72/26] handle="k8s-pod-network.b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf" host="ip-172-31-18-236" Aug 19 00:13:58.221184 containerd[1995]: 2025-08-19 00:13:58.120 [INFO][5283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 00:13:58.221184 containerd[1995]: 2025-08-19 00:13:58.120 [INFO][5283] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.72/26] IPv6=[] ContainerID="b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf" HandleID="k8s-pod-network.b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf" Workload="ip--172--31--18--236-k8s-coredns--674b8bbfcf--47789-eth0" Aug 19 00:13:58.223891 containerd[1995]: 2025-08-19 00:13:58.136 [INFO][5224] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf" Namespace="kube-system" Pod="coredns-674b8bbfcf-47789" WorkloadEndpoint="ip--172--31--18--236-k8s-coredns--674b8bbfcf--47789-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--236-k8s-coredns--674b8bbfcf--47789-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"489c6088-4872-4e61-a179-8fa73fba0368", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-236", ContainerID:"", Pod:"coredns-674b8bbfcf-47789", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f28823d414", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:58.223891 containerd[1995]: 2025-08-19 00:13:58.136 [INFO][5224] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.72/32] ContainerID="b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf" Namespace="kube-system" Pod="coredns-674b8bbfcf-47789" WorkloadEndpoint="ip--172--31--18--236-k8s-coredns--674b8bbfcf--47789-eth0" Aug 19 00:13:58.223891 containerd[1995]: 2025-08-19 00:13:58.136 [INFO][5224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f28823d414 ContainerID="b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf" Namespace="kube-system" Pod="coredns-674b8bbfcf-47789" WorkloadEndpoint="ip--172--31--18--236-k8s-coredns--674b8bbfcf--47789-eth0" Aug 19 00:13:58.223891 containerd[1995]: 2025-08-19 00:13:58.170 [INFO][5224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf" Namespace="kube-system" Pod="coredns-674b8bbfcf-47789" 
WorkloadEndpoint="ip--172--31--18--236-k8s-coredns--674b8bbfcf--47789-eth0" Aug 19 00:13:58.223891 containerd[1995]: 2025-08-19 00:13:58.180 [INFO][5224] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf" Namespace="kube-system" Pod="coredns-674b8bbfcf-47789" WorkloadEndpoint="ip--172--31--18--236-k8s-coredns--674b8bbfcf--47789-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--236-k8s-coredns--674b8bbfcf--47789-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"489c6088-4872-4e61-a179-8fa73fba0368", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 0, 13, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-236", ContainerID:"b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf", Pod:"coredns-674b8bbfcf-47789", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f28823d414", MAC:"de:89:ff:4c:55:cd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 00:13:58.223891 containerd[1995]: 2025-08-19 00:13:58.205 [INFO][5224] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf" Namespace="kube-system" Pod="coredns-674b8bbfcf-47789" WorkloadEndpoint="ip--172--31--18--236-k8s-coredns--674b8bbfcf--47789-eth0" Aug 19 00:13:58.305547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3155741542.mount: Deactivated successfully. Aug 19 00:13:58.363157 containerd[1995]: time="2025-08-19T00:13:58.362728054Z" level=info msg="connecting to shim b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf" address="unix:///run/containerd/s/b4b476f97a12e0006111a60a3c229b6d5f7a60c7ff6445b8a9e30f226892bb24" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:13:58.427547 systemd-networkd[1830]: calif28f2b69375: Gained IPv6LL Aug 19 00:13:58.427982 systemd-networkd[1830]: cali02236fee974: Gained IPv6LL Aug 19 00:13:58.473897 systemd[1]: Started cri-containerd-b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf.scope - libcontainer container b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf. 
Aug 19 00:13:58.675283 containerd[1995]: time="2025-08-19T00:13:58.675229799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-47789,Uid:489c6088-4872-4e61-a179-8fa73fba0368,Namespace:kube-system,Attempt:0,} returns sandbox id \"b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf\"" Aug 19 00:13:58.694346 containerd[1995]: time="2025-08-19T00:13:58.694204043Z" level=info msg="CreateContainer within sandbox \"b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 00:13:58.752432 containerd[1995]: time="2025-08-19T00:13:58.752352863Z" level=info msg="Container 1529ab7bb2110fb79d30a869e1532363f74f0fad4514c24266142d076fb22847: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:13:58.754276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3774849521.mount: Deactivated successfully. Aug 19 00:13:58.775094 containerd[1995]: time="2025-08-19T00:13:58.774922680Z" level=info msg="CreateContainer within sandbox \"b82feabe261cc7d3e256b66aa6a445dcfd40fecff2629947ccb0c7457ab39acf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1529ab7bb2110fb79d30a869e1532363f74f0fad4514c24266142d076fb22847\"" Aug 19 00:13:58.776304 containerd[1995]: time="2025-08-19T00:13:58.776107464Z" level=info msg="StartContainer for \"1529ab7bb2110fb79d30a869e1532363f74f0fad4514c24266142d076fb22847\"" Aug 19 00:13:58.779213 containerd[1995]: time="2025-08-19T00:13:58.778841616Z" level=info msg="connecting to shim 1529ab7bb2110fb79d30a869e1532363f74f0fad4514c24266142d076fb22847" address="unix:///run/containerd/s/b4b476f97a12e0006111a60a3c229b6d5f7a60c7ff6445b8a9e30f226892bb24" protocol=ttrpc version=3 Aug 19 00:13:58.869457 systemd[1]: Started cri-containerd-1529ab7bb2110fb79d30a869e1532363f74f0fad4514c24266142d076fb22847.scope - libcontainer container 1529ab7bb2110fb79d30a869e1532363f74f0fad4514c24266142d076fb22847. 
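A note on the systemd mount units that keep appearing above (for example var-lib-containerd-tmpmounts-containerd\x2dmount3774849521.mount): systemd escapes unit names, turning "/" into "-" and a literal "-" into "\x2d". The Go sketch below reverses just those two rules for the names seen in this log; systemd-escape(1) handles more cases, so treat this as a simplification:

    package main

    import (
        "fmt"
        "strings"
    )

    // unescapeMountUnit reverses the two systemd escapes visible in this log:
    // "-" separates path components (i.e. stands for "/"), and a literal "-"
    // inside a component is written as "\x2d". systemd-escape(1) covers more
    // cases; this is only enough for the tmpmount unit names above.
    func unescapeMountUnit(name string) string {
        name = strings.TrimSuffix(name, ".mount")
        parts := strings.Split(name, "-")
        for i, p := range parts {
            parts[i] = strings.ReplaceAll(p, `\x2d`, "-")
        }
        return "/" + strings.Join(parts, "/")
    }

    func main() {
        fmt.Println(unescapeMountUnit(`var-lib-containerd-tmpmounts-containerd\x2dmount3774849521.mount`))
        // Output: /var/lib/containerd/tmpmounts/containerd-mount3774849521
    }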
Aug 19 00:13:59.002450 systemd-networkd[1830]: calidf8fc768581: Gained IPv6LL Aug 19 00:13:59.032105 kubelet[3316]: I0819 00:13:59.031735 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-442jd" podStartSLOduration=54.031714473 podStartE2EDuration="54.031714473s" podCreationTimestamp="2025-08-19 00:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:13:59.024797457 +0000 UTC m=+59.891266259" watchObservedRunningTime="2025-08-19 00:13:59.031714473 +0000 UTC m=+59.898183263" Aug 19 00:13:59.075284 containerd[1995]: time="2025-08-19T00:13:59.075197757Z" level=info msg="StartContainer for \"1529ab7bb2110fb79d30a869e1532363f74f0fad4514c24266142d076fb22847\" returns successfully" Aug 19 00:13:59.899576 systemd-networkd[1830]: cali5f28823d414: Gained IPv6LL Aug 19 00:14:00.065185 kubelet[3316]: I0819 00:14:00.063493 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-47789" podStartSLOduration=55.063470278 podStartE2EDuration="55.063470278s" podCreationTimestamp="2025-08-19 00:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:14:00.058895014 +0000 UTC m=+60.925363852" watchObservedRunningTime="2025-08-19 00:14:00.063470278 +0000 UTC m=+60.929939068" Aug 19 00:14:00.408170 containerd[1995]: time="2025-08-19T00:14:00.407927112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:00.433161 containerd[1995]: time="2025-08-19T00:14:00.409966356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Aug 19 00:14:00.433161 containerd[1995]: time="2025-08-19T00:14:00.412198104Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:00.434552 containerd[1995]: time="2025-08-19T00:14:00.416873760Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 4.078242597s" Aug 19 00:14:00.434552 containerd[1995]: time="2025-08-19T00:14:00.434445732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 19 00:14:00.435859 containerd[1995]: time="2025-08-19T00:14:00.435755604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:00.439846 containerd[1995]: time="2025-08-19T00:14:00.439780404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 19 00:14:00.445167 containerd[1995]: time="2025-08-19T00:14:00.445020288Z" level=info msg="CreateContainer within sandbox \"bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 19 00:14:00.466424 containerd[1995]: time="2025-08-19T00:14:00.466343736Z" level=info msg="Container 6f4eabf13926a8be41f4069553cbc85ad23c0eaab73c7592b460af83543c7da2: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:14:00.487800 containerd[1995]: time="2025-08-19T00:14:00.487723824Z" level=info msg="CreateContainer within sandbox \"bf48e86f3be89ff58705528381e289e9590a0d45635b97c93a01af9fba974903\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6f4eabf13926a8be41f4069553cbc85ad23c0eaab73c7592b460af83543c7da2\"" Aug 19 00:14:00.489483 containerd[1995]: time="2025-08-19T00:14:00.489300096Z" level=info msg="StartContainer for \"6f4eabf13926a8be41f4069553cbc85ad23c0eaab73c7592b460af83543c7da2\"" Aug 19 00:14:00.497788 containerd[1995]: time="2025-08-19T00:14:00.496097184Z" level=info msg="connecting to shim 6f4eabf13926a8be41f4069553cbc85ad23c0eaab73c7592b460af83543c7da2" address="unix:///run/containerd/s/750ae207b66dbe009fe616b9e8ab48992f8290cf5cc5f529b880fd16cd49db3b" protocol=ttrpc version=3 Aug 19 00:14:00.590435 systemd[1]: Started cri-containerd-6f4eabf13926a8be41f4069553cbc85ad23c0eaab73c7592b460af83543c7da2.scope - libcontainer container 6f4eabf13926a8be41f4069553cbc85ad23c0eaab73c7592b460af83543c7da2. Aug 19 00:14:00.680870 containerd[1995]: time="2025-08-19T00:14:00.680626309Z" level=info msg="StartContainer for \"6f4eabf13926a8be41f4069553cbc85ad23c0eaab73c7592b460af83543c7da2\" returns successfully" Aug 19 00:14:01.089768 kubelet[3316]: I0819 00:14:01.089554 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7865f59cf-fwl99" podStartSLOduration=35.728448824 podStartE2EDuration="41.089532263s" podCreationTimestamp="2025-08-19 00:13:20 +0000 UTC" firstStartedPulling="2025-08-19 00:13:55.078189713 +0000 UTC m=+55.944658503" lastFinishedPulling="2025-08-19 00:14:00.439273152 +0000 UTC m=+61.305741942" observedRunningTime="2025-08-19 00:14:01.056057987 +0000 UTC m=+61.922526777" watchObservedRunningTime="2025-08-19 00:14:01.089532263 +0000 UTC m=+61.956001065" Aug 19 00:14:01.773611 systemd[1]: Started sshd@8-172.31.18.236:22-147.75.109.163:36008.service - OpenSSH per-connection server daemon (147.75.109.163:36008). Aug 19 00:14:02.030235 sshd[5507]: Accepted publickey for core from 147.75.109.163 port 36008 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:02.037573 sshd-session[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:02.040709 kubelet[3316]: I0819 00:14:02.040428 3316 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:14:02.056477 systemd-logind[1978]: New session 9 of user core. Aug 19 00:14:02.066591 systemd[1]: Started session-9.scope - Session 9 of User core. 
Aug 19 00:14:02.119567 ntpd[1971]: Listen normally on 8 vxlan.calico 192.168.99.64:123 Aug 19 00:14:02.119701 ntpd[1971]: Listen normally on 9 vxlan.calico [fe80::6469:6bff:fe3a:9393%4]:123 Aug 19 00:14:02.119779 ntpd[1971]: Listen normally on 10 cali7fe5745c9fa [fe80::ecee:eeff:feee:eeee%7]:123 Aug 19 00:14:02.119846 ntpd[1971]: Listen normally on 11 calic4af48c1f49 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 19 00:14:02.119908 ntpd[1971]: Listen normally on 12 calieb15e1617bd [fe80::ecee:eeff:feee:eeee%9]:123 Aug 19 00:14:02.119973 ntpd[1971]: Listen normally on 13 calida01b0477eb [fe80::ecee:eeff:feee:eeee%10]:123 Aug 19 00:14:02.120185 ntpd[1971]: Listen normally on 14 calif28f2b69375 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 19 00:14:02.120262 ntpd[1971]: Listen normally on 15 cali02236fee974 [fe80::ecee:eeff:feee:eeee%12]:123 Aug 19 00:14:02.120425 ntpd[1971]: Listen normally on 16 calidf8fc768581 [fe80::ecee:eeff:feee:eeee%13]:123 Aug 19 00:14:02.121926 ntpd[1971]: Listen normally on 17 cali5f28823d414 [fe80::ecee:eeff:feee:eeee%14]:123 Aug 19 00:14:02.482915 sshd[5510]: Connection closed by 147.75.109.163 port 36008 Aug 19 00:14:02.483468 sshd-session[5507]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:02.497695 systemd-logind[1978]: Session 9 logged out. Waiting for processes to exit. Aug 19 00:14:02.500683 systemd[1]: sshd@8-172.31.18.236:22-147.75.109.163:36008.service: Deactivated successfully. Aug 19 00:14:02.512888 systemd[1]: session-9.scope: Deactivated successfully. Aug 19 00:14:02.518471 systemd-logind[1978]: Removed session 9. 
Aug 19 00:14:07.092204 containerd[1995]: time="2025-08-19T00:14:07.091775657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:07.094083 containerd[1995]: time="2025-08-19T00:14:07.094004897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Aug 19 00:14:07.096530 containerd[1995]: time="2025-08-19T00:14:07.096455645Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:07.101220 containerd[1995]: time="2025-08-19T00:14:07.101154989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:07.102604 containerd[1995]: time="2025-08-19T00:14:07.102554945Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 6.662710401s" Aug 19 00:14:07.102879 containerd[1995]: time="2025-08-19T00:14:07.102743669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 19 00:14:07.105325 containerd[1995]: time="2025-08-19T00:14:07.105279413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 19 00:14:07.141364 containerd[1995]: time="2025-08-19T00:14:07.141267269Z" level=info msg="CreateContainer within sandbox \"1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 19 00:14:07.161647 containerd[1995]: time="2025-08-19T00:14:07.160249013Z" level=info msg="Container 655f14e0f0acc14b8041e11368dcf680c881dfb26adc9cfccaee6900b3f85816: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:14:07.168957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount369284497.mount: Deactivated successfully. Aug 19 00:14:07.182202 containerd[1995]: time="2025-08-19T00:14:07.181990745Z" level=info msg="CreateContainer within sandbox \"1b1beecfdca8a4f82216038fe8ee793a72554d5218c1fb67da4ee9703880ce35\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"655f14e0f0acc14b8041e11368dcf680c881dfb26adc9cfccaee6900b3f85816\"" Aug 19 00:14:07.182877 containerd[1995]: time="2025-08-19T00:14:07.182809505Z" level=info msg="StartContainer for \"655f14e0f0acc14b8041e11368dcf680c881dfb26adc9cfccaee6900b3f85816\"" Aug 19 00:14:07.185551 containerd[1995]: time="2025-08-19T00:14:07.185460569Z" level=info msg="connecting to shim 655f14e0f0acc14b8041e11368dcf680c881dfb26adc9cfccaee6900b3f85816" address="unix:///run/containerd/s/050b0bfeb4cb30491424f2b10865d2830e0ec2e4d18404bba7c0be1fae637d37" protocol=ttrpc version=3 Aug 19 00:14:07.232482 systemd[1]: Started cri-containerd-655f14e0f0acc14b8041e11368dcf680c881dfb26adc9cfccaee6900b3f85816.scope - libcontainer container 655f14e0f0acc14b8041e11368dcf680c881dfb26adc9cfccaee6900b3f85816. 
Aug 19 00:14:07.317797 containerd[1995]: time="2025-08-19T00:14:07.317752638Z" level=info msg="StartContainer for \"655f14e0f0acc14b8041e11368dcf680c881dfb26adc9cfccaee6900b3f85816\" returns successfully" Aug 19 00:14:07.524018 systemd[1]: Started sshd@9-172.31.18.236:22-147.75.109.163:36012.service - OpenSSH per-connection server daemon (147.75.109.163:36012). Aug 19 00:14:07.765018 sshd[5597]: Accepted publickey for core from 147.75.109.163 port 36012 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:07.805049 sshd-session[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:07.814597 systemd-logind[1978]: New session 10 of user core. Aug 19 00:14:07.822369 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 19 00:14:08.112288 kubelet[3316]: I0819 00:14:08.111876 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-669988b64f-smmx4" podStartSLOduration=26.182125039 podStartE2EDuration="38.11185119s" podCreationTimestamp="2025-08-19 00:13:30 +0000 UTC" firstStartedPulling="2025-08-19 00:13:55.174640014 +0000 UTC m=+56.041108804" lastFinishedPulling="2025-08-19 00:14:07.104366117 +0000 UTC m=+67.970834955" observedRunningTime="2025-08-19 00:14:08.105572646 +0000 UTC m=+68.972041544" watchObservedRunningTime="2025-08-19 00:14:08.11185119 +0000 UTC m=+68.978320004" Aug 19 00:14:08.133031 sshd[5600]: Connection closed by 147.75.109.163 port 36012 Aug 19 00:14:08.133430 sshd-session[5597]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:08.144609 systemd[1]: sshd@9-172.31.18.236:22-147.75.109.163:36012.service: Deactivated successfully. Aug 19 00:14:08.148642 systemd[1]: session-10.scope: Deactivated successfully. Aug 19 00:14:08.153465 systemd-logind[1978]: Session 10 logged out. Waiting for processes to exit. Aug 19 00:14:08.181551 systemd[1]: Started sshd@10-172.31.18.236:22-147.75.109.163:59550.service - OpenSSH per-connection server daemon (147.75.109.163:59550). Aug 19 00:14:08.184788 systemd-logind[1978]: Removed session 10. Aug 19 00:14:08.186282 containerd[1995]: time="2025-08-19T00:14:08.185907942Z" level=info msg="TaskExit event in podsandbox handler container_id:\"655f14e0f0acc14b8041e11368dcf680c881dfb26adc9cfccaee6900b3f85816\" id:\"5cc2c4921c805c37bfdead39636b39035de180baceec19c7f66e8700313f6615\" pid:5621 exited_at:{seconds:1755562448 nanos:184292358}" Aug 19 00:14:08.417262 sshd[5633]: Accepted publickey for core from 147.75.109.163 port 59550 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:08.419693 sshd-session[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:08.428336 systemd-logind[1978]: New session 11 of user core. Aug 19 00:14:08.436389 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 19 00:14:08.846792 sshd[5638]: Connection closed by 147.75.109.163 port 59550 Aug 19 00:14:08.848474 sshd-session[5633]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:08.867806 systemd[1]: sshd@10-172.31.18.236:22-147.75.109.163:59550.service: Deactivated successfully. Aug 19 00:14:08.881002 systemd[1]: session-11.scope: Deactivated successfully. Aug 19 00:14:08.884896 systemd-logind[1978]: Session 11 logged out. Waiting for processes to exit. Aug 19 00:14:08.921595 systemd[1]: Started sshd@11-172.31.18.236:22-147.75.109.163:59552.service - OpenSSH per-connection server daemon (147.75.109.163:59552). 
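The kubelet pod_startup_latency_tracker entry above for calico-kube-controllers-669988b64f-smmx4 is self-consistent: podStartE2EDuration (38.11185119s) equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (about 26.182s) equals that span minus the image-pull window (lastFinishedPulling minus firstStartedPulling, roughly 11.93s). That decomposition is inferred from the numbers in the entry rather than taken from kubelet source, but it matches to within a microsecond, as the Go check below (timestamps copied from the entry) shows:

    package main

    import (
        "fmt"
        "time"
    )

    // must is a tiny helper so the timestamp parsing below reads cleanly.
    func must(t time.Time, err error) time.Time {
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Layout matching the kubelet's timestamp format in the entry above;
        // time.Parse accepts the optional fractional seconds on its own.
        const layout = "2006-01-02 15:04:05 -0700 MST"
        created := must(time.Parse(layout, "2025-08-19 00:13:30 +0000 UTC"))
        pullStart := must(time.Parse(layout, "2025-08-19 00:13:55.174640014 +0000 UTC"))
        pullEnd := must(time.Parse(layout, "2025-08-19 00:14:07.104366117 +0000 UTC"))
        running := must(time.Parse(layout, "2025-08-19 00:14:08.11185119 +0000 UTC"))

        e2e := running.Sub(created)         // 38.11185119s, the logged podStartE2EDuration
        slo := e2e - pullEnd.Sub(pullStart) // ~26.182125s, close to the logged podStartSLOduration
        fmt.Println("E2E:", e2e, "SLO:", slo)
    }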
Aug 19 00:14:08.924747 systemd-logind[1978]: Removed session 11. Aug 19 00:14:08.995855 containerd[1995]: time="2025-08-19T00:14:08.994934626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:08.995855 containerd[1995]: time="2025-08-19T00:14:08.995119366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Aug 19 00:14:08.997040 containerd[1995]: time="2025-08-19T00:14:08.996770386Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:09.002910 containerd[1995]: time="2025-08-19T00:14:09.002845266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:09.004913 containerd[1995]: time="2025-08-19T00:14:09.004740582Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.899233685s" Aug 19 00:14:09.004913 containerd[1995]: time="2025-08-19T00:14:09.004796322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Aug 19 00:14:09.007619 containerd[1995]: time="2025-08-19T00:14:09.007494042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 19 00:14:09.014329 containerd[1995]: time="2025-08-19T00:14:09.013979982Z" level=info msg="CreateContainer within sandbox \"9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 19 00:14:09.033458 containerd[1995]: time="2025-08-19T00:14:09.033379783Z" level=info msg="Container 2610ce45f1118dfabd2c8f205437de62506d6ada2ed6bdc250013e4c85399a2f: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:14:09.059777 containerd[1995]: time="2025-08-19T00:14:09.059679343Z" level=info msg="CreateContainer within sandbox \"9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2610ce45f1118dfabd2c8f205437de62506d6ada2ed6bdc250013e4c85399a2f\"" Aug 19 00:14:09.061531 containerd[1995]: time="2025-08-19T00:14:09.061465375Z" level=info msg="StartContainer for \"2610ce45f1118dfabd2c8f205437de62506d6ada2ed6bdc250013e4c85399a2f\"" Aug 19 00:14:09.065739 containerd[1995]: time="2025-08-19T00:14:09.065665003Z" level=info msg="connecting to shim 2610ce45f1118dfabd2c8f205437de62506d6ada2ed6bdc250013e4c85399a2f" address="unix:///run/containerd/s/98824f45c44b6ecde2dd22268cebbdc505d03fb78cd813a1cc3de0c58be13ee0" protocol=ttrpc version=3 Aug 19 00:14:09.121456 systemd[1]: Started cri-containerd-2610ce45f1118dfabd2c8f205437de62506d6ada2ed6bdc250013e4c85399a2f.scope - libcontainer container 2610ce45f1118dfabd2c8f205437de62506d6ada2ed6bdc250013e4c85399a2f. 
Aug 19 00:14:09.159975 sshd[5654]: Accepted publickey for core from 147.75.109.163 port 59552 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:09.163069 sshd-session[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:09.173440 systemd-logind[1978]: New session 12 of user core. Aug 19 00:14:09.179372 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 19 00:14:09.263156 containerd[1995]: time="2025-08-19T00:14:09.262668380Z" level=info msg="StartContainer for \"2610ce45f1118dfabd2c8f205437de62506d6ada2ed6bdc250013e4c85399a2f\" returns successfully" Aug 19 00:14:09.356285 containerd[1995]: time="2025-08-19T00:14:09.356208620Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:09.360155 containerd[1995]: time="2025-08-19T00:14:09.359232596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 19 00:14:09.363101 containerd[1995]: time="2025-08-19T00:14:09.363003548Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 355.417346ms" Aug 19 00:14:09.363101 containerd[1995]: time="2025-08-19T00:14:09.363094676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 19 00:14:09.366901 containerd[1995]: time="2025-08-19T00:14:09.366823736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 19 00:14:09.376438 containerd[1995]: time="2025-08-19T00:14:09.375594728Z" level=info msg="CreateContainer within sandbox \"7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 19 00:14:09.405156 containerd[1995]: time="2025-08-19T00:14:09.404625368Z" level=info msg="Container 5faca410ad709149e5389ded3ea4313dd0a05c5fbf387a037bff18b2ea1353a6: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:14:09.424892 containerd[1995]: time="2025-08-19T00:14:09.424790324Z" level=info msg="CreateContainer within sandbox \"7d90d96d2e07da61881da1c3491dbc15d9254aafe5595efa1b449873d61d4b50\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5faca410ad709149e5389ded3ea4313dd0a05c5fbf387a037bff18b2ea1353a6\"" Aug 19 00:14:09.427434 containerd[1995]: time="2025-08-19T00:14:09.427342089Z" level=info msg="StartContainer for \"5faca410ad709149e5389ded3ea4313dd0a05c5fbf387a037bff18b2ea1353a6\"" Aug 19 00:14:09.431380 containerd[1995]: time="2025-08-19T00:14:09.431200317Z" level=info msg="connecting to shim 5faca410ad709149e5389ded3ea4313dd0a05c5fbf387a037bff18b2ea1353a6" address="unix:///run/containerd/s/47a4f6e4f3468c5b9379a046d495446062540db6d510a232ea39862f0ee812a6" protocol=ttrpc version=3 Aug 19 00:14:09.489422 sshd[5677]: Connection closed by 147.75.109.163 port 59552 Aug 19 00:14:09.487592 sshd-session[5654]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:09.492894 systemd[1]: Started cri-containerd-5faca410ad709149e5389ded3ea4313dd0a05c5fbf387a037bff18b2ea1353a6.scope - libcontainer 
container 5faca410ad709149e5389ded3ea4313dd0a05c5fbf387a037bff18b2ea1353a6. Aug 19 00:14:09.504608 systemd[1]: sshd@11-172.31.18.236:22-147.75.109.163:59552.service: Deactivated successfully. Aug 19 00:14:09.514719 systemd[1]: session-12.scope: Deactivated successfully. Aug 19 00:14:09.519312 systemd-logind[1978]: Session 12 logged out. Waiting for processes to exit. Aug 19 00:14:09.524913 systemd-logind[1978]: Removed session 12. Aug 19 00:14:09.628804 containerd[1995]: time="2025-08-19T00:14:09.628463458Z" level=info msg="StartContainer for \"5faca410ad709149e5389ded3ea4313dd0a05c5fbf387a037bff18b2ea1353a6\" returns successfully" Aug 19 00:14:10.124736 kubelet[3316]: I0819 00:14:10.123626 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7865f59cf-x4xb2" podStartSLOduration=38.203384661 podStartE2EDuration="50.12360152s" podCreationTimestamp="2025-08-19 00:13:20 +0000 UTC" firstStartedPulling="2025-08-19 00:13:57.444103137 +0000 UTC m=+58.310571927" lastFinishedPulling="2025-08-19 00:14:09.364319996 +0000 UTC m=+70.230788786" observedRunningTime="2025-08-19 00:14:10.122355452 +0000 UTC m=+70.988824314" watchObservedRunningTime="2025-08-19 00:14:10.12360152 +0000 UTC m=+70.990070310" Aug 19 00:14:11.103323 kubelet[3316]: I0819 00:14:11.102647 3316 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:14:13.760965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2024744161.mount: Deactivated successfully. Aug 19 00:14:13.778033 containerd[1995]: time="2025-08-19T00:14:13.777971486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:13.779480 containerd[1995]: time="2025-08-19T00:14:13.779390426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 19 00:14:13.780881 containerd[1995]: time="2025-08-19T00:14:13.780817442Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:13.787215 containerd[1995]: time="2025-08-19T00:14:13.787103642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:13.791504 containerd[1995]: time="2025-08-19T00:14:13.790934474Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 4.424038066s" Aug 19 00:14:13.791504 containerd[1995]: time="2025-08-19T00:14:13.791011598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 19 00:14:13.797102 containerd[1995]: time="2025-08-19T00:14:13.797010422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 19 00:14:13.805901 containerd[1995]: time="2025-08-19T00:14:13.805774010Z" level=info msg="CreateContainer within sandbox 
\"1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 19 00:14:13.818231 containerd[1995]: time="2025-08-19T00:14:13.816707462Z" level=info msg="Container 63ecc9584805f2ad9cb7971ee9d3028cb083172f8ace7cea3c9859de37506ab4: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:14:13.840907 containerd[1995]: time="2025-08-19T00:14:13.840812450Z" level=info msg="CreateContainer within sandbox \"1305d48e57e1e0f0a6ac80d2cd3a79d4cf8c740ca0aef3613e4c0ae4d7036b93\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"63ecc9584805f2ad9cb7971ee9d3028cb083172f8ace7cea3c9859de37506ab4\"" Aug 19 00:14:13.842650 containerd[1995]: time="2025-08-19T00:14:13.842533334Z" level=info msg="StartContainer for \"63ecc9584805f2ad9cb7971ee9d3028cb083172f8ace7cea3c9859de37506ab4\"" Aug 19 00:14:13.845336 containerd[1995]: time="2025-08-19T00:14:13.845267054Z" level=info msg="connecting to shim 63ecc9584805f2ad9cb7971ee9d3028cb083172f8ace7cea3c9859de37506ab4" address="unix:///run/containerd/s/62d7948720163647b2aab5d819274c34f2035c78dadb1641a4b8d14c19d27183" protocol=ttrpc version=3 Aug 19 00:14:13.896807 systemd[1]: Started cri-containerd-63ecc9584805f2ad9cb7971ee9d3028cb083172f8ace7cea3c9859de37506ab4.scope - libcontainer container 63ecc9584805f2ad9cb7971ee9d3028cb083172f8ace7cea3c9859de37506ab4. Aug 19 00:14:14.018909 containerd[1995]: time="2025-08-19T00:14:14.018467999Z" level=info msg="StartContainer for \"63ecc9584805f2ad9cb7971ee9d3028cb083172f8ace7cea3c9859de37506ab4\" returns successfully" Aug 19 00:14:14.529399 systemd[1]: Started sshd@12-172.31.18.236:22-147.75.109.163:59558.service - OpenSSH per-connection server daemon (147.75.109.163:59558). Aug 19 00:14:14.784837 sshd[5790]: Accepted publickey for core from 147.75.109.163 port 59558 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:14.789182 sshd-session[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:14.805229 systemd-logind[1978]: New session 13 of user core. Aug 19 00:14:14.812442 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 19 00:14:15.308558 sshd[5793]: Connection closed by 147.75.109.163 port 59558 Aug 19 00:14:15.308453 sshd-session[5790]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:15.319031 systemd[1]: sshd@12-172.31.18.236:22-147.75.109.163:59558.service: Deactivated successfully. Aug 19 00:14:15.326511 systemd[1]: session-13.scope: Deactivated successfully. Aug 19 00:14:15.330215 systemd-logind[1978]: Session 13 logged out. Waiting for processes to exit. Aug 19 00:14:15.335495 systemd-logind[1978]: Removed session 13. Aug 19 00:14:16.140022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3849167147.mount: Deactivated successfully. 
Aug 19 00:14:16.866761 containerd[1995]: time="2025-08-19T00:14:16.866676917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:16.869492 containerd[1995]: time="2025-08-19T00:14:16.869426813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Aug 19 00:14:16.872111 containerd[1995]: time="2025-08-19T00:14:16.872058341Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:16.878704 containerd[1995]: time="2025-08-19T00:14:16.878552826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:16.881545 containerd[1995]: time="2025-08-19T00:14:16.881407614Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.084055384s" Aug 19 00:14:16.881545 containerd[1995]: time="2025-08-19T00:14:16.881468658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Aug 19 00:14:16.883571 containerd[1995]: time="2025-08-19T00:14:16.883477098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 19 00:14:16.891312 containerd[1995]: time="2025-08-19T00:14:16.891171594Z" level=info msg="CreateContainer within sandbox \"2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 19 00:14:16.907528 containerd[1995]: time="2025-08-19T00:14:16.907400982Z" level=info msg="Container 87a2380b78ee93b4acf927b62d6341657e99933b1231b0ede787b4aa613f2d91: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:14:16.929891 containerd[1995]: time="2025-08-19T00:14:16.929761302Z" level=info msg="CreateContainer within sandbox \"2dd5ec8056461c468047dc49be6c212e9da20a5b9ed38e05c64f963a5267661f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"87a2380b78ee93b4acf927b62d6341657e99933b1231b0ede787b4aa613f2d91\"" Aug 19 00:14:16.930851 containerd[1995]: time="2025-08-19T00:14:16.930767970Z" level=info msg="StartContainer for \"87a2380b78ee93b4acf927b62d6341657e99933b1231b0ede787b4aa613f2d91\"" Aug 19 00:14:16.938613 containerd[1995]: time="2025-08-19T00:14:16.938180022Z" level=info msg="connecting to shim 87a2380b78ee93b4acf927b62d6341657e99933b1231b0ede787b4aa613f2d91" address="unix:///run/containerd/s/f3687f25fa151c22c6b81adf60ceeb94adafed85b7411cdc7d1f0b200d2922e1" protocol=ttrpc version=3 Aug 19 00:14:16.992459 systemd[1]: Started cri-containerd-87a2380b78ee93b4acf927b62d6341657e99933b1231b0ede787b4aa613f2d91.scope - libcontainer container 87a2380b78ee93b4acf927b62d6341657e99933b1231b0ede787b4aa613f2d91. 
Aug 19 00:14:17.087172 containerd[1995]: time="2025-08-19T00:14:17.087067995Z" level=info msg="StartContainer for \"87a2380b78ee93b4acf927b62d6341657e99933b1231b0ede787b4aa613f2d91\" returns successfully" Aug 19 00:14:17.199056 kubelet[3316]: I0819 00:14:17.198943 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-ff848f574-mbbcm" podStartSLOduration=6.726512302 podStartE2EDuration="26.198921579s" podCreationTimestamp="2025-08-19 00:13:51 +0000 UTC" firstStartedPulling="2025-08-19 00:13:54.321992057 +0000 UTC m=+55.188460847" lastFinishedPulling="2025-08-19 00:14:13.794401334 +0000 UTC m=+74.660870124" observedRunningTime="2025-08-19 00:14:14.178696068 +0000 UTC m=+75.045164870" watchObservedRunningTime="2025-08-19 00:14:17.198921579 +0000 UTC m=+78.065390357" Aug 19 00:14:17.202532 kubelet[3316]: I0819 00:14:17.200966 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-q52jp" podStartSLOduration=29.47633161 podStartE2EDuration="48.200110839s" podCreationTimestamp="2025-08-19 00:13:29 +0000 UTC" firstStartedPulling="2025-08-19 00:13:58.158912649 +0000 UTC m=+59.025381427" lastFinishedPulling="2025-08-19 00:14:16.882691866 +0000 UTC m=+77.749160656" observedRunningTime="2025-08-19 00:14:17.194977035 +0000 UTC m=+78.061445837" watchObservedRunningTime="2025-08-19 00:14:17.200110839 +0000 UTC m=+78.066579629" Aug 19 00:14:17.417922 containerd[1995]: time="2025-08-19T00:14:17.417872968Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87a2380b78ee93b4acf927b62d6341657e99933b1231b0ede787b4aa613f2d91\" id:\"4b0aa0a3df504727e30bfc94d905d6e5575bba1a7889d8fa11e867703a997dc7\" pid:5858 exit_status:1 exited_at:{seconds:1755562457 nanos:417375052}" Aug 19 00:14:18.426465 containerd[1995]: time="2025-08-19T00:14:18.426293561Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87a2380b78ee93b4acf927b62d6341657e99933b1231b0ede787b4aa613f2d91\" id:\"e36aaa1afbfe385edb31f4ab85f7b898f57f5dc8715eacdddb52cfdc517aa522\" pid:5888 exit_status:1 exited_at:{seconds:1755562458 nanos:425575049}" Aug 19 00:14:18.433707 containerd[1995]: time="2025-08-19T00:14:18.433623389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:18.436099 containerd[1995]: time="2025-08-19T00:14:18.436027145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Aug 19 00:14:18.439070 containerd[1995]: time="2025-08-19T00:14:18.438977393Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:18.467102 containerd[1995]: time="2025-08-19T00:14:18.467016533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:18.469916 containerd[1995]: time="2025-08-19T00:14:18.469735553Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.586196967s" Aug 19 00:14:18.469916 containerd[1995]: time="2025-08-19T00:14:18.469788581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Aug 19 00:14:18.480519 containerd[1995]: time="2025-08-19T00:14:18.480287573Z" level=info msg="CreateContainer within sandbox \"9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 19 00:14:18.503161 containerd[1995]: time="2025-08-19T00:14:18.501524022Z" level=info msg="Container cfa8af55ea2d69a94bdf6dde666d75e8ba1b51a0ff95f98bc0c1031a701fc4cd: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:14:18.525777 containerd[1995]: time="2025-08-19T00:14:18.525707850Z" level=info msg="CreateContainer within sandbox \"9b919ff157709353c09901de66dc70e98cefdd15266310d5b8476f93f8fc4aee\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cfa8af55ea2d69a94bdf6dde666d75e8ba1b51a0ff95f98bc0c1031a701fc4cd\"" Aug 19 00:14:18.527959 containerd[1995]: time="2025-08-19T00:14:18.527375790Z" level=info msg="StartContainer for \"cfa8af55ea2d69a94bdf6dde666d75e8ba1b51a0ff95f98bc0c1031a701fc4cd\"" Aug 19 00:14:18.531189 containerd[1995]: time="2025-08-19T00:14:18.531101562Z" level=info msg="connecting to shim cfa8af55ea2d69a94bdf6dde666d75e8ba1b51a0ff95f98bc0c1031a701fc4cd" address="unix:///run/containerd/s/98824f45c44b6ecde2dd22268cebbdc505d03fb78cd813a1cc3de0c58be13ee0" protocol=ttrpc version=3 Aug 19 00:14:18.573460 systemd[1]: Started cri-containerd-cfa8af55ea2d69a94bdf6dde666d75e8ba1b51a0ff95f98bc0c1031a701fc4cd.scope - libcontainer container cfa8af55ea2d69a94bdf6dde666d75e8ba1b51a0ff95f98bc0c1031a701fc4cd. Aug 19 00:14:18.653957 containerd[1995]: time="2025-08-19T00:14:18.653890794Z" level=info msg="StartContainer for \"cfa8af55ea2d69a94bdf6dde666d75e8ba1b51a0ff95f98bc0c1031a701fc4cd\" returns successfully" Aug 19 00:14:19.561634 kubelet[3316]: I0819 00:14:19.561569 3316 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 19 00:14:19.561634 kubelet[3316]: I0819 00:14:19.561634 3316 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 19 00:14:20.347026 systemd[1]: Started sshd@13-172.31.18.236:22-147.75.109.163:55814.service - OpenSSH per-connection server daemon (147.75.109.163:55814). Aug 19 00:14:20.567540 sshd[5936]: Accepted publickey for core from 147.75.109.163 port 55814 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:20.571278 sshd-session[5936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:20.581236 systemd-logind[1978]: New session 14 of user core. Aug 19 00:14:20.586458 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 19 00:14:20.837653 sshd[5939]: Connection closed by 147.75.109.163 port 55814 Aug 19 00:14:20.838461 sshd-session[5936]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:20.845987 systemd[1]: sshd@13-172.31.18.236:22-147.75.109.163:55814.service: Deactivated successfully. 
Aug 19 00:14:20.851932 systemd[1]: session-14.scope: Deactivated successfully. Aug 19 00:14:20.854304 systemd-logind[1978]: Session 14 logged out. Waiting for processes to exit. Aug 19 00:14:20.857231 systemd-logind[1978]: Removed session 14. Aug 19 00:14:21.875950 containerd[1995]: time="2025-08-19T00:14:21.875888782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f26552eafe1d9c8b58774e3366de8d0aa77041e41c192ccd327d56e19ffce3e7\" id:\"5796c68c2659a502fcacd68bfdebdc27ac595ad5745cbf510917b04bc43ec375\" pid:5966 exited_at:{seconds:1755562461 nanos:875390986}" Aug 19 00:14:21.912530 kubelet[3316]: I0819 00:14:21.912412 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wwgr9" podStartSLOduration=30.574271917 podStartE2EDuration="51.912387251s" podCreationTimestamp="2025-08-19 00:13:30 +0000 UTC" firstStartedPulling="2025-08-19 00:13:57.132640423 +0000 UTC m=+57.999109213" lastFinishedPulling="2025-08-19 00:14:18.470755757 +0000 UTC m=+79.337224547" observedRunningTime="2025-08-19 00:14:19.200580341 +0000 UTC m=+80.067049443" watchObservedRunningTime="2025-08-19 00:14:21.912387251 +0000 UTC m=+82.778856041" Aug 19 00:14:23.963618 containerd[1995]: time="2025-08-19T00:14:23.963555493Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87a2380b78ee93b4acf927b62d6341657e99933b1231b0ede787b4aa613f2d91\" id:\"82f9a993a6f846d3a8d8234751f13167d10e9fd0d10479ce758c3c8fd8a50935\" pid:5991 exited_at:{seconds:1755562463 nanos:962818465}" Aug 19 00:14:25.877682 systemd[1]: Started sshd@14-172.31.18.236:22-147.75.109.163:55822.service - OpenSSH per-connection server daemon (147.75.109.163:55822). Aug 19 00:14:26.073247 sshd[6004]: Accepted publickey for core from 147.75.109.163 port 55822 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:26.076478 sshd-session[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:26.089388 systemd-logind[1978]: New session 15 of user core. Aug 19 00:14:26.092424 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 19 00:14:26.363335 sshd[6007]: Connection closed by 147.75.109.163 port 55822 Aug 19 00:14:26.365528 sshd-session[6004]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:26.377413 systemd[1]: sshd@14-172.31.18.236:22-147.75.109.163:55822.service: Deactivated successfully. Aug 19 00:14:26.382615 systemd[1]: session-15.scope: Deactivated successfully. Aug 19 00:14:26.385060 systemd-logind[1978]: Session 15 logged out. Waiting for processes to exit. Aug 19 00:14:26.391115 systemd-logind[1978]: Removed session 15. Aug 19 00:14:31.281269 kubelet[3316]: I0819 00:14:31.281203 3316 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:14:31.404596 systemd[1]: Started sshd@15-172.31.18.236:22-147.75.109.163:56708.service - OpenSSH per-connection server daemon (147.75.109.163:56708). Aug 19 00:14:31.648092 sshd[6021]: Accepted publickey for core from 147.75.109.163 port 56708 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:31.652996 sshd-session[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:31.668191 systemd-logind[1978]: New session 16 of user core. Aug 19 00:14:31.679446 systemd[1]: Started session-16.scope - Session 16 of User core. 
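The kubelet pod_startup_latency_tracker entries above report several derived figures; the numbers logged for calico-system/csi-node-driver-wwgr9 are consistent with podStartE2EDuration being watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration being that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Below is a minimal sketch, an editorial reconstruction from the timestamps in the log rather than kubelet code, that reproduces those values:

```python
# Reproduce the pod-startup arithmetic reported by the kubelet log entry above.
# Assumption (not kubelet source): E2E = watchObservedRunningTime - podCreationTimestamp,
# SLO = E2E - (lastFinishedPulling - firstStartedPulling).
from decimal import Decimal

def secs_of_day(hhmmss: str) -> Decimal:
    """Convert 'HH:MM:SS[.fffffffff]' (all timestamps are 2025-08-19 UTC) to seconds."""
    h, m, s = hhmmss.split(":")
    return Decimal(h) * 3600 + Decimal(m) * 60 + Decimal(s)

# Timestamps copied from the csi-node-driver-wwgr9 entry in the log.
created       = secs_of_day("00:13:30")
first_pull    = secs_of_day("00:13:57.132640423")
last_pull     = secs_of_day("00:14:18.470755757")
watch_running = secs_of_day("00:14:21.912387251")

e2e = watch_running - created            # 51.912387251 s -> podStartE2EDuration
slo = e2e - (last_pull - first_pull)     # 30.574271917 s -> podStartSLOduration
print(e2e, slo)
```

The same relation also reproduces the whisker-ff848f574-mbbcm figures logged earlier (26.198921579 s and 6.726512302 s).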
Aug 19 00:14:31.947843 kubelet[3316]: I0819 00:14:31.947700 3316 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 00:14:32.003518 sshd[6026]: Connection closed by 147.75.109.163 port 56708 Aug 19 00:14:32.005001 sshd-session[6021]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:32.022364 systemd[1]: sshd@15-172.31.18.236:22-147.75.109.163:56708.service: Deactivated successfully. Aug 19 00:14:32.031070 systemd[1]: session-16.scope: Deactivated successfully. Aug 19 00:14:32.036262 systemd-logind[1978]: Session 16 logged out. Waiting for processes to exit. Aug 19 00:14:32.061512 systemd[1]: Started sshd@16-172.31.18.236:22-147.75.109.163:56720.service - OpenSSH per-connection server daemon (147.75.109.163:56720). Aug 19 00:14:32.064389 systemd-logind[1978]: Removed session 16. Aug 19 00:14:32.302299 sshd[6038]: Accepted publickey for core from 147.75.109.163 port 56720 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:32.303789 sshd-session[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:32.317592 systemd-logind[1978]: New session 17 of user core. Aug 19 00:14:32.326792 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 19 00:14:33.043983 sshd[6043]: Connection closed by 147.75.109.163 port 56720 Aug 19 00:14:33.045628 sshd-session[6038]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:33.054628 systemd[1]: sshd@16-172.31.18.236:22-147.75.109.163:56720.service: Deactivated successfully. Aug 19 00:14:33.062087 systemd[1]: session-17.scope: Deactivated successfully. Aug 19 00:14:33.065319 systemd-logind[1978]: Session 17 logged out. Waiting for processes to exit. Aug 19 00:14:33.090818 systemd[1]: Started sshd@17-172.31.18.236:22-147.75.109.163:56732.service - OpenSSH per-connection server daemon (147.75.109.163:56732). Aug 19 00:14:33.094269 systemd-logind[1978]: Removed session 17. Aug 19 00:14:33.337430 sshd[6053]: Accepted publickey for core from 147.75.109.163 port 56732 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:33.341872 sshd-session[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:33.355074 systemd-logind[1978]: New session 18 of user core. Aug 19 00:14:33.367467 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 19 00:14:34.922842 sshd[6056]: Connection closed by 147.75.109.163 port 56732 Aug 19 00:14:34.924072 sshd-session[6053]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:34.938731 systemd[1]: sshd@17-172.31.18.236:22-147.75.109.163:56732.service: Deactivated successfully. Aug 19 00:14:34.948340 systemd[1]: session-18.scope: Deactivated successfully. Aug 19 00:14:34.952754 systemd-logind[1978]: Session 18 logged out. Waiting for processes to exit. Aug 19 00:14:34.979542 systemd[1]: Started sshd@18-172.31.18.236:22-147.75.109.163:56742.service - OpenSSH per-connection server daemon (147.75.109.163:56742). Aug 19 00:14:34.984580 systemd-logind[1978]: Removed session 18. Aug 19 00:14:35.207964 sshd[6077]: Accepted publickey for core from 147.75.109.163 port 56742 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:35.212925 sshd-session[6077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:35.225555 systemd-logind[1978]: New session 19 of user core. Aug 19 00:14:35.233463 systemd[1]: Started session-19.scope - Session 19 of User core. 
Aug 19 00:14:35.867826 sshd[6082]: Connection closed by 147.75.109.163 port 56742 Aug 19 00:14:35.868985 sshd-session[6077]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:35.883272 systemd[1]: sshd@18-172.31.18.236:22-147.75.109.163:56742.service: Deactivated successfully. Aug 19 00:14:35.891916 systemd[1]: session-19.scope: Deactivated successfully. Aug 19 00:14:35.895008 systemd-logind[1978]: Session 19 logged out. Waiting for processes to exit. Aug 19 00:14:35.919653 systemd[1]: Started sshd@19-172.31.18.236:22-147.75.109.163:56750.service - OpenSSH per-connection server daemon (147.75.109.163:56750). Aug 19 00:14:35.922784 systemd-logind[1978]: Removed session 19. Aug 19 00:14:36.119239 sshd[6092]: Accepted publickey for core from 147.75.109.163 port 56750 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:36.121523 sshd-session[6092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:36.130965 systemd-logind[1978]: New session 20 of user core. Aug 19 00:14:36.136395 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 19 00:14:36.417780 sshd[6095]: Connection closed by 147.75.109.163 port 56750 Aug 19 00:14:36.419429 sshd-session[6092]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:36.426959 systemd-logind[1978]: Session 20 logged out. Waiting for processes to exit. Aug 19 00:14:36.427519 systemd[1]: sshd@19-172.31.18.236:22-147.75.109.163:56750.service: Deactivated successfully. Aug 19 00:14:36.432672 systemd[1]: session-20.scope: Deactivated successfully. Aug 19 00:14:36.439204 systemd-logind[1978]: Removed session 20. Aug 19 00:14:38.133993 containerd[1995]: time="2025-08-19T00:14:38.133930139Z" level=info msg="TaskExit event in podsandbox handler container_id:\"655f14e0f0acc14b8041e11368dcf680c881dfb26adc9cfccaee6900b3f85816\" id:\"f512dd2cac824a7a1ccb29253404e9c33023bf906e0a6849ca074d44667f6c6a\" pid:6120 exited_at:{seconds:1755562478 nanos:133531223}" Aug 19 00:14:41.459524 systemd[1]: Started sshd@20-172.31.18.236:22-147.75.109.163:50790.service - OpenSSH per-connection server daemon (147.75.109.163:50790). Aug 19 00:14:41.653767 sshd[6132]: Accepted publickey for core from 147.75.109.163 port 50790 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:41.656235 sshd-session[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:41.665533 systemd-logind[1978]: New session 21 of user core. Aug 19 00:14:41.671397 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 19 00:14:41.921552 sshd[6135]: Connection closed by 147.75.109.163 port 50790 Aug 19 00:14:41.922406 sshd-session[6132]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:41.930103 systemd-logind[1978]: Session 21 logged out. Waiting for processes to exit. Aug 19 00:14:41.932098 systemd[1]: sshd@20-172.31.18.236:22-147.75.109.163:50790.service: Deactivated successfully. Aug 19 00:14:41.939210 systemd[1]: session-21.scope: Deactivated successfully. Aug 19 00:14:41.943626 systemd-logind[1978]: Removed session 21. Aug 19 00:14:46.962741 systemd[1]: Started sshd@21-172.31.18.236:22-147.75.109.163:50798.service - OpenSSH per-connection server daemon (147.75.109.163:50798). 
Aug 19 00:14:47.154330 sshd[6149]: Accepted publickey for core from 147.75.109.163 port 50798 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:47.156881 sshd-session[6149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:47.165060 systemd-logind[1978]: New session 22 of user core. Aug 19 00:14:47.172399 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 19 00:14:47.416357 sshd[6152]: Connection closed by 147.75.109.163 port 50798 Aug 19 00:14:47.417286 sshd-session[6149]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:47.425446 systemd[1]: sshd@21-172.31.18.236:22-147.75.109.163:50798.service: Deactivated successfully. Aug 19 00:14:47.429214 systemd[1]: session-22.scope: Deactivated successfully. Aug 19 00:14:47.431989 systemd-logind[1978]: Session 22 logged out. Waiting for processes to exit. Aug 19 00:14:47.435485 systemd-logind[1978]: Removed session 22. Aug 19 00:14:48.302075 containerd[1995]: time="2025-08-19T00:14:48.301984618Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87a2380b78ee93b4acf927b62d6341657e99933b1231b0ede787b4aa613f2d91\" id:\"d2d24e90558ae0dc69ccd4f346b802b315d17c4fa325a3a9b15eb50f93987f12\" pid:6175 exited_at:{seconds:1755562488 nanos:301595746}" Aug 19 00:14:52.131800 containerd[1995]: time="2025-08-19T00:14:52.131736709Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f26552eafe1d9c8b58774e3366de8d0aa77041e41c192ccd327d56e19ffce3e7\" id:\"fa38fa84c1761e54f2e0b04959eda7f73d46ada7ba9861fe719b533b43e8ef5b\" pid:6200 exited_at:{seconds:1755562492 nanos:130670617}" Aug 19 00:14:52.464236 systemd[1]: Started sshd@22-172.31.18.236:22-147.75.109.163:55504.service - OpenSSH per-connection server daemon (147.75.109.163:55504). Aug 19 00:14:52.687306 sshd[6213]: Accepted publickey for core from 147.75.109.163 port 55504 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:52.690919 sshd-session[6213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:52.702231 systemd-logind[1978]: New session 23 of user core. Aug 19 00:14:52.710742 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 19 00:14:53.007230 sshd[6216]: Connection closed by 147.75.109.163 port 55504 Aug 19 00:14:53.008544 sshd-session[6213]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:53.018390 systemd-logind[1978]: Session 23 logged out. Waiting for processes to exit. Aug 19 00:14:53.020638 systemd[1]: sshd@22-172.31.18.236:22-147.75.109.163:55504.service: Deactivated successfully. Aug 19 00:14:53.029421 systemd[1]: session-23.scope: Deactivated successfully. Aug 19 00:14:53.037094 systemd-logind[1978]: Removed session 23. Aug 19 00:14:58.051587 systemd[1]: Started sshd@23-172.31.18.236:22-147.75.109.163:54876.service - OpenSSH per-connection server daemon (147.75.109.163:54876). Aug 19 00:14:58.290255 sshd[6229]: Accepted publickey for core from 147.75.109.163 port 54876 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:14:58.294600 sshd-session[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:58.314182 systemd-logind[1978]: New session 24 of user core. Aug 19 00:14:58.318299 systemd[1]: Started session-24.scope - Session 24 of User core. 
Aug 19 00:14:58.352874 containerd[1995]: time="2025-08-19T00:14:58.352772624Z" level=info msg="TaskExit event in podsandbox handler container_id:\"655f14e0f0acc14b8041e11368dcf680c881dfb26adc9cfccaee6900b3f85816\" id:\"37a6c6def54896a38677ddee41eb58722c99e86492e320dec8c43d635d42a676\" pid:6244 exited_at:{seconds:1755562498 nanos:350540096}" Aug 19 00:14:58.606387 sshd[6250]: Connection closed by 147.75.109.163 port 54876 Aug 19 00:14:58.607049 sshd-session[6229]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:58.616323 systemd[1]: sshd@23-172.31.18.236:22-147.75.109.163:54876.service: Deactivated successfully. Aug 19 00:14:58.622056 systemd[1]: session-24.scope: Deactivated successfully. Aug 19 00:14:58.628252 systemd-logind[1978]: Session 24 logged out. Waiting for processes to exit. Aug 19 00:14:58.631230 systemd-logind[1978]: Removed session 24. Aug 19 00:15:03.647572 systemd[1]: Started sshd@24-172.31.18.236:22-147.75.109.163:54878.service - OpenSSH per-connection server daemon (147.75.109.163:54878). Aug 19 00:15:03.889851 sshd[6268]: Accepted publickey for core from 147.75.109.163 port 54878 ssh2: RSA SHA256:sBLGzpnE33aQv8NNCAX2huSttHc5c+O1kSvjyxpGlz4 Aug 19 00:15:03.894408 sshd-session[6268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:15:03.912226 systemd-logind[1978]: New session 25 of user core. Aug 19 00:15:03.918030 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 19 00:15:04.247184 sshd[6271]: Connection closed by 147.75.109.163 port 54878 Aug 19 00:15:04.248898 sshd-session[6268]: pam_unix(sshd:session): session closed for user core Aug 19 00:15:04.258232 systemd[1]: sshd@24-172.31.18.236:22-147.75.109.163:54878.service: Deactivated successfully. Aug 19 00:15:04.266259 systemd[1]: session-25.scope: Deactivated successfully. Aug 19 00:15:04.277008 systemd-logind[1978]: Session 25 logged out. Waiting for processes to exit. Aug 19 00:15:04.279016 systemd-logind[1978]: Removed session 25. Aug 19 00:15:08.208251 containerd[1995]: time="2025-08-19T00:15:08.207116212Z" level=info msg="TaskExit event in podsandbox handler container_id:\"655f14e0f0acc14b8041e11368dcf680c881dfb26adc9cfccaee6900b3f85816\" id:\"500ab5167dce6f29f5dc3d41c89d0874be69c7dc451c1a6a024549c9e99e2561\" pid:6296 exited_at:{seconds:1755562508 nanos:206678248}" Aug 19 00:15:18.302640 containerd[1995]: time="2025-08-19T00:15:18.302180295Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87a2380b78ee93b4acf927b62d6341657e99933b1231b0ede787b4aa613f2d91\" id:\"6ff57eae36a492c5b63c9ffd66238afed3012dcfa6ce5179ad2be84328816240\" pid:6323 exited_at:{seconds:1755562518 nanos:301738863}" Aug 19 00:15:18.458868 systemd[1]: cri-containerd-5435f9a185dd562f4c2184ad16223b2b629832f6ec66d26519a3274d7a9d813c.scope: Deactivated successfully. Aug 19 00:15:18.459491 systemd[1]: cri-containerd-5435f9a185dd562f4c2184ad16223b2b629832f6ec66d26519a3274d7a9d813c.scope: Consumed 4.107s CPU time, 60.8M memory peak, 192K read from disk. 
Aug 19 00:15:18.474326 containerd[1995]: time="2025-08-19T00:15:18.474244743Z" level=info msg="received exit event container_id:\"5435f9a185dd562f4c2184ad16223b2b629832f6ec66d26519a3274d7a9d813c\" id:\"5435f9a185dd562f4c2184ad16223b2b629832f6ec66d26519a3274d7a9d813c\" pid:3135 exit_status:1 exited_at:{seconds:1755562518 nanos:473333547}" Aug 19 00:15:18.476657 containerd[1995]: time="2025-08-19T00:15:18.476077683Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5435f9a185dd562f4c2184ad16223b2b629832f6ec66d26519a3274d7a9d813c\" id:\"5435f9a185dd562f4c2184ad16223b2b629832f6ec66d26519a3274d7a9d813c\" pid:3135 exit_status:1 exited_at:{seconds:1755562518 nanos:473333547}" Aug 19 00:15:18.541307 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5435f9a185dd562f4c2184ad16223b2b629832f6ec66d26519a3274d7a9d813c-rootfs.mount: Deactivated successfully. Aug 19 00:15:19.415758 kubelet[3316]: I0819 00:15:19.414665 3316 scope.go:117] "RemoveContainer" containerID="5435f9a185dd562f4c2184ad16223b2b629832f6ec66d26519a3274d7a9d813c" Aug 19 00:15:19.419729 containerd[1995]: time="2025-08-19T00:15:19.419683000Z" level=info msg="CreateContainer within sandbox \"44a67042b2cbf4e6748d9191b8df3275b9b6da4b4bca90e587948689bcb2efee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Aug 19 00:15:19.439459 containerd[1995]: time="2025-08-19T00:15:19.439407052Z" level=info msg="Container c16cb3c375a6c7e37cd319d0900511085342a275d0a76a9e7f55247b1171dc70: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:15:19.460606 containerd[1995]: time="2025-08-19T00:15:19.460459840Z" level=info msg="CreateContainer within sandbox \"44a67042b2cbf4e6748d9191b8df3275b9b6da4b4bca90e587948689bcb2efee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c16cb3c375a6c7e37cd319d0900511085342a275d0a76a9e7f55247b1171dc70\"" Aug 19 00:15:19.461373 containerd[1995]: time="2025-08-19T00:15:19.461290024Z" level=info msg="StartContainer for \"c16cb3c375a6c7e37cd319d0900511085342a275d0a76a9e7f55247b1171dc70\"" Aug 19 00:15:19.463886 containerd[1995]: time="2025-08-19T00:15:19.463824016Z" level=info msg="connecting to shim c16cb3c375a6c7e37cd319d0900511085342a275d0a76a9e7f55247b1171dc70" address="unix:///run/containerd/s/4f1343c0e9040ce84c1d77dc745f0362199fef1ff4a9070a98b94b0317231be0" protocol=ttrpc version=3 Aug 19 00:15:19.511454 systemd[1]: Started cri-containerd-c16cb3c375a6c7e37cd319d0900511085342a275d0a76a9e7f55247b1171dc70.scope - libcontainer container c16cb3c375a6c7e37cd319d0900511085342a275d0a76a9e7f55247b1171dc70. Aug 19 00:15:19.610870 containerd[1995]: time="2025-08-19T00:15:19.610814093Z" level=info msg="StartContainer for \"c16cb3c375a6c7e37cd319d0900511085342a275d0a76a9e7f55247b1171dc70\" returns successfully" Aug 19 00:15:19.750931 systemd[1]: cri-containerd-0443c470bbfdcf4d45dd89bb440c437fc96f9c50eb7a88d80412116633e34ac2.scope: Deactivated successfully. Aug 19 00:15:19.752700 systemd[1]: cri-containerd-0443c470bbfdcf4d45dd89bb440c437fc96f9c50eb7a88d80412116633e34ac2.scope: Consumed 25.707s CPU time, 108.7M memory peak, 788K read from disk. 
Aug 19 00:15:19.761075 containerd[1995]: time="2025-08-19T00:15:19.760971090Z" level=info msg="received exit event container_id:\"0443c470bbfdcf4d45dd89bb440c437fc96f9c50eb7a88d80412116633e34ac2\" id:\"0443c470bbfdcf4d45dd89bb440c437fc96f9c50eb7a88d80412116633e34ac2\" pid:3739 exit_status:1 exited_at:{seconds:1755562519 nanos:760113018}" Aug 19 00:15:19.761720 containerd[1995]: time="2025-08-19T00:15:19.761328450Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0443c470bbfdcf4d45dd89bb440c437fc96f9c50eb7a88d80412116633e34ac2\" id:\"0443c470bbfdcf4d45dd89bb440c437fc96f9c50eb7a88d80412116633e34ac2\" pid:3739 exit_status:1 exited_at:{seconds:1755562519 nanos:760113018}" Aug 19 00:15:19.811887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0443c470bbfdcf4d45dd89bb440c437fc96f9c50eb7a88d80412116633e34ac2-rootfs.mount: Deactivated successfully. Aug 19 00:15:20.428624 kubelet[3316]: I0819 00:15:20.428491 3316 scope.go:117] "RemoveContainer" containerID="0443c470bbfdcf4d45dd89bb440c437fc96f9c50eb7a88d80412116633e34ac2" Aug 19 00:15:20.432450 containerd[1995]: time="2025-08-19T00:15:20.432370937Z" level=info msg="CreateContainer within sandbox \"8a4db5525e9c1edfed08c07261dee603db9c37999226ab17bc7342049816fa95\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Aug 19 00:15:20.456159 containerd[1995]: time="2025-08-19T00:15:20.453871061Z" level=info msg="Container 336490e9d5401bee9f0be35858126dee00271edf5bcc51ff6286be94296b7885: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:15:20.482682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2983967199.mount: Deactivated successfully. Aug 19 00:15:20.490014 containerd[1995]: time="2025-08-19T00:15:20.489938057Z" level=info msg="CreateContainer within sandbox \"8a4db5525e9c1edfed08c07261dee603db9c37999226ab17bc7342049816fa95\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"336490e9d5401bee9f0be35858126dee00271edf5bcc51ff6286be94296b7885\"" Aug 19 00:15:20.492544 containerd[1995]: time="2025-08-19T00:15:20.492489173Z" level=info msg="StartContainer for \"336490e9d5401bee9f0be35858126dee00271edf5bcc51ff6286be94296b7885\"" Aug 19 00:15:20.514761 containerd[1995]: time="2025-08-19T00:15:20.514583382Z" level=info msg="connecting to shim 336490e9d5401bee9f0be35858126dee00271edf5bcc51ff6286be94296b7885" address="unix:///run/containerd/s/b1493f5007efce826089642b618f1bea557183814686d2b0adfeb1f8aab8e70f" protocol=ttrpc version=3 Aug 19 00:15:20.584450 systemd[1]: Started cri-containerd-336490e9d5401bee9f0be35858126dee00271edf5bcc51ff6286be94296b7885.scope - libcontainer container 336490e9d5401bee9f0be35858126dee00271edf5bcc51ff6286be94296b7885. 
Aug 19 00:15:20.661624 containerd[1995]: time="2025-08-19T00:15:20.661542666Z" level=info msg="StartContainer for \"336490e9d5401bee9f0be35858126dee00271edf5bcc51ff6286be94296b7885\" returns successfully" Aug 19 00:15:21.893218 containerd[1995]: time="2025-08-19T00:15:21.893080448Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f26552eafe1d9c8b58774e3366de8d0aa77041e41c192ccd327d56e19ffce3e7\" id:\"6f2bd043e57425e7881fea24b3a7b96f1aeca53a6ff877a3d497a599b76ecda1\" pid:6435 exited_at:{seconds:1755562521 nanos:892688876}" Aug 19 00:15:21.922649 kubelet[3316]: E0819 00:15:21.922549 3316 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-236?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Aug 19 00:15:23.962494 containerd[1995]: time="2025-08-19T00:15:23.962422259Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87a2380b78ee93b4acf927b62d6341657e99933b1231b0ede787b4aa613f2d91\" id:\"5c75edc0cc4854929c7df3276ffc219bf2ed9d116da7aa97344203fa589f45c5\" pid:6460 exited_at:{seconds:1755562523 nanos:961770263}" Aug 19 00:15:24.850326 systemd[1]: cri-containerd-579244556ed57e969ce985bbba1c07267d307f594d3d4af0082e7eff22d553d2.scope: Deactivated successfully. Aug 19 00:15:24.850875 systemd[1]: cri-containerd-579244556ed57e969ce985bbba1c07267d307f594d3d4af0082e7eff22d553d2.scope: Consumed 4.917s CPU time, 19.8M memory peak, 64K read from disk. Aug 19 00:15:24.857434 containerd[1995]: time="2025-08-19T00:15:24.857343179Z" level=info msg="received exit event container_id:\"579244556ed57e969ce985bbba1c07267d307f594d3d4af0082e7eff22d553d2\" id:\"579244556ed57e969ce985bbba1c07267d307f594d3d4af0082e7eff22d553d2\" pid:3167 exit_status:1 exited_at:{seconds:1755562524 nanos:856563455}" Aug 19 00:15:24.858239 containerd[1995]: time="2025-08-19T00:15:24.858180911Z" level=info msg="TaskExit event in podsandbox handler container_id:\"579244556ed57e969ce985bbba1c07267d307f594d3d4af0082e7eff22d553d2\" id:\"579244556ed57e969ce985bbba1c07267d307f594d3d4af0082e7eff22d553d2\" pid:3167 exit_status:1 exited_at:{seconds:1755562524 nanos:856563455}" Aug 19 00:15:24.903560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-579244556ed57e969ce985bbba1c07267d307f594d3d4af0082e7eff22d553d2-rootfs.mount: Deactivated successfully. Aug 19 00:15:25.453872 kubelet[3316]: I0819 00:15:25.453825 3316 scope.go:117] "RemoveContainer" containerID="579244556ed57e969ce985bbba1c07267d307f594d3d4af0082e7eff22d553d2" Aug 19 00:15:25.458256 containerd[1995]: time="2025-08-19T00:15:25.458115934Z" level=info msg="CreateContainer within sandbox \"58184dd2d1da799efdff78e65e8e04d54b3ce9b43c6aad9d27103e86a7ac13cb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Aug 19 00:15:25.477418 containerd[1995]: time="2025-08-19T00:15:25.477358210Z" level=info msg="Container 83cd582a33671ce4d24a6cc88f714b2ffc558d32ddf852ee0ecf700c4d265c6e: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:15:25.491278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2712726812.mount: Deactivated successfully. 
Aug 19 00:15:25.498927 containerd[1995]: time="2025-08-19T00:15:25.498842998Z" level=info msg="CreateContainer within sandbox \"58184dd2d1da799efdff78e65e8e04d54b3ce9b43c6aad9d27103e86a7ac13cb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"83cd582a33671ce4d24a6cc88f714b2ffc558d32ddf852ee0ecf700c4d265c6e\"" Aug 19 00:15:25.499791 containerd[1995]: time="2025-08-19T00:15:25.499720546Z" level=info msg="StartContainer for \"83cd582a33671ce4d24a6cc88f714b2ffc558d32ddf852ee0ecf700c4d265c6e\"" Aug 19 00:15:25.502507 containerd[1995]: time="2025-08-19T00:15:25.502444654Z" level=info msg="connecting to shim 83cd582a33671ce4d24a6cc88f714b2ffc558d32ddf852ee0ecf700c4d265c6e" address="unix:///run/containerd/s/f3cf8201308767a7237f2a6b86f57d1c5ee7bf672e99b2fe9e230ebc8c9c661e" protocol=ttrpc version=3 Aug 19 00:15:25.542441 systemd[1]: Started cri-containerd-83cd582a33671ce4d24a6cc88f714b2ffc558d32ddf852ee0ecf700c4d265c6e.scope - libcontainer container 83cd582a33671ce4d24a6cc88f714b2ffc558d32ddf852ee0ecf700c4d265c6e. Aug 19 00:15:25.623613 containerd[1995]: time="2025-08-19T00:15:25.623494595Z" level=info msg="StartContainer for \"83cd582a33671ce4d24a6cc88f714b2ffc558d32ddf852ee0ecf700c4d265c6e\" returns successfully" Aug 19 00:15:31.924068 kubelet[3316]: E0819 00:15:31.923861 3316 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-236?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Aug 19 00:15:32.484720 systemd[1]: cri-containerd-336490e9d5401bee9f0be35858126dee00271edf5bcc51ff6286be94296b7885.scope: Deactivated successfully. Aug 19 00:15:32.486051 systemd[1]: cri-containerd-336490e9d5401bee9f0be35858126dee00271edf5bcc51ff6286be94296b7885.scope: Consumed 440ms CPU time, 38M memory peak, 1.2M read from disk. Aug 19 00:15:32.487572 containerd[1995]: time="2025-08-19T00:15:32.485853593Z" level=info msg="TaskExit event in podsandbox handler container_id:\"336490e9d5401bee9f0be35858126dee00271edf5bcc51ff6286be94296b7885\" id:\"336490e9d5401bee9f0be35858126dee00271edf5bcc51ff6286be94296b7885\" pid:6402 exit_status:1 exited_at:{seconds:1755562532 nanos:485289473}" Aug 19 00:15:32.487572 containerd[1995]: time="2025-08-19T00:15:32.486850637Z" level=info msg="received exit event container_id:\"336490e9d5401bee9f0be35858126dee00271edf5bcc51ff6286be94296b7885\" id:\"336490e9d5401bee9f0be35858126dee00271edf5bcc51ff6286be94296b7885\" pid:6402 exit_status:1 exited_at:{seconds:1755562532 nanos:485289473}" Aug 19 00:15:32.529326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-336490e9d5401bee9f0be35858126dee00271edf5bcc51ff6286be94296b7885-rootfs.mount: Deactivated successfully. 
Aug 19 00:15:33.488287 kubelet[3316]: I0819 00:15:33.488051 3316 scope.go:117] "RemoveContainer" containerID="0443c470bbfdcf4d45dd89bb440c437fc96f9c50eb7a88d80412116633e34ac2" Aug 19 00:15:33.488997 kubelet[3316]: I0819 00:15:33.488527 3316 scope.go:117] "RemoveContainer" containerID="336490e9d5401bee9f0be35858126dee00271edf5bcc51ff6286be94296b7885" Aug 19 00:15:33.489680 kubelet[3316]: E0819 00:15:33.489611 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-ktx2k_tigera-operator(38e8c82b-53e8-451a-9699-a12325927c91)\"" pod="tigera-operator/tigera-operator-747864d56d-ktx2k" podUID="38e8c82b-53e8-451a-9699-a12325927c91" Aug 19 00:15:33.492220 containerd[1995]: time="2025-08-19T00:15:33.492173994Z" level=info msg="RemoveContainer for \"0443c470bbfdcf4d45dd89bb440c437fc96f9c50eb7a88d80412116633e34ac2\"" Aug 19 00:15:33.502672 containerd[1995]: time="2025-08-19T00:15:33.502562706Z" level=info msg="RemoveContainer for \"0443c470bbfdcf4d45dd89bb440c437fc96f9c50eb7a88d80412116633e34ac2\" returns successfully" Aug 19 00:15:38.132994 containerd[1995]: time="2025-08-19T00:15:38.132931149Z" level=info msg="TaskExit event in podsandbox handler container_id:\"655f14e0f0acc14b8041e11368dcf680c881dfb26adc9cfccaee6900b3f85816\" id:\"a26148d174fae6871b3725cfdff64b255e484955430a15452cff1e2be7b09aef\" pid:6564 exit_status:1 exited_at:{seconds:1755562538 nanos:132445557}"