Jan 17 12:00:30.230318 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 17 12:00:30.230364 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 17 10:42:25 -00 2025
Jan 17 12:00:30.230389 kernel: KASLR disabled due to lack of seed
Jan 17 12:00:30.230405 kernel: efi: EFI v2.7 by EDK II
Jan 17 12:00:30.230421 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Jan 17 12:00:30.230437 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:00:30.230455 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 17 12:00:30.230470 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 17 12:00:30.230486 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 17 12:00:30.230502 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 17 12:00:30.230523 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 17 12:00:30.230539 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 17 12:00:30.230555 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 17 12:00:30.230570 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 17 12:00:30.230589 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 17 12:00:30.230610 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 17 12:00:30.230627 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 17 12:00:30.230644 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 17 12:00:30.230660 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 17 12:00:30.230676 kernel: printk: bootconsole [uart0] enabled
Jan 17 12:00:30.230692 kernel: NUMA: Failed to initialise from firmware
Jan 17 12:00:30.230709 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 17 12:00:30.230725 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 17 12:00:30.230742 kernel: Zone ranges:
Jan 17 12:00:30.230758 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 17 12:00:30.230810 kernel: DMA32 empty
Jan 17 12:00:30.230835 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 17 12:00:30.230852 kernel: Movable zone start for each node
Jan 17 12:00:30.230868 kernel: Early memory node ranges
Jan 17 12:00:30.230885 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 17 12:00:30.230901 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 17 12:00:30.230918 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 17 12:00:30.230935 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 17 12:00:30.230951 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 17 12:00:30.230967 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 17 12:00:30.230983 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 17 12:00:30.230999 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 17 12:00:30.231016 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 17 12:00:30.231037 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 17 12:00:30.231054 kernel: psci: probing for conduit method from ACPI.
Jan 17 12:00:30.231078 kernel: psci: PSCIv1.0 detected in firmware.
Jan 17 12:00:30.231096 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 17 12:00:30.231113 kernel: psci: Trusted OS migration not required
Jan 17 12:00:30.231135 kernel: psci: SMC Calling Convention v1.1
Jan 17 12:00:30.231153 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 17 12:00:30.231170 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 17 12:00:30.231188 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 17 12:00:30.231205 kernel: Detected PIPT I-cache on CPU0
Jan 17 12:00:30.231222 kernel: CPU features: detected: GIC system register CPU interface
Jan 17 12:00:30.231239 kernel: CPU features: detected: Spectre-v2
Jan 17 12:00:30.231256 kernel: CPU features: detected: Spectre-v3a
Jan 17 12:00:30.231273 kernel: CPU features: detected: Spectre-BHB
Jan 17 12:00:30.231291 kernel: CPU features: detected: ARM erratum 1742098
Jan 17 12:00:30.231308 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 17 12:00:30.231330 kernel: alternatives: applying boot alternatives
Jan 17 12:00:30.231350 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3
Jan 17 12:00:30.231369 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:00:30.231386 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 12:00:30.231404 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 12:00:30.231421 kernel: Fallback order for Node 0: 0
Jan 17 12:00:30.231439 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 17 12:00:30.231456 kernel: Policy zone: Normal
Jan 17 12:00:30.231473 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:00:30.231490 kernel: software IO TLB: area num 2.
Jan 17 12:00:30.231508 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 17 12:00:30.231531 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Jan 17 12:00:30.231549 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 12:00:30.231566 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:00:30.231584 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:00:30.231602 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 12:00:30.231620 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:00:30.231638 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:00:30.231655 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:00:30.231673 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 12:00:30.231690 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 17 12:00:30.231707 kernel: GICv3: 96 SPIs implemented
Jan 17 12:00:30.231730 kernel: GICv3: 0 Extended SPIs implemented
Jan 17 12:00:30.231748 kernel: Root IRQ handler: gic_handle_irq
Jan 17 12:00:30.232870 kernel: GICv3: GICv3 features: 16 PPIs
Jan 17 12:00:30.232903 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 17 12:00:30.232921 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 17 12:00:30.232939 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 17 12:00:30.232957 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 17 12:00:30.232975 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 17 12:00:30.232994 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 17 12:00:30.233012 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 17 12:00:30.233031 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:00:30.233048 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 17 12:00:30.233078 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 17 12:00:30.233097 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 17 12:00:30.233116 kernel: Console: colour dummy device 80x25
Jan 17 12:00:30.233135 kernel: printk: console [tty1] enabled
Jan 17 12:00:30.233153 kernel: ACPI: Core revision 20230628
Jan 17 12:00:30.233173 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 17 12:00:30.233191 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:00:30.233210 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:00:30.233228 kernel: landlock: Up and running.
Jan 17 12:00:30.233253 kernel: SELinux: Initializing.
Jan 17 12:00:30.233274 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:00:30.233293 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:00:30.233312 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:00:30.233330 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:00:30.233348 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:00:30.233367 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:00:30.233390 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 17 12:00:30.233409 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 17 12:00:30.233433 kernel: Remapping and enabling EFI services.
Jan 17 12:00:30.233452 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:00:30.233469 kernel: Detected PIPT I-cache on CPU1
Jan 17 12:00:30.233488 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 17 12:00:30.233506 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 17 12:00:30.233523 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 17 12:00:30.233542 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 12:00:30.233562 kernel: SMP: Total of 2 processors activated.
Jan 17 12:00:30.233580 kernel: CPU features: detected: 32-bit EL0 Support
Jan 17 12:00:30.233632 kernel: CPU features: detected: 32-bit EL1 Support
Jan 17 12:00:30.233652 kernel: CPU features: detected: CRC32 instructions
Jan 17 12:00:30.233671 kernel: CPU: All CPU(s) started at EL1
Jan 17 12:00:30.233702 kernel: alternatives: applying system-wide alternatives
Jan 17 12:00:30.233726 kernel: devtmpfs: initialized
Jan 17 12:00:30.233745 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:00:30.233788 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 12:00:30.233813 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:00:30.233833 kernel: SMBIOS 3.0.0 present.
Jan 17 12:00:30.233853 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 17 12:00:30.233879 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:00:30.233900 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 17 12:00:30.233920 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 17 12:00:30.233939 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 17 12:00:30.233958 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:00:30.233977 kernel: audit: type=2000 audit(0.295:1): state=initialized audit_enabled=0 res=1
Jan 17 12:00:30.233995 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:00:30.234019 kernel: cpuidle: using governor menu
Jan 17 12:00:30.234038 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 17 12:00:30.234056 kernel: ASID allocator initialised with 65536 entries
Jan 17 12:00:30.234075 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:00:30.234093 kernel: Serial: AMBA PL011 UART driver
Jan 17 12:00:30.234112 kernel: Modules: 17520 pages in range for non-PLT usage
Jan 17 12:00:30.234130 kernel: Modules: 509040 pages in range for PLT usage
Jan 17 12:00:30.234149 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 12:00:30.234168 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 12:00:30.234191 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 17 12:00:30.234210 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 17 12:00:30.234229 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:00:30.234247 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:00:30.234266 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 17 12:00:30.234285 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 17 12:00:30.234303 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:00:30.234322 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:00:30.234340 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:00:30.234364 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:00:30.234383 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 12:00:30.234401 kernel: ACPI: Interpreter enabled
Jan 17 12:00:30.234420 kernel: ACPI: Using GIC for interrupt routing
Jan 17 12:00:30.234438 kernel: ACPI: MCFG table detected, 1 entries
Jan 17 12:00:30.234457 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 17 12:00:30.238930 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:00:30.239211 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 17 12:00:30.239422 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 17 12:00:30.239620 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 17 12:00:30.239851 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 17 12:00:30.239879 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 17 12:00:30.239900 kernel: acpiphp: Slot [1] registered
Jan 17 12:00:30.239919 kernel: acpiphp: Slot [2] registered
Jan 17 12:00:30.239937 kernel: acpiphp: Slot [3] registered
Jan 17 12:00:30.239956 kernel: acpiphp: Slot [4] registered
Jan 17 12:00:30.239981 kernel: acpiphp: Slot [5] registered
Jan 17 12:00:30.240000 kernel: acpiphp: Slot [6] registered
Jan 17 12:00:30.240018 kernel: acpiphp: Slot [7] registered
Jan 17 12:00:30.240036 kernel: acpiphp: Slot [8] registered
Jan 17 12:00:30.240054 kernel: acpiphp: Slot [9] registered
Jan 17 12:00:30.240072 kernel: acpiphp: Slot [10] registered
Jan 17 12:00:30.240091 kernel: acpiphp: Slot [11] registered
Jan 17 12:00:30.240109 kernel: acpiphp: Slot [12] registered
Jan 17 12:00:30.240127 kernel: acpiphp: Slot [13] registered
Jan 17 12:00:30.240146 kernel: acpiphp: Slot [14] registered
Jan 17 12:00:30.240169 kernel: acpiphp: Slot [15] registered
Jan 17 12:00:30.240188 kernel: acpiphp: Slot [16] registered
Jan 17 12:00:30.240207 kernel: acpiphp: Slot [17] registered
Jan 17 12:00:30.240226 kernel: acpiphp: Slot [18] registered
Jan 17 12:00:30.240244 kernel: acpiphp: Slot [19] registered
Jan 17 12:00:30.240263 kernel: acpiphp: Slot [20] registered
Jan 17 12:00:30.240281 kernel: acpiphp: Slot [21] registered
Jan 17 12:00:30.240300 kernel: acpiphp: Slot [22] registered
Jan 17 12:00:30.240318 kernel: acpiphp: Slot [23] registered
Jan 17 12:00:30.240341 kernel: acpiphp: Slot [24] registered
Jan 17 12:00:30.240360 kernel: acpiphp: Slot [25] registered
Jan 17 12:00:30.240378 kernel: acpiphp: Slot [26] registered
Jan 17 12:00:30.240396 kernel: acpiphp: Slot [27] registered
Jan 17 12:00:30.240414 kernel: acpiphp: Slot [28] registered
Jan 17 12:00:30.240433 kernel: acpiphp: Slot [29] registered
Jan 17 12:00:30.240451 kernel: acpiphp: Slot [30] registered
Jan 17 12:00:30.240470 kernel: acpiphp: Slot [31] registered
Jan 17 12:00:30.240488 kernel: PCI host bridge to bus 0000:00
Jan 17 12:00:30.244980 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 17 12:00:30.245219 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 17 12:00:30.245410 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 17 12:00:30.245620 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 17 12:00:30.248073 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 17 12:00:30.248332 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 17 12:00:30.248542 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 17 12:00:30.250208 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 17 12:00:30.250467 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 17 12:00:30.250673 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 17 12:00:30.250943 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 17 12:00:30.251153 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 17 12:00:30.251359 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 17 12:00:30.251587 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 17 12:00:30.253860 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 17 12:00:30.254119 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 17 12:00:30.254331 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 17 12:00:30.254548 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 17 12:00:30.254757 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 17 12:00:30.256065 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 17 12:00:30.256273 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 17 12:00:30.256452 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 17 12:00:30.256634 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 17 12:00:30.256661 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 17 12:00:30.256680 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 17 12:00:30.256699 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 17 12:00:30.256718 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 17 12:00:30.256737 kernel: iommu: Default domain type: Translated
Jan 17 12:00:30.256755 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 17 12:00:30.257813 kernel: efivars: Registered efivars operations
Jan 17 12:00:30.257833 kernel: vgaarb: loaded
Jan 17 12:00:30.257853 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 17 12:00:30.257872 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:00:30.257890 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:00:30.257910 kernel: pnp: PnP ACPI init
Jan 17 12:00:30.258132 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 17 12:00:30.258161 kernel: pnp: PnP ACPI: found 1 devices
Jan 17 12:00:30.258187 kernel: NET: Registered PF_INET protocol family
Jan 17 12:00:30.258207 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 12:00:30.258226 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 12:00:30.258244 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:00:30.258263 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:00:30.258806 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 12:00:30.258830 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 12:00:30.258849 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:00:30.258868 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:00:30.258894 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:00:30.258913 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:00:30.258931 kernel: kvm [1]: HYP mode not available
Jan 17 12:00:30.258949 kernel: Initialise system trusted keyrings
Jan 17 12:00:30.258968 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 12:00:30.258986 kernel: Key type asymmetric registered
Jan 17 12:00:30.259005 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:00:30.259023 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 17 12:00:30.259042 kernel: io scheduler mq-deadline registered
Jan 17 12:00:30.259065 kernel: io scheduler kyber registered
Jan 17 12:00:30.259084 kernel: io scheduler bfq registered
Jan 17 12:00:30.259322 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 17 12:00:30.259350 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 17 12:00:30.259369 kernel: ACPI: button: Power Button [PWRB]
Jan 17 12:00:30.259388 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 17 12:00:30.259407 kernel: ACPI: button: Sleep Button [SLPB]
Jan 17 12:00:30.259425 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:00:30.259450 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 17 12:00:30.259655 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 17 12:00:30.259682 kernel: printk: console [ttyS0] disabled
Jan 17 12:00:30.259701 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 17 12:00:30.259720 kernel: printk: console [ttyS0] enabled
Jan 17 12:00:30.259738 kernel: printk: bootconsole [uart0] disabled
Jan 17 12:00:30.259756 kernel: thunder_xcv, ver 1.0
Jan 17 12:00:30.260865 kernel: thunder_bgx, ver 1.0
Jan 17 12:00:30.260886 kernel: nicpf, ver 1.0
Jan 17 12:00:30.260914 kernel: nicvf, ver 1.0
Jan 17 12:00:30.261153 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 17 12:00:30.261348 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-17T12:00:29 UTC (1737115229)
Jan 17 12:00:30.261375 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 12:00:30.261395 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 17 12:00:30.261414 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 17 12:00:30.261433 kernel: watchdog: Hard watchdog permanently disabled
Jan 17 12:00:30.261451 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:00:30.261478 kernel: Segment Routing with IPv6
Jan 17 12:00:30.261497 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:00:30.261515 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:00:30.261534 kernel: Key type dns_resolver registered
Jan 17 12:00:30.261552 kernel: registered taskstats version 1
Jan 17 12:00:30.261571 kernel: Loading compiled-in X.509 certificates
Jan 17 12:00:30.261608 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e5b890cba32c3e1c766d9a9b821ee4d2154ffee7'
Jan 17 12:00:30.261629 kernel: Key type .fscrypt registered
Jan 17 12:00:30.261647 kernel: Key type fscrypt-provisioning registered
Jan 17 12:00:30.261672 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:00:30.261691 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:00:30.261709 kernel: ima: No architecture policies found
Jan 17 12:00:30.261728 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 17 12:00:30.261746 kernel: clk: Disabling unused clocks
Jan 17 12:00:30.261962 kernel: Freeing unused kernel memory: 39360K
Jan 17 12:00:30.261986 kernel: Run /init as init process
Jan 17 12:00:30.262005 kernel: with arguments:
Jan 17 12:00:30.262023 kernel: /init
Jan 17 12:00:30.262041 kernel: with environment:
Jan 17 12:00:30.262068 kernel: HOME=/
Jan 17 12:00:30.262086 kernel: TERM=linux
Jan 17 12:00:30.262104 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:00:30.262127 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:00:30.262151 systemd[1]: Detected virtualization amazon.
Jan 17 12:00:30.262171 systemd[1]: Detected architecture arm64.
Jan 17 12:00:30.262190 systemd[1]: Running in initrd.
Jan 17 12:00:30.262215 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:00:30.262235 systemd[1]: Hostname set to <localhost>.
Jan 17 12:00:30.262256 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:00:30.262275 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:00:30.262295 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:00:30.262316 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:00:30.262337 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:00:30.262358 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:00:30.262383 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:00:30.262405 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:00:30.262428 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:00:30.262449 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:00:30.262470 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:00:30.262490 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:00:30.262510 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:00:30.262535 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:00:30.262555 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:00:30.262575 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:00:30.262595 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:00:30.262615 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:00:30.262635 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:00:30.262655 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:00:30.262676 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:00:30.262696 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:00:30.262721 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:00:30.262742 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:00:30.264807 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:00:30.264855 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:00:30.264879 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:00:30.264899 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:00:30.264920 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:00:30.264941 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:00:30.264972 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:00:30.265043 systemd-journald[251]: Collecting audit messages is disabled.
Jan 17 12:00:30.265090 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:00:30.265112 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:00:30.265138 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:00:30.265161 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:00:30.265182 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:00:30.265204 systemd-journald[251]: Journal started
Jan 17 12:00:30.265248 systemd-journald[251]: Runtime Journal (/run/log/journal/ec23f73a4730ecc6739fb4f071056d6c) is 8.0M, max 75.3M, 67.3M free.
Jan 17 12:00:30.273881 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:00:30.273952 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:00:30.225653 systemd-modules-load[252]: Inserted module 'overlay'
Jan 17 12:00:30.277172 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 17 12:00:30.279450 kernel: Bridge firewalling registered
Jan 17 12:00:30.295294 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:00:30.302029 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:00:30.297704 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:00:30.301686 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:00:30.323436 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:00:30.330996 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:00:30.341022 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:00:30.356748 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:00:30.365028 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:00:30.385060 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:00:30.397510 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:00:30.407070 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:00:30.418165 dracut-cmdline[283]: dracut-dracut-053
Jan 17 12:00:30.428571 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3
Jan 17 12:00:30.493906 systemd-resolved[293]: Positive Trust Anchors:
Jan 17 12:00:30.495798 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:00:30.495866 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:00:30.575809 kernel: SCSI subsystem initialized
Jan 17 12:00:30.581821 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:00:30.594812 kernel: iscsi: registered transport (tcp)
Jan 17 12:00:30.618815 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:00:30.618893 kernel: QLogic iSCSI HBA Driver
Jan 17 12:00:30.718823 kernel: random: crng init done
Jan 17 12:00:30.719253 systemd-resolved[293]: Defaulting to hostname 'linux'.
Jan 17 12:00:30.723614 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:00:30.726220 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:00:30.761945 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:00:30.774140 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:00:30.820288 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:00:30.820419 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:00:30.820450 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:00:30.890851 kernel: raid6: neonx8 gen() 6684 MB/s
Jan 17 12:00:30.907831 kernel: raid6: neonx4 gen() 6446 MB/s
Jan 17 12:00:30.924818 kernel: raid6: neonx2 gen() 5408 MB/s
Jan 17 12:00:30.941824 kernel: raid6: neonx1 gen() 3940 MB/s
Jan 17 12:00:30.958813 kernel: raid6: int64x8 gen() 3798 MB/s
Jan 17 12:00:30.975817 kernel: raid6: int64x4 gen() 3704 MB/s
Jan 17 12:00:30.992827 kernel: raid6: int64x2 gen() 3580 MB/s
Jan 17 12:00:31.010634 kernel: raid6: int64x1 gen() 2752 MB/s
Jan 17 12:00:31.010708 kernel: raid6: using algorithm neonx8 gen() 6684 MB/s
Jan 17 12:00:31.028548 kernel: raid6: .... xor() 4902 MB/s, rmw enabled
Jan 17 12:00:31.028646 kernel: raid6: using neon recovery algorithm
Jan 17 12:00:31.037125 kernel: xor: measuring software checksum speed
Jan 17 12:00:31.037199 kernel: 8regs : 10972 MB/sec
Jan 17 12:00:31.038217 kernel: 32regs : 11940 MB/sec
Jan 17 12:00:31.039376 kernel: arm64_neon : 9555 MB/sec
Jan 17 12:00:31.039409 kernel: xor: using function: 32regs (11940 MB/sec)
Jan 17 12:00:31.124807 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:00:31.143793 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:00:31.155083 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:00:31.197880 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Jan 17 12:00:31.207486 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:00:31.222092 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:00:31.261508 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Jan 17 12:00:31.327604 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:00:31.339142 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:00:31.471498 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:00:31.486455 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:00:31.537747 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:00:31.544960 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:00:31.548232 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:00:31.553151 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:00:31.579433 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:00:31.629792 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:00:31.664644 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 17 12:00:31.664885 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 17 12:00:31.691028 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 17 12:00:31.691337 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 17 12:00:31.691600 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:67:cf:0e:bd:5b
Jan 17 12:00:31.696452 (udev-worker)[516]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:00:31.717426 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:00:31.720377 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:00:31.724419 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:00:31.731896 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:00:31.732246 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:00:31.751812 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 17 12:00:31.751860 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 17 12:00:31.741862 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:00:31.755274 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:00:31.765781 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 17 12:00:31.776915 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 12:00:31.776993 kernel: GPT:9289727 != 16777215
Jan 17 12:00:31.777037 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 12:00:31.777982 kernel: GPT:9289727 != 16777215
Jan 17 12:00:31.778050 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 12:00:31.779892 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:00:31.792271 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:00:31.803183 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:00:31.857878 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:00:31.903808 kernel: BTRFS: device fsid 8c8354db-e4b6-4022-87e4-d06cc74d2d9f devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (516)
Jan 17 12:00:31.919725 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (530)
Jan 17 12:00:31.984636 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 17 12:00:32.038036 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 17 12:00:32.055112 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 17 12:00:32.071359 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 17 12:00:32.072211 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 17 12:00:32.087067 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:00:32.106841 disk-uuid[659]: Primary Header is updated.
Jan 17 12:00:32.106841 disk-uuid[659]: Secondary Entries is updated.
Jan 17 12:00:32.106841 disk-uuid[659]: Secondary Header is updated.
Jan 17 12:00:32.118815 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:00:32.130831 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:00:33.133872 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:00:33.135995 disk-uuid[660]: The operation has completed successfully.
Jan 17 12:00:33.320020 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:00:33.323862 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:00:33.389043 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:00:33.397659 sh[919]: Success
Jan 17 12:00:33.417931 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 17 12:00:33.546970 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:00:33.557011 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:00:33.567872 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:00:33.603928 kernel: BTRFS info (device dm-0): first mount of filesystem 8c8354db-e4b6-4022-87e4-d06cc74d2d9f
Jan 17 12:00:33.604001 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:00:33.605731 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:00:33.605816 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:00:33.607951 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:00:33.736819 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 12:00:33.752100 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:00:33.754339 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 12:00:33.769207 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:00:33.777106 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:00:33.805585 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:00:33.805689 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:00:33.807412 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 12:00:33.814840 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 12:00:33.833119 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:00:33.838380 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:00:33.849714 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:00:33.863150 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:00:33.972178 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:00:33.987161 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:00:34.040379 systemd-networkd[1124]: lo: Link UP
Jan 17 12:00:34.040394 systemd-networkd[1124]: lo: Gained carrier
Jan 17 12:00:34.043361 systemd-networkd[1124]: Enumeration completed
Jan 17 12:00:34.043839 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:00:34.044261 systemd-networkd[1124]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:00:34.044268 systemd-networkd[1124]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:00:34.049913 systemd-networkd[1124]: eth0: Link UP
Jan 17 12:00:34.049921 systemd-networkd[1124]: eth0: Gained carrier
Jan 17 12:00:34.049938 systemd-networkd[1124]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:00:34.053246 systemd[1]: Reached target network.target - Network.
Jan 17 12:00:34.080736 systemd-networkd[1124]: eth0: DHCPv4 address 172.31.23.128/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 17 12:00:34.257152 ignition[1042]: Ignition 2.19.0
Jan 17 12:00:34.258853 ignition[1042]: Stage: fetch-offline
Jan 17 12:00:34.260859 ignition[1042]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:00:34.262701 ignition[1042]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:00:34.265158 ignition[1042]: Ignition finished successfully
Jan 17 12:00:34.269094 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:00:34.285244 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 12:00:34.310841 ignition[1132]: Ignition 2.19.0
Jan 17 12:00:34.310881 ignition[1132]: Stage: fetch
Jan 17 12:00:34.311665 ignition[1132]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:00:34.312809 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:00:34.313033 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:00:34.328582 ignition[1132]: PUT result: OK
Jan 17 12:00:34.331672 ignition[1132]: parsed url from cmdline: ""
Jan 17 12:00:34.331706 ignition[1132]: no config URL provided
Jan 17 12:00:34.331725 ignition[1132]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:00:34.331756 ignition[1132]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:00:34.331825 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:00:34.335687 ignition[1132]: PUT result: OK
Jan 17 12:00:34.335885 ignition[1132]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 17 12:00:34.340521 ignition[1132]: GET result: OK
Jan 17 12:00:34.340713 ignition[1132]: parsing config with SHA512: d4db4733eadb199a73ae2936497191159788e0eb70591230ec4a05d2e9bec7b15406aa55043e5df0e1be105d509f485c43ded215dd5442f426afd48de6df1554
Jan 17 12:00:34.354005 unknown[1132]: fetched base config from "system"
Jan 17 12:00:34.354043 unknown[1132]: fetched base config from "system"
Jan 17 12:00:34.354060 unknown[1132]: fetched user config from "aws"
Jan 17 12:00:34.356497 ignition[1132]: fetch: fetch complete
Jan 17 12:00:34.356511 ignition[1132]: fetch: fetch passed
Jan 17 12:00:34.362843 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 12:00:34.356624 ignition[1132]: Ignition finished successfully
Jan 17 12:00:34.378095 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:00:34.414912 ignition[1138]: Ignition 2.19.0
Jan 17 12:00:34.414944 ignition[1138]: Stage: kargs
Jan 17 12:00:34.416788 ignition[1138]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:00:34.416823 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:00:34.417691 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:00:34.423150 ignition[1138]: PUT result: OK
Jan 17 12:00:34.429937 ignition[1138]: kargs: kargs passed
Jan 17 12:00:34.430293 ignition[1138]: Ignition finished successfully
Jan 17 12:00:34.435741 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:00:34.447158 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:00:34.479544 ignition[1145]: Ignition 2.19.0
Jan 17 12:00:34.479577 ignition[1145]: Stage: disks
Jan 17 12:00:34.480653 ignition[1145]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:00:34.480684 ignition[1145]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:00:34.481286 ignition[1145]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:00:34.484675 ignition[1145]: PUT result: OK
Jan 17 12:00:34.493835 ignition[1145]: disks: disks passed
Jan 17 12:00:34.493974 ignition[1145]: Ignition finished successfully
Jan 17 12:00:34.498038 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:00:34.502956 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:00:34.505938 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:00:34.510425 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:00:34.514936 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:00:34.517219 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:00:34.535240 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:00:34.592051 systemd-fsck[1153]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 12:00:34.597917 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:00:34.614593 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:00:34.716816 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 5d516319-3144-49e6-9760-d0f29faba535 r/w with ordered data mode. Quota mode: none.
Jan 17 12:00:34.717785 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:00:34.721901 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:00:34.736954 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:00:34.743991 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:00:34.752454 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 12:00:34.752543 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:00:34.752596 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:00:34.762633 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:00:34.778327 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 12:00:34.795808 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1172)
Jan 17 12:00:34.802695 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:00:34.802832 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:00:34.804004 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 12:00:34.818149 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 12:00:34.820007 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:00:35.254400 initrd-setup-root[1196]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 12:00:35.264481 initrd-setup-root[1203]: cut: /sysroot/etc/group: No such file or directory
Jan 17 12:00:35.274476 initrd-setup-root[1210]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 12:00:35.275528 systemd-networkd[1124]: eth0: Gained IPv6LL
Jan 17 12:00:35.297202 initrd-setup-root[1217]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 12:00:35.627467 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 12:00:35.636995 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 12:00:35.656949 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 12:00:35.672897 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 12:00:35.674880 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:00:35.709421 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 12:00:35.720829 ignition[1285]: INFO : Ignition 2.19.0
Jan 17 12:00:35.720829 ignition[1285]: INFO : Stage: mount
Jan 17 12:00:35.724059 ignition[1285]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:00:35.724059 ignition[1285]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:00:35.728113 ignition[1285]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:00:35.731319 ignition[1285]: INFO : PUT result: OK
Jan 17 12:00:35.736319 ignition[1285]: INFO : mount: mount passed
Jan 17 12:00:35.736319 ignition[1285]: INFO : Ignition finished successfully
Jan 17 12:00:35.741759 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 12:00:35.751948 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 12:00:35.770942 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:00:35.810815 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1296)
Jan 17 12:00:35.814850 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:00:35.814922 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:00:35.816061 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 12:00:35.821810 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 12:00:35.825530 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:00:35.862406 ignition[1313]: INFO : Ignition 2.19.0
Jan 17 12:00:35.862406 ignition[1313]: INFO : Stage: files
Jan 17 12:00:35.865701 ignition[1313]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:00:35.867661 ignition[1313]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:00:35.869884 ignition[1313]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:00:35.873574 ignition[1313]: INFO : PUT result: OK
Jan 17 12:00:35.878529 ignition[1313]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 12:00:35.883215 ignition[1313]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 12:00:35.883215 ignition[1313]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 12:00:35.924564 ignition[1313]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 12:00:35.927481 ignition[1313]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 12:00:35.930423 unknown[1313]: wrote ssh authorized keys file for user: core
Jan 17 12:00:35.934843 ignition[1313]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 12:00:35.938867 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 17 12:00:35.938867 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 17 12:00:35.938867 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 17 12:00:35.938867 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 17 12:00:36.026617 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 17 12:00:36.360174 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 17 12:00:36.363872 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 12:00:36.367175 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 12:00:36.367175 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:00:36.367175 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:00:36.367175 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:00:36.367175 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:00:36.367175 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:00:36.386864 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:00:36.386864 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:00:36.386864 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:00:36.386864 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 17 12:00:36.386864 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 17 12:00:36.386864 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 17 12:00:36.386864 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jan 17 12:00:36.906108 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 17 12:00:37.289181 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 17 12:00:37.289181 ignition[1313]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 17 12:00:37.296421 ignition[1313]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 17 12:00:37.296421 ignition[1313]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 17 12:00:37.296421 ignition[1313]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 17 12:00:37.296421 ignition[1313]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 17 12:00:37.296421 ignition[1313]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:00:37.296421 ignition[1313]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:00:37.296421 ignition[1313]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 17 12:00:37.296421 ignition[1313]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 12:00:37.296421 ignition[1313]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 12:00:37.296421 ignition[1313]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:00:37.296421 ignition[1313]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:00:37.296421 ignition[1313]: INFO : files: files passed
Jan 17 12:00:37.338257 ignition[1313]: INFO : Ignition finished successfully
Jan 17 12:00:37.329552 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 12:00:37.360342 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 12:00:37.369150 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 12:00:37.376713 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 12:00:37.378646 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:00:37.420335 initrd-setup-root-after-ignition[1341]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:00:37.420335 initrd-setup-root-after-ignition[1341]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:00:37.428312 initrd-setup-root-after-ignition[1345]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:00:37.434698 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:00:37.440264 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:00:37.452055 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:00:37.515613 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:00:37.517581 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:00:37.523296 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:00:37.525383 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:00:37.528473 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:00:37.547094 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:00:37.575739 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:00:37.593474 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:00:37.620424 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:00:37.625251 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:00:37.630830 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:00:37.631963 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:00:37.632243 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:00:37.637936 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:00:37.641652 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:00:37.645841 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:00:37.648250 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:00:37.651870 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:00:37.661136 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:00:37.667325 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:00:37.670661 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:00:37.682388 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:00:37.685371 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:00:37.689309 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:00:37.689632 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:00:37.698108 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:00:37.702211 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:00:37.707356 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 17 12:00:37.709396 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:00:37.712844 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:00:37.714049 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:00:37.724403 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:00:37.725290 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:00:37.732619 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:00:37.732929 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:00:37.748037 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:00:37.753200 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:00:37.753952 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:00:37.774502 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:00:37.778548 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:00:37.781055 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:00:37.789010 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:00:37.802102 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:00:37.821593 ignition[1365]: INFO : Ignition 2.19.0 Jan 17 12:00:37.831920 ignition[1365]: INFO : Stage: umount Jan 17 12:00:37.831920 ignition[1365]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:00:37.831920 ignition[1365]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:00:37.831920 ignition[1365]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:00:37.831920 ignition[1365]: INFO : PUT result: OK Jan 17 12:00:37.853219 ignition[1365]: INFO : umount: umount passed Jan 17 12:00:37.853219 ignition[1365]: INFO : Ignition finished successfully Jan 17 12:00:37.843955 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:00:37.844220 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:00:37.866096 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:00:37.870179 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:00:37.870445 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:00:37.880175 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:00:37.880439 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:00:37.888688 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:00:37.888929 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:00:37.896382 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:00:37.896512 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:00:37.898585 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:00:37.898699 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:00:37.901468 systemd[1]: Stopped target network.target - Network. Jan 17 12:00:37.908300 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:00:37.908445 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:00:37.911491 systemd[1]: Stopped target paths.target - Path Units. 
Jan 17 12:00:37.914742 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:00:37.917007 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:00:37.919428 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:00:37.921830 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:00:37.935502 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:00:37.935620 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:00:37.941683 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:00:37.941835 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:00:37.945117 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:00:37.945235 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:00:37.947653 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:00:37.947818 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:00:37.951316 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:00:37.951435 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:00:37.958372 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:00:37.961552 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:00:37.980897 systemd-networkd[1124]: eth0: DHCPv6 lease lost Jan 17 12:00:37.985937 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:00:37.986201 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:00:37.995528 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:00:37.996157 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:00:38.007336 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:00:38.007527 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:00:38.023932 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:00:38.028902 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:00:38.029058 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:00:38.031936 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:00:38.032057 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:00:38.034840 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:00:38.034968 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:00:38.038505 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:00:38.038625 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:00:38.068534 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:00:38.093620 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:00:38.095204 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:00:38.103997 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:00:38.104510 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:00:38.111722 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 17 12:00:38.112169 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:00:38.118901 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:00:38.119000 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:00:38.121551 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:00:38.121706 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:00:38.131048 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:00:38.131178 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:00:38.137636 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:00:38.137840 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:00:38.151075 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:00:38.156089 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:00:38.156240 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:00:38.160026 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:00:38.160156 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:00:38.195068 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:00:38.197404 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:00:38.201881 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:00:38.209080 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:00:38.275689 systemd[1]: Switching root. Jan 17 12:00:38.316178 systemd-journald[251]: Journal stopped Jan 17 12:00:41.044034 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Jan 17 12:00:41.044202 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:00:41.044254 kernel: SELinux: policy capability open_perms=1 Jan 17 12:00:41.044291 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:00:41.044324 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:00:41.044356 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:00:41.044388 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:00:41.044421 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:00:41.044461 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:00:41.044502 kernel: audit: type=1403 audit(1737115239.112:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:00:41.044546 systemd[1]: Successfully loaded SELinux policy in 57.607ms. Jan 17 12:00:41.044600 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 32.467ms. Jan 17 12:00:41.044639 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:00:41.044674 systemd[1]: Detected virtualization amazon. Jan 17 12:00:41.044707 systemd[1]: Detected architecture arm64. Jan 17 12:00:41.044743 systemd[1]: Detected first boot. Jan 17 12:00:41.049871 systemd[1]: Initializing machine ID from VM UUID. 
Jan 17 12:00:41.049954 zram_generator::config[1425]: No configuration found. Jan 17 12:00:41.049995 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:00:41.050042 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:00:41.050082 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 17 12:00:41.050125 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:00:41.050166 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:00:41.050208 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:00:41.050254 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:00:41.050298 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:00:41.050334 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:00:41.050370 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:00:41.050411 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:00:41.050449 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:00:41.050485 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:00:41.050517 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:00:41.050552 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:00:41.050594 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:00:41.050631 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:00:41.050667 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:00:41.050701 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:00:41.050737 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:00:41.052481 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:00:41.052550 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:00:41.052584 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:00:41.052627 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:00:41.052658 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:00:41.052697 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:00:41.052732 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:00:41.052791 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:00:41.052832 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:00:41.052878 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:00:41.052912 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:00:41.052944 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:00:41.052976 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:00:41.053013 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Jan 17 12:00:41.053045 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:00:41.053078 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:00:41.053113 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:00:41.053148 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:00:41.053183 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:00:41.053215 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:00:41.053248 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:00:41.053286 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:00:41.053320 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:00:41.053351 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:00:41.053386 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:00:41.053416 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:00:41.053457 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:00:41.053489 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:00:41.053539 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 12:00:41.053589 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 17 12:00:41.053623 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:00:41.053656 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:00:41.053687 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:00:41.053721 kernel: loop: module loaded Jan 17 12:00:41.053752 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:00:41.059923 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:00:41.059980 kernel: fuse: init (API version 7.39) Jan 17 12:00:41.060018 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:00:41.060055 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:00:41.060100 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:00:41.060196 systemd-journald[1522]: Collecting audit messages is disabled. Jan 17 12:00:41.060265 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:00:41.060306 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:00:41.060340 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:00:41.060373 systemd-journald[1522]: Journal started Jan 17 12:00:41.060428 systemd-journald[1522]: Runtime Journal (/run/log/journal/ec23f73a4730ecc6739fb4f071056d6c) is 8.0M, max 75.3M, 67.3M free. Jan 17 12:00:41.069107 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:00:41.067503 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 17 12:00:41.096246 kernel: ACPI: bus type drm_connector registered Jan 17 12:00:41.071082 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:00:41.071515 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:00:41.076071 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:00:41.076455 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:00:41.079511 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:00:41.079944 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:00:41.083122 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:00:41.083548 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:00:41.086620 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:00:41.087142 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:00:41.091709 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:00:41.095152 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:00:41.120007 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:00:41.127387 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:00:41.132705 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:00:41.166663 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:00:41.185003 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:00:41.196075 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:00:41.206937 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:00:41.210007 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:00:41.229283 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:00:41.244155 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:00:41.247015 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:00:41.260335 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:00:41.265021 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:00:41.281930 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:00:41.301186 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:00:41.310492 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:00:41.319227 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:00:41.325879 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:00:41.342978 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:00:41.352938 systemd-journald[1522]: Time spent on flushing to /var/log/journal/ec23f73a4730ecc6739fb4f071056d6c is 52.059ms for 895 entries. 
Jan 17 12:00:41.352938 systemd-journald[1522]: System Journal (/var/log/journal/ec23f73a4730ecc6739fb4f071056d6c) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:00:41.416199 systemd-journald[1522]: Received client request to flush runtime journal. Jan 17 12:00:41.423987 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:00:41.458608 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:00:41.471459 systemd-tmpfiles[1577]: ACLs are not supported, ignoring. Jan 17 12:00:41.471506 systemd-tmpfiles[1577]: ACLs are not supported, ignoring. Jan 17 12:00:41.484630 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:00:41.499036 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:00:41.508056 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:00:41.532195 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:00:41.563050 udevadm[1592]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 12:00:41.631108 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:00:41.642293 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:00:41.680026 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Jan 17 12:00:41.680602 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Jan 17 12:00:41.691698 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:00:42.350412 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:00:42.365167 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:00:42.429589 systemd-udevd[1605]: Using default interface naming scheme 'v255'. Jan 17 12:00:42.518226 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:00:42.530162 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:00:42.575133 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:00:42.695047 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 17 12:00:42.738985 (udev-worker)[1611]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:00:42.766562 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:00:42.919607 systemd-networkd[1608]: lo: Link UP Jan 17 12:00:42.920215 systemd-networkd[1608]: lo: Gained carrier Jan 17 12:00:42.924166 systemd-networkd[1608]: Enumeration completed Jan 17 12:00:42.925868 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:00:42.930997 systemd-networkd[1608]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:00:42.931006 systemd-networkd[1608]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:00:42.933562 systemd-networkd[1608]: eth0: Link UP Jan 17 12:00:42.934168 systemd-networkd[1608]: eth0: Gained carrier Jan 17 12:00:42.934335 systemd-networkd[1608]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 17 12:00:42.936207 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:00:42.944894 systemd-networkd[1608]: eth0: DHCPv4 address 172.31.23.128/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 17 12:00:43.012114 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:00:43.029804 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1625) Jan 17 12:00:43.226686 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:00:43.248367 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:00:43.303066 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 17 12:00:43.313126 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:00:43.357855 lvm[1734]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:00:43.400848 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:00:43.404435 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:00:43.416105 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:00:43.428419 lvm[1737]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:00:43.469434 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:00:43.472454 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:00:43.475179 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:00:43.475348 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:00:43.477885 systemd[1]: Reached target machines.target - Containers. Jan 17 12:00:43.482009 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:00:43.491089 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:00:43.503076 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:00:43.506006 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:00:43.509132 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:00:43.522070 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:00:43.534214 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:00:43.543907 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:00:43.582884 kernel: loop0: detected capacity change from 0 to 114432 Jan 17 12:00:43.585681 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:00:43.603265 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:00:43.605955 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jan 17 12:00:43.705048 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:00:43.743831 kernel: loop1: detected capacity change from 0 to 114328 Jan 17 12:00:43.849820 kernel: loop2: detected capacity change from 0 to 194512 Jan 17 12:00:43.908819 kernel: loop3: detected capacity change from 0 to 52536 Jan 17 12:00:43.992801 kernel: loop4: detected capacity change from 0 to 114432 Jan 17 12:00:44.006807 kernel: loop5: detected capacity change from 0 to 114328 Jan 17 12:00:44.022801 kernel: loop6: detected capacity change from 0 to 194512 Jan 17 12:00:44.055818 kernel: loop7: detected capacity change from 0 to 52536 Jan 17 12:00:44.073587 (sd-merge)[1759]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 17 12:00:44.074694 (sd-merge)[1759]: Merged extensions into '/usr'. Jan 17 12:00:44.083465 systemd[1]: Reloading requested from client PID 1745 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:00:44.083490 systemd[1]: Reloading... Jan 17 12:00:44.205822 zram_generator::config[1787]: No configuration found. Jan 17 12:00:44.525619 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:00:44.554961 systemd-networkd[1608]: eth0: Gained IPv6LL Jan 17 12:00:44.679755 systemd[1]: Reloading finished in 595 ms. Jan 17 12:00:44.707960 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:00:44.711481 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:00:44.727302 systemd[1]: Starting ensure-sysext.service... Jan 17 12:00:44.742143 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:00:44.759185 systemd[1]: Reloading requested from client PID 1846 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:00:44.759234 systemd[1]: Reloading... Jan 17 12:00:44.799969 systemd-tmpfiles[1847]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:00:44.800964 systemd-tmpfiles[1847]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:00:44.803693 systemd-tmpfiles[1847]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:00:44.804527 systemd-tmpfiles[1847]: ACLs are not supported, ignoring. Jan 17 12:00:44.804868 systemd-tmpfiles[1847]: ACLs are not supported, ignoring. Jan 17 12:00:44.813619 systemd-tmpfiles[1847]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:00:44.813647 systemd-tmpfiles[1847]: Skipping /boot Jan 17 12:00:44.851481 systemd-tmpfiles[1847]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:00:44.851513 systemd-tmpfiles[1847]: Skipping /boot Jan 17 12:00:44.952146 zram_generator::config[1879]: No configuration found. Jan 17 12:00:44.983193 ldconfig[1741]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:00:45.255073 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:00:45.404068 systemd[1]: Reloading finished in 644 ms. 
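The "(sd-merge)" entries above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-ami extension images onto /usr, followed by the two daemon reloads. As an illustrative aside (not Flatcar code), a small sketch of enumerating extension images from the documented sysext search directories; the filtering logic is an assumption kept deliberately simple:

```python
from pathlib import Path

# Illustrative sketch only: list candidate sysext images the way the
# "(sd-merge)" entries above report them. The three directories are the
# standard systemd-sysext search paths.
SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_sysexts() -> list[str]:
    found = []
    for d in SYSEXT_DIRS:
        p = Path(d)
        if not p.is_dir():
            continue
        for entry in sorted(p.iterdir()):
            # sysexts ship either as *.raw images (like kubernetes.raw,
            # symlinked by Ignition earlier in this log) or as directory trees
            if entry.suffix == ".raw" or entry.is_dir():
                found.append(str(entry))
    return found

if __name__ == "__main__":
    for image in list_sysexts():
        print(image)
```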
Jan 17 12:00:45.429016 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:00:45.443054 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:00:45.468299 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:00:45.477093 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:00:45.490057 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:00:45.503210 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:00:45.512094 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:00:45.538162 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:00:45.551278 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:00:45.567346 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:00:45.585351 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:00:45.590178 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:00:45.601391 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:00:45.606072 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:00:45.608552 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:00:45.622078 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:00:45.623105 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:00:45.637968 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:00:45.643721 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:00:45.654907 augenrules[1967]: No rules Jan 17 12:00:45.658574 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:00:45.670056 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:00:45.681030 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:00:45.693984 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:00:45.715096 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:00:45.719130 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:00:45.738359 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:00:45.756375 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:00:45.759523 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:00:45.767271 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:00:45.767665 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:00:45.773541 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:00:45.774348 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:00:45.790654 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 17 12:00:45.817042 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:00:45.830312 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:00:45.847880 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:00:45.858310 systemd-resolved[1948]: Positive Trust Anchors: Jan 17 12:00:45.858646 systemd-resolved[1948]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:00:45.858711 systemd-resolved[1948]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:00:45.868508 systemd-resolved[1948]: Defaulting to hostname 'linux'. Jan 17 12:00:45.875103 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:00:45.894605 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:00:45.897688 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:00:45.901186 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:00:45.906664 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:00:45.917818 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:00:45.921933 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:00:45.925962 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:00:45.926337 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:00:45.930079 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:00:45.930477 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:00:45.934896 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:00:45.935495 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:00:45.940033 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:00:45.940601 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:00:45.948525 systemd[1]: Finished ensure-sysext.service. Jan 17 12:00:45.963293 systemd[1]: Reached target network.target - Network. Jan 17 12:00:45.965240 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:00:45.967557 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:00:45.970256 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:00:45.970381 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 17 12:00:45.970424 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:00:45.970473 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:00:45.973964 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:00:45.976656 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:00:45.979671 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:00:45.982421 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:00:45.984866 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:00:45.987446 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:00:45.987504 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:00:45.989460 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:00:45.992824 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:00:45.998381 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:00:46.003179 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:00:46.007322 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:00:46.009877 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:00:46.012057 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:00:46.014493 systemd[1]: System is tainted: cgroupsv1 Jan 17 12:00:46.014831 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:00:46.015029 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:00:46.023022 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:00:46.031657 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 12:00:46.043534 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:00:46.053001 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:00:46.061119 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:00:46.063497 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:00:46.074016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:00:46.095617 jq[2017]: false Jan 17 12:00:46.096066 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:00:46.120079 systemd[1]: Started ntpd.service - Network Time Service. Jan 17 12:00:46.135081 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:00:46.162093 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:00:46.172658 dbus-daemon[2016]: [system] SELinux support is enabled Jan 17 12:00:46.175978 systemd[1]: Starting setup-oem.service - Setup OEM... 
Jan 17 12:00:46.203274 dbus-daemon[2016]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1608 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 12:00:46.197242 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:00:46.214241 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:00:46.233799 extend-filesystems[2018]: Found loop4 Jan 17 12:00:46.245360 extend-filesystems[2018]: Found loop5 Jan 17 12:00:46.245360 extend-filesystems[2018]: Found loop6 Jan 17 12:00:46.245360 extend-filesystems[2018]: Found loop7 Jan 17 12:00:46.245360 extend-filesystems[2018]: Found nvme0n1 Jan 17 12:00:46.245360 extend-filesystems[2018]: Found nvme0n1p1 Jan 17 12:00:46.245360 extend-filesystems[2018]: Found nvme0n1p2 Jan 17 12:00:46.245360 extend-filesystems[2018]: Found nvme0n1p3 Jan 17 12:00:46.245360 extend-filesystems[2018]: Found usr Jan 17 12:00:46.245360 extend-filesystems[2018]: Found nvme0n1p4 Jan 17 12:00:46.245360 extend-filesystems[2018]: Found nvme0n1p6 Jan 17 12:00:46.245360 extend-filesystems[2018]: Found nvme0n1p7 Jan 17 12:00:46.245360 extend-filesystems[2018]: Found nvme0n1p9 Jan 17 12:00:46.245360 extend-filesystems[2018]: Checking size of /dev/nvme0n1p9 Jan 17 12:00:46.238685 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:00:46.243630 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:00:46.280699 ntpd[2022]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:43 UTC 2025 (1): Starting Jan 17 12:00:46.283374 ntpd[2022]: 17 Jan 12:00:46 ntpd[2022]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:43 UTC 2025 (1): Starting Jan 17 12:00:46.283713 ntpd[2022]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 12:00:46.284115 ntpd[2022]: 17 Jan 12:00:46 ntpd[2022]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 12:00:46.284197 ntpd[2022]: ---------------------------------------------------- Jan 17 12:00:46.284316 ntpd[2022]: 17 Jan 12:00:46 ntpd[2022]: ---------------------------------------------------- Jan 17 12:00:46.284389 ntpd[2022]: ntp-4 is maintained by Network Time Foundation, Jan 17 12:00:46.284488 ntpd[2022]: 17 Jan 12:00:46 ntpd[2022]: ntp-4 is maintained by Network Time Foundation, Jan 17 12:00:46.284556 ntpd[2022]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 12:00:46.284653 ntpd[2022]: 17 Jan 12:00:46 ntpd[2022]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 12:00:46.284722 ntpd[2022]: corporation. Support and training for ntp-4 are Jan 17 12:00:46.284873 ntpd[2022]: 17 Jan 12:00:46 ntpd[2022]: corporation. 
Support and training for ntp-4 are Jan 17 12:00:46.284945 ntpd[2022]: available at https://www.nwtime.org/support Jan 17 12:00:46.285045 ntpd[2022]: 17 Jan 12:00:46 ntpd[2022]: available at https://www.nwtime.org/support Jan 17 12:00:46.285131 ntpd[2022]: ---------------------------------------------------- Jan 17 12:00:46.285233 ntpd[2022]: 17 Jan 12:00:46 ntpd[2022]: ---------------------------------------------------- Jan 17 12:00:46.287651 ntpd[2022]: proto: precision = 0.096 usec (-23) Jan 17 12:00:46.289813 ntpd[2022]: 17 Jan 12:00:46 ntpd[2022]: proto: precision = 0.096 usec (-23) Jan 17 12:00:46.291850 ntpd[2022]: basedate set to 2025-01-05 Jan 17 12:00:46.294067 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:00:46.297083 ntpd[2022]: 17 Jan 12:00:46 ntpd[2022]: basedate set to 2025-01-05 Jan 17 12:00:46.297083 ntpd[2022]: 17 Jan 12:00:46 ntpd[2022]: gps base set to 2025-01-05 (week 2348) Jan 17 12:00:46.291891 ntpd[2022]: gps base set to 2025-01-05 (week 2348) Jan 17 12:00:46.300596 ntpd[2022]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 12:00:46.301403 ntpd[2022]: 17 Jan 12:00:46 ntpd[2022]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 12:00:46.301545 ntpd[2022]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 12:00:46.303694 ntpd[2022]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 12:00:46.304447 ntpd[2022]: 17 Jan 12:00:46 ntpd[2022]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 12:00:46.304447 ntpd[2022]: 17 Jan 12:00:46 ntpd[2022]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 12:00:46.304610 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:00:46.306836 ntpd[2022]: Listen normally on 3 eth0 172.31.23.128:123 Jan 17 12:00:46.309847 ntpd[2022]: 17 Jan 12:00:46 ntpd[2022]: Listen normally on 3 eth0 172.31.23.128:123 Jan 17 12:00:46.309847 ntpd[2022]: 17 Jan 12:00:46 ntpd[2022]: Listen normally on 4 lo [::1]:123 Jan 17 12:00:46.309847 ntpd[2022]: 17 Jan 12:00:46 ntpd[2022]: Listen normally on 5 eth0 [fe80::467:cfff:fe0e:bd5b%2]:123 Jan 17 12:00:46.309847 ntpd[2022]: 17 Jan 12:00:46 ntpd[2022]: Listening on routing socket on fd #22 for interface updates Jan 17 12:00:46.306982 ntpd[2022]: Listen normally on 4 lo [::1]:123 Jan 17 12:00:46.307066 ntpd[2022]: Listen normally on 5 eth0 [fe80::467:cfff:fe0e:bd5b%2]:123 Jan 17 12:00:46.307143 ntpd[2022]: Listening on routing socket on fd #22 for interface updates Jan 17 12:00:46.312341 ntpd[2022]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:00:46.312540 ntpd[2022]: 17 Jan 12:00:46 ntpd[2022]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:00:46.312674 ntpd[2022]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:00:46.312805 ntpd[2022]: 17 Jan 12:00:46 ntpd[2022]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:00:46.318441 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:00:46.354804 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:00:46.357417 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:00:46.368798 extend-filesystems[2018]: Resized partition /dev/nvme0n1p9 Jan 17 12:00:46.386811 jq[2048]: true Jan 17 12:00:46.380175 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:00:46.380716 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 17 12:00:46.413034 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:00:46.418668 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:00:46.419246 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:00:46.434138 extend-filesystems[2063]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:00:46.448979 update_engine[2045]: I20250117 12:00:46.431108 2045 main.cc:92] Flatcar Update Engine starting Jan 17 12:00:46.459348 update_engine[2045]: I20250117 12:00:46.454567 2045 update_check_scheduler.cc:74] Next update check in 2m4s Jan 17 12:00:46.481473 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 17 12:00:46.519360 (ntainerd)[2073]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:00:46.540600 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:00:46.540722 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:00:46.548440 dbus-daemon[2016]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 12:00:46.548935 tar[2061]: linux-arm64/helm Jan 17 12:00:46.543943 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:00:46.543989 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:00:46.549252 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:00:46.571967 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 12:00:46.579545 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:00:46.584109 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:00:46.603129 jq[2066]: true Jan 17 12:00:46.655464 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 17 12:00:46.678236 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
Jan 17 12:00:46.700930 coreos-metadata[2014]: Jan 17 12:00:46.700 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 12:00:46.710230 coreos-metadata[2014]: Jan 17 12:00:46.705 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 17 12:00:46.722069 coreos-metadata[2014]: Jan 17 12:00:46.715 INFO Fetch successful Jan 17 12:00:46.722069 coreos-metadata[2014]: Jan 17 12:00:46.715 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 17 12:00:46.722069 coreos-metadata[2014]: Jan 17 12:00:46.721 INFO Fetch successful Jan 17 12:00:46.722069 coreos-metadata[2014]: Jan 17 12:00:46.721 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 17 12:00:46.724784 coreos-metadata[2014]: Jan 17 12:00:46.724 INFO Fetch successful Jan 17 12:00:46.724784 coreos-metadata[2014]: Jan 17 12:00:46.724 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 17 12:00:46.726453 coreos-metadata[2014]: Jan 17 12:00:46.726 INFO Fetch successful Jan 17 12:00:46.726453 coreos-metadata[2014]: Jan 17 12:00:46.726 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 17 12:00:46.733203 coreos-metadata[2014]: Jan 17 12:00:46.732 INFO Fetch failed with 404: resource not found Jan 17 12:00:46.733203 coreos-metadata[2014]: Jan 17 12:00:46.732 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 17 12:00:46.747110 coreos-metadata[2014]: Jan 17 12:00:46.743 INFO Fetch successful Jan 17 12:00:46.747110 coreos-metadata[2014]: Jan 17 12:00:46.743 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 17 12:00:46.747110 coreos-metadata[2014]: Jan 17 12:00:46.746 INFO Fetch successful Jan 17 12:00:46.747110 coreos-metadata[2014]: Jan 17 12:00:46.746 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 17 12:00:46.752231 coreos-metadata[2014]: Jan 17 12:00:46.751 INFO Fetch successful Jan 17 12:00:46.752231 coreos-metadata[2014]: Jan 17 12:00:46.752 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 17 12:00:46.773478 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 17 12:00:46.773617 coreos-metadata[2014]: Jan 17 12:00:46.768 INFO Fetch successful Jan 17 12:00:46.773617 coreos-metadata[2014]: Jan 17 12:00:46.768 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 17 12:00:46.801941 coreos-metadata[2014]: Jan 17 12:00:46.773 INFO Fetch successful Jan 17 12:00:46.811817 extend-filesystems[2063]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 17 12:00:46.811817 extend-filesystems[2063]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:00:46.811817 extend-filesystems[2063]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 17 12:00:46.840234 extend-filesystems[2018]: Resized filesystem in /dev/nvme0n1p9 Jan 17 12:00:46.843951 amazon-ssm-agent[2095]: Initializing new seelog logger Jan 17 12:00:46.843951 amazon-ssm-agent[2095]: New Seelog Logger Creation Complete Jan 17 12:00:46.843951 amazon-ssm-agent[2095]: 2025/01/17 12:00:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:00:46.843951 amazon-ssm-agent[2095]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 17 12:00:46.843951 amazon-ssm-agent[2095]: 2025/01/17 12:00:46 processing appconfig overrides Jan 17 12:00:46.843951 amazon-ssm-agent[2095]: 2025/01/17 12:00:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:00:46.843951 amazon-ssm-agent[2095]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:00:46.843951 amazon-ssm-agent[2095]: 2025/01/17 12:00:46 processing appconfig overrides Jan 17 12:00:46.843951 amazon-ssm-agent[2095]: 2025/01/17 12:00:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:00:46.843951 amazon-ssm-agent[2095]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:00:46.843951 amazon-ssm-agent[2095]: 2025/01/17 12:00:46 processing appconfig overrides Jan 17 12:00:46.843951 amazon-ssm-agent[2095]: 2025-01-17 12:00:46 INFO Proxy environment variables: Jan 17 12:00:46.843951 amazon-ssm-agent[2095]: 2025/01/17 12:00:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:00:46.843951 amazon-ssm-agent[2095]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:00:46.843951 amazon-ssm-agent[2095]: 2025/01/17 12:00:46 processing appconfig overrides Jan 17 12:00:46.852599 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:00:46.853153 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:00:46.933792 amazon-ssm-agent[2095]: 2025-01-17 12:00:46 INFO https_proxy: Jan 17 12:00:46.949396 bash[2139]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:00:46.966816 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2112) Jan 17 12:00:46.990507 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:00:47.008132 systemd[1]: Starting sshkeys.service... Jan 17 12:00:47.032750 amazon-ssm-agent[2095]: 2025-01-17 12:00:46 INFO http_proxy: Jan 17 12:00:47.034714 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:00:47.037271 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:00:47.104946 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 12:00:47.115950 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 12:00:47.140799 amazon-ssm-agent[2095]: 2025-01-17 12:00:46 INFO no_proxy: Jan 17 12:00:47.214128 containerd[2073]: time="2025-01-17T12:00:47.213982607Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:00:47.240290 amazon-ssm-agent[2095]: 2025-01-17 12:00:46 INFO Checking if agent identity type OnPrem can be assumed Jan 17 12:00:47.247600 systemd-logind[2042]: Watching system buttons on /dev/input/event0 (Power Button) Jan 17 12:00:47.247644 systemd-logind[2042]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 17 12:00:47.249213 systemd-logind[2042]: New seat seat0. Jan 17 12:00:47.254410 systemd[1]: Started systemd-logind.service - User Login Management. 
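Note: extend-filesystems grew the root ext4 filesystem on-line from 553472 to 1489915 4 KiB blocks (1489915 x 4096 bytes, so roughly 2.1 GiB up to 5.7 GiB) while it was mounted at /, which is why the tool reported "on-line resizing required". A sketch of driving the same grow operation from Go, assuming the device path shown in the log and root privileges; with no size argument, resize2fs expands the filesystem to fill its (already enlarged) partition.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // On-line ext4 grow: resize2fs with no size argument expands the
        // filesystem to the current partition size. Device path from the log.
        out, err := exec.Command("resize2fs", "/dev/nvme0n1p9").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            panic(err)
        }
    }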
Jan 17 12:00:47.261841 locksmithd[2085]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:00:47.341557 amazon-ssm-agent[2095]: 2025-01-17 12:00:46 INFO Checking if agent identity type EC2 can be assumed Jan 17 12:00:47.414120 containerd[2073]: time="2025-01-17T12:00:47.413943684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:00:47.463887 amazon-ssm-agent[2095]: 2025-01-17 12:00:47 INFO Agent will take identity from EC2 Jan 17 12:00:47.471410 containerd[2073]: time="2025-01-17T12:00:47.471322705Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:00:47.471410 containerd[2073]: time="2025-01-17T12:00:47.471401377Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:00:47.471410 containerd[2073]: time="2025-01-17T12:00:47.471442921Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:00:47.471410 containerd[2073]: time="2025-01-17T12:00:47.471754117Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:00:47.471410 containerd[2073]: time="2025-01-17T12:00:47.471816685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:00:47.471410 containerd[2073]: time="2025-01-17T12:00:47.471953581Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:00:47.471410 containerd[2073]: time="2025-01-17T12:00:47.471983509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:00:47.471410 containerd[2073]: time="2025-01-17T12:00:47.472359421Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:00:47.471410 containerd[2073]: time="2025-01-17T12:00:47.472394689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:00:47.471410 containerd[2073]: time="2025-01-17T12:00:47.472425577Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:00:47.471410 containerd[2073]: time="2025-01-17T12:00:47.472458085Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:00:47.474565 containerd[2073]: time="2025-01-17T12:00:47.472626577Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:00:47.485005 containerd[2073]: time="2025-01-17T12:00:47.484193617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:00:47.485005 containerd[2073]: time="2025-01-17T12:00:47.484527877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:00:47.485005 containerd[2073]: time="2025-01-17T12:00:47.484561441Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:00:47.485005 containerd[2073]: time="2025-01-17T12:00:47.484743613Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:00:47.485005 containerd[2073]: time="2025-01-17T12:00:47.484873297Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:00:47.505953 containerd[2073]: time="2025-01-17T12:00:47.505866301Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:00:47.506101 containerd[2073]: time="2025-01-17T12:00:47.505977865Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:00:47.506101 containerd[2073]: time="2025-01-17T12:00:47.506018521Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:00:47.506101 containerd[2073]: time="2025-01-17T12:00:47.506054389Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:00:47.506101 containerd[2073]: time="2025-01-17T12:00:47.506088169Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:00:47.507359 containerd[2073]: time="2025-01-17T12:00:47.506350093Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:00:47.507359 containerd[2073]: time="2025-01-17T12:00:47.506944321Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:00:47.507359 containerd[2073]: time="2025-01-17T12:00:47.507158041Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:00:47.507359 containerd[2073]: time="2025-01-17T12:00:47.507191209Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:00:47.507359 containerd[2073]: time="2025-01-17T12:00:47.507221665Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:00:47.507359 containerd[2073]: time="2025-01-17T12:00:47.507252721Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:00:47.507359 containerd[2073]: time="2025-01-17T12:00:47.507285457Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:00:47.507359 containerd[2073]: time="2025-01-17T12:00:47.507315937Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:00:47.507359 containerd[2073]: time="2025-01-17T12:00:47.507356353Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:00:47.507818 containerd[2073]: time="2025-01-17T12:00:47.507390289Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 17 12:00:47.507818 containerd[2073]: time="2025-01-17T12:00:47.507419821Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:00:47.507818 containerd[2073]: time="2025-01-17T12:00:47.507448765Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:00:47.507818 containerd[2073]: time="2025-01-17T12:00:47.507476257Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:00:47.507818 containerd[2073]: time="2025-01-17T12:00:47.507517621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:00:47.507818 containerd[2073]: time="2025-01-17T12:00:47.507560893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:00:47.507818 containerd[2073]: time="2025-01-17T12:00:47.507592033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:00:47.507818 containerd[2073]: time="2025-01-17T12:00:47.507626497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:00:47.507818 containerd[2073]: time="2025-01-17T12:00:47.507656233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:00:47.507818 containerd[2073]: time="2025-01-17T12:00:47.507688717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:00:47.507818 containerd[2073]: time="2025-01-17T12:00:47.507732097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:00:47.519802 containerd[2073]: time="2025-01-17T12:00:47.514812013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:00:47.519802 containerd[2073]: time="2025-01-17T12:00:47.514905037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:00:47.519802 containerd[2073]: time="2025-01-17T12:00:47.514946137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:00:47.519802 containerd[2073]: time="2025-01-17T12:00:47.514978801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:00:47.519802 containerd[2073]: time="2025-01-17T12:00:47.515010073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:00:47.519802 containerd[2073]: time="2025-01-17T12:00:47.515048365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:00:47.519802 containerd[2073]: time="2025-01-17T12:00:47.515086285Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:00:47.519802 containerd[2073]: time="2025-01-17T12:00:47.515140081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:00:47.519802 containerd[2073]: time="2025-01-17T12:00:47.515170417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 17 12:00:47.519802 containerd[2073]: time="2025-01-17T12:00:47.515216593Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:00:47.519802 containerd[2073]: time="2025-01-17T12:00:47.515368969Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:00:47.519802 containerd[2073]: time="2025-01-17T12:00:47.515409385Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:00:47.519802 containerd[2073]: time="2025-01-17T12:00:47.515436505Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:00:47.520446 containerd[2073]: time="2025-01-17T12:00:47.515468581Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:00:47.520446 containerd[2073]: time="2025-01-17T12:00:47.515492569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:00:47.520446 containerd[2073]: time="2025-01-17T12:00:47.515521333Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:00:47.520446 containerd[2073]: time="2025-01-17T12:00:47.515546125Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:00:47.520446 containerd[2073]: time="2025-01-17T12:00:47.515572609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 12:00:47.520667 containerd[2073]: time="2025-01-17T12:00:47.516204721Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:00:47.520667 containerd[2073]: time="2025-01-17T12:00:47.516330205Z" level=info msg="Connect containerd service" Jan 17 12:00:47.520667 containerd[2073]: time="2025-01-17T12:00:47.516389053Z" level=info msg="using legacy CRI server" Jan 17 12:00:47.520667 containerd[2073]: time="2025-01-17T12:00:47.516408181Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:00:47.520667 containerd[2073]: time="2025-01-17T12:00:47.516568333Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:00:47.520667 containerd[2073]: time="2025-01-17T12:00:47.517595173Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:00:47.525643 containerd[2073]: time="2025-01-17T12:00:47.521621677Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:00:47.525643 containerd[2073]: time="2025-01-17T12:00:47.521736697Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:00:47.525643 containerd[2073]: time="2025-01-17T12:00:47.521901973Z" level=info msg="Start subscribing containerd event" Jan 17 12:00:47.525643 containerd[2073]: time="2025-01-17T12:00:47.521974765Z" level=info msg="Start recovering state" Jan 17 12:00:47.525643 containerd[2073]: time="2025-01-17T12:00:47.522098281Z" level=info msg="Start event monitor" Jan 17 12:00:47.525643 containerd[2073]: time="2025-01-17T12:00:47.522124333Z" level=info msg="Start snapshots syncer" Jan 17 12:00:47.525643 containerd[2073]: time="2025-01-17T12:00:47.522145909Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:00:47.525643 containerd[2073]: time="2025-01-17T12:00:47.522164029Z" level=info msg="Start streaming server" Jan 17 12:00:47.525643 containerd[2073]: time="2025-01-17T12:00:47.522308425Z" level=info msg="containerd successfully booted in 0.309944s" Jan 17 12:00:47.522482 systemd[1]: Started containerd.service - containerd container runtime. 
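Note: the one error in containerd's startup above, "no network config found in /etc/cni/net.d", is normal on first boot: the CRI plugin keeps a conf syncer running ("Start cni network conf syncer for default") and picks the configuration up as soon as something installs one. Below is a hypothetical minimal bridge conflist written from Go; the network name, bridge name, and subnet are illustrative values, not taken from the log, and on a Kubernetes node this file would normally be laid down by the CNI plugin's DaemonSet rather than by hand.

    package main

    import "os"

    func main() {
        // Hypothetical minimal CNI conflist; containerd's conf syncer reloads
        // /etc/cni/net.d automatically once a valid config appears.
        conf := `{
      "cniVersion": "1.0.0",
      "name": "examplenet",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.88.0.0/16"
          }
        }
      ]
    }`
        if err := os.WriteFile("/etc/cni/net.d/10-examplenet.conflist", []byte(conf), 0o644); err != nil {
            panic(err)
        }
    }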
Jan 17 12:00:47.571928 amazon-ssm-agent[2095]: 2025-01-17 12:00:47 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:00:47.652907 coreos-metadata[2173]: Jan 17 12:00:47.652 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 12:00:47.659327 coreos-metadata[2173]: Jan 17 12:00:47.658 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 17 12:00:47.664010 coreos-metadata[2173]: Jan 17 12:00:47.662 INFO Fetch successful Jan 17 12:00:47.664010 coreos-metadata[2173]: Jan 17 12:00:47.663 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 17 12:00:47.666536 coreos-metadata[2173]: Jan 17 12:00:47.665 INFO Fetch successful Jan 17 12:00:47.671835 amazon-ssm-agent[2095]: 2025-01-17 12:00:47 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:00:47.673308 unknown[2173]: wrote ssh authorized keys file for user: core Jan 17 12:00:47.761646 update-ssh-keys[2249]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:00:47.777785 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 12:00:47.786715 amazon-ssm-agent[2095]: 2025-01-17 12:00:47 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:00:47.801840 systemd[1]: Finished sshkeys.service. Jan 17 12:00:47.882506 amazon-ssm-agent[2095]: 2025-01-17 12:00:47 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 17 12:00:47.902010 dbus-daemon[2016]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 12:00:47.902273 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 12:00:47.911917 dbus-daemon[2016]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2084 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 12:00:47.927905 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 12:00:47.984374 amazon-ssm-agent[2095]: 2025-01-17 12:00:47 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 17 12:00:48.013703 polkitd[2261]: Started polkitd version 121 Jan 17 12:00:48.034209 polkitd[2261]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 12:00:48.034354 polkitd[2261]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 12:00:48.038198 polkitd[2261]: Finished loading, compiling and executing 2 rules Jan 17 12:00:48.039027 dbus-daemon[2016]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 12:00:48.039475 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 12:00:48.043089 polkitd[2261]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 12:00:48.084466 systemd-hostnamed[2084]: Hostname set to <ip-172-31-23-128> (transient) Jan 17 12:00:48.084688 systemd-resolved[1948]: System hostname changed to 'ip-172-31-23-128'. Jan 17 12:00:48.086528 amazon-ssm-agent[2095]: 2025-01-17 12:00:47 INFO [amazon-ssm-agent] Starting Core Agent Jan 17 12:00:48.188395 amazon-ssm-agent[2095]: 2025-01-17 12:00:47 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jan 17 12:00:48.286484 amazon-ssm-agent[2095]: 2025-01-17 12:00:47 INFO [Registrar] Starting registrar module Jan 17 12:00:48.387188 amazon-ssm-agent[2095]: 2025-01-17 12:00:47 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 17 12:00:48.516551 sshd_keygen[2064]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:00:48.621627 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:00:48.643249 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:00:48.689728 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:00:48.691440 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:00:48.711233 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:00:48.747707 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:00:48.764488 amazon-ssm-agent[2095]: 2025-01-17 12:00:48 INFO [EC2Identity] EC2 registration was successful. Jan 17 12:00:48.769372 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:00:48.784280 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:00:48.787249 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:00:48.818380 tar[2061]: linux-arm64/LICENSE Jan 17 12:00:48.818380 tar[2061]: linux-arm64/README.md Jan 17 12:00:48.828302 amazon-ssm-agent[2095]: 2025-01-17 12:00:48 INFO [CredentialRefresher] credentialRefresher has started Jan 17 12:00:48.828302 amazon-ssm-agent[2095]: 2025-01-17 12:00:48 INFO [CredentialRefresher] Starting credentials refresher loop Jan 17 12:00:48.828302 amazon-ssm-agent[2095]: 2025-01-17 12:00:48 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 17 12:00:48.849369 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:00:48.863938 amazon-ssm-agent[2095]: 2025-01-17 12:00:48 INFO [CredentialRefresher] Next credential rotation will be in 30.758301869816666 minutes Jan 17 12:00:48.968080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:00:48.971698 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:00:48.976121 systemd[1]: Startup finished in 10.536s (kernel) + 9.915s (userspace) = 20.452s. 
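Note: two SSH key paths complete above: coreos-metadata fetched public-keys/0/openssh-key from IMDS and update-ssh-keys wrote it into /home/core/.ssh/authorized_keys, while sshd-keygen generated fresh RSA, ECDSA, and ED25519 host keys. A sketch of producing an authorized_keys line in Go with golang.org/x/crypto/ssh, assuming that module is available; the key and the comment string are illustrative, not the key from the log.

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "fmt"
        "strings"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Generate an Ed25519 keypair and render the public half in the
        // one-line authorized_keys format that update-ssh-keys maintains.
        pub, _, err := ed25519.GenerateKey(rand.Reader)
        if err != nil {
            panic(err)
        }
        sshPub, err := ssh.NewPublicKey(pub)
        if err != nil {
            panic(err)
        }
        line := strings.TrimSpace(string(ssh.MarshalAuthorizedKey(sshPub)))
        fmt.Println(line + " core@example") // trailing comment field is illustrative
    }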
Jan 17 12:00:48.985825 (kubelet)[2310]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:00:49.855615 amazon-ssm-agent[2095]: 2025-01-17 12:00:49 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 17 12:00:49.955926 amazon-ssm-agent[2095]: 2025-01-17 12:00:49 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2322) started Jan 17 12:00:50.057600 amazon-ssm-agent[2095]: 2025-01-17 12:00:49 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 17 12:00:50.161040 kubelet[2310]: E0117 12:00:50.160820 2310 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:00:50.166709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:00:50.167157 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:00:53.326884 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:00:53.339204 systemd[1]: Started sshd@0-172.31.23.128:22-139.178.68.195:56678.service - OpenSSH per-connection server daemon (139.178.68.195:56678). Jan 17 12:00:53.536408 sshd[2333]: Accepted publickey for core from 139.178.68.195 port 56678 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:00:53.540963 sshd[2333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:53.560131 systemd-logind[2042]: New session 1 of user core. Jan 17 12:00:53.561585 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:00:53.571191 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:00:53.598447 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:00:53.609416 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:00:53.629203 (systemd)[2339]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:00:53.845708 systemd[2339]: Queued start job for default target default.target. Jan 17 12:00:53.847104 systemd[2339]: Created slice app.slice - User Application Slice. Jan 17 12:00:53.847154 systemd[2339]: Reached target paths.target - Paths. Jan 17 12:00:53.847186 systemd[2339]: Reached target timers.target - Timers. Jan 17 12:00:53.855947 systemd[2339]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:00:53.870166 systemd[2339]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:00:53.870289 systemd[2339]: Reached target sockets.target - Sockets. Jan 17 12:00:53.870322 systemd[2339]: Reached target basic.target - Basic System. Jan 17 12:00:53.870418 systemd[2339]: Reached target default.target - Main User Target. Jan 17 12:00:53.870480 systemd[2339]: Startup finished in 229ms. Jan 17 12:00:53.871327 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:00:53.882440 systemd[1]: Started session-1.scope - Session 1 of User core. 
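Note: the kubelet exit above is the expected pre-bootstrap state: the unit starts, finds no /var/lib/kubelet/config.yaml, and exits with status 1; systemd then keeps rescheduling it (see the "Scheduled restart job" entries further down) until something, normally kubeadm during init/join, writes that file. As a sketch only, writing a minimal KubeletConfiguration from Go might look like the following; the two fields shown match values visible later in this log (cgroupfs driver, /etc/kubernetes/manifests static pod path), but the real file is generated by kubeadm with many more settings.

    package main

    import "os"

    func main() {
        // Minimal KubeletConfiguration stub; kubeadm generates the real file.
        conf := `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    staticPodPath: /etc/kubernetes/manifests
    `
        if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(conf), 0o644); err != nil {
            panic(err)
        }
    }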
Jan 17 12:00:54.029375 systemd[1]: Started sshd@1-172.31.23.128:22-139.178.68.195:56680.service - OpenSSH per-connection server daemon (139.178.68.195:56680). Jan 17 12:00:54.202980 sshd[2351]: Accepted publickey for core from 139.178.68.195 port 56680 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:00:54.205527 sshd[2351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:54.214215 systemd-logind[2042]: New session 2 of user core. Jan 17 12:00:54.221408 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:00:54.349117 sshd[2351]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:54.353735 systemd[1]: sshd@1-172.31.23.128:22-139.178.68.195:56680.service: Deactivated successfully. Jan 17 12:00:54.361305 systemd-logind[2042]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:00:54.361871 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:00:54.364486 systemd-logind[2042]: Removed session 2. Jan 17 12:00:54.384194 systemd[1]: Started sshd@2-172.31.23.128:22-139.178.68.195:56684.service - OpenSSH per-connection server daemon (139.178.68.195:56684). Jan 17 12:00:54.548493 sshd[2359]: Accepted publickey for core from 139.178.68.195 port 56684 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:00:54.551038 sshd[2359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:54.559096 systemd-logind[2042]: New session 3 of user core. Jan 17 12:00:54.569244 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:00:54.691044 sshd[2359]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:54.695651 systemd[1]: sshd@2-172.31.23.128:22-139.178.68.195:56684.service: Deactivated successfully. Jan 17 12:00:54.702067 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:00:54.702879 systemd-logind[2042]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:00:54.706282 systemd-logind[2042]: Removed session 3. Jan 17 12:00:54.723223 systemd[1]: Started sshd@3-172.31.23.128:22-139.178.68.195:35150.service - OpenSSH per-connection server daemon (139.178.68.195:35150). Jan 17 12:00:54.890184 sshd[2368]: Accepted publickey for core from 139.178.68.195 port 35150 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:00:54.892177 sshd[2368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:54.899862 systemd-logind[2042]: New session 4 of user core. Jan 17 12:00:54.907254 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:00:55.035891 sshd[2368]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:55.042485 systemd[1]: sshd@3-172.31.23.128:22-139.178.68.195:35150.service: Deactivated successfully. Jan 17 12:00:55.044058 systemd-logind[2042]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:00:55.048375 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:00:55.050711 systemd-logind[2042]: Removed session 4. Jan 17 12:00:55.069288 systemd[1]: Started sshd@4-172.31.23.128:22-139.178.68.195:35156.service - OpenSSH per-connection server daemon (139.178.68.195:35156). 
Jan 17 12:00:55.237385 sshd[2376]: Accepted publickey for core from 139.178.68.195 port 35156 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:00:55.240139 sshd[2376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:55.248672 systemd-logind[2042]: New session 5 of user core. Jan 17 12:00:55.255392 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:00:55.389359 sudo[2380]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:00:55.390041 sudo[2380]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:00:55.404834 sudo[2380]: pam_unix(sudo:session): session closed for user root Jan 17 12:00:55.429185 sshd[2376]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:55.436196 systemd[1]: sshd@4-172.31.23.128:22-139.178.68.195:35156.service: Deactivated successfully. Jan 17 12:00:55.442207 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:00:55.443851 systemd-logind[2042]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:00:55.445850 systemd-logind[2042]: Removed session 5. Jan 17 12:00:55.459256 systemd[1]: Started sshd@5-172.31.23.128:22-139.178.68.195:35170.service - OpenSSH per-connection server daemon (139.178.68.195:35170). Jan 17 12:00:55.636012 sshd[2385]: Accepted publickey for core from 139.178.68.195 port 35170 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:00:55.638709 sshd[2385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:55.645991 systemd-logind[2042]: New session 6 of user core. Jan 17 12:00:55.655323 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:00:55.762057 sudo[2390]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:00:55.762872 sudo[2390]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:00:55.768734 sudo[2390]: pam_unix(sudo:session): session closed for user root Jan 17 12:00:55.778820 sudo[2389]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:00:55.779454 sudo[2389]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:00:55.801703 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:00:55.824052 auditctl[2393]: No rules Jan 17 12:00:55.827377 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:00:55.828242 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:00:55.842886 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:00:55.890199 augenrules[2412]: No rules Jan 17 12:00:55.894402 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:00:55.898726 sudo[2389]: pam_unix(sudo:session): session closed for user root Jan 17 12:00:55.924065 sshd[2385]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:55.930275 systemd[1]: sshd@5-172.31.23.128:22-139.178.68.195:35170.service: Deactivated successfully. Jan 17 12:00:55.936643 systemd-logind[2042]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:00:55.937613 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:00:55.939383 systemd-logind[2042]: Removed session 6. 
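Note: the sudo/audit sequence above is how this install script resets the audit ruleset: the rule files under /etc/audit/rules.d are removed, audit-rules.service is restarted, the stop step apparently flushes the loaded kernel rules (hence auditctl's "No rules"), and on start augenrules recompiles whatever remains in rules.d, which is now nothing. A sketch of inspecting the currently loaded kernel rules from Go, using the same auditctl tool; requires root.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // auditctl -l lists the audit rules currently loaded in the kernel;
        // after the restart above it would print "No rules".
        out, err := exec.Command("auditctl", "-l").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            panic(err)
        }
    }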
Jan 17 12:00:55.958194 systemd[1]: Started sshd@6-172.31.23.128:22-139.178.68.195:35180.service - OpenSSH per-connection server daemon (139.178.68.195:35180). Jan 17 12:00:56.128458 sshd[2421]: Accepted publickey for core from 139.178.68.195 port 35180 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:00:56.131045 sshd[2421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:56.138862 systemd-logind[2042]: New session 7 of user core. Jan 17 12:00:56.148235 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:00:56.257289 sudo[2425]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:00:56.258731 sudo[2425]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:00:56.853244 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:00:56.853811 (dockerd)[2440]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:00:57.363152 dockerd[2440]: time="2025-01-17T12:00:57.363022159Z" level=info msg="Starting up" Jan 17 12:00:57.569480 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1229853800-merged.mount: Deactivated successfully. Jan 17 12:00:57.842302 dockerd[2440]: time="2025-01-17T12:00:57.841742515Z" level=info msg="Loading containers: start." Jan 17 12:00:58.062807 kernel: Initializing XFRM netlink socket Jan 17 12:00:58.136034 (udev-worker)[2461]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:00:58.235705 systemd-networkd[1608]: docker0: Link UP Jan 17 12:00:58.256190 dockerd[2440]: time="2025-01-17T12:00:58.256140978Z" level=info msg="Loading containers: done." Jan 17 12:00:58.282247 dockerd[2440]: time="2025-01-17T12:00:58.282165460Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:00:58.282582 dockerd[2440]: time="2025-01-17T12:00:58.282320769Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:00:58.282582 dockerd[2440]: time="2025-01-17T12:00:58.282514162Z" level=info msg="Daemon has completed initialization" Jan 17 12:00:58.343467 dockerd[2440]: time="2025-01-17T12:00:58.343204576Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:00:58.344474 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:00:58.558358 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2547872579-merged.mount: Deactivated successfully. Jan 17 12:00:59.691643 containerd[2073]: time="2025-01-17T12:00:59.691582262Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 17 12:01:00.275222 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:01:00.284354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:01:00.363711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount466284107.mount: Deactivated successfully. Jan 17 12:01:00.697133 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
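Note: dockerd came up in about a second and announced "API listen on /run/docker.sock". The Engine API is plain HTTP over that unix socket, so any HTTP client can reach it by swapping the dialer. A minimal Go sketch, assuming the socket path from the log and root (or docker-group) access; the "docker" host in the URL is a placeholder that net/http requires, not a real hostname.

    package main

    import (
        "context"
        "fmt"
        "io"
        "net"
        "net/http"
    )

    func main() {
        // Route every connection from this client through the Docker unix socket.
        client := &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
                },
            },
        }
        resp, err := client.Get("http://docker/version") // Engine API: GET /version
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body)) // JSON with Version, ApiVersion, etc.
    }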
Jan 17 12:01:00.715796 (kubelet)[2613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:01:00.863468 kubelet[2613]: E0117 12:01:00.863390 2613 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:01:00.876261 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:01:00.876694 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:01:02.117611 containerd[2073]: time="2025-01-17T12:01:02.117552734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:02.121337 containerd[2073]: time="2025-01-17T12:01:02.121277151Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=32202457" Jan 17 12:01:02.123118 containerd[2073]: time="2025-01-17T12:01:02.123046931Z" level=info msg="ImageCreate event name:\"sha256:5c8d3b261565d9e15723d572fb33e6ec92ceb342312c9418457857eb57d1ae9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:02.131232 containerd[2073]: time="2025-01-17T12:01:02.131132720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:02.133649 containerd[2073]: time="2025-01-17T12:01:02.133350553Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:5c8d3b261565d9e15723d572fb33e6ec92ceb342312c9418457857eb57d1ae9a\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"32199257\" in 2.441703314s" Jan 17 12:01:02.133649 containerd[2073]: time="2025-01-17T12:01:02.133408326Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:5c8d3b261565d9e15723d572fb33e6ec92ceb342312c9418457857eb57d1ae9a\"" Jan 17 12:01:02.173154 containerd[2073]: time="2025-01-17T12:01:02.173097112Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 17 12:01:03.914724 containerd[2073]: time="2025-01-17T12:01:03.914631248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:03.917039 containerd[2073]: time="2025-01-17T12:01:03.916961637Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=29381102" Jan 17 12:01:03.917816 containerd[2073]: time="2025-01-17T12:01:03.917511500Z" level=info msg="ImageCreate event name:\"sha256:bcc4e3c2095eb1aad9487d6679a8871f05390959aaf5091f391510033742cf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:03.925240 containerd[2073]: time="2025-01-17T12:01:03.925112402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 
12:01:03.927740 containerd[2073]: time="2025-01-17T12:01:03.927667807Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:bcc4e3c2095eb1aad9487d6679a8871f05390959aaf5091f391510033742cf7c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"30784892\" in 1.754506031s" Jan 17 12:01:03.928153 containerd[2073]: time="2025-01-17T12:01:03.927841342Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:bcc4e3c2095eb1aad9487d6679a8871f05390959aaf5091f391510033742cf7c\"" Jan 17 12:01:03.966724 containerd[2073]: time="2025-01-17T12:01:03.966674101Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 17 12:01:05.069309 containerd[2073]: time="2025-01-17T12:01:05.069212234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:05.071617 containerd[2073]: time="2025-01-17T12:01:05.071531085Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=15765672" Jan 17 12:01:05.072669 containerd[2073]: time="2025-01-17T12:01:05.072131349Z" level=info msg="ImageCreate event name:\"sha256:09e2786faf24867b706964cc8c35c296f197dc7a57806a75388efa13868bf50c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:05.078734 containerd[2073]: time="2025-01-17T12:01:05.078612848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:05.082898 containerd[2073]: time="2025-01-17T12:01:05.081110853Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:09e2786faf24867b706964cc8c35c296f197dc7a57806a75388efa13868bf50c\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"17169480\" in 1.114374273s" Jan 17 12:01:05.082898 containerd[2073]: time="2025-01-17T12:01:05.081175625Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:09e2786faf24867b706964cc8c35c296f197dc7a57806a75388efa13868bf50c\"" Jan 17 12:01:05.122671 containerd[2073]: time="2025-01-17T12:01:05.122334703Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:01:06.418375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3247615401.mount: Deactivated successfully. 
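Note: the PullImage/ImageCreate pairs in this stretch are containerd fetching the control-plane images and logging per-image sizes and timings (about 32 MB for kube-apiserver in roughly 2.4 s, for instance). The kubelet is still crash-looping at this point, so the requests presumably come from a pre-pull step in the install script rather than from pod scheduling. A sketch of issuing the same pull through containerd's Go client, assuming the github.com/containerd/containerd module; "k8s.io" is the namespace Kubernetes-facing tooling conventionally uses.

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the socket containerd announced at startup.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // Image operations are namespaced; "k8s.io" is the CRI convention.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.29.13",
            containerd.WithPullUnpack)
        if err != nil {
            panic(err)
        }
        fmt.Println("pulled", img.Name())
    }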
Jan 17 12:01:06.942885 containerd[2073]: time="2025-01-17T12:01:06.942809070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:06.944615 containerd[2073]: time="2025-01-17T12:01:06.944519709Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=25274682" Jan 17 12:01:06.946156 containerd[2073]: time="2025-01-17T12:01:06.946049021Z" level=info msg="ImageCreate event name:\"sha256:e3bc26919d7c787204f912c4bc2584bac5686761ae4da96585475c68dcc57181\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:06.952228 containerd[2073]: time="2025-01-17T12:01:06.952119038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:06.954061 containerd[2073]: time="2025-01-17T12:01:06.953527150Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:e3bc26919d7c787204f912c4bc2584bac5686761ae4da96585475c68dcc57181\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"25273701\" in 1.831123149s" Jan 17 12:01:06.954061 containerd[2073]: time="2025-01-17T12:01:06.953597193Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:e3bc26919d7c787204f912c4bc2584bac5686761ae4da96585475c68dcc57181\"" Jan 17 12:01:06.997968 containerd[2073]: time="2025-01-17T12:01:06.997912432Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:01:07.569588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount38599615.mount: Deactivated successfully. 
Jan 17 12:01:08.784188 containerd[2073]: time="2025-01-17T12:01:08.784111001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:08.794584 containerd[2073]: time="2025-01-17T12:01:08.794031955Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 17 12:01:08.802135 containerd[2073]: time="2025-01-17T12:01:08.802036668Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:08.818794 containerd[2073]: time="2025-01-17T12:01:08.818664978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:08.821434 containerd[2073]: time="2025-01-17T12:01:08.821214776Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.823238473s" Jan 17 12:01:08.821434 containerd[2073]: time="2025-01-17T12:01:08.821285047Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 17 12:01:08.863164 containerd[2073]: time="2025-01-17T12:01:08.863046250Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:01:09.941810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount920900849.mount: Deactivated successfully. 
Jan 17 12:01:09.948856 containerd[2073]: time="2025-01-17T12:01:09.948549478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:09.950634 containerd[2073]: time="2025-01-17T12:01:09.950540073Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jan 17 12:01:09.952114 containerd[2073]: time="2025-01-17T12:01:09.952006209Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:09.957044 containerd[2073]: time="2025-01-17T12:01:09.956921418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:09.959308 containerd[2073]: time="2025-01-17T12:01:09.959069939Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 1.095965328s" Jan 17 12:01:09.959308 containerd[2073]: time="2025-01-17T12:01:09.959145445Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 17 12:01:10.001076 containerd[2073]: time="2025-01-17T12:01:10.001015794Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 17 12:01:10.548719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3472982239.mount: Deactivated successfully. Jan 17 12:01:11.025891 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:01:11.043976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:01:11.505306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:01:11.520662 (kubelet)[2799]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:01:11.628087 kubelet[2799]: E0117 12:01:11.627986 2799 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:01:11.634743 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:01:11.635391 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 12:01:12.718653 containerd[2073]: time="2025-01-17T12:01:12.718299204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:12.720679 containerd[2073]: time="2025-01-17T12:01:12.720621801Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jan 17 12:01:12.721621 containerd[2073]: time="2025-01-17T12:01:12.721012236Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:12.727630 containerd[2073]: time="2025-01-17T12:01:12.727528732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:12.730686 containerd[2073]: time="2025-01-17T12:01:12.730202024Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.729116439s" Jan 17 12:01:12.730686 containerd[2073]: time="2025-01-17T12:01:12.730269774Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 17 12:01:18.104495 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 17 12:01:20.768193 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:01:20.779286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:01:20.829608 systemd[1]: Reloading requested from client PID 2891 ('systemctl') (unit session-7.scope)... Jan 17 12:01:20.829646 systemd[1]: Reloading... Jan 17 12:01:21.076822 zram_generator::config[2934]: No configuration found. Jan 17 12:01:21.336091 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:01:21.501806 systemd[1]: Reloading finished in 671 ms. Jan 17 12:01:21.585026 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:01:21.585288 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:01:21.586085 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:01:21.601381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:01:21.874140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:01:21.896644 (kubelet)[3004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:01:21.991653 kubelet[3004]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:01:21.991653 kubelet[3004]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 17 12:01:21.991653 kubelet[3004]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:01:21.992471 kubelet[3004]: I0117 12:01:21.991783 3004 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 12:01:23.567095 kubelet[3004]: I0117 12:01:23.567032 3004 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 17 12:01:23.567095 kubelet[3004]: I0117 12:01:23.567086 3004 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 12:01:23.567811 kubelet[3004]: I0117 12:01:23.567477 3004 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 17 12:01:23.605890 kubelet[3004]: I0117 12:01:23.605834 3004 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 12:01:23.606815 kubelet[3004]: E0117 12:01:23.606559 3004 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.128:6443: connect: connection refused
Jan 17 12:01:23.620249 kubelet[3004]: I0117 12:01:23.620208 3004 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 12:01:23.621835 kubelet[3004]: I0117 12:01:23.621115 3004 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 12:01:23.621835 kubelet[3004]: I0117 12:01:23.621408 3004 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 17 12:01:23.621835 kubelet[3004]: I0117 12:01:23.621442 3004 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 12:01:23.621835 kubelet[3004]: I0117 12:01:23.621463 3004 container_manager_linux.go:301] "Creating device plugin manager"
Jan 17 12:01:23.621835 kubelet[3004]: I0117 12:01:23.621632 3004 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:01:23.626490 kubelet[3004]: I0117 12:01:23.626424 3004 kubelet.go:396] "Attempting to sync node with API server"
Jan 17 12:01:23.626490 kubelet[3004]: I0117 12:01:23.626490 3004 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 12:01:23.627566 kubelet[3004]: I0117 12:01:23.626538 3004 kubelet.go:312] "Adding apiserver pod source"
Jan 17 12:01:23.627566 kubelet[3004]: I0117 12:01:23.626563 3004 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 12:01:23.630647 kubelet[3004]: W0117 12:01:23.629917 3004 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.23.128:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.128:6443: connect: connection refused
Jan 17 12:01:23.630647 kubelet[3004]: E0117 12:01:23.630013 3004 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.128:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.128:6443: connect: connection refused
Jan 17 12:01:23.630647 kubelet[3004]: W0117 12:01:23.630543 3004 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.23.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-128&limit=500&resourceVersion=0": dial tcp 172.31.23.128:6443: connect: connection refused
Jan 17 12:01:23.630647 kubelet[3004]: E0117 12:01:23.630605 3004 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-128&limit=500&resourceVersion=0": dial tcp 172.31.23.128:6443: connect: connection refused
Jan 17 12:01:23.632074 kubelet[3004]: I0117 12:01:23.631314 3004 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 12:01:23.632074 kubelet[3004]: I0117 12:01:23.631881 3004 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 12:01:23.632074 kubelet[3004]: W0117 12:01:23.631993 3004 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
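The reflector warnings above are the kubelet's informers issuing plain LIST calls against the API server and getting "connection refused" back. A minimal client-go sketch of the same Services LIST, for illustration only: the server address is taken from the log lines, and the credentials are placeholders, not the kubelet's real bootstrap flow.

```go
// Sketch of the List call the kubelet's reflector is retrying above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://172.31.23.128:6443", // endpoint from the log lines
		// Auth is omitted for brevity; a real client needs the CA bundle and a client cert.
		TLSClientConfig: rest.TLSClientConfig{Insecure: true},
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same request shape as the reflector: all namespaces, page size 500.
	svcs, err := client.CoreV1().Services("").List(context.TODO(), metav1.ListOptions{Limit: 500})
	if err != nil {
		// Until kube-apiserver is listening, this fails with "connection refused".
		fmt.Println("list failed:", err)
		return
	}
	fmt.Println("services:", len(svcs.Items))
}
```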
Jan 17 12:01:23.634492 kubelet[3004]: I0117 12:01:23.634446 3004 server.go:1256] "Started kubelet"
Jan 17 12:01:23.648104 kubelet[3004]: I0117 12:01:23.648045 3004 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 12:01:23.649139 kubelet[3004]: E0117 12:01:23.649073 3004 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.128:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.128:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-128.181b792596ee3e96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-128,UID:ip-172-31-23-128,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-128,},FirstTimestamp:2025-01-17 12:01:23.634405014 +0000 UTC m=+1.729134886,LastTimestamp:2025-01-17 12:01:23.634405014 +0000 UTC m=+1.729134886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-128,}"
Jan 17 12:01:23.655247 kubelet[3004]: I0117 12:01:23.655199 3004 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 12:01:23.658215 kubelet[3004]: I0117 12:01:23.656538 3004 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 12:01:23.658215 kubelet[3004]: I0117 12:01:23.657015 3004 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 12:01:23.658215 kubelet[3004]: I0117 12:01:23.657098 3004 server.go:461] "Adding debug handlers to kubelet server"
Jan 17 12:01:23.659841 kubelet[3004]: I0117 12:01:23.659740 3004 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 17 12:01:23.661881 kubelet[3004]: E0117 12:01:23.660824 3004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-128?timeout=10s\": dial tcp 172.31.23.128:6443: connect: connection refused" interval="200ms"
Jan 17 12:01:23.661881 kubelet[3004]: I0117 12:01:23.661223 3004 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 17 12:01:23.662341 kubelet[3004]: I0117 12:01:23.662307 3004 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 17 12:01:23.663896 kubelet[3004]: I0117 12:01:23.663857 3004 factory.go:221] Registration of the systemd container factory successfully
Jan 17 12:01:23.664215 kubelet[3004]: I0117 12:01:23.664179 3004 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 12:01:23.670222 kubelet[3004]: W0117 12:01:23.670148 3004 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.23.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.128:6443: connect: connection refused
Jan 17 12:01:23.670470 kubelet[3004]: E0117 12:01:23.670443 3004 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.128:6443: connect: connection refused
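Every failure in this stretch (events, leases, reflectors, CSR) shares one root cause: nothing is listening on 172.31.23.128:6443 yet, because the kube-apiserver that will serve that port is itself a static pod this kubelet has not started. A trivial probe showing the same symptom, illustrative only:

```go
// Connectivity probe for the apiserver endpoint seen in the errors above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "172.31.23.128:6443", 2*time.Second)
	if err != nil {
		// Prints "connect: connection refused" until the kube-apiserver static pod is up.
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
```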
Jan 17 12:01:23.671060 kubelet[3004]: I0117 12:01:23.671019 3004 factory.go:221] Registration of the containerd container factory successfully
Jan 17 12:01:23.687815 kubelet[3004]: E0117 12:01:23.687744 3004 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 12:01:23.705549 kubelet[3004]: I0117 12:01:23.705509 3004 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 17 12:01:23.708061 kubelet[3004]: I0117 12:01:23.708024 3004 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 17 12:01:23.708255 kubelet[3004]: I0117 12:01:23.708236 3004 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 17 12:01:23.708369 kubelet[3004]: I0117 12:01:23.708351 3004 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 17 12:01:23.708525 kubelet[3004]: E0117 12:01:23.708506 3004 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 12:01:23.718041 kubelet[3004]: W0117 12:01:23.717980 3004 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.23.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.128:6443: connect: connection refused
Jan 17 12:01:23.718843 kubelet[3004]: E0117 12:01:23.718249 3004 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.128:6443: connect: connection refused
Jan 17 12:01:23.720295 kubelet[3004]: I0117 12:01:23.720168 3004 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 17 12:01:23.720295 kubelet[3004]: I0117 12:01:23.720204 3004 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 17 12:01:23.720295 kubelet[3004]: I0117 12:01:23.720232 3004 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:01:23.723310 kubelet[3004]: I0117 12:01:23.723154 3004 policy_none.go:49] "None policy: Start"
Jan 17 12:01:23.724536 kubelet[3004]: I0117 12:01:23.724478 3004 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 17 12:01:23.724673 kubelet[3004]: I0117 12:01:23.724552 3004 state_mem.go:35] "Initializing new in-memory state store"
Jan 17 12:01:23.733156 kubelet[3004]: I0117 12:01:23.733097 3004 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 17 12:01:23.733553 kubelet[3004]: I0117 12:01:23.733507 3004 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 12:01:23.744167 kubelet[3004]: E0117 12:01:23.744084 3004 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-128\" not found"
Jan 17 12:01:23.762717 kubelet[3004]: I0117 12:01:23.762643 3004 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-128"
Jan 17 12:01:23.763261 kubelet[3004]: E0117 12:01:23.763235 3004 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.128:6443/api/v1/nodes\": dial tcp 172.31.23.128:6443: connect: connection refused" node="ip-172-31-23-128"
Jan 17 12:01:23.809366 kubelet[3004]: I0117 12:01:23.808962 3004 topology_manager.go:215] "Topology Admit Handler" podUID="7db4fb57c6fc65e6321f0992b0cb433f" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-128"
Jan 17 12:01:23.811230 kubelet[3004]: I0117 12:01:23.811198 3004 topology_manager.go:215] "Topology Admit Handler" podUID="cacb56ed31987d98462351f6191c9001" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-128"
Jan 17 12:01:23.814850 kubelet[3004]: I0117 12:01:23.813488 3004 topology_manager.go:215] "Topology Admit Handler" podUID="d623a9531e068ab6d3be9134605ecc3e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-128"
Jan 17 12:01:23.862354 kubelet[3004]: E0117 12:01:23.862227 3004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-128?timeout=10s\": dial tcp 172.31.23.128:6443: connect: connection refused" interval="400ms"
Jan 17 12:01:23.864279 kubelet[3004]: I0117 12:01:23.864141 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7db4fb57c6fc65e6321f0992b0cb433f-ca-certs\") pod \"kube-apiserver-ip-172-31-23-128\" (UID: \"7db4fb57c6fc65e6321f0992b0cb433f\") " pod="kube-system/kube-apiserver-ip-172-31-23-128"
Jan 17 12:01:23.864386 kubelet[3004]: I0117 12:01:23.864329 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7db4fb57c6fc65e6321f0992b0cb433f-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-128\" (UID: \"7db4fb57c6fc65e6321f0992b0cb433f\") " pod="kube-system/kube-apiserver-ip-172-31-23-128"
Jan 17 12:01:23.864386 kubelet[3004]: I0117 12:01:23.864382 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7db4fb57c6fc65e6321f0992b0cb433f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-128\" (UID: \"7db4fb57c6fc65e6321f0992b0cb433f\") " pod="kube-system/kube-apiserver-ip-172-31-23-128"
Jan 17 12:01:23.864521 kubelet[3004]: I0117 12:01:23.864431 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cacb56ed31987d98462351f6191c9001-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-128\" (UID: \"cacb56ed31987d98462351f6191c9001\") " pod="kube-system/kube-controller-manager-ip-172-31-23-128"
Jan 17 12:01:23.864521 kubelet[3004]: I0117 12:01:23.864481 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cacb56ed31987d98462351f6191c9001-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-128\" (UID: \"cacb56ed31987d98462351f6191c9001\") " pod="kube-system/kube-controller-manager-ip-172-31-23-128"
Jan 17 12:01:23.864638 kubelet[3004]: I0117 12:01:23.864526 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cacb56ed31987d98462351f6191c9001-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-128\" (UID: \"cacb56ed31987d98462351f6191c9001\") " pod="kube-system/kube-controller-manager-ip-172-31-23-128"
Jan 17 12:01:23.864638 kubelet[3004]: I0117 12:01:23.864574 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cacb56ed31987d98462351f6191c9001-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-128\" (UID: \"cacb56ed31987d98462351f6191c9001\") " pod="kube-system/kube-controller-manager-ip-172-31-23-128"
Jan 17 12:01:23.864638 kubelet[3004]: I0117 12:01:23.864616 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cacb56ed31987d98462351f6191c9001-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-128\" (UID: \"cacb56ed31987d98462351f6191c9001\") " pod="kube-system/kube-controller-manager-ip-172-31-23-128"
Jan 17 12:01:23.864836 kubelet[3004]: I0117 12:01:23.864661 3004 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d623a9531e068ab6d3be9134605ecc3e-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-128\" (UID: \"d623a9531e068ab6d3be9134605ecc3e\") " pod="kube-system/kube-scheduler-ip-172-31-23-128"
Jan 17 12:01:23.966643 kubelet[3004]: I0117 12:01:23.966066 3004 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-128"
Jan 17 12:01:23.966643 kubelet[3004]: E0117 12:01:23.966606 3004 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.128:6443/api/v1/nodes\": dial tcp 172.31.23.128:6443: connect: connection refused" node="ip-172-31-23-128"
Jan 17 12:01:24.126530 containerd[2073]: time="2025-01-17T12:01:24.126344500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-128,Uid:7db4fb57c6fc65e6321f0992b0cb433f,Namespace:kube-system,Attempt:0,}"
Jan 17 12:01:24.127481 containerd[2073]: time="2025-01-17T12:01:24.127414480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-128,Uid:cacb56ed31987d98462351f6191c9001,Namespace:kube-system,Attempt:0,}"
Jan 17 12:01:24.130462 containerd[2073]: time="2025-01-17T12:01:24.129936256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-128,Uid:d623a9531e068ab6d3be9134605ecc3e,Namespace:kube-system,Attempt:0,}"
Jan 17 12:01:24.265006 kubelet[3004]: E0117 12:01:24.264946 3004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-128?timeout=10s\": dial tcp 172.31.23.128:6443: connect: connection refused" interval="800ms"
Jan 17 12:01:24.369547 kubelet[3004]: I0117 12:01:24.368972 3004 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-128"
Jan 17 12:01:24.369547 kubelet[3004]: E0117 12:01:24.369475 3004 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.128:6443/api/v1/nodes\": dial tcp 172.31.23.128:6443: connect: connection refused" node="ip-172-31-23-128"
"https://172.31.23.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.128:6443: connect: connection refused Jan 17 12:01:24.541817 kubelet[3004]: W0117 12:01:24.541703 3004 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.23.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-128&limit=500&resourceVersion=0": dial tcp 172.31.23.128:6443: connect: connection refused Jan 17 12:01:24.542541 kubelet[3004]: E0117 12:01:24.542461 3004 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-128&limit=500&resourceVersion=0": dial tcp 172.31.23.128:6443: connect: connection refused Jan 17 12:01:24.546212 kubelet[3004]: W0117 12:01:24.546118 3004 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.23.128:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.128:6443: connect: connection refused Jan 17 12:01:24.546413 kubelet[3004]: E0117 12:01:24.546227 3004 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.128:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.128:6443: connect: connection refused Jan 17 12:01:24.626653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1227794219.mount: Deactivated successfully. Jan 17 12:01:24.631692 containerd[2073]: time="2025-01-17T12:01:24.631613382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:01:24.634609 containerd[2073]: time="2025-01-17T12:01:24.634384770Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 17 12:01:24.635818 containerd[2073]: time="2025-01-17T12:01:24.635697834Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:01:24.637575 containerd[2073]: time="2025-01-17T12:01:24.637435398Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:01:24.638957 containerd[2073]: time="2025-01-17T12:01:24.638751006Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:01:24.640371 containerd[2073]: time="2025-01-17T12:01:24.640205011Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:01:24.640371 containerd[2073]: time="2025-01-17T12:01:24.640306387Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:01:24.649929 containerd[2073]: time="2025-01-17T12:01:24.649697059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Jan 17 12:01:24.654033 containerd[2073]: time="2025-01-17T12:01:24.653573575Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 526.029879ms" Jan 17 12:01:24.658787 containerd[2073]: time="2025-01-17T12:01:24.658663675Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 532.182351ms" Jan 17 12:01:24.659057 containerd[2073]: time="2025-01-17T12:01:24.659000047Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 528.944571ms" Jan 17 12:01:24.790205 kubelet[3004]: W0117 12:01:24.790041 3004 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.23.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.128:6443: connect: connection refused Jan 17 12:01:24.790205 kubelet[3004]: E0117 12:01:24.790120 3004 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.128:6443: connect: connection refused Jan 17 12:01:24.850512 containerd[2073]: time="2025-01-17T12:01:24.849947852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:01:24.850512 containerd[2073]: time="2025-01-17T12:01:24.850074044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:01:24.850512 containerd[2073]: time="2025-01-17T12:01:24.850130204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:01:24.851658 containerd[2073]: time="2025-01-17T12:01:24.851435240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:01:24.852556 containerd[2073]: time="2025-01-17T12:01:24.852301964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:01:24.852556 containerd[2073]: time="2025-01-17T12:01:24.852450848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:01:24.852556 containerd[2073]: time="2025-01-17T12:01:24.852492620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:01:24.853091 containerd[2073]: time="2025-01-17T12:01:24.852699164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
Jan 17 12:01:24.853091 containerd[2073]: time="2025-01-17T12:01:24.852699164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:01:24.863869 containerd[2073]: time="2025-01-17T12:01:24.863091176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:01:24.863869 containerd[2073]: time="2025-01-17T12:01:24.863199860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:01:24.863869 containerd[2073]: time="2025-01-17T12:01:24.863228540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:01:24.863869 containerd[2073]: time="2025-01-17T12:01:24.863410868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:01:25.036209 containerd[2073]: time="2025-01-17T12:01:25.036149968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-128,Uid:cacb56ed31987d98462351f6191c9001,Namespace:kube-system,Attempt:0,} returns sandbox id \"8821a08a45aa9986bf0d0f9f9540802dbfd840ef0b8c3410417149bbc79657e7\""
Jan 17 12:01:25.048434 containerd[2073]: time="2025-01-17T12:01:25.047283353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-128,Uid:7db4fb57c6fc65e6321f0992b0cb433f,Namespace:kube-system,Attempt:0,} returns sandbox id \"21ae2f6a11bae1bd196031bb39090ace154a7f4e7c27d279bc2d6de15438beba\""
Jan 17 12:01:25.051007 containerd[2073]: time="2025-01-17T12:01:25.050453801Z" level=info msg="CreateContainer within sandbox \"8821a08a45aa9986bf0d0f9f9540802dbfd840ef0b8c3410417149bbc79657e7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 17 12:01:25.057333 containerd[2073]: time="2025-01-17T12:01:25.056848385Z" level=info msg="CreateContainer within sandbox \"21ae2f6a11bae1bd196031bb39090ace154a7f4e7c27d279bc2d6de15438beba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 17 12:01:25.066512 kubelet[3004]: E0117 12:01:25.066454 3004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-128?timeout=10s\": dial tcp 172.31.23.128:6443: connect: connection refused" interval="1.6s"
Jan 17 12:01:25.068366 containerd[2073]: time="2025-01-17T12:01:25.068282393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-128,Uid:d623a9531e068ab6d3be9134605ecc3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f81b0100e119e8b90b6b78d225498078cdcb0bd6d469f0902286fc677e5f9d8\""
Jan 17 12:01:25.075411 containerd[2073]: time="2025-01-17T12:01:25.075340001Z" level=info msg="CreateContainer within sandbox \"9f81b0100e119e8b90b6b78d225498078cdcb0bd6d469f0902286fc677e5f9d8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 17 12:01:25.092818 containerd[2073]: time="2025-01-17T12:01:25.092710937Z" level=info msg="CreateContainer within sandbox \"8821a08a45aa9986bf0d0f9f9540802dbfd840ef0b8c3410417149bbc79657e7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9400088a79d9ca874756593857d5b0f6c1e026bd0e386aa6c560108373fa5c39\""
Jan 17 12:01:25.094732 containerd[2073]: time="2025-01-17T12:01:25.094621625Z" level=info msg="StartContainer for \"9400088a79d9ca874756593857d5b0f6c1e026bd0e386aa6c560108373fa5c39\""
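The lease retry (now backed off to interval="1.6s") is the kubelet's node-lease controller trying to GET, and eventually create, a coordination.k8s.io Lease in the kube-node-lease namespace. A minimal equivalent of that GET, sketch only: the kubeconfig path is an assumption, the names come from the log lines.

```go
// Minimal equivalent of the node-lease GET the kubelet keeps retrying above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lease, err := client.CoordinationV1().Leases("kube-node-lease").
		Get(context.TODO(), "ip-172-31-23-128", metav1.GetOptions{})
	if err != nil {
		// Connection refused while the apiserver is down; NotFound once it is up
		// but the lease has not been created yet.
		fmt.Println("lease not available:", err)
		return
	}
	fmt.Println("lease exists:", lease.Name)
}
```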
Jan 17 12:01:25.100291 containerd[2073]: time="2025-01-17T12:01:25.100018325Z" level=info msg="CreateContainer within sandbox \"21ae2f6a11bae1bd196031bb39090ace154a7f4e7c27d279bc2d6de15438beba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7f2e12224356ae09949ac13b3fab26b52f36ac7decc2f667656fb8f69e2e80dc\""
Jan 17 12:01:25.102268 containerd[2073]: time="2025-01-17T12:01:25.102083597Z" level=info msg="StartContainer for \"7f2e12224356ae09949ac13b3fab26b52f36ac7decc2f667656fb8f69e2e80dc\""
Jan 17 12:01:25.111219 containerd[2073]: time="2025-01-17T12:01:25.110987441Z" level=info msg="CreateContainer within sandbox \"9f81b0100e119e8b90b6b78d225498078cdcb0bd6d469f0902286fc677e5f9d8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"78084d7c1af4db9ef2940a5a45e5be68eb9bbb96b86b938bf7042e0d3040c8c0\""
Jan 17 12:01:25.112611 containerd[2073]: time="2025-01-17T12:01:25.112530437Z" level=info msg="StartContainer for \"78084d7c1af4db9ef2940a5a45e5be68eb9bbb96b86b938bf7042e0d3040c8c0\""
Jan 17 12:01:25.174960 kubelet[3004]: I0117 12:01:25.174903 3004 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-128"
Jan 17 12:01:25.175949 kubelet[3004]: E0117 12:01:25.175891 3004 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.128:6443/api/v1/nodes\": dial tcp 172.31.23.128:6443: connect: connection refused" node="ip-172-31-23-128"
Jan 17 12:01:25.310575 containerd[2073]: time="2025-01-17T12:01:25.309647010Z" level=info msg="StartContainer for \"9400088a79d9ca874756593857d5b0f6c1e026bd0e386aa6c560108373fa5c39\" returns successfully"
Jan 17 12:01:25.347223 containerd[2073]: time="2025-01-17T12:01:25.347137386Z" level=info msg="StartContainer for \"7f2e12224356ae09949ac13b3fab26b52f36ac7decc2f667656fb8f69e2e80dc\" returns successfully"
Jan 17 12:01:25.390982 containerd[2073]: time="2025-01-17T12:01:25.390920754Z" level=info msg="StartContainer for \"78084d7c1af4db9ef2940a5a45e5be68eb9bbb96b86b938bf7042e0d3040c8c0\" returns successfully"
Jan 17 12:01:26.781267 kubelet[3004]: I0117 12:01:26.781215 3004 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-128"
Jan 17 12:01:29.070858 kubelet[3004]: E0117 12:01:29.069927 3004 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-128\" not found" node="ip-172-31-23-128"
Jan 17 12:01:29.097231 kubelet[3004]: I0117 12:01:29.097124 3004 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-128"
Jan 17 12:01:29.210180 kubelet[3004]: E0117 12:01:29.210119 3004 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-128.181b792596ee3e96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-128,UID:ip-172-31-23-128,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-128,},FirstTimestamp:2025-01-17 12:01:23.634405014 +0000 UTC m=+1.729134886,LastTimestamp:2025-01-17 12:01:23.634405014 +0000 UTC m=+1.729134886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-128,}"
Jan 17 12:01:29.289703 kubelet[3004]: E0117 12:01:29.289293 3004 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-128.181b79259a1bacca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-128,UID:ip-172-31-23-128,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-23-128,},FirstTimestamp:2025-01-17 12:01:23.687713994 +0000 UTC m=+1.782443890,LastTimestamp:2025-01-17 12:01:23.687713994 +0000 UTC m=+1.782443890,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-128,}"
Jan 17 12:01:29.633015 kubelet[3004]: I0117 12:01:29.632942 3004 apiserver.go:52] "Watching apiserver"
Jan 17 12:01:29.662238 kubelet[3004]: I0117 12:01:29.662153 3004 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 17 12:01:31.538878 update_engine[2045]: I20250117 12:01:31.537817 2045 update_attempter.cc:509] Updating boot flags...
Jan 17 12:01:31.697957 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3288)
Jan 17 12:01:32.173498 systemd[1]: Reloading requested from client PID 3373 ('systemctl') (unit session-7.scope)...
Jan 17 12:01:32.173524 systemd[1]: Reloading...
Jan 17 12:01:32.410962 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3291)
Jan 17 12:01:32.642573 zram_generator::config[3448]: No configuration found.
Jan 17 12:01:33.147470 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:01:33.394324 systemd[1]: Reloading finished in 1219 ms.
Jan 17 12:01:33.513402 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:01:33.513874 kubelet[3004]: I0117 12:01:33.513546 3004 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 12:01:33.547351 systemd[1]: kubelet.service: Deactivated successfully.
Jan 17 12:01:33.550095 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:01:33.568464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:01:33.865091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:01:33.876514 (kubelet)[3567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 12:01:34.008795 kubelet[3567]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:01:34.008795 kubelet[3567]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 17 12:01:34.008795 kubelet[3567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:01:34.008795 kubelet[3567]: I0117 12:01:34.008283 3567 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 12:01:34.017981 kubelet[3567]: I0117 12:01:34.017937 3567 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 17 12:01:34.018508 kubelet[3567]: I0117 12:01:34.018247 3567 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 12:01:34.018908 kubelet[3567]: I0117 12:01:34.018748 3567 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 17 12:01:34.022814 kubelet[3567]: I0117 12:01:34.022038 3567 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 17 12:01:34.026109 kubelet[3567]: I0117 12:01:34.026069 3567 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 12:01:34.037608 kubelet[3567]: I0117 12:01:34.037548 3567 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 12:01:34.038800 kubelet[3567]: I0117 12:01:34.038741 3567 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 12:01:34.039211 kubelet[3567]: I0117 12:01:34.039186 3567 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 17 12:01:34.039428 kubelet[3567]: I0117 12:01:34.039406 3567 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 12:01:34.039812 kubelet[3567]: I0117 12:01:34.039523 3567 container_manager_linux.go:301] "Creating device plugin manager"
Jan 17 12:01:34.039812 kubelet[3567]: I0117 12:01:34.039594 3567 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:01:34.040042 kubelet[3567]: I0117 12:01:34.040007 3567 kubelet.go:396] "Attempting to sync node with API server"
Jan 17 12:01:34.041038 kubelet[3567]: I0117 12:01:34.040871 3567 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 12:01:34.041038 kubelet[3567]: I0117 12:01:34.040929 3567 kubelet.go:312] "Adding apiserver pod source"
Jan 17 12:01:34.041038 kubelet[3567]: I0117 12:01:34.040952 3567 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 12:01:34.047364 kubelet[3567]: I0117 12:01:34.047283 3567 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 12:01:34.049590 kubelet[3567]: I0117 12:01:34.048078 3567 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 12:01:34.050753 kubelet[3567]: I0117 12:01:34.050536 3567 server.go:1256] "Started kubelet"
Jan 17 12:01:34.066965 kubelet[3567]: I0117 12:01:34.066899 3567 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 12:01:34.068814 kubelet[3567]: I0117 12:01:34.068707 3567 server.go:461] "Adding debug handlers to kubelet server"
Jan 17 12:01:34.069280 kubelet[3567]: I0117 12:01:34.069229 3567 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 12:01:34.071432 kubelet[3567]: I0117 12:01:34.071306 3567 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 12:01:34.078954 kubelet[3567]: I0117 12:01:34.078890 3567 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 17 12:01:34.080025 kubelet[3567]: I0117 12:01:34.079975 3567 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 17 12:01:34.080337 kubelet[3567]: I0117 12:01:34.080302 3567 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 17 12:01:34.083536 kubelet[3567]: I0117 12:01:34.083486 3567 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 12:01:34.111423 kubelet[3567]: I0117 12:01:34.111226 3567 factory.go:221] Registration of the systemd container factory successfully
Jan 17 12:01:34.111423 kubelet[3567]: I0117 12:01:34.111374 3567 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 12:01:34.169496 kubelet[3567]: I0117 12:01:34.165589 3567 factory.go:221] Registration of the containerd container factory successfully
Jan 17 12:01:34.173744 kubelet[3567]: E0117 12:01:34.173686 3567 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 12:01:34.175374 kubelet[3567]: I0117 12:01:34.174914 3567 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 17 12:01:34.193421 kubelet[3567]: I0117 12:01:34.193363 3567 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 17 12:01:34.193421 kubelet[3567]: I0117 12:01:34.193407 3567 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 17 12:01:34.193585 kubelet[3567]: I0117 12:01:34.193441 3567 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 17 12:01:34.193585 kubelet[3567]: E0117 12:01:34.193520 3567 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 12:01:34.194470 kubelet[3567]: E0117 12:01:34.193954 3567 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache"
Jan 17 12:01:34.204387 kubelet[3567]: I0117 12:01:34.202944 3567 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-128"
Jan 17 12:01:34.215751 kubelet[3567]: I0117 12:01:34.215701 3567 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-23-128"
Jan 17 12:01:34.215897 kubelet[3567]: I0117 12:01:34.215870 3567 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-128"
Jan 17 12:01:34.298519 kubelet[3567]: E0117 12:01:34.298403 3567 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 17 12:01:34.391898 kubelet[3567]: I0117 12:01:34.391817 3567 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 17 12:01:34.391898 kubelet[3567]: I0117 12:01:34.391855 3567 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 17 12:01:34.393673 kubelet[3567]: I0117 12:01:34.392137 3567 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:01:34.393673 kubelet[3567]: I0117 12:01:34.392458 3567 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 17 12:01:34.393673 kubelet[3567]: I0117 12:01:34.392502 3567 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 17 12:01:34.393673 kubelet[3567]: I0117 12:01:34.392521 3567 policy_none.go:49] "None policy: Start"
Jan 17 12:01:34.398270 kubelet[3567]: I0117 12:01:34.397517 3567 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 17 12:01:34.398270 kubelet[3567]: I0117 12:01:34.397580 3567 state_mem.go:35] "Initializing new in-memory state store"
Jan 17 12:01:34.398270 kubelet[3567]: I0117 12:01:34.397941 3567 state_mem.go:75] "Updated machine memory state"
Jan 17 12:01:34.406070 kubelet[3567]: I0117 12:01:34.404964 3567 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 17 12:01:34.413068 kubelet[3567]: I0117 12:01:34.412959 3567 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 12:01:34.500054 kubelet[3567]: I0117 12:01:34.498695 3567 topology_manager.go:215] "Topology Admit Handler" podUID="d623a9531e068ab6d3be9134605ecc3e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-128"
Jan 17 12:01:34.501703 kubelet[3567]: I0117 12:01:34.501538 3567 topology_manager.go:215] "Topology Admit Handler" podUID="7db4fb57c6fc65e6321f0992b0cb433f" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-128"
Jan 17 12:01:34.502236 kubelet[3567]: I0117 12:01:34.502206 3567 topology_manager.go:215] "Topology Admit Handler" podUID="cacb56ed31987d98462351f6191c9001" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-128"
Jan 17 12:01:34.521603 kubelet[3567]: E0117 12:01:34.521180 3567 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-23-128\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-23-128"
Jan 17 12:01:34.524058 kubelet[3567]: E0117 12:01:34.524004 3567 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-23-128\" already exists" pod="kube-system/kube-scheduler-ip-172-31-23-128"
Jan 17 12:01:34.524921 kubelet[3567]: E0117 12:01:34.524591 3567 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-23-128\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-128"
Jan 17 12:01:34.585197 kubelet[3567]: I0117 12:01:34.585144 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7db4fb57c6fc65e6321f0992b0cb433f-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-128\" (UID: \"7db4fb57c6fc65e6321f0992b0cb433f\") " pod="kube-system/kube-apiserver-ip-172-31-23-128"
Jan 17 12:01:34.585355 kubelet[3567]: I0117 12:01:34.585225 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7db4fb57c6fc65e6321f0992b0cb433f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-128\" (UID: \"7db4fb57c6fc65e6321f0992b0cb433f\") " pod="kube-system/kube-apiserver-ip-172-31-23-128"
Jan 17 12:01:34.585355 kubelet[3567]: I0117 12:01:34.585275 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cacb56ed31987d98462351f6191c9001-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-128\" (UID: \"cacb56ed31987d98462351f6191c9001\") " pod="kube-system/kube-controller-manager-ip-172-31-23-128"
Jan 17 12:01:34.585355 kubelet[3567]: I0117 12:01:34.585322 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cacb56ed31987d98462351f6191c9001-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-128\" (UID: \"cacb56ed31987d98462351f6191c9001\") " pod="kube-system/kube-controller-manager-ip-172-31-23-128"
Jan 17 12:01:34.585519 kubelet[3567]: I0117 12:01:34.585367 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d623a9531e068ab6d3be9134605ecc3e-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-128\" (UID: \"d623a9531e068ab6d3be9134605ecc3e\") " pod="kube-system/kube-scheduler-ip-172-31-23-128"
Jan 17 12:01:34.585519 kubelet[3567]: I0117 12:01:34.585411 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7db4fb57c6fc65e6321f0992b0cb433f-ca-certs\") pod \"kube-apiserver-ip-172-31-23-128\" (UID: \"7db4fb57c6fc65e6321f0992b0cb433f\") " pod="kube-system/kube-apiserver-ip-172-31-23-128"
Jan 17 12:01:34.585519 kubelet[3567]: I0117 12:01:34.585452 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cacb56ed31987d98462351f6191c9001-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-128\" (UID: \"cacb56ed31987d98462351f6191c9001\") " pod="kube-system/kube-controller-manager-ip-172-31-23-128"
Jan 17 12:01:34.585519 kubelet[3567]: I0117 12:01:34.585495 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cacb56ed31987d98462351f6191c9001-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-128\" (UID: \"cacb56ed31987d98462351f6191c9001\") " pod="kube-system/kube-controller-manager-ip-172-31-23-128"
Jan 17 12:01:34.585711 kubelet[3567]: I0117 12:01:34.585569 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cacb56ed31987d98462351f6191c9001-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-128\" (UID: \"cacb56ed31987d98462351f6191c9001\") " pod="kube-system/kube-controller-manager-ip-172-31-23-128"
Jan 17 12:01:35.045348 kubelet[3567]: I0117 12:01:35.045254 3567 apiserver.go:52] "Watching apiserver"
Jan 17 12:01:35.080953 kubelet[3567]: I0117 12:01:35.080902 3567 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 17 12:01:35.331144 kubelet[3567]: I0117 12:01:35.330963 3567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-128" podStartSLOduration=2.330896548 podStartE2EDuration="2.330896548s" podCreationTimestamp="2025-01-17 12:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:01:35.314988484 +0000 UTC m=+1.428651033" watchObservedRunningTime="2025-01-17 12:01:35.330896548 +0000 UTC m=+1.444559109"
Jan 17 12:01:35.346660 kubelet[3567]: I0117 12:01:35.346000 3567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-128" podStartSLOduration=4.345937324 podStartE2EDuration="4.345937324s" podCreationTimestamp="2025-01-17 12:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:01:35.331226152 +0000 UTC m=+1.444888689" watchObservedRunningTime="2025-01-17 12:01:35.345937324 +0000 UTC m=+1.459599873"
Jan 17 12:01:35.392657 kubelet[3567]: I0117 12:01:35.387075 3567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-128" podStartSLOduration=4.386949976 podStartE2EDuration="4.386949976s" podCreationTimestamp="2025-01-17 12:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:01:35.346452772 +0000 UTC m=+1.460115357" watchObservedRunningTime="2025-01-17 12:01:35.386949976 +0000 UTC m=+1.500612537"
Jan 17 12:01:39.559581 sudo[2425]: pam_unix(sudo:session): session closed for user root
Jan 17 12:01:39.583897 sshd[2421]: pam_unix(sshd:session): session closed for user core
Jan 17 12:01:39.592511 systemd[1]: sshd@6-172.31.23.128:22-139.178.68.195:35180.service: Deactivated successfully.
Jan 17 12:01:39.593417 systemd-logind[2042]: Session 7 logged out. Waiting for processes to exit.
Jan 17 12:01:39.602586 systemd[1]: session-7.scope: Deactivated successfully.
Jan 17 12:01:39.605353 systemd-logind[2042]: Removed session 7.
Jan 17 12:01:48.641214 kubelet[3567]: I0117 12:01:48.641163 3567 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 17 12:01:48.644000 containerd[2073]: time="2025-01-17T12:01:48.643401414Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 17 12:01:48.645789 kubelet[3567]: I0117 12:01:48.644478 3567 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 17 12:01:48.662093 kubelet[3567]: I0117 12:01:48.662013 3567 topology_manager.go:215] "Topology Admit Handler" podUID="5f6bbb73-528a-4a94-9e52-432adddce3c3" podNamespace="kube-system" podName="kube-proxy-4lvz2"
Jan 17 12:01:48.683844 kubelet[3567]: I0117 12:01:48.683040 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f6bbb73-528a-4a94-9e52-432adddce3c3-xtables-lock\") pod \"kube-proxy-4lvz2\" (UID: \"5f6bbb73-528a-4a94-9e52-432adddce3c3\") " pod="kube-system/kube-proxy-4lvz2"
Jan 17 12:01:48.683844 kubelet[3567]: I0117 12:01:48.683240 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5f6bbb73-528a-4a94-9e52-432adddce3c3-kube-proxy\") pod \"kube-proxy-4lvz2\" (UID: \"5f6bbb73-528a-4a94-9e52-432adddce3c3\") " pod="kube-system/kube-proxy-4lvz2"
Jan 17 12:01:48.785807 kubelet[3567]: I0117 12:01:48.784957 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f6bbb73-528a-4a94-9e52-432adddce3c3-lib-modules\") pod \"kube-proxy-4lvz2\" (UID: \"5f6bbb73-528a-4a94-9e52-432adddce3c3\") " pod="kube-system/kube-proxy-4lvz2"
Jan 17 12:01:48.785807 kubelet[3567]: I0117 12:01:48.785097 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52fwt\" (UniqueName: \"kubernetes.io/projected/5f6bbb73-528a-4a94-9e52-432adddce3c3-kube-api-access-52fwt\") pod \"kube-proxy-4lvz2\" (UID: \"5f6bbb73-528a-4a94-9e52-432adddce3c3\") " pod="kube-system/kube-proxy-4lvz2"
Jan 17 12:01:48.945923 kubelet[3567]: I0117 12:01:48.945045 3567 topology_manager.go:215] "Topology Admit Handler" podUID="eaef7ed1-71db-4288-9229-9d37e8c874e5" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-lvsxq"
Jan 17 12:01:48.988689 kubelet[3567]: I0117 12:01:48.986840 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eaef7ed1-71db-4288-9229-9d37e8c874e5-var-lib-calico\") pod \"tigera-operator-c7ccbd65-lvsxq\" (UID: \"eaef7ed1-71db-4288-9229-9d37e8c874e5\") " pod="tigera-operator/tigera-operator-c7ccbd65-lvsxq"
Jan 17 12:01:48.988689 kubelet[3567]: I0117 12:01:48.987702 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9k7j\" (UniqueName: \"kubernetes.io/projected/eaef7ed1-71db-4288-9229-9d37e8c874e5-kube-api-access-k9k7j\") pod \"tigera-operator-c7ccbd65-lvsxq\" (UID: \"eaef7ed1-71db-4288-9229-9d37e8c874e5\") " pod="tigera-operator/tigera-operator-c7ccbd65-lvsxq"
Jan 17 12:01:48.997423 containerd[2073]: time="2025-01-17T12:01:48.996170827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4lvz2,Uid:5f6bbb73-528a-4a94-9e52-432adddce3c3,Namespace:kube-system,Attempt:0,}"
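The "Updating runtime config through cri with podcidr" entry above is the kubelet pushing the node's pod CIDR to containerd over the CRI gRPC API (which is why containerd immediately notes it is waiting for CNI config). Roughly the call behind that entry, as a sketch: the socket path is an assumption, the CIDR comes from the log.

```go
// Sketch of the CRI UpdateRuntimeConfig call behind the podcidr log entry above.
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Local unix socket; CRI traffic is not TLS-protected.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // assumed socket path
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	_, err = rt.UpdateRuntimeConfig(context.TODO(), &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		panic(err)
	}
}
```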
Jan 17 12:01:49.069340 containerd[2073]: time="2025-01-17T12:01:49.069046108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:01:49.070277 containerd[2073]: time="2025-01-17T12:01:49.069152320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:01:49.070277 containerd[2073]: time="2025-01-17T12:01:49.069968584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:01:49.072010 containerd[2073]: time="2025-01-17T12:01:49.071912824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:01:49.153468 containerd[2073]: time="2025-01-17T12:01:49.153354916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4lvz2,Uid:5f6bbb73-528a-4a94-9e52-432adddce3c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"572244277b6baaa7a0f4ab5d326e653ea471c49ce21c8dfd2eb1637ae490e916\""
Jan 17 12:01:49.164907 containerd[2073]: time="2025-01-17T12:01:49.164157412Z" level=info msg="CreateContainer within sandbox \"572244277b6baaa7a0f4ab5d326e653ea471c49ce21c8dfd2eb1637ae490e916\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 17 12:01:49.183835 containerd[2073]: time="2025-01-17T12:01:49.183714856Z" level=info msg="CreateContainer within sandbox \"572244277b6baaa7a0f4ab5d326e653ea471c49ce21c8dfd2eb1637ae490e916\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0691dc5bc99ce25efb9c908bf4d9f469f6ac5ba7a7eccc6d18636805a60d5acf\""
Jan 17 12:01:49.186507 containerd[2073]: time="2025-01-17T12:01:49.185100112Z" level=info msg="StartContainer for \"0691dc5bc99ce25efb9c908bf4d9f469f6ac5ba7a7eccc6d18636805a60d5acf\""
Jan 17 12:01:49.267470 containerd[2073]: time="2025-01-17T12:01:49.267330041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-lvsxq,Uid:eaef7ed1-71db-4288-9229-9d37e8c874e5,Namespace:tigera-operator,Attempt:0,}"
Jan 17 12:01:49.294574 containerd[2073]: time="2025-01-17T12:01:49.294465233Z" level=info msg="StartContainer for \"0691dc5bc99ce25efb9c908bf4d9f469f6ac5ba7a7eccc6d18636805a60d5acf\" returns successfully"
Jan 17 12:01:49.342866 containerd[2073]: time="2025-01-17T12:01:49.341437829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:01:49.342866 containerd[2073]: time="2025-01-17T12:01:49.341777261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:01:49.342866 containerd[2073]: time="2025-01-17T12:01:49.341815193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:01:49.342866 containerd[2073]: time="2025-01-17T12:01:49.342335477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:01:49.366924 kubelet[3567]: I0117 12:01:49.365825 3567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4lvz2" podStartSLOduration=1.365745965 podStartE2EDuration="1.365745965s" podCreationTimestamp="2025-01-17 12:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:01:49.364705085 +0000 UTC m=+15.478367658" watchObservedRunningTime="2025-01-17 12:01:49.365745965 +0000 UTC m=+15.479408526"
Jan 17 12:01:49.475685 containerd[2073]: time="2025-01-17T12:01:49.475343142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-lvsxq,Uid:eaef7ed1-71db-4288-9229-9d37e8c874e5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1559873c9eb486440c7ce38a914361044bb5692ddb4fa204bcc645ede33398fe\""
Jan 17 12:01:49.484337 containerd[2073]: time="2025-01-17T12:01:49.484132710Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 17 12:01:54.431030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1715690337.mount: Deactivated successfully.
Jan 17 12:01:55.068856 containerd[2073]: time="2025-01-17T12:01:55.068743990Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:01:55.070789 containerd[2073]: time="2025-01-17T12:01:55.070710970Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125964"
Jan 17 12:01:55.073675 containerd[2073]: time="2025-01-17T12:01:55.073573954Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:01:55.086069 containerd[2073]: time="2025-01-17T12:01:55.085800118Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:01:55.089450 containerd[2073]: time="2025-01-17T12:01:55.089352538Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 5.605104532s"
Jan 17 12:01:55.089450 containerd[2073]: time="2025-01-17T12:01:55.089442874Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Jan 17 12:01:55.093240 containerd[2073]: time="2025-01-17T12:01:55.093065086Z" level=info msg="CreateContainer within sandbox \"1559873c9eb486440c7ce38a914361044bb5692ddb4fa204bcc645ede33398fe\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 17 12:01:55.122252 containerd[2073]: time="2025-01-17T12:01:55.122117002Z" level=info msg="CreateContainer within sandbox \"1559873c9eb486440c7ce38a914361044bb5692ddb4fa204bcc645ede33398fe\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f149a7949b17b1503ee37fba96d3a317d0ee39ae160518b52ba3ec439a91183a\""
Jan 17 12:01:55.123618 containerd[2073]: time="2025-01-17T12:01:55.123547582Z" level=info msg="StartContainer for \"f149a7949b17b1503ee37fba96d3a317d0ee39ae160518b52ba3ec439a91183a\""
Jan 17 12:01:55.225675 containerd[2073]: time="2025-01-17T12:01:55.225575830Z" level=info msg="StartContainer for \"f149a7949b17b1503ee37fba96d3a317d0ee39ae160518b52ba3ec439a91183a\" returns successfully"
Jan 17 12:02:00.336816 kubelet[3567]: I0117 12:02:00.332052 3567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-lvsxq" podStartSLOduration=6.721380012 podStartE2EDuration="12.331984648s" podCreationTimestamp="2025-01-17 12:01:48 +0000 UTC" firstStartedPulling="2025-01-17 12:01:49.479323194 +0000 UTC m=+15.592985743" lastFinishedPulling="2025-01-17 12:01:55.08992783 +0000 UTC m=+21.203590379" observedRunningTime="2025-01-17 12:01:55.383237627 +0000 UTC m=+21.496900164" watchObservedRunningTime="2025-01-17 12:02:00.331984648 +0000 UTC m=+26.445647329"
Jan 17 12:02:00.336816 kubelet[3567]: I0117 12:02:00.332249 3567 topology_manager.go:215] "Topology Admit Handler" podUID="b0891d59-6798-49fe-8d90-7dfed41f5d9a" podNamespace="calico-system" podName="calico-typha-6b7f75cd6c-jfjv9"
Jan 17 12:02:00.470899 kubelet[3567]: I0117 12:02:00.470653 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t42h2\" (UniqueName: \"kubernetes.io/projected/b0891d59-6798-49fe-8d90-7dfed41f5d9a-kube-api-access-t42h2\") pod \"calico-typha-6b7f75cd6c-jfjv9\" (UID: \"b0891d59-6798-49fe-8d90-7dfed41f5d9a\") " pod="calico-system/calico-typha-6b7f75cd6c-jfjv9"
Jan 17 12:02:00.471918 kubelet[3567]: I0117 12:02:00.471831 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b0891d59-6798-49fe-8d90-7dfed41f5d9a-typha-certs\") pod \"calico-typha-6b7f75cd6c-jfjv9\" (UID: \"b0891d59-6798-49fe-8d90-7dfed41f5d9a\") " pod="calico-system/calico-typha-6b7f75cd6c-jfjv9"
Jan 17 12:02:00.472952 kubelet[3567]: I0117 12:02:00.472914 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0891d59-6798-49fe-8d90-7dfed41f5d9a-tigera-ca-bundle\") pod \"calico-typha-6b7f75cd6c-jfjv9\" (UID: \"b0891d59-6798-49fe-8d90-7dfed41f5d9a\") " pod="calico-system/calico-typha-6b7f75cd6c-jfjv9"
Jan 17 12:02:00.709406 kubelet[3567]: I0117 12:02:00.709250 3567 topology_manager.go:215] "Topology Admit Handler" podUID="71e58cd2-1a4e-4c22-b703-e9e1204ddb70" podNamespace="calico-system" podName="calico-node-wt5nn"
Jan 17 12:02:00.877204 kubelet[3567]: I0117 12:02:00.877134 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/71e58cd2-1a4e-4c22-b703-e9e1204ddb70-cni-bin-dir\") pod \"calico-node-wt5nn\" (UID: \"71e58cd2-1a4e-4c22-b703-e9e1204ddb70\") " pod="calico-system/calico-node-wt5nn"
Jan 17 12:02:00.877382 kubelet[3567]: I0117 12:02:00.877223 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71e58cd2-1a4e-4c22-b703-e9e1204ddb70-xtables-lock\") pod \"calico-node-wt5nn\" (UID: \"71e58cd2-1a4e-4c22-b703-e9e1204ddb70\") " pod="calico-system/calico-node-wt5nn"
Jan 17 12:02:00.877382 kubelet[3567]: I0117 12:02:00.877273 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName:
\"kubernetes.io/host-path/71e58cd2-1a4e-4c22-b703-e9e1204ddb70-cni-net-dir\") pod \"calico-node-wt5nn\" (UID: \"71e58cd2-1a4e-4c22-b703-e9e1204ddb70\") " pod="calico-system/calico-node-wt5nn" Jan 17 12:02:00.877382 kubelet[3567]: I0117 12:02:00.877340 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/71e58cd2-1a4e-4c22-b703-e9e1204ddb70-cni-log-dir\") pod \"calico-node-wt5nn\" (UID: \"71e58cd2-1a4e-4c22-b703-e9e1204ddb70\") " pod="calico-system/calico-node-wt5nn" Jan 17 12:02:00.877555 kubelet[3567]: I0117 12:02:00.877390 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71e58cd2-1a4e-4c22-b703-e9e1204ddb70-lib-modules\") pod \"calico-node-wt5nn\" (UID: \"71e58cd2-1a4e-4c22-b703-e9e1204ddb70\") " pod="calico-system/calico-node-wt5nn" Jan 17 12:02:00.877555 kubelet[3567]: I0117 12:02:00.877438 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/71e58cd2-1a4e-4c22-b703-e9e1204ddb70-var-lib-calico\") pod \"calico-node-wt5nn\" (UID: \"71e58cd2-1a4e-4c22-b703-e9e1204ddb70\") " pod="calico-system/calico-node-wt5nn" Jan 17 12:02:00.877555 kubelet[3567]: I0117 12:02:00.877486 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/71e58cd2-1a4e-4c22-b703-e9e1204ddb70-flexvol-driver-host\") pod \"calico-node-wt5nn\" (UID: \"71e58cd2-1a4e-4c22-b703-e9e1204ddb70\") " pod="calico-system/calico-node-wt5nn" Jan 17 12:02:00.877555 kubelet[3567]: I0117 12:02:00.877532 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bwnc\" (UniqueName: \"kubernetes.io/projected/71e58cd2-1a4e-4c22-b703-e9e1204ddb70-kube-api-access-2bwnc\") pod \"calico-node-wt5nn\" (UID: \"71e58cd2-1a4e-4c22-b703-e9e1204ddb70\") " pod="calico-system/calico-node-wt5nn" Jan 17 12:02:00.877801 kubelet[3567]: I0117 12:02:00.877578 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71e58cd2-1a4e-4c22-b703-e9e1204ddb70-tigera-ca-bundle\") pod \"calico-node-wt5nn\" (UID: \"71e58cd2-1a4e-4c22-b703-e9e1204ddb70\") " pod="calico-system/calico-node-wt5nn" Jan 17 12:02:00.877801 kubelet[3567]: I0117 12:02:00.877621 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/71e58cd2-1a4e-4c22-b703-e9e1204ddb70-var-run-calico\") pod \"calico-node-wt5nn\" (UID: \"71e58cd2-1a4e-4c22-b703-e9e1204ddb70\") " pod="calico-system/calico-node-wt5nn" Jan 17 12:02:00.877801 kubelet[3567]: I0117 12:02:00.877668 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/71e58cd2-1a4e-4c22-b703-e9e1204ddb70-node-certs\") pod \"calico-node-wt5nn\" (UID: \"71e58cd2-1a4e-4c22-b703-e9e1204ddb70\") " pod="calico-system/calico-node-wt5nn" Jan 17 12:02:00.877801 kubelet[3567]: I0117 12:02:00.877715 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/71e58cd2-1a4e-4c22-b703-e9e1204ddb70-policysync\") pod \"calico-node-wt5nn\" (UID: \"71e58cd2-1a4e-4c22-b703-e9e1204ddb70\") " pod="calico-system/calico-node-wt5nn" Jan 17 12:02:00.959080 containerd[2073]: time="2025-01-17T12:02:00.958579135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b7f75cd6c-jfjv9,Uid:b0891d59-6798-49fe-8d90-7dfed41f5d9a,Namespace:calico-system,Attempt:0,}" Jan 17 12:02:01.025739 kubelet[3567]: E0117 12:02:01.024022 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.025739 kubelet[3567]: W0117 12:02:01.024064 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.025739 kubelet[3567]: E0117 12:02:01.024104 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.038911 kubelet[3567]: I0117 12:02:01.034886 3567 topology_manager.go:215] "Topology Admit Handler" podUID="4460c615-c8f8-4ea0-acd3-2a02aa651b6c" podNamespace="calico-system" podName="csi-node-driver-q7b84" Jan 17 12:02:01.045280 kubelet[3567]: E0117 12:02:01.042434 3567 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7b84" podUID="4460c615-c8f8-4ea0-acd3-2a02aa651b6c" Jan 17 12:02:01.082175 kubelet[3567]: E0117 12:02:01.080869 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.082175 kubelet[3567]: W0117 12:02:01.080925 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.082175 kubelet[3567]: E0117 12:02:01.080965 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.090419 kubelet[3567]: E0117 12:02:01.090238 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.090419 kubelet[3567]: W0117 12:02:01.090280 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.090419 kubelet[3567]: E0117 12:02:01.090333 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:01.107002 kubelet[3567]: E0117 12:02:01.105898 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.107002 kubelet[3567]: W0117 12:02:01.105942 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.110095 kubelet[3567]: E0117 12:02:01.107960 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.110095 kubelet[3567]: W0117 12:02:01.107998 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.110095 kubelet[3567]: E0117 12:02:01.108038 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.110095 kubelet[3567]: E0117 12:02:01.108085 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.112306 kubelet[3567]: E0117 12:02:01.112141 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.112306 kubelet[3567]: W0117 12:02:01.112176 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.112306 kubelet[3567]: E0117 12:02:01.112214 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.116031 kubelet[3567]: E0117 12:02:01.115857 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.116031 kubelet[3567]: W0117 12:02:01.115896 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.116031 kubelet[3567]: E0117 12:02:01.116023 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.118176 kubelet[3567]: E0117 12:02:01.118130 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.118176 kubelet[3567]: W0117 12:02:01.118171 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.118953 kubelet[3567]: E0117 12:02:01.118209 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:01.121226 kubelet[3567]: E0117 12:02:01.120924 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.121226 kubelet[3567]: W0117 12:02:01.120962 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.121226 kubelet[3567]: E0117 12:02:01.121000 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.125285 kubelet[3567]: E0117 12:02:01.124709 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.125285 kubelet[3567]: W0117 12:02:01.124746 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.125285 kubelet[3567]: E0117 12:02:01.124856 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.126986 kubelet[3567]: E0117 12:02:01.125590 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.126986 kubelet[3567]: W0117 12:02:01.125625 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.126986 kubelet[3567]: E0117 12:02:01.125663 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.132880 kubelet[3567]: E0117 12:02:01.132023 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.132880 kubelet[3567]: W0117 12:02:01.132062 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.132880 kubelet[3567]: E0117 12:02:01.132106 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.135117 kubelet[3567]: E0117 12:02:01.134714 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.135117 kubelet[3567]: W0117 12:02:01.134751 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.135117 kubelet[3567]: E0117 12:02:01.134878 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:01.137234 containerd[2073]: time="2025-01-17T12:02:01.135027280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:01.141201 containerd[2073]: time="2025-01-17T12:02:01.138286840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:01.141201 containerd[2073]: time="2025-01-17T12:02:01.138364384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:01.141201 containerd[2073]: time="2025-01-17T12:02:01.138591460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:01.141454 kubelet[3567]: E0117 12:02:01.140956 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.141454 kubelet[3567]: W0117 12:02:01.140986 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.141454 kubelet[3567]: E0117 12:02:01.141027 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.142651 kubelet[3567]: E0117 12:02:01.142160 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.142651 kubelet[3567]: W0117 12:02:01.142196 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.142651 kubelet[3567]: E0117 12:02:01.142531 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.145948 kubelet[3567]: E0117 12:02:01.145911 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.146334 kubelet[3567]: W0117 12:02:01.146099 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.146334 kubelet[3567]: E0117 12:02:01.146142 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.148698 kubelet[3567]: E0117 12:02:01.148541 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.148698 kubelet[3567]: W0117 12:02:01.148576 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.148698 kubelet[3567]: E0117 12:02:01.148612 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:01.152404 kubelet[3567]: E0117 12:02:01.151328 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.152404 kubelet[3567]: W0117 12:02:01.151364 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.152404 kubelet[3567]: E0117 12:02:01.151401 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.155747 kubelet[3567]: E0117 12:02:01.154339 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.155747 kubelet[3567]: W0117 12:02:01.154374 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.155747 kubelet[3567]: E0117 12:02:01.154410 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.159795 kubelet[3567]: E0117 12:02:01.158011 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.159795 kubelet[3567]: W0117 12:02:01.158045 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.159795 kubelet[3567]: E0117 12:02:01.158083 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.162813 kubelet[3567]: E0117 12:02:01.160216 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.162813 kubelet[3567]: W0117 12:02:01.160248 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.162813 kubelet[3567]: E0117 12:02:01.160283 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.164958 kubelet[3567]: E0117 12:02:01.164890 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.164958 kubelet[3567]: W0117 12:02:01.164945 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.165203 kubelet[3567]: E0117 12:02:01.164984 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:01.166588 kubelet[3567]: E0117 12:02:01.166538 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.166588 kubelet[3567]: W0117 12:02:01.166576 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.167830 kubelet[3567]: E0117 12:02:01.166616 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.217458 kubelet[3567]: E0117 12:02:01.217405 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.218465 kubelet[3567]: W0117 12:02:01.217451 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.218465 kubelet[3567]: E0117 12:02:01.217512 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.218465 kubelet[3567]: I0117 12:02:01.217582 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4460c615-c8f8-4ea0-acd3-2a02aa651b6c-varrun\") pod \"csi-node-driver-q7b84\" (UID: \"4460c615-c8f8-4ea0-acd3-2a02aa651b6c\") " pod="calico-system/csi-node-driver-q7b84" Jan 17 12:02:01.221039 kubelet[3567]: E0117 12:02:01.220675 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.221039 kubelet[3567]: W0117 12:02:01.220742 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.221039 kubelet[3567]: E0117 12:02:01.220833 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.222213 kubelet[3567]: I0117 12:02:01.221203 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4460c615-c8f8-4ea0-acd3-2a02aa651b6c-socket-dir\") pod \"csi-node-driver-q7b84\" (UID: \"4460c615-c8f8-4ea0-acd3-2a02aa651b6c\") " pod="calico-system/csi-node-driver-q7b84" Jan 17 12:02:01.222213 kubelet[3567]: E0117 12:02:01.221579 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.222213 kubelet[3567]: W0117 12:02:01.221600 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.222213 kubelet[3567]: E0117 12:02:01.221651 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:01.222524 kubelet[3567]: E0117 12:02:01.222405 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.222524 kubelet[3567]: W0117 12:02:01.222435 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.223819 kubelet[3567]: E0117 12:02:01.222529 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.224811 kubelet[3567]: E0117 12:02:01.223645 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.224811 kubelet[3567]: W0117 12:02:01.224242 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.224811 kubelet[3567]: E0117 12:02:01.224320 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.224811 kubelet[3567]: I0117 12:02:01.224385 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxts9\" (UniqueName: \"kubernetes.io/projected/4460c615-c8f8-4ea0-acd3-2a02aa651b6c-kube-api-access-vxts9\") pod \"csi-node-driver-q7b84\" (UID: \"4460c615-c8f8-4ea0-acd3-2a02aa651b6c\") " pod="calico-system/csi-node-driver-q7b84" Jan 17 12:02:01.226960 kubelet[3567]: E0117 12:02:01.225681 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.226960 kubelet[3567]: W0117 12:02:01.225720 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.226960 kubelet[3567]: E0117 12:02:01.225867 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.226960 kubelet[3567]: I0117 12:02:01.225944 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4460c615-c8f8-4ea0-acd3-2a02aa651b6c-kubelet-dir\") pod \"csi-node-driver-q7b84\" (UID: \"4460c615-c8f8-4ea0-acd3-2a02aa651b6c\") " pod="calico-system/csi-node-driver-q7b84" Jan 17 12:02:01.226960 kubelet[3567]: E0117 12:02:01.226545 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.226960 kubelet[3567]: W0117 12:02:01.226568 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.226960 kubelet[3567]: E0117 12:02:01.226937 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:01.228140 kubelet[3567]: E0117 12:02:01.228002 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.228140 kubelet[3567]: W0117 12:02:01.228038 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.228723 kubelet[3567]: E0117 12:02:01.228351 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.229254 kubelet[3567]: E0117 12:02:01.229206 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.229655 kubelet[3567]: W0117 12:02:01.229241 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.232954 kubelet[3567]: E0117 12:02:01.231898 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.232954 kubelet[3567]: I0117 12:02:01.231969 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4460c615-c8f8-4ea0-acd3-2a02aa651b6c-registration-dir\") pod \"csi-node-driver-q7b84\" (UID: \"4460c615-c8f8-4ea0-acd3-2a02aa651b6c\") " pod="calico-system/csi-node-driver-q7b84" Jan 17 12:02:01.234239 kubelet[3567]: E0117 12:02:01.234187 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.234239 kubelet[3567]: W0117 12:02:01.234226 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.234239 kubelet[3567]: E0117 12:02:01.234505 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.237614 kubelet[3567]: E0117 12:02:01.237082 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.237614 kubelet[3567]: W0117 12:02:01.237123 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.237614 kubelet[3567]: E0117 12:02:01.237162 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:01.243624 kubelet[3567]: E0117 12:02:01.242933 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.243779 kubelet[3567]: W0117 12:02:01.243615 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.243779 kubelet[3567]: E0117 12:02:01.243671 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.246385 kubelet[3567]: E0117 12:02:01.246234 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.246385 kubelet[3567]: W0117 12:02:01.246271 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.246385 kubelet[3567]: E0117 12:02:01.246310 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.248847 kubelet[3567]: E0117 12:02:01.247934 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.248847 kubelet[3567]: W0117 12:02:01.247973 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.248847 kubelet[3567]: E0117 12:02:01.248011 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.251403 kubelet[3567]: E0117 12:02:01.251063 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.251403 kubelet[3567]: W0117 12:02:01.251096 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.251403 kubelet[3567]: E0117 12:02:01.251133 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:01.337813 containerd[2073]: time="2025-01-17T12:02:01.334078853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wt5nn,Uid:71e58cd2-1a4e-4c22-b703-e9e1204ddb70,Namespace:calico-system,Attempt:0,}" Jan 17 12:02:01.354410 kubelet[3567]: E0117 12:02:01.354189 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.354410 kubelet[3567]: W0117 12:02:01.354234 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.354410 kubelet[3567]: E0117 12:02:01.354274 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.356880 kubelet[3567]: E0117 12:02:01.354883 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.356880 kubelet[3567]: W0117 12:02:01.354911 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.356880 kubelet[3567]: E0117 12:02:01.354945 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.356880 kubelet[3567]: E0117 12:02:01.355364 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.356880 kubelet[3567]: W0117 12:02:01.355441 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.356880 kubelet[3567]: E0117 12:02:01.355529 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.356880 kubelet[3567]: E0117 12:02:01.356393 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.356880 kubelet[3567]: W0117 12:02:01.356447 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.356880 kubelet[3567]: E0117 12:02:01.356489 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:01.357469 kubelet[3567]: E0117 12:02:01.357149 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.357469 kubelet[3567]: W0117 12:02:01.357202 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.357469 kubelet[3567]: E0117 12:02:01.357241 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.362027 kubelet[3567]: E0117 12:02:01.357982 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.362027 kubelet[3567]: W0117 12:02:01.358021 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.362027 kubelet[3567]: E0117 12:02:01.358056 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.362027 kubelet[3567]: E0117 12:02:01.358971 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.362027 kubelet[3567]: W0117 12:02:01.358999 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.362027 kubelet[3567]: E0117 12:02:01.359034 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.362027 kubelet[3567]: E0117 12:02:01.359561 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.362027 kubelet[3567]: W0117 12:02:01.359584 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.362027 kubelet[3567]: E0117 12:02:01.359613 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.362027 kubelet[3567]: E0117 12:02:01.361171 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.367285 kubelet[3567]: W0117 12:02:01.361200 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.367285 kubelet[3567]: E0117 12:02:01.361235 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:01.367285 kubelet[3567]: E0117 12:02:01.366159 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.367285 kubelet[3567]: W0117 12:02:01.366189 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.367285 kubelet[3567]: E0117 12:02:01.366224 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.370749 kubelet[3567]: E0117 12:02:01.367729 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.370749 kubelet[3567]: W0117 12:02:01.367757 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.370749 kubelet[3567]: E0117 12:02:01.367831 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.370749 kubelet[3567]: E0117 12:02:01.368908 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.370749 kubelet[3567]: W0117 12:02:01.369025 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.370749 kubelet[3567]: E0117 12:02:01.369075 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.370749 kubelet[3567]: E0117 12:02:01.370074 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.370749 kubelet[3567]: W0117 12:02:01.370103 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.370749 kubelet[3567]: E0117 12:02:01.370156 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.370749 kubelet[3567]: E0117 12:02:01.370696 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.372802 kubelet[3567]: W0117 12:02:01.370726 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.372802 kubelet[3567]: E0117 12:02:01.370757 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:01.372802 kubelet[3567]: E0117 12:02:01.373931 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.372802 kubelet[3567]: W0117 12:02:01.373957 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.372802 kubelet[3567]: E0117 12:02:01.373991 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.375755 kubelet[3567]: E0117 12:02:01.375514 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.375755 kubelet[3567]: W0117 12:02:01.375540 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.375755 kubelet[3567]: E0117 12:02:01.375629 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.381823 kubelet[3567]: E0117 12:02:01.380894 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.381823 kubelet[3567]: W0117 12:02:01.380959 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.381823 kubelet[3567]: E0117 12:02:01.381188 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.385429 kubelet[3567]: E0117 12:02:01.384120 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.385429 kubelet[3567]: W0117 12:02:01.384156 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.385429 kubelet[3567]: E0117 12:02:01.384258 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.385429 kubelet[3567]: E0117 12:02:01.385170 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.385429 kubelet[3567]: W0117 12:02:01.385288 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.385429 kubelet[3567]: E0117 12:02:01.385394 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:01.387179 kubelet[3567]: E0117 12:02:01.386015 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.387179 kubelet[3567]: W0117 12:02:01.386063 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.387179 kubelet[3567]: E0117 12:02:01.386113 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.392030 kubelet[3567]: E0117 12:02:01.391640 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.392030 kubelet[3567]: W0117 12:02:01.391688 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.392030 kubelet[3567]: E0117 12:02:01.391738 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.395087 kubelet[3567]: E0117 12:02:01.394719 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.395087 kubelet[3567]: W0117 12:02:01.394791 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.395087 kubelet[3567]: E0117 12:02:01.394998 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.396597 kubelet[3567]: E0117 12:02:01.396466 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.396597 kubelet[3567]: W0117 12:02:01.396520 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.396597 kubelet[3567]: E0117 12:02:01.396617 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.397735 kubelet[3567]: E0117 12:02:01.397345 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.397735 kubelet[3567]: W0117 12:02:01.397489 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.397735 kubelet[3567]: E0117 12:02:01.397639 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:01.400064 kubelet[3567]: E0117 12:02:01.399366 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.400821 kubelet[3567]: W0117 12:02:01.400414 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.400821 kubelet[3567]: E0117 12:02:01.400478 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.424249 containerd[2073]: time="2025-01-17T12:02:01.424028525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b7f75cd6c-jfjv9,Uid:b0891d59-6798-49fe-8d90-7dfed41f5d9a,Namespace:calico-system,Attempt:0,} returns sandbox id \"42df9b08ac68e7aad57dc4d367abd191f83398817f107108a5d4ff3c44785128\"" Jan 17 12:02:01.439364 kubelet[3567]: E0117 12:02:01.438649 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:01.441069 kubelet[3567]: W0117 12:02:01.439332 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:01.441445 kubelet[3567]: E0117 12:02:01.440491 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:01.442108 containerd[2073]: time="2025-01-17T12:02:01.442057889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 17 12:02:01.471287 containerd[2073]: time="2025-01-17T12:02:01.470689097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:01.473390 containerd[2073]: time="2025-01-17T12:02:01.473288621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:01.474100 containerd[2073]: time="2025-01-17T12:02:01.473862965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:01.475360 containerd[2073]: time="2025-01-17T12:02:01.474955973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:01.624464 containerd[2073]: time="2025-01-17T12:02:01.623618334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wt5nn,Uid:71e58cd2-1a4e-4c22-b703-e9e1204ddb70,Namespace:calico-system,Attempt:0,} returns sandbox id \"9ce35e0cd8078bc8c35982d216625b65f8f3a5465c4555f9bb9026da681f09ee\"" Jan 17 12:02:03.052978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3081768363.mount: Deactivated successfully. 
Jan 17 12:02:03.193986 kubelet[3567]: E0117 12:02:03.193934 3567 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7b84" podUID="4460c615-c8f8-4ea0-acd3-2a02aa651b6c"
Jan 17 12:02:04.008856 containerd[2073]: time="2025-01-17T12:02:04.008751402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:02:04.010414 containerd[2073]: time="2025-01-17T12:02:04.010341462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Jan 17 12:02:04.011661 containerd[2073]: time="2025-01-17T12:02:04.011580030Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:02:04.015582 containerd[2073]: time="2025-01-17T12:02:04.015475146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:02:04.018140 containerd[2073]: time="2025-01-17T12:02:04.017056182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.574583177s"
Jan 17 12:02:04.018140 containerd[2073]: time="2025-01-17T12:02:04.017107722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Jan 17 12:02:04.018817 containerd[2073]: time="2025-01-17T12:02:04.018699006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 17 12:02:04.058714 containerd[2073]: time="2025-01-17T12:02:04.058630722Z" level=info msg="CreateContainer within sandbox \"42df9b08ac68e7aad57dc4d367abd191f83398817f107108a5d4ff3c44785128\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 17 12:02:04.086148 containerd[2073]: time="2025-01-17T12:02:04.086001942Z" level=info msg="CreateContainer within sandbox \"42df9b08ac68e7aad57dc4d367abd191f83398817f107108a5d4ff3c44785128\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2289d8a00f1085fa523c7353029bae0638dea1768d65490709a316124ec726b2\""
Jan 17 12:02:04.088126 containerd[2073]: time="2025-01-17T12:02:04.087326622Z" level=info msg="StartContainer for \"2289d8a00f1085fa523c7353029bae0638dea1768d65490709a316124ec726b2\""
Jan 17 12:02:04.224061 containerd[2073]: time="2025-01-17T12:02:04.223961119Z" level=info msg="StartContainer for \"2289d8a00f1085fa523c7353029bae0638dea1768d65490709a316124ec726b2\" returns successfully"
Jan 17 12:02:04.496233 kubelet[3567]: E0117 12:02:04.495828 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:02:04.496233 kubelet[3567]: W0117 12:02:04.495869 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:02:04.496233 kubelet[3567]: E0117 12:02:04.495908 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
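The "Pulled image" entry for typha a few lines up quotes both the payload size and the elapsed pull time, so the effective registry throughput falls straight out of the two quoted fields; a quick sketch using only values copied from that entry:

    # Effective pull throughput for ghcr.io/flatcar/calico/typha:v3.29.1,
    # from containerd's 'size "29231162" in 2.574583177s' report above.
    size_bytes = 29231162      # repo size quoted in the log
    elapsed_s = 2.574583177    # pull duration quoted in the log
    print(f"~{size_bytes / elapsed_s / 1e6:.1f} MB/s")  # ~11.4 MB/s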
Jan 17 12:02:05.194610 kubelet[3567]: E0117 12:02:05.194538 3567 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7b84" podUID="4460c615-c8f8-4ea0-acd3-2a02aa651b6c"
Jan 17 12:02:05.422620 kubelet[3567]: I0117 12:02:05.422565 3567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 17 12:02:05.514169 kubelet[3567]: E0117 12:02:05.513966 3567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:02:05.515736 kubelet[3567]: W0117 12:02:05.514812 3567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:02:05.515736 kubelet[3567]: E0117 12:02:05.514882 3567 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
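The driver-call/plugins triplet recurs like this on every probe cycle with only the timestamps changing, which buries one-off events such as the pod_workers and prober_manager entries above. When triaging a journal like this one, collapsing the repeats first makes the real state transitions visible; a small sketch, assuming the journal has been exported as text with one entry per line:

    # Collapse repeated kubelet/containerd messages so one-off events stand out.
    # Reads a journalctl-style export on stdin, one entry per line.
    import re
    import sys
    from collections import Counter

    # Strip the leading 'Jan 17 12:02:04.495828 ' wall-clock prefix and any
    # klog 'E0117 12:02:04.495828 3567 ' header embedded in the message.
    prefix = re.compile(r"^\w{3} \d{2} [\d:.]+\s+")
    klog = re.compile(r"[EWI]\d{4} [\d:.]+\s+\d+\s+")

    counts = Counter()
    for line in sys.stdin:
        counts[klog.sub("", prefix.sub("", line.strip()))] += 1

    for msg, n in counts.most_common(10):
        print(f"{n:6d}x {msg[:120]}")

Fed the raw journal on stdin, this prints the ten most frequent messages with their repeat counts, which makes bursts like the one above easy to quantify.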
Jan 17 12:02:05.599097 containerd[2073]: time="2025-01-17T12:02:05.598665562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:02:05.601341 containerd[2073]: time="2025-01-17T12:02:05.601268302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811"
Jan 17 12:02:05.603496 containerd[2073]: time="2025-01-17T12:02:05.603413902Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:02:05.610707 containerd[2073]: time="2025-01-17T12:02:05.609918490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:02:05.611718 containerd[2073]: time="2025-01-17T12:02:05.611506954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.592749976s"
Jan 17 12:02:05.611718 containerd[2073]: time="2025-01-17T12:02:05.611575162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Jan 17 12:02:05.616089 containerd[2073]: time="2025-01-17T12:02:05.616030618Z" level=info msg="CreateContainer within sandbox \"9ce35e0cd8078bc8c35982d216625b65f8f3a5465c4555f9bb9026da681f09ee\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 17 12:02:05.635609 containerd[2073]: time="2025-01-17T12:02:05.635544562Z" level=info msg="CreateContainer within sandbox \"9ce35e0cd8078bc8c35982d216625b65f8f3a5465c4555f9bb9026da681f09ee\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"edf14d76fc0654f4f5800c9f7e59bf25e6a371d3f9522cef8e70e2a1c2a0caa9\""
Jan 17 12:02:05.636328 containerd[2073]: time="2025-01-17T12:02:05.636263386Z" level=info msg="StartContainer for \"edf14d76fc0654f4f5800c9f7e59bf25e6a371d3f9522cef8e70e2a1c2a0caa9\""
Jan 17 12:02:05.763282 containerd[2073]: time="2025-01-17T12:02:05.763199879Z" level=info msg="StartContainer for \"edf14d76fc0654f4f5800c9f7e59bf25e6a371d3f9522cef8e70e2a1c2a0caa9\" returns successfully"
Jan 17 12:02:05.834139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edf14d76fc0654f4f5800c9f7e59bf25e6a371d3f9522cef8e70e2a1c2a0caa9-rootfs.mount: Deactivated successfully.
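The flexvol-driver container that just ran is the step that installs the missing uds binary into the kubelet's FlexVolume plugin directory, which should quiet the nodeagent~uds probe failures seen earlier. A trivial check an operator might run on the node afterwards (the path is copied from the kubelet errors; the check itself is illustrative):

    # Confirm the FlexVolume driver installed by the flexvol-driver container
    # is present and executable where the kubelet expects to find it.
    import os

    driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
    if os.path.isfile(driver) and os.access(driver, os.X_OK):
        print(f"ok: {driver} is installed and executable")
    else:
        print(f"missing: {driver} (kubelet will keep logging driver-call failures)")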
Jan 17 12:02:06.047892 containerd[2073]: time="2025-01-17T12:02:06.047757092Z" level=info msg="shim disconnected" id=edf14d76fc0654f4f5800c9f7e59bf25e6a371d3f9522cef8e70e2a1c2a0caa9 namespace=k8s.io
Jan 17 12:02:06.048503 containerd[2073]: time="2025-01-17T12:02:06.048161876Z" level=warning msg="cleaning up after shim disconnected" id=edf14d76fc0654f4f5800c9f7e59bf25e6a371d3f9522cef8e70e2a1c2a0caa9 namespace=k8s.io
Jan 17 12:02:06.048503 containerd[2073]: time="2025-01-17T12:02:06.048190928Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:02:06.434147 containerd[2073]: time="2025-01-17T12:02:06.434067790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 17 12:02:06.464019 kubelet[3567]: I0117 12:02:06.463304 3567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6b7f75cd6c-jfjv9" podStartSLOduration=3.8843854650000003 podStartE2EDuration="6.46195303s" podCreationTimestamp="2025-01-17 12:02:00 +0000 UTC" firstStartedPulling="2025-01-17 12:02:01.439950713 +0000 UTC m=+27.553613262" lastFinishedPulling="2025-01-17 12:02:04.017518278 +0000 UTC m=+30.131180827" observedRunningTime="2025-01-17 12:02:04.453489932 +0000 UTC m=+30.567152649" watchObservedRunningTime="2025-01-17 12:02:06.46195303 +0000 UTC m=+32.575615591"
Jan 17 12:02:07.194359 kubelet[3567]: E0117 12:02:07.194299 3567 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7b84" podUID="4460c615-c8f8-4ea0-acd3-2a02aa651b6c"
Jan 17 12:02:09.195080 kubelet[3567]: E0117 12:02:09.194420 3567 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7b84" podUID="4460c615-c8f8-4ea0-acd3-2a02aa651b6c"
Jan 17 12:02:11.195884 kubelet[3567]: E0117 12:02:11.194483 3567 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7b84" podUID="4460c615-c8f8-4ea0-acd3-2a02aa651b6c"
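The pod_startup_latency_tracker entry above carries its own arithmetic: podStartSLOduration is the end-to-end startup time minus the window spent pulling images. Reproducing it from the timestamps quoted in that entry (values copied from the log, truncated to microseconds):

    # Reproduce podStartSLOduration for calico-typha-6b7f75cd6c-jfjv9 from the
    # timestamps quoted in the pod_startup_latency_tracker entry.
    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    created = datetime.strptime("2025-01-17 12:02:00.000000", fmt)    # podCreationTimestamp
    pull_from = datetime.strptime("2025-01-17 12:02:01.439950", fmt)  # firstStartedPulling
    pull_to = datetime.strptime("2025-01-17 12:02:04.017518", fmt)    # lastFinishedPulling
    running = datetime.strptime("2025-01-17 12:02:06.461953", fmt)    # watchObservedRunningTime

    e2e = (running - created).total_seconds()     # podStartE2EDuration, ~6.462s
    pull = (pull_to - pull_from).total_seconds()  # image-pull window, ~2.578s
    print(f"SLO duration ~= {e2e - pull:.3f}s")   # ~3.884s, matching the log

So the 6.462s end-to-end figure minus the 2.578s pull window yields the 3.884s SLO duration the kubelet reports.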
\"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.867679184s" Jan 17 12:02:11.301947 containerd[2073]: time="2025-01-17T12:02:11.301885022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 17 12:02:11.308370 containerd[2073]: time="2025-01-17T12:02:11.308321330Z" level=info msg="CreateContainer within sandbox \"9ce35e0cd8078bc8c35982d216625b65f8f3a5465c4555f9bb9026da681f09ee\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:02:11.342067 containerd[2073]: time="2025-01-17T12:02:11.341985878Z" level=info msg="CreateContainer within sandbox \"9ce35e0cd8078bc8c35982d216625b65f8f3a5465c4555f9bb9026da681f09ee\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"16c14c2339ae45131d67c535f145bb27caee9fd7ee962b3495e7e3758804c133\"" Jan 17 12:02:11.344802 containerd[2073]: time="2025-01-17T12:02:11.343140698Z" level=info msg="StartContainer for \"16c14c2339ae45131d67c535f145bb27caee9fd7ee962b3495e7e3758804c133\"" Jan 17 12:02:11.454475 containerd[2073]: time="2025-01-17T12:02:11.454315347Z" level=info msg="StartContainer for \"16c14c2339ae45131d67c535f145bb27caee9fd7ee962b3495e7e3758804c133\" returns successfully" Jan 17 12:02:12.301805 containerd[2073]: time="2025-01-17T12:02:12.300437799Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:02:12.348107 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16c14c2339ae45131d67c535f145bb27caee9fd7ee962b3495e7e3758804c133-rootfs.mount: Deactivated successfully. 
Jan 17 12:02:12.392403 kubelet[3567]: I0117 12:02:12.390885 3567 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:02:12.442899 kubelet[3567]: I0117 12:02:12.442335 3567 topology_manager.go:215] "Topology Admit Handler" podUID="12fcbd63-a9de-44e8-887a-4b31951c12cf" podNamespace="kube-system" podName="coredns-76f75df574-qghwp" Jan 17 12:02:12.459240 kubelet[3567]: I0117 12:02:12.458515 3567 topology_manager.go:215] "Topology Admit Handler" podUID="732ab0f9-098f-4bef-81b9-1f8a3bd9d354" podNamespace="kube-system" podName="coredns-76f75df574-lgfzb" Jan 17 12:02:12.476723 kubelet[3567]: I0117 12:02:12.474436 3567 topology_manager.go:215] "Topology Admit Handler" podUID="cc92581e-93a8-44ff-8991-2aa4dd9c2b83" podNamespace="calico-system" podName="calico-kube-controllers-598d996f57-4zkw6" Jan 17 12:02:12.476723 kubelet[3567]: I0117 12:02:12.474722 3567 topology_manager.go:215] "Topology Admit Handler" podUID="6079291c-981f-420b-bb70-d9a6b4850e0e" podNamespace="calico-apiserver" podName="calico-apiserver-679485b9d8-dxmxx" Jan 17 12:02:12.520359 kubelet[3567]: I0117 12:02:12.519486 3567 topology_manager.go:215] "Topology Admit Handler" podUID="f5a7d8e3-a980-4aa2-824f-1fccea03f32f" podNamespace="calico-apiserver" podName="calico-apiserver-679485b9d8-rm8d6" Jan 17 12:02:12.549713 kubelet[3567]: I0117 12:02:12.549652 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgl78\" (UniqueName: \"kubernetes.io/projected/732ab0f9-098f-4bef-81b9-1f8a3bd9d354-kube-api-access-xgl78\") pod \"coredns-76f75df574-lgfzb\" (UID: \"732ab0f9-098f-4bef-81b9-1f8a3bd9d354\") " pod="kube-system/coredns-76f75df574-lgfzb" Jan 17 12:02:12.550141 kubelet[3567]: I0117 12:02:12.549744 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm2q7\" (UniqueName: \"kubernetes.io/projected/cc92581e-93a8-44ff-8991-2aa4dd9c2b83-kube-api-access-wm2q7\") pod \"calico-kube-controllers-598d996f57-4zkw6\" (UID: \"cc92581e-93a8-44ff-8991-2aa4dd9c2b83\") " pod="calico-system/calico-kube-controllers-598d996f57-4zkw6" Jan 17 12:02:12.550141 kubelet[3567]: I0117 12:02:12.549825 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jtxn\" (UniqueName: \"kubernetes.io/projected/6079291c-981f-420b-bb70-d9a6b4850e0e-kube-api-access-8jtxn\") pod \"calico-apiserver-679485b9d8-dxmxx\" (UID: \"6079291c-981f-420b-bb70-d9a6b4850e0e\") " pod="calico-apiserver/calico-apiserver-679485b9d8-dxmxx" Jan 17 12:02:12.550141 kubelet[3567]: I0117 12:02:12.549909 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc92581e-93a8-44ff-8991-2aa4dd9c2b83-tigera-ca-bundle\") pod \"calico-kube-controllers-598d996f57-4zkw6\" (UID: \"cc92581e-93a8-44ff-8991-2aa4dd9c2b83\") " pod="calico-system/calico-kube-controllers-598d996f57-4zkw6" Jan 17 12:02:12.550141 kubelet[3567]: I0117 12:02:12.549964 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12fcbd63-a9de-44e8-887a-4b31951c12cf-config-volume\") pod \"coredns-76f75df574-qghwp\" (UID: \"12fcbd63-a9de-44e8-887a-4b31951c12cf\") " pod="kube-system/coredns-76f75df574-qghwp" Jan 17 12:02:12.550394 kubelet[3567]: I0117 12:02:12.550188 3567 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6079291c-981f-420b-bb70-d9a6b4850e0e-calico-apiserver-certs\") pod \"calico-apiserver-679485b9d8-dxmxx\" (UID: \"6079291c-981f-420b-bb70-d9a6b4850e0e\") " pod="calico-apiserver/calico-apiserver-679485b9d8-dxmxx" Jan 17 12:02:12.550394 kubelet[3567]: I0117 12:02:12.550294 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxkc9\" (UniqueName: \"kubernetes.io/projected/12fcbd63-a9de-44e8-887a-4b31951c12cf-kube-api-access-cxkc9\") pod \"coredns-76f75df574-qghwp\" (UID: \"12fcbd63-a9de-44e8-887a-4b31951c12cf\") " pod="kube-system/coredns-76f75df574-qghwp" Jan 17 12:02:12.550394 kubelet[3567]: I0117 12:02:12.550349 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/732ab0f9-098f-4bef-81b9-1f8a3bd9d354-config-volume\") pod \"coredns-76f75df574-lgfzb\" (UID: \"732ab0f9-098f-4bef-81b9-1f8a3bd9d354\") " pod="kube-system/coredns-76f75df574-lgfzb" Jan 17 12:02:12.656917 kubelet[3567]: I0117 12:02:12.650830 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f5a7d8e3-a980-4aa2-824f-1fccea03f32f-calico-apiserver-certs\") pod \"calico-apiserver-679485b9d8-rm8d6\" (UID: \"f5a7d8e3-a980-4aa2-824f-1fccea03f32f\") " pod="calico-apiserver/calico-apiserver-679485b9d8-rm8d6" Jan 17 12:02:12.656917 kubelet[3567]: I0117 12:02:12.650962 3567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwqlx\" (UniqueName: \"kubernetes.io/projected/f5a7d8e3-a980-4aa2-824f-1fccea03f32f-kube-api-access-nwqlx\") pod \"calico-apiserver-679485b9d8-rm8d6\" (UID: \"f5a7d8e3-a980-4aa2-824f-1fccea03f32f\") " pod="calico-apiserver/calico-apiserver-679485b9d8-rm8d6" Jan 17 12:02:12.791804 containerd[2073]: time="2025-01-17T12:02:12.790662834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qghwp,Uid:12fcbd63-a9de-44e8-887a-4b31951c12cf,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:12.806301 containerd[2073]: time="2025-01-17T12:02:12.806222970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-lgfzb,Uid:732ab0f9-098f-4bef-81b9-1f8a3bd9d354,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:12.820336 containerd[2073]: time="2025-01-17T12:02:12.818640114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598d996f57-4zkw6,Uid:cc92581e-93a8-44ff-8991-2aa4dd9c2b83,Namespace:calico-system,Attempt:0,}" Jan 17 12:02:12.820336 containerd[2073]: time="2025-01-17T12:02:12.819064506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679485b9d8-dxmxx,Uid:6079291c-981f-420b-bb70-d9a6b4850e0e,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:02:12.829204 containerd[2073]: time="2025-01-17T12:02:12.829130310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679485b9d8-rm8d6,Uid:f5a7d8e3-a980-4aa2-824f-1fccea03f32f,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:02:13.202809 containerd[2073]: time="2025-01-17T12:02:13.200968312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q7b84,Uid:4460c615-c8f8-4ea0-acd3-2a02aa651b6c,Namespace:calico-system,Attempt:0,}" Jan 17 12:02:13.294300 containerd[2073]: 
time="2025-01-17T12:02:13.294192520Z" level=info msg="shim disconnected" id=16c14c2339ae45131d67c535f145bb27caee9fd7ee962b3495e7e3758804c133 namespace=k8s.io Jan 17 12:02:13.294300 containerd[2073]: time="2025-01-17T12:02:13.294274180Z" level=warning msg="cleaning up after shim disconnected" id=16c14c2339ae45131d67c535f145bb27caee9fd7ee962b3495e7e3758804c133 namespace=k8s.io Jan 17 12:02:13.294300 containerd[2073]: time="2025-01-17T12:02:13.294302752Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:02:13.325198 containerd[2073]: time="2025-01-17T12:02:13.325071172Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:02:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:02:13.565039 containerd[2073]: time="2025-01-17T12:02:13.564983430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:02:13.745114 containerd[2073]: time="2025-01-17T12:02:13.745044978Z" level=error msg="Failed to destroy network for sandbox \"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.751283 containerd[2073]: time="2025-01-17T12:02:13.751215066Z" level=error msg="encountered an error cleaning up failed sandbox \"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.751818 containerd[2073]: time="2025-01-17T12:02:13.751580658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qghwp,Uid:12fcbd63-a9de-44e8-887a-4b31951c12cf,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.753952 kubelet[3567]: E0117 12:02:13.753724 3567 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.755463 kubelet[3567]: E0117 12:02:13.755131 3567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qghwp" Jan 17 12:02:13.755463 kubelet[3567]: E0117 12:02:13.755187 3567 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qghwp" Jan 17 12:02:13.755463 kubelet[3567]: E0117 12:02:13.755298 3567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-qghwp_kube-system(12fcbd63-a9de-44e8-887a-4b31951c12cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-qghwp_kube-system(12fcbd63-a9de-44e8-887a-4b31951c12cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qghwp" podUID="12fcbd63-a9de-44e8-887a-4b31951c12cf" Jan 17 12:02:13.754930 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6-shm.mount: Deactivated successfully. Jan 17 12:02:13.798088 containerd[2073]: time="2025-01-17T12:02:13.797043847Z" level=error msg="Failed to destroy network for sandbox \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.800475 containerd[2073]: time="2025-01-17T12:02:13.800233483Z" level=error msg="encountered an error cleaning up failed sandbox \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.800475 containerd[2073]: time="2025-01-17T12:02:13.800330311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598d996f57-4zkw6,Uid:cc92581e-93a8-44ff-8991-2aa4dd9c2b83,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.802591 kubelet[3567]: E0117 12:02:13.802214 3567 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.802591 kubelet[3567]: E0117 12:02:13.802325 3567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-598d996f57-4zkw6" Jan 17 12:02:13.802591 kubelet[3567]: E0117 12:02:13.802391 3567 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-598d996f57-4zkw6" Jan 17 12:02:13.803119 kubelet[3567]: E0117 12:02:13.802516 3567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-598d996f57-4zkw6_calico-system(cc92581e-93a8-44ff-8991-2aa4dd9c2b83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-598d996f57-4zkw6_calico-system(cc92581e-93a8-44ff-8991-2aa4dd9c2b83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-598d996f57-4zkw6" podUID="cc92581e-93a8-44ff-8991-2aa4dd9c2b83" Jan 17 12:02:13.817619 containerd[2073]: time="2025-01-17T12:02:13.817449607Z" level=error msg="Failed to destroy network for sandbox \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.820819 containerd[2073]: time="2025-01-17T12:02:13.820397995Z" level=error msg="encountered an error cleaning up failed sandbox \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.820819 containerd[2073]: time="2025-01-17T12:02:13.820499287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-lgfzb,Uid:732ab0f9-098f-4bef-81b9-1f8a3bd9d354,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.821264 kubelet[3567]: E0117 12:02:13.821144 3567 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.821264 kubelet[3567]: E0117 12:02:13.821218 3567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-lgfzb" Jan 17 12:02:13.821264 kubelet[3567]: E0117 12:02:13.821267 3567 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-lgfzb" Jan 17 12:02:13.822750 kubelet[3567]: E0117 12:02:13.821365 3567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-lgfzb_kube-system(732ab0f9-098f-4bef-81b9-1f8a3bd9d354)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-lgfzb_kube-system(732ab0f9-098f-4bef-81b9-1f8a3bd9d354)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-lgfzb" podUID="732ab0f9-098f-4bef-81b9-1f8a3bd9d354" Jan 17 12:02:13.844505 containerd[2073]: time="2025-01-17T12:02:13.844420555Z" level=error msg="Failed to destroy network for sandbox \"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.846228 containerd[2073]: time="2025-01-17T12:02:13.845894131Z" level=error msg="encountered an error cleaning up failed sandbox \"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.846392 containerd[2073]: time="2025-01-17T12:02:13.846286375Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q7b84,Uid:4460c615-c8f8-4ea0-acd3-2a02aa651b6c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.847316 kubelet[3567]: E0117 12:02:13.847073 3567 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.847316 kubelet[3567]: E0117 12:02:13.847210 3567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q7b84" Jan 17 12:02:13.847316 kubelet[3567]: E0117 12:02:13.847251 3567 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q7b84" Jan 17 12:02:13.848068 containerd[2073]: time="2025-01-17T12:02:13.847143283Z" level=error msg="Failed to destroy network for sandbox \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.848141 kubelet[3567]: E0117 12:02:13.847386 3567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q7b84_calico-system(4460c615-c8f8-4ea0-acd3-2a02aa651b6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q7b84_calico-system(4460c615-c8f8-4ea0-acd3-2a02aa651b6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q7b84" podUID="4460c615-c8f8-4ea0-acd3-2a02aa651b6c" Jan 17 12:02:13.849618 containerd[2073]: time="2025-01-17T12:02:13.848905387Z" level=error msg="encountered an error cleaning up failed sandbox \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.849618 containerd[2073]: time="2025-01-17T12:02:13.848998483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679485b9d8-dxmxx,Uid:6079291c-981f-420b-bb70-d9a6b4850e0e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.849942 kubelet[3567]: E0117 12:02:13.849330 3567 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.849942 kubelet[3567]: E0117 12:02:13.849402 3567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-679485b9d8-dxmxx" Jan 17 12:02:13.849942 kubelet[3567]: E0117 12:02:13.849447 3567 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-679485b9d8-dxmxx" Jan 17 12:02:13.850139 kubelet[3567]: E0117 12:02:13.849569 3567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-679485b9d8-dxmxx_calico-apiserver(6079291c-981f-420b-bb70-d9a6b4850e0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-679485b9d8-dxmxx_calico-apiserver(6079291c-981f-420b-bb70-d9a6b4850e0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-679485b9d8-dxmxx" podUID="6079291c-981f-420b-bb70-d9a6b4850e0e" Jan 17 12:02:13.864816 containerd[2073]: time="2025-01-17T12:02:13.864558643Z" level=error msg="Failed to destroy network for sandbox \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.865305 containerd[2073]: time="2025-01-17T12:02:13.865213939Z" level=error msg="encountered an error cleaning up failed sandbox \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.865461 containerd[2073]: time="2025-01-17T12:02:13.865334815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679485b9d8-rm8d6,Uid:f5a7d8e3-a980-4aa2-824f-1fccea03f32f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:13.865754 kubelet[3567]: E0117 12:02:13.865671 3567 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 17 12:02:13.865754 kubelet[3567]: E0117 12:02:13.865742 3567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-679485b9d8-rm8d6" Jan 17 12:02:13.865942 kubelet[3567]: E0117 12:02:13.865846 3567 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-679485b9d8-rm8d6" Jan 17 12:02:13.866007 kubelet[3567]: E0117 12:02:13.865985 3567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-679485b9d8-rm8d6_calico-apiserver(f5a7d8e3-a980-4aa2-824f-1fccea03f32f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-679485b9d8-rm8d6_calico-apiserver(f5a7d8e3-a980-4aa2-824f-1fccea03f32f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-679485b9d8-rm8d6" podUID="f5a7d8e3-a980-4aa2-824f-1fccea03f32f" Jan 17 12:02:14.346757 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a-shm.mount: Deactivated successfully. Jan 17 12:02:14.347079 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1-shm.mount: Deactivated successfully. Jan 17 12:02:14.347309 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd-shm.mount: Deactivated successfully. Jan 17 12:02:14.347580 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e-shm.mount: Deactivated successfully. Jan 17 12:02:14.347842 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346-shm.mount: Deactivated successfully. 
Jan 17 12:02:14.463502 kubelet[3567]: I0117 12:02:14.463408 3567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:02:14.558669 kubelet[3567]: I0117 12:02:14.558635 3567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Jan 17 12:02:14.561396 containerd[2073]: time="2025-01-17T12:02:14.560919618Z" level=info msg="StopPodSandbox for \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\"" Jan 17 12:02:14.561396 containerd[2073]: time="2025-01-17T12:02:14.561237486Z" level=info msg="Ensure that sandbox caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1 in task-service has been cleanup successfully" Jan 17 12:02:14.569076 kubelet[3567]: I0117 12:02:14.569019 3567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Jan 17 12:02:14.572554 containerd[2073]: time="2025-01-17T12:02:14.572179555Z" level=info msg="StopPodSandbox for \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\"" Jan 17 12:02:14.572554 containerd[2073]: time="2025-01-17T12:02:14.572507659Z" level=info msg="Ensure that sandbox eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346 in task-service has been cleanup successfully" Jan 17 12:02:14.577330 kubelet[3567]: I0117 12:02:14.577067 3567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Jan 17 12:02:14.583072 containerd[2073]: time="2025-01-17T12:02:14.582624475Z" level=info msg="StopPodSandbox for \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\"" Jan 17 12:02:14.587061 containerd[2073]: time="2025-01-17T12:02:14.586494559Z" level=info msg="Ensure that sandbox c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd in task-service has been cleanup successfully" Jan 17 12:02:14.590401 kubelet[3567]: I0117 12:02:14.590179 3567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Jan 17 12:02:14.602714 containerd[2073]: time="2025-01-17T12:02:14.602316415Z" level=info msg="StopPodSandbox for \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\"" Jan 17 12:02:14.609594 containerd[2073]: time="2025-01-17T12:02:14.608866027Z" level=info msg="Ensure that sandbox 76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e in task-service has been cleanup successfully" Jan 17 12:02:14.614201 kubelet[3567]: I0117 12:02:14.614140 3567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Jan 17 12:02:14.623554 containerd[2073]: time="2025-01-17T12:02:14.623484319Z" level=info msg="StopPodSandbox for \"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\"" Jan 17 12:02:14.625102 kubelet[3567]: I0117 12:02:14.624894 3567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Jan 17 12:02:14.626910 containerd[2073]: time="2025-01-17T12:02:14.626854759Z" level=info msg="Ensure that sandbox 9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6 in task-service has been cleanup successfully" Jan 17 12:02:14.631998 containerd[2073]: 
time="2025-01-17T12:02:14.629330263Z" level=info msg="StopPodSandbox for \"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\"" Jan 17 12:02:14.634609 containerd[2073]: time="2025-01-17T12:02:14.634383031Z" level=info msg="Ensure that sandbox fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a in task-service has been cleanup successfully" Jan 17 12:02:14.799583 containerd[2073]: time="2025-01-17T12:02:14.799504388Z" level=error msg="StopPodSandbox for \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\" failed" error="failed to destroy network for sandbox \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:14.799934 kubelet[3567]: E0117 12:02:14.799887 3567 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Jan 17 12:02:14.800676 kubelet[3567]: E0117 12:02:14.800000 3567 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346"} Jan 17 12:02:14.800676 kubelet[3567]: E0117 12:02:14.800070 3567 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6079291c-981f-420b-bb70-d9a6b4850e0e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:02:14.800676 kubelet[3567]: E0117 12:02:14.800125 3567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6079291c-981f-420b-bb70-d9a6b4850e0e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-679485b9d8-dxmxx" podUID="6079291c-981f-420b-bb70-d9a6b4850e0e" Jan 17 12:02:14.825974 containerd[2073]: time="2025-01-17T12:02:14.825414404Z" level=error msg="StopPodSandbox for \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\" failed" error="failed to destroy network for sandbox \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:14.826364 containerd[2073]: time="2025-01-17T12:02:14.826114232Z" level=error msg="StopPodSandbox for \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\" failed" error="failed to destroy network 
for sandbox \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:14.826799 kubelet[3567]: E0117 12:02:14.826401 3567 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Jan 17 12:02:14.826799 kubelet[3567]: E0117 12:02:14.826466 3567 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1"} Jan 17 12:02:14.826799 kubelet[3567]: E0117 12:02:14.826538 3567 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f5a7d8e3-a980-4aa2-824f-1fccea03f32f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:02:14.826799 kubelet[3567]: E0117 12:02:14.826595 3567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f5a7d8e3-a980-4aa2-824f-1fccea03f32f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-679485b9d8-rm8d6" podUID="f5a7d8e3-a980-4aa2-824f-1fccea03f32f" Jan 17 12:02:14.827728 kubelet[3567]: E0117 12:02:14.827100 3567 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Jan 17 12:02:14.827728 kubelet[3567]: E0117 12:02:14.827151 3567 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e"} Jan 17 12:02:14.827728 kubelet[3567]: E0117 12:02:14.827214 3567 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"732ab0f9-098f-4bef-81b9-1f8a3bd9d354\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Jan 17 12:02:14.827728 kubelet[3567]: E0117 12:02:14.827268 3567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"732ab0f9-098f-4bef-81b9-1f8a3bd9d354\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-lgfzb" podUID="732ab0f9-098f-4bef-81b9-1f8a3bd9d354" Jan 17 12:02:14.830063 containerd[2073]: time="2025-01-17T12:02:14.829841624Z" level=error msg="StopPodSandbox for \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\" failed" error="failed to destroy network for sandbox \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:14.831908 kubelet[3567]: E0117 12:02:14.831840 3567 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Jan 17 12:02:14.832084 kubelet[3567]: E0117 12:02:14.831935 3567 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd"} Jan 17 12:02:14.832084 kubelet[3567]: E0117 12:02:14.831999 3567 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc92581e-93a8-44ff-8991-2aa4dd9c2b83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:02:14.832236 kubelet[3567]: E0117 12:02:14.832085 3567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc92581e-93a8-44ff-8991-2aa4dd9c2b83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-598d996f57-4zkw6" podUID="cc92581e-93a8-44ff-8991-2aa4dd9c2b83" Jan 17 12:02:14.836374 containerd[2073]: time="2025-01-17T12:02:14.835170452Z" level=error msg="StopPodSandbox for \"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\" failed" error="failed to destroy network for sandbox \"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:14.836512 kubelet[3567]: E0117 12:02:14.835553 3567 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Jan 17 12:02:14.836512 kubelet[3567]: E0117 12:02:14.835615 3567 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6"} Jan 17 12:02:14.836512 kubelet[3567]: E0117 12:02:14.835680 3567 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"12fcbd63-a9de-44e8-887a-4b31951c12cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:02:14.836512 kubelet[3567]: E0117 12:02:14.835752 3567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"12fcbd63-a9de-44e8-887a-4b31951c12cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qghwp" podUID="12fcbd63-a9de-44e8-887a-4b31951c12cf" Jan 17 12:02:14.843212 containerd[2073]: time="2025-01-17T12:02:14.843115556Z" level=error msg="StopPodSandbox for \"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\" failed" error="failed to destroy network for sandbox \"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:14.844044 kubelet[3567]: E0117 12:02:14.843755 3567 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Jan 17 12:02:14.844044 kubelet[3567]: E0117 12:02:14.843835 3567 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a"} Jan 17 12:02:14.844044 kubelet[3567]: E0117 12:02:14.843898 3567 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4460c615-c8f8-4ea0-acd3-2a02aa651b6c\" with KillPodSandboxError: \"rpc error: code = Unknown 
desc = failed to destroy network for sandbox \\\"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:02:14.844044 kubelet[3567]: E0117 12:02:14.843956 3567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4460c615-c8f8-4ea0-acd3-2a02aa651b6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q7b84" podUID="4460c615-c8f8-4ea0-acd3-2a02aa651b6c" Jan 17 12:02:19.333512 systemd[1]: Started sshd@7-172.31.23.128:22-139.178.68.195:45816.service - OpenSSH per-connection server daemon (139.178.68.195:45816). Jan 17 12:02:19.540119 sshd[4660]: Accepted publickey for core from 139.178.68.195 port 45816 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:02:19.543922 sshd[4660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:19.555417 systemd-logind[2042]: New session 8 of user core. Jan 17 12:02:19.561564 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:02:19.954081 sshd[4660]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:19.965258 systemd[1]: sshd@7-172.31.23.128:22-139.178.68.195:45816.service: Deactivated successfully. Jan 17 12:02:19.974755 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:02:19.977294 systemd-logind[2042]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:02:19.979449 systemd-logind[2042]: Removed session 8. Jan 17 12:02:22.381753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount152564759.mount: Deactivated successfully. 
Jan 17 12:02:22.478559 containerd[2073]: time="2025-01-17T12:02:22.477856934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:22.480567 containerd[2073]: time="2025-01-17T12:02:22.480442790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 17 12:02:22.484362 containerd[2073]: time="2025-01-17T12:02:22.484274846Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:22.491301 containerd[2073]: time="2025-01-17T12:02:22.491128442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:22.492906 containerd[2073]: time="2025-01-17T12:02:22.492595646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 8.925460196s" Jan 17 12:02:22.492906 containerd[2073]: time="2025-01-17T12:02:22.492703490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 17 12:02:22.539583 containerd[2073]: time="2025-01-17T12:02:22.539520026Z" level=info msg="CreateContainer within sandbox \"9ce35e0cd8078bc8c35982d216625b65f8f3a5465c4555f9bb9026da681f09ee\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:02:22.585816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3764670850.mount: Deactivated successfully. Jan 17 12:02:22.589655 containerd[2073]: time="2025-01-17T12:02:22.589579406Z" level=info msg="CreateContainer within sandbox \"9ce35e0cd8078bc8c35982d216625b65f8f3a5465c4555f9bb9026da681f09ee\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bf9d06b6c01ce9cf75fe53234e17d76a6d9174a8562921febd62ebf0d388df5e\"" Jan 17 12:02:22.592188 containerd[2073]: time="2025-01-17T12:02:22.591059582Z" level=info msg="StartContainer for \"bf9d06b6c01ce9cf75fe53234e17d76a6d9174a8562921febd62ebf0d388df5e\"" Jan 17 12:02:22.730979 containerd[2073]: time="2025-01-17T12:02:22.730740051Z" level=info msg="StartContainer for \"bf9d06b6c01ce9cf75fe53234e17d76a6d9174a8562921febd62ebf0d388df5e\" returns successfully" Jan 17 12:02:22.866674 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:02:22.866843 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
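The reported pull time for node:v3.29.1 can be cross-checked against the log itself: the PullImage request is logged at 12:02:13.564983430 and the Pulled entry at 12:02:22.492595646, about 8.928s apart versus the measured 8.925460196s, with the few milliseconds of difference attributable to log emission around the pull. A small Go sketch of the check, using the two quoted timestamps:

package main

import (
	"fmt"
	"time"
)

// Cross-checks the "in 8.925460196s" figure against the two containerd
// entry timestamps quoted above (both RFC 3339, UTC).
func main() {
	start, err := time.Parse(time.RFC3339Nano, "2025-01-17T12:02:13.564983430Z")
	if err != nil {
		panic(err)
	}
	done, err := time.Parse(time.RFC3339Nano, "2025-01-17T12:02:22.492595646Z")
	if err != nil {
		panic(err)
	}
	fmt.Println(done.Sub(start)) // 8.927612216s, within a few ms of the measured pull time
}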
Jan 17 12:02:23.731593 kubelet[3567]: I0117 12:02:23.731533 3567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-wt5nn" podStartSLOduration=2.866211908 podStartE2EDuration="23.730887208s" podCreationTimestamp="2025-01-17 12:02:00 +0000 UTC" firstStartedPulling="2025-01-17 12:02:01.628506354 +0000 UTC m=+27.742168903" lastFinishedPulling="2025-01-17 12:02:22.493181654 +0000 UTC m=+48.606844203" observedRunningTime="2025-01-17 12:02:23.724199524 +0000 UTC m=+49.837862097" watchObservedRunningTime="2025-01-17 12:02:23.730887208 +0000 UTC m=+49.844549841" Jan 17 12:02:24.989241 systemd[1]: Started sshd@8-172.31.23.128:22-139.178.68.195:57112.service - OpenSSH per-connection server daemon (139.178.68.195:57112). Jan 17 12:02:25.107805 kernel: bpftool[4908]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:02:25.229402 sshd[4893]: Accepted publickey for core from 139.178.68.195 port 57112 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:02:25.233150 sshd[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:25.242055 systemd-logind[2042]: New session 9 of user core. Jan 17 12:02:25.252472 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:02:25.462537 (udev-worker)[4718]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:02:25.471664 systemd-networkd[1608]: vxlan.calico: Link UP Jan 17 12:02:25.471678 systemd-networkd[1608]: vxlan.calico: Gained carrier Jan 17 12:02:25.515741 (udev-worker)[4716]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:02:25.624197 sshd[4893]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:25.633202 systemd-logind[2042]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:02:25.635619 systemd[1]: sshd@8-172.31.23.128:22-139.178.68.195:57112.service: Deactivated successfully. Jan 17 12:02:25.647467 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:02:25.652904 systemd-logind[2042]: Removed session 9. Jan 17 12:02:25.744847 systemd[1]: run-containerd-runc-k8s.io-bf9d06b6c01ce9cf75fe53234e17d76a6d9174a8562921febd62ebf0d388df5e-runc.jHdeL1.mount: Deactivated successfully. Jan 17 12:02:26.197445 containerd[2073]: time="2025-01-17T12:02:26.197025400Z" level=info msg="StopPodSandbox for \"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\"" Jan 17 12:02:26.428429 containerd[2073]: 2025-01-17 12:02:26.344 [INFO][5026] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Jan 17 12:02:26.428429 containerd[2073]: 2025-01-17 12:02:26.345 [INFO][5026] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" iface="eth0" netns="/var/run/netns/cni-7b12e67d-f900-6df1-f298-1fca199c8402" Jan 17 12:02:26.428429 containerd[2073]: 2025-01-17 12:02:26.348 [INFO][5026] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" iface="eth0" netns="/var/run/netns/cni-7b12e67d-f900-6df1-f298-1fca199c8402" Jan 17 12:02:26.428429 containerd[2073]: 2025-01-17 12:02:26.351 [INFO][5026] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" iface="eth0" netns="/var/run/netns/cni-7b12e67d-f900-6df1-f298-1fca199c8402" Jan 17 12:02:26.428429 containerd[2073]: 2025-01-17 12:02:26.351 [INFO][5026] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Jan 17 12:02:26.428429 containerd[2073]: 2025-01-17 12:02:26.351 [INFO][5026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Jan 17 12:02:26.428429 containerd[2073]: 2025-01-17 12:02:26.402 [INFO][5032] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" HandleID="k8s-pod-network.9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0" Jan 17 12:02:26.428429 containerd[2073]: 2025-01-17 12:02:26.403 [INFO][5032] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:26.428429 containerd[2073]: 2025-01-17 12:02:26.403 [INFO][5032] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:26.428429 containerd[2073]: 2025-01-17 12:02:26.417 [WARNING][5032] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" HandleID="k8s-pod-network.9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0" Jan 17 12:02:26.428429 containerd[2073]: 2025-01-17 12:02:26.418 [INFO][5032] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" HandleID="k8s-pod-network.9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0" Jan 17 12:02:26.428429 containerd[2073]: 2025-01-17 12:02:26.420 [INFO][5032] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:26.428429 containerd[2073]: 2025-01-17 12:02:26.425 [INFO][5026] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Jan 17 12:02:26.430904 containerd[2073]: time="2025-01-17T12:02:26.430353137Z" level=info msg="TearDown network for sandbox \"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\" successfully" Jan 17 12:02:26.430904 containerd[2073]: time="2025-01-17T12:02:26.430411313Z" level=info msg="StopPodSandbox for \"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\" returns successfully" Jan 17 12:02:26.437157 systemd[1]: run-netns-cni\x2d7b12e67d\x2df900\x2d6df1\x2df298\x2d1fca199c8402.mount: Deactivated successfully.
Jan 17 12:02:26.437736 containerd[2073]: time="2025-01-17T12:02:26.437209445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qghwp,Uid:12fcbd63-a9de-44e8-887a-4b31951c12cf,Namespace:kube-system,Attempt:1,}" Jan 17 12:02:26.680749 systemd-networkd[1608]: cali57f53f38356: Link UP Jan 17 12:02:26.683192 systemd-networkd[1608]: cali57f53f38356: Gained carrier Jan 17 12:02:26.712015 containerd[2073]: 2025-01-17 12:02:26.550 [INFO][5041] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0 coredns-76f75df574- kube-system 12fcbd63-a9de-44e8-887a-4b31951c12cf 838 0 2025-01-17 12:01:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-128 coredns-76f75df574-qghwp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali57f53f38356 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a" Namespace="kube-system" Pod="coredns-76f75df574-qghwp" WorkloadEndpoint="ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-" Jan 17 12:02:26.712015 containerd[2073]: 2025-01-17 12:02:26.550 [INFO][5041] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a" Namespace="kube-system" Pod="coredns-76f75df574-qghwp" WorkloadEndpoint="ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0" Jan 17 12:02:26.712015 containerd[2073]: 2025-01-17 12:02:26.607 [INFO][5052] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a" HandleID="k8s-pod-network.12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0" Jan 17 12:02:26.712015 containerd[2073]: 2025-01-17 12:02:26.623 [INFO][5052] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a" HandleID="k8s-pod-network.12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cb70), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-128", "pod":"coredns-76f75df574-qghwp", "timestamp":"2025-01-17 12:02:26.607358898 +0000 UTC"}, Hostname:"ip-172-31-23-128", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:02:26.712015 containerd[2073]: 2025-01-17 12:02:26.624 [INFO][5052] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:26.712015 containerd[2073]: 2025-01-17 12:02:26.624 [INFO][5052] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:02:26.712015 containerd[2073]: 2025-01-17 12:02:26.624 [INFO][5052] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-128' Jan 17 12:02:26.712015 containerd[2073]: 2025-01-17 12:02:26.627 [INFO][5052] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a" host="ip-172-31-23-128" Jan 17 12:02:26.712015 containerd[2073]: 2025-01-17 12:02:26.633 [INFO][5052] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-128" Jan 17 12:02:26.712015 containerd[2073]: 2025-01-17 12:02:26.643 [INFO][5052] ipam/ipam.go 489: Trying affinity for 192.168.62.0/26 host="ip-172-31-23-128" Jan 17 12:02:26.712015 containerd[2073]: 2025-01-17 12:02:26.646 [INFO][5052] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.0/26 host="ip-172-31-23-128" Jan 17 12:02:26.712015 containerd[2073]: 2025-01-17 12:02:26.649 [INFO][5052] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.0/26 host="ip-172-31-23-128" Jan 17 12:02:26.712015 containerd[2073]: 2025-01-17 12:02:26.649 [INFO][5052] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.0/26 handle="k8s-pod-network.12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a" host="ip-172-31-23-128" Jan 17 12:02:26.712015 containerd[2073]: 2025-01-17 12:02:26.652 [INFO][5052] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a Jan 17 12:02:26.712015 containerd[2073]: 2025-01-17 12:02:26.660 [INFO][5052] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.0/26 handle="k8s-pod-network.12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a" host="ip-172-31-23-128" Jan 17 12:02:26.712015 containerd[2073]: 2025-01-17 12:02:26.669 [INFO][5052] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.1/26] block=192.168.62.0/26 handle="k8s-pod-network.12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a" host="ip-172-31-23-128" Jan 17 12:02:26.712015 containerd[2073]: 2025-01-17 12:02:26.669 [INFO][5052] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.1/26] handle="k8s-pod-network.12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a" host="ip-172-31-23-128" Jan 17 12:02:26.712015 containerd[2073]: 2025-01-17 12:02:26.670 [INFO][5052] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:02:26.712015 containerd[2073]: 2025-01-17 12:02:26.670 [INFO][5052] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.1/26] IPv6=[] ContainerID="12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a" HandleID="k8s-pod-network.12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0" Jan 17 12:02:26.716793 containerd[2073]: 2025-01-17 12:02:26.673 [INFO][5041] cni-plugin/k8s.go 386: Populated endpoint ContainerID="12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a" Namespace="kube-system" Pod="coredns-76f75df574-qghwp" WorkloadEndpoint="ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"12fcbd63-a9de-44e8-887a-4b31951c12cf", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"", Pod:"coredns-76f75df574-qghwp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57f53f38356", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:26.716793 containerd[2073]: 2025-01-17 12:02:26.673 [INFO][5041] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.1/32] ContainerID="12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a" Namespace="kube-system" Pod="coredns-76f75df574-qghwp" WorkloadEndpoint="ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0" Jan 17 12:02:26.716793 containerd[2073]: 2025-01-17 12:02:26.674 [INFO][5041] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57f53f38356 ContainerID="12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a" Namespace="kube-system" Pod="coredns-76f75df574-qghwp" WorkloadEndpoint="ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0" Jan 17 12:02:26.716793 containerd[2073]: 2025-01-17 12:02:26.684 [INFO][5041] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a" Namespace="kube-system" Pod="coredns-76f75df574-qghwp" WorkloadEndpoint="ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0" 
Jan 17 12:02:26.716793 containerd[2073]: 2025-01-17 12:02:26.684 [INFO][5041] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a" Namespace="kube-system" Pod="coredns-76f75df574-qghwp" WorkloadEndpoint="ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"12fcbd63-a9de-44e8-887a-4b31951c12cf", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a", Pod:"coredns-76f75df574-qghwp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57f53f38356", MAC:"b6:c2:b8:6f:a8:59", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:26.716793 containerd[2073]: 2025-01-17 12:02:26.704 [INFO][5041] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a" Namespace="kube-system" Pod="coredns-76f75df574-qghwp" WorkloadEndpoint="ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0" Jan 17 12:02:26.763809 containerd[2073]: time="2025-01-17T12:02:26.761722447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:26.763809 containerd[2073]: time="2025-01-17T12:02:26.762950431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:26.763809 containerd[2073]: time="2025-01-17T12:02:26.762982423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:26.763809 containerd[2073]: time="2025-01-17T12:02:26.763158295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:26.879793 containerd[2073]: time="2025-01-17T12:02:26.879712976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qghwp,Uid:12fcbd63-a9de-44e8-887a-4b31951c12cf,Namespace:kube-system,Attempt:1,} returns sandbox id \"12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a\"" Jan 17 12:02:26.887592 containerd[2073]: time="2025-01-17T12:02:26.887423888Z" level=info msg="CreateContainer within sandbox \"12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:02:26.918714 containerd[2073]: time="2025-01-17T12:02:26.918620276Z" level=info msg="CreateContainer within sandbox \"12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a7f13e9d5f3c5751c808abc01edf419be1592666f961e50451bce7ede63cad1b\"" Jan 17 12:02:26.920791 containerd[2073]: time="2025-01-17T12:02:26.919942112Z" level=info msg="StartContainer for \"a7f13e9d5f3c5751c808abc01edf419be1592666f961e50451bce7ede63cad1b\"" Jan 17 12:02:27.021258 containerd[2073]: time="2025-01-17T12:02:27.020639896Z" level=info msg="StartContainer for \"a7f13e9d5f3c5751c808abc01edf419be1592666f961e50451bce7ede63cad1b\" returns successfully" Jan 17 12:02:27.147203 systemd-networkd[1608]: vxlan.calico: Gained IPv6LL Jan 17 12:02:27.196115 containerd[2073]: time="2025-01-17T12:02:27.196035773Z" level=info msg="StopPodSandbox for \"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\"" Jan 17 12:02:27.196657 containerd[2073]: time="2025-01-17T12:02:27.196613981Z" level=info msg="StopPodSandbox for \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\"" Jan 17 12:02:27.426469 containerd[2073]: 2025-01-17 12:02:27.324 [INFO][5174] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Jan 17 12:02:27.426469 containerd[2073]: 2025-01-17 12:02:27.324 [INFO][5174] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" iface="eth0" netns="/var/run/netns/cni-f0be9077-7d0a-87d4-b735-f24aed57e463" Jan 17 12:02:27.426469 containerd[2073]: 2025-01-17 12:02:27.329 [INFO][5174] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" iface="eth0" netns="/var/run/netns/cni-f0be9077-7d0a-87d4-b735-f24aed57e463" Jan 17 12:02:27.426469 containerd[2073]: 2025-01-17 12:02:27.330 [INFO][5174] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" iface="eth0" netns="/var/run/netns/cni-f0be9077-7d0a-87d4-b735-f24aed57e463" Jan 17 12:02:27.426469 containerd[2073]: 2025-01-17 12:02:27.330 [INFO][5174] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Jan 17 12:02:27.426469 containerd[2073]: 2025-01-17 12:02:27.330 [INFO][5174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Jan 17 12:02:27.426469 containerd[2073]: 2025-01-17 12:02:27.399 [INFO][5187] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" HandleID="k8s-pod-network.caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0" Jan 17 12:02:27.426469 containerd[2073]: 2025-01-17 12:02:27.400 [INFO][5187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:27.426469 containerd[2073]: 2025-01-17 12:02:27.400 [INFO][5187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:27.426469 containerd[2073]: 2025-01-17 12:02:27.413 [WARNING][5187] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" HandleID="k8s-pod-network.caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0" Jan 17 12:02:27.426469 containerd[2073]: 2025-01-17 12:02:27.413 [INFO][5187] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" HandleID="k8s-pod-network.caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0" Jan 17 12:02:27.426469 containerd[2073]: 2025-01-17 12:02:27.417 [INFO][5187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:27.426469 containerd[2073]: 2025-01-17 12:02:27.422 [INFO][5174] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Jan 17 12:02:27.431319 containerd[2073]: time="2025-01-17T12:02:27.426520470Z" level=info msg="TearDown network for sandbox \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\" successfully" Jan 17 12:02:27.431319 containerd[2073]: time="2025-01-17T12:02:27.426588150Z" level=info msg="StopPodSandbox for \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\" returns successfully" Jan 17 12:02:27.431319 containerd[2073]: time="2025-01-17T12:02:27.427713942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679485b9d8-rm8d6,Uid:f5a7d8e3-a980-4aa2-824f-1fccea03f32f,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:02:27.444162 systemd[1]: run-containerd-runc-k8s.io-12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a-runc.ypkGJH.mount: Deactivated successfully. Jan 17 12:02:27.446593 systemd[1]: run-netns-cni\x2df0be9077\x2d7d0a\x2d87d4\x2db735\x2df24aed57e463.mount: Deactivated successfully.
Jan 17 12:02:27.462829 containerd[2073]: 2025-01-17 12:02:27.339 [INFO][5175] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Jan 17 12:02:27.462829 containerd[2073]: 2025-01-17 12:02:27.341 [INFO][5175] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" iface="eth0" netns="/var/run/netns/cni-5ae19e45-b3a2-ec68-b194-09fa5ed987fc" Jan 17 12:02:27.462829 containerd[2073]: 2025-01-17 12:02:27.342 [INFO][5175] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" iface="eth0" netns="/var/run/netns/cni-5ae19e45-b3a2-ec68-b194-09fa5ed987fc" Jan 17 12:02:27.462829 containerd[2073]: 2025-01-17 12:02:27.342 [INFO][5175] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" iface="eth0" netns="/var/run/netns/cni-5ae19e45-b3a2-ec68-b194-09fa5ed987fc" Jan 17 12:02:27.462829 containerd[2073]: 2025-01-17 12:02:27.342 [INFO][5175] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Jan 17 12:02:27.462829 containerd[2073]: 2025-01-17 12:02:27.342 [INFO][5175] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Jan 17 12:02:27.462829 containerd[2073]: 2025-01-17 12:02:27.427 [INFO][5191] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" HandleID="k8s-pod-network.fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Workload="ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0" Jan 17 12:02:27.462829 containerd[2073]: 2025-01-17 12:02:27.428 [INFO][5191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:27.462829 containerd[2073]: 2025-01-17 12:02:27.428 [INFO][5191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:27.462829 containerd[2073]: 2025-01-17 12:02:27.449 [WARNING][5191] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" HandleID="k8s-pod-network.fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Workload="ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0" Jan 17 12:02:27.462829 containerd[2073]: 2025-01-17 12:02:27.449 [INFO][5191] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" HandleID="k8s-pod-network.fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Workload="ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0" Jan 17 12:02:27.462829 containerd[2073]: 2025-01-17 12:02:27.455 [INFO][5191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:27.462829 containerd[2073]: 2025-01-17 12:02:27.459 [INFO][5175] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Jan 17 12:02:27.466581 containerd[2073]: time="2025-01-17T12:02:27.466218775Z" level=info msg="TearDown network for sandbox \"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\" successfully" Jan 17 12:02:27.466581 containerd[2073]: time="2025-01-17T12:02:27.466270255Z" level=info msg="StopPodSandbox for \"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\" returns successfully" Jan 17 12:02:27.469463 containerd[2073]: time="2025-01-17T12:02:27.469151695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q7b84,Uid:4460c615-c8f8-4ea0-acd3-2a02aa651b6c,Namespace:calico-system,Attempt:1,}" Jan 17 12:02:27.488275 systemd[1]: run-netns-cni\x2d5ae19e45\x2db3a2\x2dec68\x2db194\x2d09fa5ed987fc.mount: Deactivated successfully. Jan 17 12:02:27.794906 kubelet[3567]: I0117 12:02:27.793079 3567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-qghwp" podStartSLOduration=39.793018052 podStartE2EDuration="39.793018052s" podCreationTimestamp="2025-01-17 12:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:27.747557924 +0000 UTC m=+53.861220641" watchObservedRunningTime="2025-01-17 12:02:27.793018052 +0000 UTC m=+53.906680601" Jan 17 12:02:27.881310 systemd-networkd[1608]: cali8c0d552291e: Link UP Jan 17 12:02:27.883837 systemd-networkd[1608]: cali8c0d552291e: Gained carrier Jan 17 12:02:27.924570 containerd[2073]: 2025-01-17 12:02:27.608 [INFO][5199] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0 calico-apiserver-679485b9d8- calico-apiserver f5a7d8e3-a980-4aa2-824f-1fccea03f32f 849 0 2025-01-17 12:02:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:679485b9d8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-128 calico-apiserver-679485b9d8-rm8d6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8c0d552291e [] []}} ContainerID="b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4" Namespace="calico-apiserver" Pod="calico-apiserver-679485b9d8-rm8d6" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-" Jan 17 12:02:27.924570 containerd[2073]: 2025-01-17 12:02:27.608 [INFO][5199] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4" Namespace="calico-apiserver" Pod="calico-apiserver-679485b9d8-rm8d6" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0" Jan 17 12:02:27.924570 containerd[2073]: 2025-01-17 12:02:27.689 [INFO][5221] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4" HandleID="k8s-pod-network.b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0" Jan 17 12:02:27.924570 containerd[2073]: 2025-01-17 12:02:27.726 [INFO][5221] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4" HandleID="k8s-pod-network.b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ce30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-128", "pod":"calico-apiserver-679485b9d8-rm8d6", "timestamp":"2025-01-17 12:02:27.689868944 +0000 UTC"}, Hostname:"ip-172-31-23-128", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:02:27.924570 containerd[2073]: 2025-01-17 12:02:27.726 [INFO][5221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:27.924570 containerd[2073]: 2025-01-17 12:02:27.727 [INFO][5221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:27.924570 containerd[2073]: 2025-01-17 12:02:27.727 [INFO][5221] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-128' Jan 17 12:02:27.924570 containerd[2073]: 2025-01-17 12:02:27.731 [INFO][5221] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4" host="ip-172-31-23-128" Jan 17 12:02:27.924570 containerd[2073]: 2025-01-17 12:02:27.762 [INFO][5221] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-128" Jan 17 12:02:27.924570 containerd[2073]: 2025-01-17 12:02:27.786 [INFO][5221] ipam/ipam.go 489: Trying affinity for 192.168.62.0/26 host="ip-172-31-23-128" Jan 17 12:02:27.924570 containerd[2073]: 2025-01-17 12:02:27.793 [INFO][5221] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.0/26 host="ip-172-31-23-128" Jan 17 12:02:27.924570 containerd[2073]: 2025-01-17 12:02:27.805 [INFO][5221] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.0/26 host="ip-172-31-23-128" Jan 17 12:02:27.924570 containerd[2073]: 2025-01-17 12:02:27.805 [INFO][5221] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.0/26 handle="k8s-pod-network.b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4" host="ip-172-31-23-128" Jan 17 12:02:27.924570 containerd[2073]: 2025-01-17 12:02:27.816 [INFO][5221] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4 Jan 17 12:02:27.924570 containerd[2073]: 2025-01-17 12:02:27.830 [INFO][5221] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.0/26 handle="k8s-pod-network.b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4" host="ip-172-31-23-128" Jan 17 12:02:27.924570 containerd[2073]: 2025-01-17 12:02:27.846 [INFO][5221] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.2/26] block=192.168.62.0/26 handle="k8s-pod-network.b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4" host="ip-172-31-23-128" Jan 17 12:02:27.924570 containerd[2073]: 2025-01-17 12:02:27.847 [INFO][5221] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.2/26] handle="k8s-pod-network.b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4" host="ip-172-31-23-128" Jan 17 12:02:27.924570 containerd[2073]: 2025-01-17 12:02:27.847 [INFO][5221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:02:27.924570 containerd[2073]: 2025-01-17 12:02:27.848 [INFO][5221] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.2/26] IPv6=[] ContainerID="b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4" HandleID="k8s-pod-network.b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0" Jan 17 12:02:27.927390 containerd[2073]: 2025-01-17 12:02:27.859 [INFO][5199] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4" Namespace="calico-apiserver" Pod="calico-apiserver-679485b9d8-rm8d6" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0", GenerateName:"calico-apiserver-679485b9d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5a7d8e3-a980-4aa2-824f-1fccea03f32f", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"679485b9d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"", Pod:"calico-apiserver-679485b9d8-rm8d6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8c0d552291e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:27.927390 containerd[2073]: 2025-01-17 12:02:27.861 [INFO][5199] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.2/32] ContainerID="b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4" Namespace="calico-apiserver" Pod="calico-apiserver-679485b9d8-rm8d6" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0" Jan 17 12:02:27.927390 containerd[2073]: 2025-01-17 12:02:27.863 [INFO][5199] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c0d552291e ContainerID="b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4" Namespace="calico-apiserver" Pod="calico-apiserver-679485b9d8-rm8d6" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0" Jan 17 12:02:27.927390 containerd[2073]: 2025-01-17 12:02:27.885 [INFO][5199] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4" Namespace="calico-apiserver" Pod="calico-apiserver-679485b9d8-rm8d6" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0" Jan 17 12:02:27.927390 containerd[2073]: 2025-01-17 12:02:27.886 [INFO][5199] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4" Namespace="calico-apiserver" Pod="calico-apiserver-679485b9d8-rm8d6" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0", GenerateName:"calico-apiserver-679485b9d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5a7d8e3-a980-4aa2-824f-1fccea03f32f", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"679485b9d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4", Pod:"calico-apiserver-679485b9d8-rm8d6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8c0d552291e", MAC:"92:2a:9a:69:d7:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:27.927390 containerd[2073]: 2025-01-17 12:02:27.921 [INFO][5199] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4" Namespace="calico-apiserver" Pod="calico-apiserver-679485b9d8-rm8d6" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0" Jan 17 12:02:27.986159 systemd-networkd[1608]: cali44e718082c9: Link UP Jan 17 12:02:27.989750 systemd-networkd[1608]: cali44e718082c9: Gained carrier Jan 17 12:02:28.025840 containerd[2073]: time="2025-01-17T12:02:28.023746421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:28.025840 containerd[2073]: time="2025-01-17T12:02:28.024144029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:28.025840 containerd[2073]: time="2025-01-17T12:02:28.024181457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:28.025840 containerd[2073]: time="2025-01-17T12:02:28.024620993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:28.031942 containerd[2073]: 2025-01-17 12:02:27.617 [INFO][5210] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0 csi-node-driver- calico-system 4460c615-c8f8-4ea0-acd3-2a02aa651b6c 850 0 2025-01-17 12:02:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-23-128 csi-node-driver-q7b84 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali44e718082c9 [] []}} ContainerID="2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09" Namespace="calico-system" Pod="csi-node-driver-q7b84" WorkloadEndpoint="ip--172--31--23--128-k8s-csi--node--driver--q7b84-" Jan 17 12:02:28.031942 containerd[2073]: 2025-01-17 12:02:27.617 [INFO][5210] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09" Namespace="calico-system" Pod="csi-node-driver-q7b84" WorkloadEndpoint="ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0" Jan 17 12:02:28.031942 containerd[2073]: 2025-01-17 12:02:27.726 [INFO][5225] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09" HandleID="k8s-pod-network.2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09" Workload="ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0" Jan 17 12:02:28.031942 containerd[2073]: 2025-01-17 12:02:27.768 [INFO][5225] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09" HandleID="k8s-pod-network.2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09" Workload="ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d220), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-128", "pod":"csi-node-driver-q7b84", "timestamp":"2025-01-17 12:02:27.726399896 +0000 UTC"}, Hostname:"ip-172-31-23-128", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:02:28.031942 containerd[2073]: 2025-01-17 12:02:27.768 [INFO][5225] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:28.031942 containerd[2073]: 2025-01-17 12:02:27.848 [INFO][5225] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:02:28.031942 containerd[2073]: 2025-01-17 12:02:27.848 [INFO][5225] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-128' Jan 17 12:02:28.031942 containerd[2073]: 2025-01-17 12:02:27.856 [INFO][5225] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09" host="ip-172-31-23-128" Jan 17 12:02:28.031942 containerd[2073]: 2025-01-17 12:02:27.872 [INFO][5225] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-128" Jan 17 12:02:28.031942 containerd[2073]: 2025-01-17 12:02:27.898 [INFO][5225] ipam/ipam.go 489: Trying affinity for 192.168.62.0/26 host="ip-172-31-23-128" Jan 17 12:02:28.031942 containerd[2073]: 2025-01-17 12:02:27.920 [INFO][5225] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.0/26 host="ip-172-31-23-128" Jan 17 12:02:28.031942 containerd[2073]: 2025-01-17 12:02:27.931 [INFO][5225] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.0/26 host="ip-172-31-23-128" Jan 17 12:02:28.031942 containerd[2073]: 2025-01-17 12:02:27.931 [INFO][5225] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.0/26 handle="k8s-pod-network.2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09" host="ip-172-31-23-128" Jan 17 12:02:28.031942 containerd[2073]: 2025-01-17 12:02:27.935 [INFO][5225] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09 Jan 17 12:02:28.031942 containerd[2073]: 2025-01-17 12:02:27.944 [INFO][5225] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.0/26 handle="k8s-pod-network.2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09" host="ip-172-31-23-128" Jan 17 12:02:28.031942 containerd[2073]: 2025-01-17 12:02:27.963 [INFO][5225] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.3/26] block=192.168.62.0/26 handle="k8s-pod-network.2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09" host="ip-172-31-23-128" Jan 17 12:02:28.031942 containerd[2073]: 2025-01-17 12:02:27.964 [INFO][5225] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.3/26] handle="k8s-pod-network.2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09" host="ip-172-31-23-128" Jan 17 12:02:28.031942 containerd[2073]: 2025-01-17 12:02:27.964 [INFO][5225] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:02:28.031942 containerd[2073]: 2025-01-17 12:02:27.964 [INFO][5225] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.3/26] IPv6=[] ContainerID="2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09" HandleID="k8s-pod-network.2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09" Workload="ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0" Jan 17 12:02:28.034634 containerd[2073]: 2025-01-17 12:02:27.971 [INFO][5210] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09" Namespace="calico-system" Pod="csi-node-driver-q7b84" WorkloadEndpoint="ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4460c615-c8f8-4ea0-acd3-2a02aa651b6c", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"", Pod:"csi-node-driver-q7b84", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali44e718082c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:28.034634 containerd[2073]: 2025-01-17 12:02:27.971 [INFO][5210] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.3/32] ContainerID="2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09" Namespace="calico-system" Pod="csi-node-driver-q7b84" WorkloadEndpoint="ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0" Jan 17 12:02:28.034634 containerd[2073]: 2025-01-17 12:02:27.971 [INFO][5210] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44e718082c9 ContainerID="2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09" Namespace="calico-system" Pod="csi-node-driver-q7b84" WorkloadEndpoint="ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0" Jan 17 12:02:28.034634 containerd[2073]: 2025-01-17 12:02:27.989 [INFO][5210] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09" Namespace="calico-system" Pod="csi-node-driver-q7b84" WorkloadEndpoint="ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0" Jan 17 12:02:28.034634 containerd[2073]: 2025-01-17 12:02:27.997 [INFO][5210] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09" Namespace="calico-system" Pod="csi-node-driver-q7b84" WorkloadEndpoint="ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4460c615-c8f8-4ea0-acd3-2a02aa651b6c", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09", Pod:"csi-node-driver-q7b84", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali44e718082c9", MAC:"22:bc:de:0c:7d:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:28.034634 containerd[2073]: 2025-01-17 12:02:28.022 [INFO][5210] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09" Namespace="calico-system" Pod="csi-node-driver-q7b84" WorkloadEndpoint="ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0" Jan 17 12:02:28.102823 containerd[2073]: time="2025-01-17T12:02:28.102501234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:28.105132 containerd[2073]: time="2025-01-17T12:02:28.102829434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:28.105132 containerd[2073]: time="2025-01-17T12:02:28.104662350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:28.106011 containerd[2073]: time="2025-01-17T12:02:28.105310818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:28.172337 containerd[2073]: time="2025-01-17T12:02:28.172188906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679485b9d8-rm8d6,Uid:f5a7d8e3-a980-4aa2-824f-1fccea03f32f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4\"" Jan 17 12:02:28.185335 containerd[2073]: time="2025-01-17T12:02:28.185002086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:02:28.199297 containerd[2073]: time="2025-01-17T12:02:28.199249050Z" level=info msg="StopPodSandbox for \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\"" Jan 17 12:02:28.199786 containerd[2073]: time="2025-01-17T12:02:28.199599774Z" level=info msg="StopPodSandbox for \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\"" Jan 17 12:02:28.231613 containerd[2073]: time="2025-01-17T12:02:28.231517182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q7b84,Uid:4460c615-c8f8-4ea0-acd3-2a02aa651b6c,Namespace:calico-system,Attempt:1,} returns sandbox id \"2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09\"" Jan 17 12:02:28.453115 containerd[2073]: 2025-01-17 12:02:28.342 [INFO][5376] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Jan 17 12:02:28.453115 containerd[2073]: 2025-01-17 12:02:28.343 [INFO][5376] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" iface="eth0" netns="/var/run/netns/cni-b3c2b1ed-e305-3723-3237-903b6e6ff299" Jan 17 12:02:28.453115 containerd[2073]: 2025-01-17 12:02:28.343 [INFO][5376] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" iface="eth0" netns="/var/run/netns/cni-b3c2b1ed-e305-3723-3237-903b6e6ff299" Jan 17 12:02:28.453115 containerd[2073]: 2025-01-17 12:02:28.344 [INFO][5376] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" iface="eth0" netns="/var/run/netns/cni-b3c2b1ed-e305-3723-3237-903b6e6ff299" Jan 17 12:02:28.453115 containerd[2073]: 2025-01-17 12:02:28.344 [INFO][5376] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Jan 17 12:02:28.453115 containerd[2073]: 2025-01-17 12:02:28.344 [INFO][5376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Jan 17 12:02:28.453115 containerd[2073]: 2025-01-17 12:02:28.406 [INFO][5389] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" HandleID="k8s-pod-network.c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Workload="ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0" Jan 17 12:02:28.453115 containerd[2073]: 2025-01-17 12:02:28.406 [INFO][5389] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:28.453115 containerd[2073]: 2025-01-17 12:02:28.406 [INFO][5389] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:02:28.453115 containerd[2073]: 2025-01-17 12:02:28.424 [WARNING][5389] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" HandleID="k8s-pod-network.c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Workload="ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0" Jan 17 12:02:28.453115 containerd[2073]: 2025-01-17 12:02:28.425 [INFO][5389] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" HandleID="k8s-pod-network.c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Workload="ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0" Jan 17 12:02:28.453115 containerd[2073]: 2025-01-17 12:02:28.428 [INFO][5389] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:28.453115 containerd[2073]: 2025-01-17 12:02:28.440 [INFO][5376] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Jan 17 12:02:28.457661 containerd[2073]: time="2025-01-17T12:02:28.455548183Z" level=info msg="TearDown network for sandbox \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\" successfully" Jan 17 12:02:28.457661 containerd[2073]: time="2025-01-17T12:02:28.456301219Z" level=info msg="StopPodSandbox for \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\" returns successfully" Jan 17 12:02:28.461274 containerd[2073]: time="2025-01-17T12:02:28.459987212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598d996f57-4zkw6,Uid:cc92581e-93a8-44ff-8991-2aa4dd9c2b83,Namespace:calico-system,Attempt:1,}" Jan 17 12:02:28.463759 systemd[1]: run-netns-cni\x2db3c2b1ed\x2de305\x2d3723\x2d3237\x2d903b6e6ff299.mount: Deactivated successfully. Jan 17 12:02:28.484708 containerd[2073]: 2025-01-17 12:02:28.365 [INFO][5380] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Jan 17 12:02:28.484708 containerd[2073]: 2025-01-17 12:02:28.366 [INFO][5380] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" iface="eth0" netns="/var/run/netns/cni-c7d3cf7c-6efd-af76-6e63-969a13a87c57" Jan 17 12:02:28.484708 containerd[2073]: 2025-01-17 12:02:28.366 [INFO][5380] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" iface="eth0" netns="/var/run/netns/cni-c7d3cf7c-6efd-af76-6e63-969a13a87c57" Jan 17 12:02:28.484708 containerd[2073]: 2025-01-17 12:02:28.369 [INFO][5380] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" iface="eth0" netns="/var/run/netns/cni-c7d3cf7c-6efd-af76-6e63-969a13a87c57" Jan 17 12:02:28.484708 containerd[2073]: 2025-01-17 12:02:28.370 [INFO][5380] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Jan 17 12:02:28.484708 containerd[2073]: 2025-01-17 12:02:28.370 [INFO][5380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Jan 17 12:02:28.484708 containerd[2073]: 2025-01-17 12:02:28.428 [INFO][5394] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" HandleID="k8s-pod-network.76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0" Jan 17 12:02:28.484708 containerd[2073]: 2025-01-17 12:02:28.429 [INFO][5394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:28.484708 containerd[2073]: 2025-01-17 12:02:28.431 [INFO][5394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:28.484708 containerd[2073]: 2025-01-17 12:02:28.468 [WARNING][5394] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" HandleID="k8s-pod-network.76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0" Jan 17 12:02:28.484708 containerd[2073]: 2025-01-17 12:02:28.469 [INFO][5394] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" HandleID="k8s-pod-network.76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0" Jan 17 12:02:28.484708 containerd[2073]: 2025-01-17 12:02:28.475 [INFO][5394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:28.484708 containerd[2073]: 2025-01-17 12:02:28.478 [INFO][5380] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Jan 17 12:02:28.488068 containerd[2073]: time="2025-01-17T12:02:28.485560712Z" level=info msg="TearDown network for sandbox \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\" successfully" Jan 17 12:02:28.488068 containerd[2073]: time="2025-01-17T12:02:28.485635412Z" level=info msg="StopPodSandbox for \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\" returns successfully" Jan 17 12:02:28.488068 containerd[2073]: time="2025-01-17T12:02:28.486495416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-lgfzb,Uid:732ab0f9-098f-4bef-81b9-1f8a3bd9d354,Namespace:kube-system,Attempt:1,}" Jan 17 12:02:28.492619 systemd[1]: run-netns-cni\x2dc7d3cf7c\x2d6efd\x2daf76\x2d6e63\x2d969a13a87c57.mount: Deactivated successfully.
Jan 17 12:02:28.683148 systemd-networkd[1608]: cali57f53f38356: Gained IPv6LL Jan 17 12:02:28.871026 systemd-networkd[1608]: calic9df17a21b1: Link UP Jan 17 12:02:28.875226 systemd-networkd[1608]: calic9df17a21b1: Gained carrier Jan 17 12:02:28.941196 systemd-networkd[1608]: cali8c0d552291e: Gained IPv6LL Jan 17 12:02:28.957788 containerd[2073]: 2025-01-17 12:02:28.606 [INFO][5404] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0 calico-kube-controllers-598d996f57- calico-system cc92581e-93a8-44ff-8991-2aa4dd9c2b83 871 0 2025-01-17 12:02:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:598d996f57 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-128 calico-kube-controllers-598d996f57-4zkw6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic9df17a21b1 [] []}} ContainerID="6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d" Namespace="calico-system" Pod="calico-kube-controllers-598d996f57-4zkw6" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-" Jan 17 12:02:28.957788 containerd[2073]: 2025-01-17 12:02:28.607 [INFO][5404] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d" Namespace="calico-system" Pod="calico-kube-controllers-598d996f57-4zkw6" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0" Jan 17 12:02:28.957788 containerd[2073]: 2025-01-17 12:02:28.703 [INFO][5425] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d" HandleID="k8s-pod-network.6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d" Workload="ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0" Jan 17 12:02:28.957788 containerd[2073]: 2025-01-17 12:02:28.733 [INFO][5425] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d" HandleID="k8s-pod-network.6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d" Workload="ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000448fb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-128", "pod":"calico-kube-controllers-598d996f57-4zkw6", "timestamp":"2025-01-17 12:02:28.703863357 +0000 UTC"}, Hostname:"ip-172-31-23-128", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:02:28.957788 containerd[2073]: 2025-01-17 12:02:28.733 [INFO][5425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:28.957788 containerd[2073]: 2025-01-17 12:02:28.734 [INFO][5425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:02:28.957788 containerd[2073]: 2025-01-17 12:02:28.734 [INFO][5425] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-128' Jan 17 12:02:28.957788 containerd[2073]: 2025-01-17 12:02:28.739 [INFO][5425] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d" host="ip-172-31-23-128" Jan 17 12:02:28.957788 containerd[2073]: 2025-01-17 12:02:28.746 [INFO][5425] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-128" Jan 17 12:02:28.957788 containerd[2073]: 2025-01-17 12:02:28.757 [INFO][5425] ipam/ipam.go 489: Trying affinity for 192.168.62.0/26 host="ip-172-31-23-128" Jan 17 12:02:28.957788 containerd[2073]: 2025-01-17 12:02:28.761 [INFO][5425] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.0/26 host="ip-172-31-23-128" Jan 17 12:02:28.957788 containerd[2073]: 2025-01-17 12:02:28.769 [INFO][5425] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.0/26 host="ip-172-31-23-128" Jan 17 12:02:28.957788 containerd[2073]: 2025-01-17 12:02:28.769 [INFO][5425] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.0/26 handle="k8s-pod-network.6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d" host="ip-172-31-23-128" Jan 17 12:02:28.957788 containerd[2073]: 2025-01-17 12:02:28.780 [INFO][5425] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d Jan 17 12:02:28.957788 containerd[2073]: 2025-01-17 12:02:28.811 [INFO][5425] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.0/26 handle="k8s-pod-network.6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d" host="ip-172-31-23-128" Jan 17 12:02:28.957788 containerd[2073]: 2025-01-17 12:02:28.844 [INFO][5425] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.4/26] block=192.168.62.0/26 handle="k8s-pod-network.6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d" host="ip-172-31-23-128" Jan 17 12:02:28.957788 containerd[2073]: 2025-01-17 12:02:28.847 [INFO][5425] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.4/26] handle="k8s-pod-network.6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d" host="ip-172-31-23-128" Jan 17 12:02:28.957788 containerd[2073]: 2025-01-17 12:02:28.847 [INFO][5425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
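Read together, the `[5425]` records above form one complete IPAM transaction: acquire the host-wide lock, confirm this node's affinity for `192.168.62.0/26`, load the block, claim the lowest free ordinal (here `.4`), create a handle named after the sandbox ID so the address can be found again at teardown, write the block back, and release the lock. A toy model of the claim step, with invented field names rather than Calico's real schema:

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// block is a toy stand-in for a Calico IPAM affinity block: a /26 CIDR
// plus a record of which ordinals are claimed by which handle.
type block struct {
	mu    sync.Mutex     // stand-in for the host-wide IPAM lock
	cidr  net.IPNet
	inUse map[int]string // ordinal -> owning handle
}

// assign claims the lowest free ordinal for a handle, mirroring the
// "Attempting to assign 1 addresses from block" step in the records.
func (b *block) assign(handle string) (net.IP, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	ones, bits := b.cidr.Mask.Size()
	for ord := 0; ord < 1<<(bits-ones); ord++ { // 64 ordinals in a /26
		if _, taken := b.inUse[ord]; taken {
			continue
		}
		b.inUse[ord] = handle
		ip := make(net.IP, len(b.cidr.IP))
		copy(ip, b.cidr.IP)
		ip[len(ip)-1] += byte(ord) // safe within a /26
		return ip, nil
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr.String())
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.62.0/26")
	// Ordinals 0-3 are already taken on this node (the log shows
	// 192.168.62.3 held by csi-node-driver-q7b84, for example).
	b := &block{cidr: *cidr, inUse: map[int]string{
		0: "h0", 1: "h1", 2: "h2", 3: "k8s-pod-network.2e1abe07…",
	}}
	ip, _ := b.assign("k8s-pod-network.6fbcf035…")
	fmt.Println(ip) // 192.168.62.4, matching the claim above
}
```

Keying the allocation by a handle derived from the sandbox ID is what lets the later `Releasing address using handleID` records find exactly this IP at teardown.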
Jan 17 12:02:28.957788 containerd[2073]: 2025-01-17 12:02:28.847 [INFO][5425] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.4/26] IPv6=[] ContainerID="6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d" HandleID="k8s-pod-network.6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d" Workload="ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0" Jan 17 12:02:28.964906 containerd[2073]: 2025-01-17 12:02:28.857 [INFO][5404] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d" Namespace="calico-system" Pod="calico-kube-controllers-598d996f57-4zkw6" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0", GenerateName:"calico-kube-controllers-598d996f57-", Namespace:"calico-system", SelfLink:"", UID:"cc92581e-93a8-44ff-8991-2aa4dd9c2b83", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598d996f57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"", Pod:"calico-kube-controllers-598d996f57-4zkw6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic9df17a21b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:28.964906 containerd[2073]: 2025-01-17 12:02:28.857 [INFO][5404] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.4/32] ContainerID="6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d" Namespace="calico-system" Pod="calico-kube-controllers-598d996f57-4zkw6" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0" Jan 17 12:02:28.964906 containerd[2073]: 2025-01-17 12:02:28.857 [INFO][5404] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9df17a21b1 ContainerID="6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d" Namespace="calico-system" Pod="calico-kube-controllers-598d996f57-4zkw6" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0" Jan 17 12:02:28.964906 containerd[2073]: 2025-01-17 12:02:28.873 [INFO][5404] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d" Namespace="calico-system" Pod="calico-kube-controllers-598d996f57-4zkw6" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0" Jan 17 12:02:28.964906 containerd[2073]: 2025-01-17 12:02:28.881 [INFO][5404] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d" Namespace="calico-system" Pod="calico-kube-controllers-598d996f57-4zkw6" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0", GenerateName:"calico-kube-controllers-598d996f57-", Namespace:"calico-system", SelfLink:"", UID:"cc92581e-93a8-44ff-8991-2aa4dd9c2b83", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598d996f57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d", Pod:"calico-kube-controllers-598d996f57-4zkw6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic9df17a21b1", MAC:"da:71:91:3b:8f:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:28.964906 containerd[2073]: 2025-01-17 12:02:28.948 [INFO][5404] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d" Namespace="calico-system" Pod="calico-kube-controllers-598d996f57-4zkw6" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0" Jan 17 12:02:29.017639 containerd[2073]: time="2025-01-17T12:02:29.017232486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:29.018855 containerd[2073]: time="2025-01-17T12:02:29.017354034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:29.019127 containerd[2073]: time="2025-01-17T12:02:29.018783078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:29.021361 containerd[2073]: time="2025-01-17T12:02:29.020835534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:29.027413 systemd-networkd[1608]: cali307390a385b: Link UP Jan 17 12:02:29.030217 systemd-networkd[1608]: cali307390a385b: Gained carrier Jan 17 12:02:29.074210 containerd[2073]: 2025-01-17 12:02:28.626 [INFO][5412] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0 coredns-76f75df574- kube-system 732ab0f9-098f-4bef-81b9-1f8a3bd9d354 872 0 2025-01-17 12:01:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-128 coredns-76f75df574-lgfzb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali307390a385b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492" Namespace="kube-system" Pod="coredns-76f75df574-lgfzb" WorkloadEndpoint="ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-" Jan 17 12:02:29.074210 containerd[2073]: 2025-01-17 12:02:28.628 [INFO][5412] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492" Namespace="kube-system" Pod="coredns-76f75df574-lgfzb" WorkloadEndpoint="ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0" Jan 17 12:02:29.074210 containerd[2073]: 2025-01-17 12:02:28.734 [INFO][5429] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492" HandleID="k8s-pod-network.c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0" Jan 17 12:02:29.074210 containerd[2073]: 2025-01-17 12:02:28.757 [INFO][5429] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492" HandleID="k8s-pod-network.c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003167a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-128", "pod":"coredns-76f75df574-lgfzb", "timestamp":"2025-01-17 12:02:28.734137497 +0000 UTC"}, Hostname:"ip-172-31-23-128", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:02:29.074210 containerd[2073]: 2025-01-17 12:02:28.757 [INFO][5429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:29.074210 containerd[2073]: 2025-01-17 12:02:28.847 [INFO][5429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:02:29.074210 containerd[2073]: 2025-01-17 12:02:28.847 [INFO][5429] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-128' Jan 17 12:02:29.074210 containerd[2073]: 2025-01-17 12:02:28.850 [INFO][5429] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492" host="ip-172-31-23-128" Jan 17 12:02:29.074210 containerd[2073]: 2025-01-17 12:02:28.886 [INFO][5429] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-128" Jan 17 12:02:29.074210 containerd[2073]: 2025-01-17 12:02:28.928 [INFO][5429] ipam/ipam.go 489: Trying affinity for 192.168.62.0/26 host="ip-172-31-23-128" Jan 17 12:02:29.074210 containerd[2073]: 2025-01-17 12:02:28.944 [INFO][5429] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.0/26 host="ip-172-31-23-128" Jan 17 12:02:29.074210 containerd[2073]: 2025-01-17 12:02:28.953 [INFO][5429] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.0/26 host="ip-172-31-23-128" Jan 17 12:02:29.074210 containerd[2073]: 2025-01-17 12:02:28.953 [INFO][5429] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.0/26 handle="k8s-pod-network.c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492" host="ip-172-31-23-128" Jan 17 12:02:29.074210 containerd[2073]: 2025-01-17 12:02:28.961 [INFO][5429] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492 Jan 17 12:02:29.074210 containerd[2073]: 2025-01-17 12:02:28.975 [INFO][5429] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.0/26 handle="k8s-pod-network.c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492" host="ip-172-31-23-128" Jan 17 12:02:29.074210 containerd[2073]: 2025-01-17 12:02:28.997 [INFO][5429] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.5/26] block=192.168.62.0/26 handle="k8s-pod-network.c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492" host="ip-172-31-23-128" Jan 17 12:02:29.074210 containerd[2073]: 2025-01-17 12:02:28.999 [INFO][5429] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.5/26] handle="k8s-pod-network.c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492" host="ip-172-31-23-128" Jan 17 12:02:29.074210 containerd[2073]: 2025-01-17 12:02:28.999 [INFO][5429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
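The interleaving here shows the host-wide lock doing its job: request `[5429]` (for coredns-76f75df574-lgfzb) logged `About to acquire host-wide IPAM lock` at 28.757 but only acquired it at 28.847, the instant `[5425]` released it. Concurrent CNI ADDs on one node serialize on this lock, which is why the two sandboxes receive `.4` and `.5` back-to-back rather than racing for the same ordinal.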
Jan 17 12:02:29.074210 containerd[2073]: 2025-01-17 12:02:28.999 [INFO][5429] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.5/26] IPv6=[] ContainerID="c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492" HandleID="k8s-pod-network.c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0" Jan 17 12:02:29.076475 containerd[2073]: 2025-01-17 12:02:29.018 [INFO][5412] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492" Namespace="kube-system" Pod="coredns-76f75df574-lgfzb" WorkloadEndpoint="ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"732ab0f9-098f-4bef-81b9-1f8a3bd9d354", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"", Pod:"coredns-76f75df574-lgfzb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali307390a385b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:29.076475 containerd[2073]: 2025-01-17 12:02:29.018 [INFO][5412] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.5/32] ContainerID="c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492" Namespace="kube-system" Pod="coredns-76f75df574-lgfzb" WorkloadEndpoint="ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0" Jan 17 12:02:29.076475 containerd[2073]: 2025-01-17 12:02:29.018 [INFO][5412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali307390a385b ContainerID="c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492" Namespace="kube-system" Pod="coredns-76f75df574-lgfzb" WorkloadEndpoint="ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0" Jan 17 12:02:29.076475 containerd[2073]: 2025-01-17 12:02:29.033 [INFO][5412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492" Namespace="kube-system" Pod="coredns-76f75df574-lgfzb" WorkloadEndpoint="ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0" 
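For readability: the ports in the dumped `WorkloadEndpointPort` structs are printed in hex, so `Port:0x35` is 3·16 + 5 = 53 (the `dns` and `dns-tcp` ports) and `Port:0x23c1` is 2·4096 + 3·256 + 12·16 + 1 = 9153 (CoreDNS's metrics port); `Protocol{Type:1, …}` marks the number-or-string union as holding its string form ("UDP" or "TCP").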
Jan 17 12:02:29.076475 containerd[2073]: 2025-01-17 12:02:29.035 [INFO][5412] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492" Namespace="kube-system" Pod="coredns-76f75df574-lgfzb" WorkloadEndpoint="ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"732ab0f9-098f-4bef-81b9-1f8a3bd9d354", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492", Pod:"coredns-76f75df574-lgfzb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali307390a385b", MAC:"9a:8b:f5:e0:28:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:29.076475 containerd[2073]: 2025-01-17 12:02:29.059 [INFO][5412] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492" Namespace="kube-system" Pod="coredns-76f75df574-lgfzb" WorkloadEndpoint="ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0" Jan 17 12:02:29.161526 containerd[2073]: time="2025-01-17T12:02:29.161286187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:29.161526 containerd[2073]: time="2025-01-17T12:02:29.161385391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:29.163115 containerd[2073]: time="2025-01-17T12:02:29.161879611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:29.163115 containerd[2073]: time="2025-01-17T12:02:29.162089695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:29.184105 containerd[2073]: time="2025-01-17T12:02:29.183976939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598d996f57-4zkw6,Uid:cc92581e-93a8-44ff-8991-2aa4dd9c2b83,Namespace:calico-system,Attempt:1,} returns sandbox id \"6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d\"" Jan 17 12:02:29.195106 systemd-networkd[1608]: cali44e718082c9: Gained IPv6LL Jan 17 12:02:29.273925 containerd[2073]: time="2025-01-17T12:02:29.273852740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-lgfzb,Uid:732ab0f9-098f-4bef-81b9-1f8a3bd9d354,Namespace:kube-system,Attempt:1,} returns sandbox id \"c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492\"" Jan 17 12:02:29.283121 containerd[2073]: time="2025-01-17T12:02:29.282750140Z" level=info msg="CreateContainer within sandbox \"c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:02:29.314073 containerd[2073]: time="2025-01-17T12:02:29.313996160Z" level=info msg="CreateContainer within sandbox \"c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"94e718cfe2e555cf526bc2b3f2e39dea5089a584f2cafac88347b6cfb8693986\"" Jan 17 12:02:29.314853 containerd[2073]: time="2025-01-17T12:02:29.314787416Z" level=info msg="StartContainer for \"94e718cfe2e555cf526bc2b3f2e39dea5089a584f2cafac88347b6cfb8693986\"" Jan 17 12:02:29.418815 containerd[2073]: time="2025-01-17T12:02:29.416580140Z" level=info msg="StartContainer for \"94e718cfe2e555cf526bc2b3f2e39dea5089a584f2cafac88347b6cfb8693986\" returns successfully" Jan 17 12:02:29.811461 kubelet[3567]: I0117 12:02:29.811133 3567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-lgfzb" podStartSLOduration=41.81092797 podStartE2EDuration="41.81092797s" podCreationTimestamp="2025-01-17 12:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:29.77308069 +0000 UTC m=+55.886743263" watchObservedRunningTime="2025-01-17 12:02:29.81092797 +0000 UTC m=+55.924590531" Jan 17 12:02:30.202728 containerd[2073]: time="2025-01-17T12:02:30.198336200Z" level=info msg="StopPodSandbox for \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\"" Jan 17 12:02:30.462343 containerd[2073]: 2025-01-17 12:02:30.336 [INFO][5604] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Jan 17 12:02:30.462343 containerd[2073]: 2025-01-17 12:02:30.336 [INFO][5604] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" iface="eth0" netns="/var/run/netns/cni-2ec16d8c-ce34-d268-a8c2-eaa9653af405" Jan 17 12:02:30.462343 containerd[2073]: 2025-01-17 12:02:30.338 [INFO][5604] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" iface="eth0" netns="/var/run/netns/cni-2ec16d8c-ce34-d268-a8c2-eaa9653af405" Jan 17 12:02:30.462343 containerd[2073]: 2025-01-17 12:02:30.339 [INFO][5604] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" iface="eth0" netns="/var/run/netns/cni-2ec16d8c-ce34-d268-a8c2-eaa9653af405" Jan 17 12:02:30.462343 containerd[2073]: 2025-01-17 12:02:30.339 [INFO][5604] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Jan 17 12:02:30.462343 containerd[2073]: 2025-01-17 12:02:30.339 [INFO][5604] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Jan 17 12:02:30.462343 containerd[2073]: 2025-01-17 12:02:30.425 [INFO][5610] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" HandleID="k8s-pod-network.eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0" Jan 17 12:02:30.462343 containerd[2073]: 2025-01-17 12:02:30.426 [INFO][5610] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:30.462343 containerd[2073]: 2025-01-17 12:02:30.426 [INFO][5610] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:30.462343 containerd[2073]: 2025-01-17 12:02:30.442 [WARNING][5610] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" HandleID="k8s-pod-network.eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0" Jan 17 12:02:30.462343 containerd[2073]: 2025-01-17 12:02:30.443 [INFO][5610] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" HandleID="k8s-pod-network.eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0" Jan 17 12:02:30.462343 containerd[2073]: 2025-01-17 12:02:30.447 [INFO][5610] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:30.462343 containerd[2073]: 2025-01-17 12:02:30.454 [INFO][5604] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Jan 17 12:02:30.462343 containerd[2073]: time="2025-01-17T12:02:30.461482341Z" level=info msg="TearDown network for sandbox \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\" successfully" Jan 17 12:02:30.462343 containerd[2073]: time="2025-01-17T12:02:30.461558553Z" level=info msg="StopPodSandbox for \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\" returns successfully" Jan 17 12:02:30.469853 containerd[2073]: time="2025-01-17T12:02:30.467747241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679485b9d8-dxmxx,Uid:6079291c-981f-420b-bb70-d9a6b4850e0e,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:02:30.474345 systemd[1]: run-netns-cni\x2d2ec16d8c\x2dce34\x2dd268\x2da8c2\x2deaa9653af405.mount: Deactivated successfully. Jan 17 12:02:30.603149 systemd-networkd[1608]: cali307390a385b: Gained IPv6LL Jan 17 12:02:30.664339 systemd[1]: Started sshd@9-172.31.23.128:22-139.178.68.195:57118.service - OpenSSH per-connection server daemon (139.178.68.195:57118). 
Jan 17 12:02:30.732293 systemd-networkd[1608]: calic9df17a21b1: Gained IPv6LL Jan 17 12:02:30.893099 systemd-networkd[1608]: cali877526013aa: Link UP Jan 17 12:02:30.898504 systemd-networkd[1608]: cali877526013aa: Gained carrier Jan 17 12:02:30.904317 sshd[5629]: Accepted publickey for core from 139.178.68.195 port 57118 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:02:30.910841 sshd[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:30.936454 systemd-logind[2042]: New session 10 of user core. Jan 17 12:02:30.943829 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:02:30.963511 containerd[2073]: 2025-01-17 12:02:30.624 [INFO][5618] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0 calico-apiserver-679485b9d8- calico-apiserver 6079291c-981f-420b-bb70-d9a6b4850e0e 902 0 2025-01-17 12:02:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:679485b9d8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-128 calico-apiserver-679485b9d8-dxmxx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali877526013aa [] []}} ContainerID="992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9" Namespace="calico-apiserver" Pod="calico-apiserver-679485b9d8-dxmxx" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-" Jan 17 12:02:30.963511 containerd[2073]: 2025-01-17 12:02:30.624 [INFO][5618] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9" Namespace="calico-apiserver" Pod="calico-apiserver-679485b9d8-dxmxx" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0" Jan 17 12:02:30.963511 containerd[2073]: 2025-01-17 12:02:30.776 [INFO][5630] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9" HandleID="k8s-pod-network.992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0" Jan 17 12:02:30.963511 containerd[2073]: 2025-01-17 12:02:30.798 [INFO][5630] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9" HandleID="k8s-pod-network.992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032d210), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-128", "pod":"calico-apiserver-679485b9d8-dxmxx", "timestamp":"2025-01-17 12:02:30.776356991 +0000 UTC"}, Hostname:"ip-172-31-23-128", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:02:30.963511 containerd[2073]: 2025-01-17 12:02:30.798 [INFO][5630] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:02:30.963511 containerd[2073]: 2025-01-17 12:02:30.798 [INFO][5630] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:30.963511 containerd[2073]: 2025-01-17 12:02:30.798 [INFO][5630] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-128' Jan 17 12:02:30.963511 containerd[2073]: 2025-01-17 12:02:30.801 [INFO][5630] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9" host="ip-172-31-23-128" Jan 17 12:02:30.963511 containerd[2073]: 2025-01-17 12:02:30.815 [INFO][5630] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-128" Jan 17 12:02:30.963511 containerd[2073]: 2025-01-17 12:02:30.825 [INFO][5630] ipam/ipam.go 489: Trying affinity for 192.168.62.0/26 host="ip-172-31-23-128" Jan 17 12:02:30.963511 containerd[2073]: 2025-01-17 12:02:30.830 [INFO][5630] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.0/26 host="ip-172-31-23-128" Jan 17 12:02:30.963511 containerd[2073]: 2025-01-17 12:02:30.838 [INFO][5630] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.0/26 host="ip-172-31-23-128" Jan 17 12:02:30.963511 containerd[2073]: 2025-01-17 12:02:30.838 [INFO][5630] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.0/26 handle="k8s-pod-network.992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9" host="ip-172-31-23-128" Jan 17 12:02:30.963511 containerd[2073]: 2025-01-17 12:02:30.843 [INFO][5630] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9 Jan 17 12:02:30.963511 containerd[2073]: 2025-01-17 12:02:30.852 [INFO][5630] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.0/26 handle="k8s-pod-network.992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9" host="ip-172-31-23-128" Jan 17 12:02:30.963511 containerd[2073]: 2025-01-17 12:02:30.871 [INFO][5630] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.6/26] block=192.168.62.0/26 handle="k8s-pod-network.992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9" host="ip-172-31-23-128" Jan 17 12:02:30.963511 containerd[2073]: 2025-01-17 12:02:30.872 [INFO][5630] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.6/26] handle="k8s-pod-network.992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9" host="ip-172-31-23-128" Jan 17 12:02:30.963511 containerd[2073]: 2025-01-17 12:02:30.872 [INFO][5630] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
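Every endpoint in this log gets a deterministic host-side interface name (`cali57f53f38356`, `calic9df17a21b1`, `cali307390a385b`, `cali877526013aa`, …): Calico hashes the workload endpoint identity and keeps `cali` plus a short hex prefix, so retries reproduce the same name without any state lookup. A sketch of that shape; the exact hash input string is an assumption, not something this log shows:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName sketches the deterministic "cali"+hash naming scheme.
// Assumption: the hash input is a stable workload identity such as
// "<namespace>.<pod>"; Calico's real input string may differ.
func vethName(workloadID string) string {
	sum := sha1.Sum([]byte(workloadID))
	return "cali" + hex.EncodeToString(sum[:])[:11] // 15 chars, under IFNAMSIZ
}

func main() {
	// Same identity in -> same interface name out.
	fmt.Println(vethName("calico-apiserver.calico-apiserver-679485b9d8-dxmxx"))
}
```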
Jan 17 12:02:30.963511 containerd[2073]: 2025-01-17 12:02:30.872 [INFO][5630] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.6/26] IPv6=[] ContainerID="992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9" HandleID="k8s-pod-network.992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0" Jan 17 12:02:30.965971 containerd[2073]: 2025-01-17 12:02:30.878 [INFO][5618] cni-plugin/k8s.go 386: Populated endpoint ContainerID="992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9" Namespace="calico-apiserver" Pod="calico-apiserver-679485b9d8-dxmxx" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0", GenerateName:"calico-apiserver-679485b9d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"6079291c-981f-420b-bb70-d9a6b4850e0e", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"679485b9d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"", Pod:"calico-apiserver-679485b9d8-dxmxx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali877526013aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:30.965971 containerd[2073]: 2025-01-17 12:02:30.879 [INFO][5618] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.6/32] ContainerID="992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9" Namespace="calico-apiserver" Pod="calico-apiserver-679485b9d8-dxmxx" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0" Jan 17 12:02:30.965971 containerd[2073]: 2025-01-17 12:02:30.879 [INFO][5618] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali877526013aa ContainerID="992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9" Namespace="calico-apiserver" Pod="calico-apiserver-679485b9d8-dxmxx" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0" Jan 17 12:02:30.965971 containerd[2073]: 2025-01-17 12:02:30.899 [INFO][5618] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9" Namespace="calico-apiserver" Pod="calico-apiserver-679485b9d8-dxmxx" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0" Jan 17 12:02:30.965971 containerd[2073]: 2025-01-17 12:02:30.901 [INFO][5618] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9" Namespace="calico-apiserver" Pod="calico-apiserver-679485b9d8-dxmxx" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0", GenerateName:"calico-apiserver-679485b9d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"6079291c-981f-420b-bb70-d9a6b4850e0e", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"679485b9d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9", Pod:"calico-apiserver-679485b9d8-dxmxx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali877526013aa", MAC:"1a:73:94:1a:f1:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:30.965971 containerd[2073]: 2025-01-17 12:02:30.945 [INFO][5618] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9" Namespace="calico-apiserver" Pod="calico-apiserver-679485b9d8-dxmxx" WorkloadEndpoint="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0" Jan 17 12:02:31.057661 containerd[2073]: time="2025-01-17T12:02:31.057302492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:31.057948 containerd[2073]: time="2025-01-17T12:02:31.057493484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:31.057948 containerd[2073]: time="2025-01-17T12:02:31.057532136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:31.062227 containerd[2073]: time="2025-01-17T12:02:31.060342008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:31.364913 containerd[2073]: time="2025-01-17T12:02:31.363548134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679485b9d8-dxmxx,Uid:6079291c-981f-420b-bb70-d9a6b4850e0e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9\"" Jan 17 12:02:31.470055 sshd[5629]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:31.479828 systemd[1]: run-containerd-runc-k8s.io-992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9-runc.7hNUON.mount: Deactivated successfully. Jan 17 12:02:31.505857 systemd[1]: Started sshd@10-172.31.23.128:22-139.178.68.195:57132.service - OpenSSH per-connection server daemon (139.178.68.195:57132). Jan 17 12:02:31.509526 systemd[1]: sshd@9-172.31.23.128:22-139.178.68.195:57118.service: Deactivated successfully. Jan 17 12:02:31.532187 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:02:31.539624 systemd-logind[2042]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:02:31.555928 systemd-logind[2042]: Removed session 10. Jan 17 12:02:31.763816 sshd[5710]: Accepted publickey for core from 139.178.68.195 port 57132 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:02:31.769092 sshd[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:31.796081 systemd-logind[2042]: New session 11 of user core. Jan 17 12:02:31.804551 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:02:32.386463 sshd[5710]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:32.402337 systemd[1]: sshd@10-172.31.23.128:22-139.178.68.195:57132.service: Deactivated successfully. Jan 17 12:02:32.432062 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:02:32.439888 systemd-logind[2042]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:02:32.456226 systemd[1]: Started sshd@11-172.31.23.128:22-139.178.68.195:57146.service - OpenSSH per-connection server daemon (139.178.68.195:57146). Jan 17 12:02:32.459058 systemd-logind[2042]: Removed session 11. Jan 17 12:02:32.663055 sshd[5725]: Accepted publickey for core from 139.178.68.195 port 57146 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:02:32.670351 sshd[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:32.694150 systemd-logind[2042]: New session 12 of user core. Jan 17 12:02:32.703132 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 17 12:02:32.780082 systemd-networkd[1608]: cali877526013aa: Gained IPv6LL Jan 17 12:02:32.918317 containerd[2073]: time="2025-01-17T12:02:32.916915058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:32.920931 containerd[2073]: time="2025-01-17T12:02:32.920845286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 17 12:02:32.923726 containerd[2073]: time="2025-01-17T12:02:32.922995938Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:32.930541 containerd[2073]: time="2025-01-17T12:02:32.930470222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:32.933946 containerd[2073]: time="2025-01-17T12:02:32.933844862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 4.748778852s" Jan 17 12:02:32.933946 containerd[2073]: time="2025-01-17T12:02:32.933916790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 17 12:02:32.938143 containerd[2073]: time="2025-01-17T12:02:32.936844730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:02:32.944624 containerd[2073]: time="2025-01-17T12:02:32.943651082Z" level=info msg="CreateContainer within sandbox \"b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:02:32.985158 containerd[2073]: time="2025-01-17T12:02:32.984936314Z" level=info msg="CreateContainer within sandbox \"b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f5f117f778a48caf777752b1b1d9e335947176f7bb34ed94d87791da45e3ac4c\"" Jan 17 12:02:32.990895 containerd[2073]: time="2025-01-17T12:02:32.988650014Z" level=info msg="StartContainer for \"f5f117f778a48caf777752b1b1d9e335947176f7bb34ed94d87791da45e3ac4c\"" Jan 17 12:02:33.104513 sshd[5725]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:33.111257 systemd[1]: run-containerd-runc-k8s.io-f5f117f778a48caf777752b1b1d9e335947176f7bb34ed94d87791da45e3ac4c-runc.zwgAfP.mount: Deactivated successfully. Jan 17 12:02:33.131318 systemd[1]: sshd@11-172.31.23.128:22-139.178.68.195:57146.service: Deactivated successfully. Jan 17 12:02:33.141225 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:02:33.143503 systemd-logind[2042]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:02:33.149433 systemd-logind[2042]: Removed session 12. 
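`cali877526013aa: Gained IPv6LL` above records the interface passing duplicate-address detection on its IPv6 link-local address. Assuming the default EUI-64 derivation (the kernel's `stable-privacy` addr_gen_mode would yield a different address), the link-local follows mechanically from the MAC logged for this endpoint earlier (`1a:73:94:1a:f1:08`):

```go
package main

import (
	"fmt"
	"net"
)

// linkLocal derives the EUI-64 IPv6 link-local address for a MAC:
// fe80::/64 prefix, first MAC octet with its universal/local bit
// flipped, and ff:fe spliced into the middle of the MAC.
func linkLocal(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len)
	ip[0], ip[1] = 0xfe, 0x80
	ip[8] = mac[0] ^ 0x02 // flip the universal/local bit
	ip[9], ip[10] = mac[1], mac[2]
	ip[11], ip[12] = 0xff, 0xfe
	ip[13], ip[14], ip[15] = mac[3], mac[4], mac[5]
	return ip
}

func main() {
	mac, _ := net.ParseMAC("1a:73:94:1a:f1:08") // cali877526013aa's MAC above
	fmt.Println(linkLocal(mac))                 // fe80::1873:94ff:fe1a:f108
}
```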
Jan 17 12:02:33.204865 containerd[2073]: time="2025-01-17T12:02:33.204673823Z" level=info msg="StartContainer for \"f5f117f778a48caf777752b1b1d9e335947176f7bb34ed94d87791da45e3ac4c\" returns successfully" Jan 17 12:02:33.819778 kubelet[3567]: I0117 12:02:33.818845 3567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-679485b9d8-rm8d6" podStartSLOduration=29.058719402 podStartE2EDuration="33.818517182s" podCreationTimestamp="2025-01-17 12:02:00 +0000 UTC" firstStartedPulling="2025-01-17 12:02:28.17493441 +0000 UTC m=+54.288596947" lastFinishedPulling="2025-01-17 12:02:32.934732178 +0000 UTC m=+59.048394727" observedRunningTime="2025-01-17 12:02:33.818022494 +0000 UTC m=+59.931685139" watchObservedRunningTime="2025-01-17 12:02:33.818517182 +0000 UTC m=+59.932179731" Jan 17 12:02:34.175124 containerd[2073]: time="2025-01-17T12:02:34.173945688Z" level=info msg="StopPodSandbox for \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\"" Jan 17 12:02:34.477084 containerd[2073]: 2025-01-17 12:02:34.333 [WARNING][5796] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0", GenerateName:"calico-kube-controllers-598d996f57-", Namespace:"calico-system", SelfLink:"", UID:"cc92581e-93a8-44ff-8991-2aa4dd9c2b83", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598d996f57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d", Pod:"calico-kube-controllers-598d996f57-4zkw6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic9df17a21b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:34.477084 containerd[2073]: 2025-01-17 12:02:34.334 [INFO][5796] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Jan 17 12:02:34.477084 containerd[2073]: 2025-01-17 12:02:34.334 [INFO][5796] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" iface="eth0" netns="" Jan 17 12:02:34.477084 containerd[2073]: 2025-01-17 12:02:34.334 [INFO][5796] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Jan 17 12:02:34.477084 containerd[2073]: 2025-01-17 12:02:34.334 [INFO][5796] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Jan 17 12:02:34.477084 containerd[2073]: 2025-01-17 12:02:34.426 [INFO][5803] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" HandleID="k8s-pod-network.c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Workload="ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0" Jan 17 12:02:34.477084 containerd[2073]: 2025-01-17 12:02:34.428 [INFO][5803] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:34.477084 containerd[2073]: 2025-01-17 12:02:34.428 [INFO][5803] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:34.477084 containerd[2073]: 2025-01-17 12:02:34.456 [WARNING][5803] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" HandleID="k8s-pod-network.c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Workload="ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0" Jan 17 12:02:34.477084 containerd[2073]: 2025-01-17 12:02:34.456 [INFO][5803] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" HandleID="k8s-pod-network.c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Workload="ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0" Jan 17 12:02:34.477084 containerd[2073]: 2025-01-17 12:02:34.463 [INFO][5803] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:34.477084 containerd[2073]: 2025-01-17 12:02:34.471 [INFO][5796] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Jan 17 12:02:34.480422 containerd[2073]: time="2025-01-17T12:02:34.477036829Z" level=info msg="TearDown network for sandbox \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\" successfully" Jan 17 12:02:34.480422 containerd[2073]: time="2025-01-17T12:02:34.477181741Z" level=info msg="StopPodSandbox for \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\" returns successfully" Jan 17 12:02:34.480422 containerd[2073]: time="2025-01-17T12:02:34.478082833Z" level=info msg="RemovePodSandbox for \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\"" Jan 17 12:02:34.480422 containerd[2073]: time="2025-01-17T12:02:34.478167553Z" level=info msg="Forcibly stopping sandbox \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\"" Jan 17 12:02:34.752284 containerd[2073]: 2025-01-17 12:02:34.640 [WARNING][5822] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0", GenerateName:"calico-kube-controllers-598d996f57-", Namespace:"calico-system", SelfLink:"", UID:"cc92581e-93a8-44ff-8991-2aa4dd9c2b83", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598d996f57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d", Pod:"calico-kube-controllers-598d996f57-4zkw6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic9df17a21b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:34.752284 containerd[2073]: 2025-01-17 12:02:34.641 [INFO][5822] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Jan 17 12:02:34.752284 containerd[2073]: 2025-01-17 12:02:34.641 [INFO][5822] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" iface="eth0" netns="" Jan 17 12:02:34.752284 containerd[2073]: 2025-01-17 12:02:34.641 [INFO][5822] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Jan 17 12:02:34.752284 containerd[2073]: 2025-01-17 12:02:34.641 [INFO][5822] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Jan 17 12:02:34.752284 containerd[2073]: 2025-01-17 12:02:34.692 [INFO][5828] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" HandleID="k8s-pod-network.c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Workload="ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0" Jan 17 12:02:34.752284 containerd[2073]: 2025-01-17 12:02:34.692 [INFO][5828] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:34.752284 containerd[2073]: 2025-01-17 12:02:34.692 [INFO][5828] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:34.752284 containerd[2073]: 2025-01-17 12:02:34.731 [WARNING][5828] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" HandleID="k8s-pod-network.c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Workload="ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0" Jan 17 12:02:34.752284 containerd[2073]: 2025-01-17 12:02:34.732 [INFO][5828] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" HandleID="k8s-pod-network.c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Workload="ip--172--31--23--128-k8s-calico--kube--controllers--598d996f57--4zkw6-eth0" Jan 17 12:02:34.752284 containerd[2073]: 2025-01-17 12:02:34.737 [INFO][5828] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:34.752284 containerd[2073]: 2025-01-17 12:02:34.740 [INFO][5822] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd" Jan 17 12:02:34.752284 containerd[2073]: time="2025-01-17T12:02:34.749604099Z" level=info msg="TearDown network for sandbox \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\" successfully" Jan 17 12:02:34.770996 containerd[2073]: time="2025-01-17T12:02:34.770906943Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:02:34.771264 containerd[2073]: time="2025-01-17T12:02:34.771028167Z" level=info msg="RemovePodSandbox \"c4d1d31bf4bc706670674343ecf0b4748889d8c55d7694c4c5c18c449b6c74fd\" returns successfully" Jan 17 12:02:34.775927 containerd[2073]: time="2025-01-17T12:02:34.775854075Z" level=info msg="StopPodSandbox for \"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\"" Jan 17 12:02:34.785746 kubelet[3567]: I0117 12:02:34.785685 3567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:02:35.168745 containerd[2073]: 2025-01-17 12:02:35.012 [WARNING][5846] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4460c615-c8f8-4ea0-acd3-2a02aa651b6c", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09", Pod:"csi-node-driver-q7b84", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali44e718082c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:35.168745 containerd[2073]: 2025-01-17 12:02:35.018 [INFO][5846] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Jan 17 12:02:35.168745 containerd[2073]: 2025-01-17 12:02:35.020 [INFO][5846] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" iface="eth0" netns="" Jan 17 12:02:35.168745 containerd[2073]: 2025-01-17 12:02:35.021 [INFO][5846] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Jan 17 12:02:35.168745 containerd[2073]: 2025-01-17 12:02:35.021 [INFO][5846] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Jan 17 12:02:35.168745 containerd[2073]: 2025-01-17 12:02:35.133 [INFO][5856] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" HandleID="k8s-pod-network.fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Workload="ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0" Jan 17 12:02:35.168745 containerd[2073]: 2025-01-17 12:02:35.134 [INFO][5856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:35.168745 containerd[2073]: 2025-01-17 12:02:35.134 [INFO][5856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:35.168745 containerd[2073]: 2025-01-17 12:02:35.150 [WARNING][5856] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" HandleID="k8s-pod-network.fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Workload="ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0" Jan 17 12:02:35.168745 containerd[2073]: 2025-01-17 12:02:35.151 [INFO][5856] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" HandleID="k8s-pod-network.fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Workload="ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0" Jan 17 12:02:35.168745 containerd[2073]: 2025-01-17 12:02:35.155 [INFO][5856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:35.168745 containerd[2073]: 2025-01-17 12:02:35.160 [INFO][5846] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Jan 17 12:02:35.168745 containerd[2073]: time="2025-01-17T12:02:35.168573745Z" level=info msg="TearDown network for sandbox \"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\" successfully" Jan 17 12:02:35.168745 containerd[2073]: time="2025-01-17T12:02:35.168612613Z" level=info msg="StopPodSandbox for \"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\" returns successfully" Jan 17 12:02:35.170071 containerd[2073]: time="2025-01-17T12:02:35.169548625Z" level=info msg="RemovePodSandbox for \"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\"" Jan 17 12:02:35.170071 containerd[2073]: time="2025-01-17T12:02:35.169602277Z" level=info msg="Forcibly stopping sandbox \"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\"" Jan 17 12:02:35.286022 ntpd[2022]: Listen normally on 6 vxlan.calico 192.168.62.0:123 Jan 17 12:02:35.287170 ntpd[2022]: 17 Jan 12:02:35 ntpd[2022]: Listen normally on 6 vxlan.calico 192.168.62.0:123 Jan 17 12:02:35.287170 ntpd[2022]: 17 Jan 12:02:35 ntpd[2022]: Listen normally on 7 vxlan.calico [fe80::6462:c4ff:fe61:8efa%4]:123 Jan 17 12:02:35.287170 ntpd[2022]: 17 Jan 12:02:35 ntpd[2022]: Listen normally on 8 cali57f53f38356 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 12:02:35.287170 ntpd[2022]: 17 Jan 12:02:35 ntpd[2022]: Listen normally on 9 cali8c0d552291e [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 12:02:35.287170 ntpd[2022]: 17 Jan 12:02:35 ntpd[2022]: Listen normally on 10 cali44e718082c9 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 12:02:35.286173 ntpd[2022]: Listen normally on 7 vxlan.calico [fe80::6462:c4ff:fe61:8efa%4]:123 Jan 17 12:02:35.287532 ntpd[2022]: 17 Jan 12:02:35 ntpd[2022]: Listen normally on 11 calic9df17a21b1 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 17 12:02:35.287532 ntpd[2022]: 17 Jan 12:02:35 ntpd[2022]: Listen normally on 12 cali307390a385b [fe80::ecee:eeff:feee:eeee%11]:123 Jan 17 12:02:35.287532 ntpd[2022]: 17 Jan 12:02:35 ntpd[2022]: Listen normally on 13 cali877526013aa [fe80::ecee:eeff:feee:eeee%12]:123 Jan 17 12:02:35.286309 ntpd[2022]: Listen normally on 8 cali57f53f38356 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 12:02:35.286385 ntpd[2022]: Listen normally on 9 cali8c0d552291e [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 12:02:35.286455 ntpd[2022]: Listen normally on 10 cali44e718082c9 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 12:02:35.290078 containerd[2073]: time="2025-01-17T12:02:35.288448753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 
12:02:35.287207 ntpd[2022]: Listen normally on 11 calic9df17a21b1 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 17 12:02:35.287322 ntpd[2022]: Listen normally on 12 cali307390a385b [fe80::ecee:eeff:feee:eeee%11]:123 Jan 17 12:02:35.287395 ntpd[2022]: Listen normally on 13 cali877526013aa [fe80::ecee:eeff:feee:eeee%12]:123 Jan 17 12:02:35.291396 containerd[2073]: time="2025-01-17T12:02:35.290892277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 17 12:02:35.295304 containerd[2073]: time="2025-01-17T12:02:35.294582505Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:35.318554 containerd[2073]: time="2025-01-17T12:02:35.309164150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:35.323165 containerd[2073]: time="2025-01-17T12:02:35.322521434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 2.385604668s" Jan 17 12:02:35.323371 containerd[2073]: time="2025-01-17T12:02:35.323253494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 17 12:02:35.327479 containerd[2073]: time="2025-01-17T12:02:35.327420602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:02:35.334077 containerd[2073]: time="2025-01-17T12:02:35.334023470Z" level=info msg="CreateContainer within sandbox \"2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:02:35.388676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount279459244.mount: Deactivated successfully. Jan 17 12:02:35.399427 containerd[2073]: time="2025-01-17T12:02:35.399158390Z" level=info msg="CreateContainer within sandbox \"2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5702c1c57cfcc9c2bd5610a9cfb0640630bc584ecd17f5eae90852a1e9292eef\"" Jan 17 12:02:35.404802 containerd[2073]: time="2025-01-17T12:02:35.403096454Z" level=info msg="StartContainer for \"5702c1c57cfcc9c2bd5610a9cfb0640630bc584ecd17f5eae90852a1e9292eef\"" Jan 17 12:02:35.451398 containerd[2073]: 2025-01-17 12:02:35.296 [WARNING][5875] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4460c615-c8f8-4ea0-acd3-2a02aa651b6c", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09", Pod:"csi-node-driver-q7b84", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali44e718082c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:35.451398 containerd[2073]: 2025-01-17 12:02:35.297 [INFO][5875] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Jan 17 12:02:35.451398 containerd[2073]: 2025-01-17 12:02:35.297 [INFO][5875] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" iface="eth0" netns="" Jan 17 12:02:35.451398 containerd[2073]: 2025-01-17 12:02:35.297 [INFO][5875] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Jan 17 12:02:35.451398 containerd[2073]: 2025-01-17 12:02:35.297 [INFO][5875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Jan 17 12:02:35.451398 containerd[2073]: 2025-01-17 12:02:35.397 [INFO][5882] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" HandleID="k8s-pod-network.fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Workload="ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0" Jan 17 12:02:35.451398 containerd[2073]: 2025-01-17 12:02:35.397 [INFO][5882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:35.451398 containerd[2073]: 2025-01-17 12:02:35.397 [INFO][5882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:35.451398 containerd[2073]: 2025-01-17 12:02:35.423 [WARNING][5882] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" HandleID="k8s-pod-network.fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Workload="ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0" Jan 17 12:02:35.451398 containerd[2073]: 2025-01-17 12:02:35.424 [INFO][5882] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" HandleID="k8s-pod-network.fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Workload="ip--172--31--23--128-k8s-csi--node--driver--q7b84-eth0" Jan 17 12:02:35.451398 containerd[2073]: 2025-01-17 12:02:35.433 [INFO][5882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:35.451398 containerd[2073]: 2025-01-17 12:02:35.442 [INFO][5875] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a" Jan 17 12:02:35.452878 containerd[2073]: time="2025-01-17T12:02:35.452832290Z" level=info msg="TearDown network for sandbox \"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\" successfully" Jan 17 12:02:35.460513 containerd[2073]: time="2025-01-17T12:02:35.460456418Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:02:35.460739 containerd[2073]: time="2025-01-17T12:02:35.460705094Z" level=info msg="RemovePodSandbox \"fd7a225a1915a18d8608cf2c07ac8e24978e473646ee9c3fb76c3e63c7636a3a\" returns successfully" Jan 17 12:02:35.461576 containerd[2073]: time="2025-01-17T12:02:35.461533934Z" level=info msg="StopPodSandbox for \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\"" Jan 17 12:02:35.563418 containerd[2073]: time="2025-01-17T12:02:35.562669455Z" level=info msg="StartContainer for \"5702c1c57cfcc9c2bd5610a9cfb0640630bc584ecd17f5eae90852a1e9292eef\" returns successfully" Jan 17 12:02:35.656565 containerd[2073]: 2025-01-17 12:02:35.588 [WARNING][5919] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0", GenerateName:"calico-apiserver-679485b9d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"6079291c-981f-420b-bb70-d9a6b4850e0e", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"679485b9d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9", Pod:"calico-apiserver-679485b9d8-dxmxx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali877526013aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:35.656565 containerd[2073]: 2025-01-17 12:02:35.588 [INFO][5919] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Jan 17 12:02:35.656565 containerd[2073]: 2025-01-17 12:02:35.588 [INFO][5919] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" iface="eth0" netns="" Jan 17 12:02:35.656565 containerd[2073]: 2025-01-17 12:02:35.588 [INFO][5919] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Jan 17 12:02:35.656565 containerd[2073]: 2025-01-17 12:02:35.588 [INFO][5919] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Jan 17 12:02:35.656565 containerd[2073]: 2025-01-17 12:02:35.632 [INFO][5941] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" HandleID="k8s-pod-network.eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0" Jan 17 12:02:35.656565 containerd[2073]: 2025-01-17 12:02:35.633 [INFO][5941] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:35.656565 containerd[2073]: 2025-01-17 12:02:35.633 [INFO][5941] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:35.656565 containerd[2073]: 2025-01-17 12:02:35.647 [WARNING][5941] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" HandleID="k8s-pod-network.eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0" Jan 17 12:02:35.656565 containerd[2073]: 2025-01-17 12:02:35.647 [INFO][5941] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" HandleID="k8s-pod-network.eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0" Jan 17 12:02:35.656565 containerd[2073]: 2025-01-17 12:02:35.650 [INFO][5941] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:35.656565 containerd[2073]: 2025-01-17 12:02:35.653 [INFO][5919] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Jan 17 12:02:35.657456 containerd[2073]: time="2025-01-17T12:02:35.657277179Z" level=info msg="TearDown network for sandbox \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\" successfully" Jan 17 12:02:35.657456 containerd[2073]: time="2025-01-17T12:02:35.657391167Z" level=info msg="StopPodSandbox for \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\" returns successfully" Jan 17 12:02:35.658560 containerd[2073]: time="2025-01-17T12:02:35.658176231Z" level=info msg="RemovePodSandbox for \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\"" Jan 17 12:02:35.658560 containerd[2073]: time="2025-01-17T12:02:35.658230987Z" level=info msg="Forcibly stopping sandbox \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\"" Jan 17 12:02:35.804298 containerd[2073]: 2025-01-17 12:02:35.734 [WARNING][5959] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0", GenerateName:"calico-apiserver-679485b9d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"6079291c-981f-420b-bb70-d9a6b4850e0e", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"679485b9d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9", Pod:"calico-apiserver-679485b9d8-dxmxx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali877526013aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:35.804298 containerd[2073]: 2025-01-17 12:02:35.734 [INFO][5959] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Jan 17 12:02:35.804298 containerd[2073]: 2025-01-17 12:02:35.734 [INFO][5959] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" iface="eth0" netns="" Jan 17 12:02:35.804298 containerd[2073]: 2025-01-17 12:02:35.734 [INFO][5959] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Jan 17 12:02:35.804298 containerd[2073]: 2025-01-17 12:02:35.734 [INFO][5959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Jan 17 12:02:35.804298 containerd[2073]: 2025-01-17 12:02:35.780 [INFO][5965] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" HandleID="k8s-pod-network.eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0" Jan 17 12:02:35.804298 containerd[2073]: 2025-01-17 12:02:35.780 [INFO][5965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:35.804298 containerd[2073]: 2025-01-17 12:02:35.780 [INFO][5965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:35.804298 containerd[2073]: 2025-01-17 12:02:35.795 [WARNING][5965] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" HandleID="k8s-pod-network.eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0" Jan 17 12:02:35.804298 containerd[2073]: 2025-01-17 12:02:35.795 [INFO][5965] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" HandleID="k8s-pod-network.eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--dxmxx-eth0" Jan 17 12:02:35.804298 containerd[2073]: 2025-01-17 12:02:35.798 [INFO][5965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:35.804298 containerd[2073]: 2025-01-17 12:02:35.801 [INFO][5959] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346" Jan 17 12:02:35.805618 containerd[2073]: time="2025-01-17T12:02:35.804356992Z" level=info msg="TearDown network for sandbox \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\" successfully" Jan 17 12:02:35.810965 containerd[2073]: time="2025-01-17T12:02:35.810883528Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:02:35.811144 containerd[2073]: time="2025-01-17T12:02:35.810984112Z" level=info msg="RemovePodSandbox \"eb77bad28365f9643fa5ef6ef4f3a48468aa94e811da8e5b62745efccfe31346\" returns successfully" Jan 17 12:02:35.811702 containerd[2073]: time="2025-01-17T12:02:35.811656568Z" level=info msg="StopPodSandbox for \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\"" Jan 17 12:02:35.955643 containerd[2073]: 2025-01-17 12:02:35.885 [WARNING][5983] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"732ab0f9-098f-4bef-81b9-1f8a3bd9d354", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492", Pod:"coredns-76f75df574-lgfzb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali307390a385b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:35.955643 containerd[2073]: 2025-01-17 12:02:35.885 [INFO][5983] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Jan 17 12:02:35.955643 containerd[2073]: 2025-01-17 12:02:35.885 [INFO][5983] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" iface="eth0" netns="" Jan 17 12:02:35.955643 containerd[2073]: 2025-01-17 12:02:35.885 [INFO][5983] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Jan 17 12:02:35.955643 containerd[2073]: 2025-01-17 12:02:35.885 [INFO][5983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Jan 17 12:02:35.955643 containerd[2073]: 2025-01-17 12:02:35.933 [INFO][5989] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" HandleID="k8s-pod-network.76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0" Jan 17 12:02:35.955643 containerd[2073]: 2025-01-17 12:02:35.933 [INFO][5989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:35.955643 containerd[2073]: 2025-01-17 12:02:35.933 [INFO][5989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:02:35.955643 containerd[2073]: 2025-01-17 12:02:35.946 [WARNING][5989] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" HandleID="k8s-pod-network.76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0" Jan 17 12:02:35.955643 containerd[2073]: 2025-01-17 12:02:35.946 [INFO][5989] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" HandleID="k8s-pod-network.76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0" Jan 17 12:02:35.955643 containerd[2073]: 2025-01-17 12:02:35.950 [INFO][5989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:35.955643 containerd[2073]: 2025-01-17 12:02:35.952 [INFO][5983] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Jan 17 12:02:35.956754 containerd[2073]: time="2025-01-17T12:02:35.955673333Z" level=info msg="TearDown network for sandbox \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\" successfully" Jan 17 12:02:35.956754 containerd[2073]: time="2025-01-17T12:02:35.955710917Z" level=info msg="StopPodSandbox for \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\" returns successfully" Jan 17 12:02:35.956754 containerd[2073]: time="2025-01-17T12:02:35.957125357Z" level=info msg="RemovePodSandbox for \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\"" Jan 17 12:02:35.956754 containerd[2073]: time="2025-01-17T12:02:35.957173513Z" level=info msg="Forcibly stopping sandbox \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\"" Jan 17 12:02:36.110183 containerd[2073]: 2025-01-17 12:02:36.037 [WARNING][6008] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"732ab0f9-098f-4bef-81b9-1f8a3bd9d354", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"c2d11095c2a3a36b71014f9a82731b8906cc24b7be4bcff5714f3370f2871492", Pod:"coredns-76f75df574-lgfzb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali307390a385b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:36.110183 containerd[2073]: 2025-01-17 12:02:36.037 [INFO][6008] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Jan 17 12:02:36.110183 containerd[2073]: 2025-01-17 12:02:36.037 [INFO][6008] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" iface="eth0" netns="" Jan 17 12:02:36.110183 containerd[2073]: 2025-01-17 12:02:36.037 [INFO][6008] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Jan 17 12:02:36.110183 containerd[2073]: 2025-01-17 12:02:36.038 [INFO][6008] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Jan 17 12:02:36.110183 containerd[2073]: 2025-01-17 12:02:36.087 [INFO][6021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" HandleID="k8s-pod-network.76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0" Jan 17 12:02:36.110183 containerd[2073]: 2025-01-17 12:02:36.087 [INFO][6021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:36.110183 containerd[2073]: 2025-01-17 12:02:36.087 [INFO][6021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:02:36.110183 containerd[2073]: 2025-01-17 12:02:36.101 [WARNING][6021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" HandleID="k8s-pod-network.76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0" Jan 17 12:02:36.110183 containerd[2073]: 2025-01-17 12:02:36.101 [INFO][6021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" HandleID="k8s-pod-network.76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--lgfzb-eth0" Jan 17 12:02:36.110183 containerd[2073]: 2025-01-17 12:02:36.104 [INFO][6021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:36.110183 containerd[2073]: 2025-01-17 12:02:36.107 [INFO][6008] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e" Jan 17 12:02:36.110183 containerd[2073]: time="2025-01-17T12:02:36.110032550Z" level=info msg="TearDown network for sandbox \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\" successfully" Jan 17 12:02:36.118717 containerd[2073]: time="2025-01-17T12:02:36.118625162Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:02:36.119992 containerd[2073]: time="2025-01-17T12:02:36.118721882Z" level=info msg="RemovePodSandbox \"76c27906f1c98ff2926919bae0f16186357e8327848597c660ac5727c30b515e\" returns successfully" Jan 17 12:02:36.121842 containerd[2073]: time="2025-01-17T12:02:36.121078610Z" level=info msg="StopPodSandbox for \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\"" Jan 17 12:02:36.283752 containerd[2073]: 2025-01-17 12:02:36.213 [WARNING][6041] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0", GenerateName:"calico-apiserver-679485b9d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5a7d8e3-a980-4aa2-824f-1fccea03f32f", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"679485b9d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4", Pod:"calico-apiserver-679485b9d8-rm8d6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8c0d552291e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:36.283752 containerd[2073]: 2025-01-17 12:02:36.214 [INFO][6041] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Jan 17 12:02:36.283752 containerd[2073]: 2025-01-17 12:02:36.214 [INFO][6041] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" iface="eth0" netns="" Jan 17 12:02:36.283752 containerd[2073]: 2025-01-17 12:02:36.214 [INFO][6041] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Jan 17 12:02:36.283752 containerd[2073]: 2025-01-17 12:02:36.214 [INFO][6041] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Jan 17 12:02:36.283752 containerd[2073]: 2025-01-17 12:02:36.262 [INFO][6047] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" HandleID="k8s-pod-network.caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0" Jan 17 12:02:36.283752 containerd[2073]: 2025-01-17 12:02:36.262 [INFO][6047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:36.283752 containerd[2073]: 2025-01-17 12:02:36.262 [INFO][6047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:36.283752 containerd[2073]: 2025-01-17 12:02:36.275 [WARNING][6047] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" HandleID="k8s-pod-network.caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0" Jan 17 12:02:36.283752 containerd[2073]: 2025-01-17 12:02:36.275 [INFO][6047] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" HandleID="k8s-pod-network.caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0" Jan 17 12:02:36.283752 containerd[2073]: 2025-01-17 12:02:36.277 [INFO][6047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:36.283752 containerd[2073]: 2025-01-17 12:02:36.280 [INFO][6041] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Jan 17 12:02:36.285398 containerd[2073]: time="2025-01-17T12:02:36.283714706Z" level=info msg="TearDown network for sandbox \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\" successfully" Jan 17 12:02:36.285398 containerd[2073]: time="2025-01-17T12:02:36.284584010Z" level=info msg="StopPodSandbox for \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\" returns successfully" Jan 17 12:02:36.286113 containerd[2073]: time="2025-01-17T12:02:36.285942866Z" level=info msg="RemovePodSandbox for \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\"" Jan 17 12:02:36.286113 containerd[2073]: time="2025-01-17T12:02:36.286000442Z" level=info msg="Forcibly stopping sandbox \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\"" Jan 17 12:02:36.421626 containerd[2073]: 2025-01-17 12:02:36.360 [WARNING][6065] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0", GenerateName:"calico-apiserver-679485b9d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5a7d8e3-a980-4aa2-824f-1fccea03f32f", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"679485b9d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"b4830ba26ba948197eb4121a54fc7ea4716a7ded7f1afe2f1711828bfcc6cdf4", Pod:"calico-apiserver-679485b9d8-rm8d6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8c0d552291e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:36.421626 containerd[2073]: 2025-01-17 12:02:36.361 [INFO][6065] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Jan 17 12:02:36.421626 containerd[2073]: 2025-01-17 12:02:36.361 [INFO][6065] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" iface="eth0" netns="" Jan 17 12:02:36.421626 containerd[2073]: 2025-01-17 12:02:36.361 [INFO][6065] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Jan 17 12:02:36.421626 containerd[2073]: 2025-01-17 12:02:36.361 [INFO][6065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Jan 17 12:02:36.421626 containerd[2073]: 2025-01-17 12:02:36.401 [INFO][6071] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" HandleID="k8s-pod-network.caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0" Jan 17 12:02:36.421626 containerd[2073]: 2025-01-17 12:02:36.401 [INFO][6071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:36.421626 containerd[2073]: 2025-01-17 12:02:36.401 [INFO][6071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:36.421626 containerd[2073]: 2025-01-17 12:02:36.413 [WARNING][6071] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" HandleID="k8s-pod-network.caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0" Jan 17 12:02:36.421626 containerd[2073]: 2025-01-17 12:02:36.413 [INFO][6071] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" HandleID="k8s-pod-network.caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Workload="ip--172--31--23--128-k8s-calico--apiserver--679485b9d8--rm8d6-eth0" Jan 17 12:02:36.421626 containerd[2073]: 2025-01-17 12:02:36.416 [INFO][6071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:36.421626 containerd[2073]: 2025-01-17 12:02:36.418 [INFO][6065] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1" Jan 17 12:02:36.421626 containerd[2073]: time="2025-01-17T12:02:36.421559031Z" level=info msg="TearDown network for sandbox \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\" successfully" Jan 17 12:02:36.428486 containerd[2073]: time="2025-01-17T12:02:36.428423955Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:02:36.428617 containerd[2073]: time="2025-01-17T12:02:36.428530203Z" level=info msg="RemovePodSandbox \"caa722053946ad6e3707f7eb655312e6de3ee3d637562307a5013367c82e04a1\" returns successfully" Jan 17 12:02:36.429840 containerd[2073]: time="2025-01-17T12:02:36.429247947Z" level=info msg="StopPodSandbox for \"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\"" Jan 17 12:02:36.565916 containerd[2073]: 2025-01-17 12:02:36.503 [WARNING][6089] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"12fcbd63-a9de-44e8-887a-4b31951c12cf", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a", Pod:"coredns-76f75df574-qghwp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57f53f38356", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:36.565916 containerd[2073]: 2025-01-17 12:02:36.503 [INFO][6089] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Jan 17 12:02:36.565916 containerd[2073]: 2025-01-17 12:02:36.503 [INFO][6089] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" iface="eth0" netns="" Jan 17 12:02:36.565916 containerd[2073]: 2025-01-17 12:02:36.504 [INFO][6089] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Jan 17 12:02:36.565916 containerd[2073]: 2025-01-17 12:02:36.504 [INFO][6089] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Jan 17 12:02:36.565916 containerd[2073]: 2025-01-17 12:02:36.543 [INFO][6096] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" HandleID="k8s-pod-network.9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0" Jan 17 12:02:36.565916 containerd[2073]: 2025-01-17 12:02:36.544 [INFO][6096] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:36.565916 containerd[2073]: 2025-01-17 12:02:36.544 [INFO][6096] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:02:36.565916 containerd[2073]: 2025-01-17 12:02:36.557 [WARNING][6096] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" HandleID="k8s-pod-network.9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0" Jan 17 12:02:36.565916 containerd[2073]: 2025-01-17 12:02:36.557 [INFO][6096] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" HandleID="k8s-pod-network.9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0" Jan 17 12:02:36.565916 containerd[2073]: 2025-01-17 12:02:36.560 [INFO][6096] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:36.565916 containerd[2073]: 2025-01-17 12:02:36.562 [INFO][6089] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Jan 17 12:02:36.567484 containerd[2073]: time="2025-01-17T12:02:36.565999552Z" level=info msg="TearDown network for sandbox \"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\" successfully" Jan 17 12:02:36.567484 containerd[2073]: time="2025-01-17T12:02:36.566047096Z" level=info msg="StopPodSandbox for \"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\" returns successfully" Jan 17 12:02:36.567484 containerd[2073]: time="2025-01-17T12:02:36.566647384Z" level=info msg="RemovePodSandbox for \"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\"" Jan 17 12:02:36.567484 containerd[2073]: time="2025-01-17T12:02:36.566696224Z" level=info msg="Forcibly stopping sandbox \"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\"" Jan 17 12:02:36.715475 containerd[2073]: 2025-01-17 12:02:36.652 [WARNING][6114] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"12fcbd63-a9de-44e8-887a-4b31951c12cf", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-128", ContainerID:"12e72f6ae5a4cc42cae298856557ec78bda10912de3dd6354aeeed8d4785f41a", Pod:"coredns-76f75df574-qghwp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57f53f38356", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:36.715475 containerd[2073]: 2025-01-17 12:02:36.653 [INFO][6114] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Jan 17 12:02:36.715475 containerd[2073]: 2025-01-17 12:02:36.653 [INFO][6114] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" iface="eth0" netns="" Jan 17 12:02:36.715475 containerd[2073]: 2025-01-17 12:02:36.653 [INFO][6114] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Jan 17 12:02:36.715475 containerd[2073]: 2025-01-17 12:02:36.653 [INFO][6114] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Jan 17 12:02:36.715475 containerd[2073]: 2025-01-17 12:02:36.694 [INFO][6120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" HandleID="k8s-pod-network.9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0" Jan 17 12:02:36.715475 containerd[2073]: 2025-01-17 12:02:36.694 [INFO][6120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:36.715475 containerd[2073]: 2025-01-17 12:02:36.695 [INFO][6120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:02:36.715475 containerd[2073]: 2025-01-17 12:02:36.707 [WARNING][6120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" HandleID="k8s-pod-network.9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0" Jan 17 12:02:36.715475 containerd[2073]: 2025-01-17 12:02:36.707 [INFO][6120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" HandleID="k8s-pod-network.9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Workload="ip--172--31--23--128-k8s-coredns--76f75df574--qghwp-eth0" Jan 17 12:02:36.715475 containerd[2073]: 2025-01-17 12:02:36.709 [INFO][6120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:36.715475 containerd[2073]: 2025-01-17 12:02:36.711 [INFO][6114] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6" Jan 17 12:02:36.715475 containerd[2073]: time="2025-01-17T12:02:36.715429217Z" level=info msg="TearDown network for sandbox \"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\" successfully" Jan 17 12:02:36.724593 containerd[2073]: time="2025-01-17T12:02:36.724481993Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:02:36.724791 containerd[2073]: time="2025-01-17T12:02:36.724593089Z" level=info msg="RemovePodSandbox \"9cf9f13e27962e565bdbd30a100d46a847ddc24025ca4a79fba8fb4271da31e6\" returns successfully" Jan 17 12:02:38.143331 systemd[1]: Started sshd@12-172.31.23.128:22-139.178.68.195:35826.service - OpenSSH per-connection server daemon (139.178.68.195:35826). 
Jan 17 12:02:38.156119 containerd[2073]: time="2025-01-17T12:02:38.155136568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:38.159568 containerd[2073]: time="2025-01-17T12:02:38.159459028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 17 12:02:38.164841 containerd[2073]: time="2025-01-17T12:02:38.162741496Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:38.175195 containerd[2073]: time="2025-01-17T12:02:38.175130788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:38.180009 containerd[2073]: time="2025-01-17T12:02:38.179279296Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.848085102s" Jan 17 12:02:38.180009 containerd[2073]: time="2025-01-17T12:02:38.179347432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 17 12:02:38.182196 containerd[2073]: time="2025-01-17T12:02:38.180933112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:02:38.268345 containerd[2073]: time="2025-01-17T12:02:38.268233328Z" level=info msg="CreateContainer within sandbox \"6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:02:38.297907 containerd[2073]: time="2025-01-17T12:02:38.296272288Z" level=info msg="CreateContainer within sandbox \"6fbcf035037e42c1fa415a11544fdea42de8483f17fb2eb23d21ba2aa007a52d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"cc32c115a5530a4fa51a5f6c6c46d8b0ae07f16c0ed698cd4d8838a3c7e2436d\"" Jan 17 12:02:38.303117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount695876034.mount: Deactivated successfully. Jan 17 12:02:38.305404 containerd[2073]: time="2025-01-17T12:02:38.305332408Z" level=info msg="StartContainer for \"cc32c115a5530a4fa51a5f6c6c46d8b0ae07f16c0ed698cd4d8838a3c7e2436d\"" Jan 17 12:02:38.421817 sshd[6130]: Accepted publickey for core from 139.178.68.195 port 35826 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:02:38.429962 sshd[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:38.470911 systemd-logind[2042]: New session 13 of user core. Jan 17 12:02:38.472486 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 17 12:02:38.578827 containerd[2073]: time="2025-01-17T12:02:38.574941258Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:38.584370 containerd[2073]: time="2025-01-17T12:02:38.583189002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:02:38.595199 containerd[2073]: time="2025-01-17T12:02:38.595137630Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 414.123554ms" Jan 17 12:02:38.595914 containerd[2073]: time="2025-01-17T12:02:38.595854390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 17 12:02:38.611799 containerd[2073]: time="2025-01-17T12:02:38.609320466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:02:38.623984 containerd[2073]: time="2025-01-17T12:02:38.623399610Z" level=info msg="CreateContainer within sandbox \"992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:02:38.687965 containerd[2073]: time="2025-01-17T12:02:38.687680982Z" level=info msg="CreateContainer within sandbox \"992d1c44550f7dd5899c525853aa1ea006262c2b1745c31f08af6a761256a1c9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9c50366396cff58410d4e175a158aef80a5652769704e1d7845eb7a8689f0454\"" Jan 17 12:02:38.693719 containerd[2073]: time="2025-01-17T12:02:38.693663258Z" level=info msg="StartContainer for \"9c50366396cff58410d4e175a158aef80a5652769704e1d7845eb7a8689f0454\"" Jan 17 12:02:38.861916 containerd[2073]: time="2025-01-17T12:02:38.861862747Z" level=info msg="StartContainer for \"cc32c115a5530a4fa51a5f6c6c46d8b0ae07f16c0ed698cd4d8838a3c7e2436d\" returns successfully" Jan 17 12:02:38.951398 sshd[6130]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:38.965282 containerd[2073]: time="2025-01-17T12:02:38.962640056Z" level=info msg="StartContainer for \"9c50366396cff58410d4e175a158aef80a5652769704e1d7845eb7a8689f0454\" returns successfully" Jan 17 12:02:38.970985 systemd[1]: sshd@12-172.31.23.128:22-139.178.68.195:35826.service: Deactivated successfully. Jan 17 12:02:38.995279 systemd-logind[2042]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:02:38.997141 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:02:39.003669 systemd-logind[2042]: Removed session 13. Jan 17 12:02:39.970077 systemd[1]: run-containerd-runc-k8s.io-cc32c115a5530a4fa51a5f6c6c46d8b0ae07f16c0ed698cd4d8838a3c7e2436d-runc.2HfV0F.mount: Deactivated successfully. 
Jan 17 12:02:39.983419 kubelet[3567]: I0117 12:02:39.983355 3567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-598d996f57-4zkw6" podStartSLOduration=29.990794376 podStartE2EDuration="38.983285793s" podCreationTimestamp="2025-01-17 12:02:01 +0000 UTC" firstStartedPulling="2025-01-17 12:02:29.187246483 +0000 UTC m=+55.300909020" lastFinishedPulling="2025-01-17 12:02:38.179737888 +0000 UTC m=+64.293400437" observedRunningTime="2025-01-17 12:02:39.981375069 +0000 UTC m=+66.095037618" watchObservedRunningTime="2025-01-17 12:02:39.983285793 +0000 UTC m=+66.096948378" Jan 17 12:02:39.986190 kubelet[3567]: I0117 12:02:39.985337 3567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-679485b9d8-dxmxx" podStartSLOduration=32.822683425 podStartE2EDuration="39.985270401s" podCreationTimestamp="2025-01-17 12:02:00 +0000 UTC" firstStartedPulling="2025-01-17 12:02:31.44332087 +0000 UTC m=+57.556983419" lastFinishedPulling="2025-01-17 12:02:38.605907846 +0000 UTC m=+64.719570395" observedRunningTime="2025-01-17 12:02:39.927103412 +0000 UTC m=+66.040765985" watchObservedRunningTime="2025-01-17 12:02:39.985270401 +0000 UTC m=+66.098933130" Jan 17 12:02:40.532318 containerd[2073]: time="2025-01-17T12:02:40.532175983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:40.536402 containerd[2073]: time="2025-01-17T12:02:40.536260723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 17 12:02:40.541146 containerd[2073]: time="2025-01-17T12:02:40.540898556Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:40.547439 containerd[2073]: time="2025-01-17T12:02:40.546623252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:40.549310 containerd[2073]: time="2025-01-17T12:02:40.549241316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.935432778s" Jan 17 12:02:40.550634 containerd[2073]: time="2025-01-17T12:02:40.550578560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 17 12:02:40.555539 containerd[2073]: time="2025-01-17T12:02:40.555475016Z" level=info msg="CreateContainer within sandbox \"2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:02:40.597356 containerd[2073]: time="2025-01-17T12:02:40.597047816Z" level=info msg="CreateContainer within sandbox \"2e1abe073a02df18b47b6824c51fa4fc8c835875b9bd04ed2f1db9bc957edf09\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns 
container id \"c468c992ee96f61fedecfbe5ffffb8fcf746f06b9de6250935eea328ab3d8c85\"" Jan 17 12:02:40.601737 containerd[2073]: time="2025-01-17T12:02:40.600273752Z" level=info msg="StartContainer for \"c468c992ee96f61fedecfbe5ffffb8fcf746f06b9de6250935eea328ab3d8c85\"" Jan 17 12:02:40.751872 containerd[2073]: time="2025-01-17T12:02:40.749611077Z" level=info msg="StartContainer for \"c468c992ee96f61fedecfbe5ffffb8fcf746f06b9de6250935eea328ab3d8c85\" returns successfully" Jan 17 12:02:40.898243 kubelet[3567]: I0117 12:02:40.898114 3567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:02:41.473106 kubelet[3567]: I0117 12:02:41.473016 3567 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:02:41.473106 kubelet[3567]: I0117 12:02:41.473071 3567 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:02:43.078448 kubelet[3567]: I0117 12:02:43.078277 3567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:02:43.119251 kubelet[3567]: I0117 12:02:43.119109 3567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-q7b84" podStartSLOduration=30.813919114 podStartE2EDuration="43.11900852s" podCreationTimestamp="2025-01-17 12:02:00 +0000 UTC" firstStartedPulling="2025-01-17 12:02:28.246209346 +0000 UTC m=+54.359871895" lastFinishedPulling="2025-01-17 12:02:40.551298764 +0000 UTC m=+66.664961301" observedRunningTime="2025-01-17 12:02:40.930057825 +0000 UTC m=+67.043720410" watchObservedRunningTime="2025-01-17 12:02:43.11900852 +0000 UTC m=+69.232671165" Jan 17 12:02:43.981340 systemd[1]: Started sshd@13-172.31.23.128:22-139.178.68.195:35842.service - OpenSSH per-connection server daemon (139.178.68.195:35842). Jan 17 12:02:44.175807 sshd[6309]: Accepted publickey for core from 139.178.68.195 port 35842 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:02:44.178957 sshd[6309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:44.187683 systemd-logind[2042]: New session 14 of user core. Jan 17 12:02:44.196964 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:02:44.468085 sshd[6309]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:44.477155 systemd[1]: sshd@13-172.31.23.128:22-139.178.68.195:35842.service: Deactivated successfully. Jan 17 12:02:44.484229 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:02:44.486963 systemd-logind[2042]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:02:44.489446 systemd-logind[2042]: Removed session 14. Jan 17 12:02:49.499834 systemd[1]: Started sshd@14-172.31.23.128:22-139.178.68.195:55460.service - OpenSSH per-connection server daemon (139.178.68.195:55460). Jan 17 12:02:49.682253 sshd[6350]: Accepted publickey for core from 139.178.68.195 port 55460 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:02:49.685066 sshd[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:49.693031 systemd-logind[2042]: New session 15 of user core. Jan 17 12:02:49.698292 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 17 12:02:49.960968 sshd[6350]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:49.967863 systemd-logind[2042]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:02:49.968664 systemd[1]: sshd@14-172.31.23.128:22-139.178.68.195:55460.service: Deactivated successfully. Jan 17 12:02:49.977324 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:02:49.982570 systemd-logind[2042]: Removed session 15. Jan 17 12:02:50.524979 update_engine[2045]: I20250117 12:02:50.524904 2045 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 17 12:02:50.525611 update_engine[2045]: I20250117 12:02:50.524986 2045 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 17 12:02:50.525611 update_engine[2045]: I20250117 12:02:50.525395 2045 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 17 12:02:50.526436 update_engine[2045]: I20250117 12:02:50.526350 2045 omaha_request_params.cc:62] Current group set to lts Jan 17 12:02:50.526740 update_engine[2045]: I20250117 12:02:50.526564 2045 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 17 12:02:50.526740 update_engine[2045]: I20250117 12:02:50.526598 2045 update_attempter.cc:643] Scheduling an action processor start. Jan 17 12:02:50.526740 update_engine[2045]: I20250117 12:02:50.526638 2045 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 12:02:50.526740 update_engine[2045]: I20250117 12:02:50.526702 2045 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 17 12:02:50.527008 locksmithd[2085]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 17 12:02:50.527444 update_engine[2045]: I20250117 12:02:50.527123 2045 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 12:02:50.527444 update_engine[2045]: I20250117 12:02:50.527150 2045 omaha_request_action.cc:272] Request: Jan 17 12:02:50.527444 update_engine[2045]: Jan 17 12:02:50.527444 update_engine[2045]: Jan 17 12:02:50.527444 update_engine[2045]: Jan 17 12:02:50.527444 update_engine[2045]: Jan 17 12:02:50.527444 update_engine[2045]: Jan 17 12:02:50.527444 update_engine[2045]: Jan 17 12:02:50.527444 update_engine[2045]: Jan 17 12:02:50.527444 update_engine[2045]: Jan 17 12:02:50.527444 update_engine[2045]: I20250117 12:02:50.527169 2045 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:02:50.531071 update_engine[2045]: I20250117 12:02:50.531005 2045 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:02:50.531543 update_engine[2045]: I20250117 12:02:50.531486 2045 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 12:02:50.573521 update_engine[2045]: E20250117 12:02:50.573427 2045 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:02:50.573692 update_engine[2045]: I20250117 12:02:50.573580 2045 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 17 12:02:51.412324 kubelet[3567]: I0117 12:02:51.412233 3567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:02:54.992229 systemd[1]: Started sshd@15-172.31.23.128:22-139.178.68.195:59912.service - OpenSSH per-connection server daemon (139.178.68.195:59912). 
Jan 17 12:02:55.176179 sshd[6371]: Accepted publickey for core from 139.178.68.195 port 59912 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:02:55.179682 sshd[6371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:55.189675 systemd-logind[2042]: New session 16 of user core. Jan 17 12:02:55.197334 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:02:55.515053 sshd[6371]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:55.530184 systemd[1]: sshd@15-172.31.23.128:22-139.178.68.195:59912.service: Deactivated successfully. Jan 17 12:02:55.540375 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:02:55.542366 systemd-logind[2042]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:02:55.556801 systemd[1]: Started sshd@16-172.31.23.128:22-139.178.68.195:59918.service - OpenSSH per-connection server daemon (139.178.68.195:59918). Jan 17 12:02:55.561516 systemd-logind[2042]: Removed session 16. Jan 17 12:02:55.751355 sshd[6385]: Accepted publickey for core from 139.178.68.195 port 59918 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:02:55.754357 sshd[6385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:55.763069 systemd-logind[2042]: New session 17 of user core. Jan 17 12:02:55.772123 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:02:56.313346 sshd[6385]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:56.330135 systemd-logind[2042]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:02:56.332735 systemd[1]: sshd@16-172.31.23.128:22-139.178.68.195:59918.service: Deactivated successfully. Jan 17 12:02:56.338672 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:02:56.363537 systemd[1]: Started sshd@17-172.31.23.128:22-139.178.68.195:59932.service - OpenSSH per-connection server daemon (139.178.68.195:59932). Jan 17 12:02:56.369148 systemd-logind[2042]: Removed session 17. Jan 17 12:02:56.556103 sshd[6397]: Accepted publickey for core from 139.178.68.195 port 59932 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:02:56.559387 sshd[6397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:56.571693 systemd-logind[2042]: New session 18 of user core. Jan 17 12:02:56.576401 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:03:00.530094 update_engine[2045]: I20250117 12:03:00.527837 2045 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:03:00.530094 update_engine[2045]: I20250117 12:03:00.528199 2045 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:03:00.534524 update_engine[2045]: I20250117 12:03:00.528512 2045 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 12:03:00.534829 update_engine[2045]: E20250117 12:03:00.534718 2045 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:03:00.534907 update_engine[2045]: I20250117 12:03:00.534880 2045 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 17 12:03:00.582057 sshd[6397]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:00.600421 systemd[1]: sshd@17-172.31.23.128:22-139.178.68.195:59932.service: Deactivated successfully. Jan 17 12:03:00.644684 systemd[1]: session-18.scope: Deactivated successfully. 
Jan 17 12:03:00.654481 systemd-logind[2042]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:03:00.690171 systemd[1]: Started sshd@18-172.31.23.128:22-139.178.68.195:59936.service - OpenSSH per-connection server daemon (139.178.68.195:59936). Jan 17 12:03:00.697029 systemd-logind[2042]: Removed session 18. Jan 17 12:03:00.888403 sshd[6416]: Accepted publickey for core from 139.178.68.195 port 59936 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:00.892908 sshd[6416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:00.908581 systemd-logind[2042]: New session 19 of user core. Jan 17 12:03:00.917627 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:03:01.589164 sshd[6416]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:01.603488 systemd-logind[2042]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:03:01.605224 systemd[1]: sshd@18-172.31.23.128:22-139.178.68.195:59936.service: Deactivated successfully. Jan 17 12:03:01.618609 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:03:01.641170 systemd-logind[2042]: Removed session 19. Jan 17 12:03:01.650311 systemd[1]: Started sshd@19-172.31.23.128:22-139.178.68.195:59950.service - OpenSSH per-connection server daemon (139.178.68.195:59950). Jan 17 12:03:01.846920 sshd[6428]: Accepted publickey for core from 139.178.68.195 port 59950 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:01.850607 sshd[6428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:01.859499 systemd-logind[2042]: New session 20 of user core. Jan 17 12:03:01.866474 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:03:02.141020 sshd[6428]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:02.156503 systemd[1]: sshd@19-172.31.23.128:22-139.178.68.195:59950.service: Deactivated successfully. Jan 17 12:03:02.164516 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:03:02.167065 systemd-logind[2042]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:03:02.169911 systemd-logind[2042]: Removed session 20. Jan 17 12:03:07.175756 systemd[1]: Started sshd@20-172.31.23.128:22-139.178.68.195:54906.service - OpenSSH per-connection server daemon (139.178.68.195:54906). Jan 17 12:03:07.367271 sshd[6449]: Accepted publickey for core from 139.178.68.195 port 54906 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:07.371563 sshd[6449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:07.383230 systemd-logind[2042]: New session 21 of user core. Jan 17 12:03:07.389385 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:03:07.642140 sshd[6449]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:07.647919 systemd[1]: sshd@20-172.31.23.128:22-139.178.68.195:54906.service: Deactivated successfully. Jan 17 12:03:07.657099 systemd-logind[2042]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:03:07.658043 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:03:07.661334 systemd-logind[2042]: Removed session 21. 
Jan 17 12:03:10.525444 update_engine[2045]: I20250117 12:03:10.525128 2045 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:03:10.526121 update_engine[2045]: I20250117 12:03:10.525471 2045 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:03:10.526121 update_engine[2045]: I20250117 12:03:10.525828 2045 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 12:03:10.526329 update_engine[2045]: E20250117 12:03:10.526281 2045 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:03:10.526397 update_engine[2045]: I20250117 12:03:10.526374 2045 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 17 12:03:12.675353 systemd[1]: Started sshd@21-172.31.23.128:22-139.178.68.195:54908.service - OpenSSH per-connection server daemon (139.178.68.195:54908). Jan 17 12:03:12.857171 sshd[6467]: Accepted publickey for core from 139.178.68.195 port 54908 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:12.857323 sshd[6467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:12.859367 systemd[1]: run-containerd-runc-k8s.io-cc32c115a5530a4fa51a5f6c6c46d8b0ae07f16c0ed698cd4d8838a3c7e2436d-runc.1gMRqe.mount: Deactivated successfully. Jan 17 12:03:12.875190 systemd-logind[2042]: New session 22 of user core. Jan 17 12:03:12.883681 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 12:03:13.130229 sshd[6467]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:13.138289 systemd[1]: sshd@21-172.31.23.128:22-139.178.68.195:54908.service: Deactivated successfully. Jan 17 12:03:13.144352 systemd-logind[2042]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:03:13.145371 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:03:13.148975 systemd-logind[2042]: Removed session 22. Jan 17 12:03:18.163483 systemd[1]: Started sshd@22-172.31.23.128:22-139.178.68.195:40790.service - OpenSSH per-connection server daemon (139.178.68.195:40790). Jan 17 12:03:18.347629 sshd[6520]: Accepted publickey for core from 139.178.68.195 port 40790 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:18.350421 sshd[6520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:18.360839 systemd-logind[2042]: New session 23 of user core. Jan 17 12:03:18.365289 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:03:18.612027 sshd[6520]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:18.621129 systemd[1]: sshd@22-172.31.23.128:22-139.178.68.195:40790.service: Deactivated successfully. Jan 17 12:03:18.628645 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:03:18.631931 systemd-logind[2042]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:03:18.633930 systemd-logind[2042]: Removed session 23. Jan 17 12:03:20.526649 update_engine[2045]: I20250117 12:03:20.526546 2045 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:03:20.527516 update_engine[2045]: I20250117 12:03:20.527458 2045 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:03:20.527869 update_engine[2045]: I20250117 12:03:20.527817 2045 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 17 12:03:20.528325 update_engine[2045]: E20250117 12:03:20.528271 2045 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:03:20.528411 update_engine[2045]: I20250117 12:03:20.528353 2045 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 12:03:20.528411 update_engine[2045]: I20250117 12:03:20.528375 2045 omaha_request_action.cc:617] Omaha request response: Jan 17 12:03:20.528530 update_engine[2045]: E20250117 12:03:20.528492 2045 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 17 12:03:20.528601 update_engine[2045]: I20250117 12:03:20.528542 2045 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 17 12:03:20.528601 update_engine[2045]: I20250117 12:03:20.528561 2045 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 12:03:20.528601 update_engine[2045]: I20250117 12:03:20.528580 2045 update_attempter.cc:306] Processing Done. Jan 17 12:03:20.528739 update_engine[2045]: E20250117 12:03:20.528609 2045 update_attempter.cc:619] Update failed. Jan 17 12:03:20.528739 update_engine[2045]: I20250117 12:03:20.528626 2045 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 17 12:03:20.528739 update_engine[2045]: I20250117 12:03:20.528642 2045 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 17 12:03:20.528739 update_engine[2045]: I20250117 12:03:20.528657 2045 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 17 12:03:20.529051 update_engine[2045]: I20250117 12:03:20.528803 2045 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 12:03:20.529051 update_engine[2045]: I20250117 12:03:20.528844 2045 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 12:03:20.529051 update_engine[2045]: I20250117 12:03:20.528862 2045 omaha_request_action.cc:272] Request: Jan 17 12:03:20.529051 update_engine[2045]: Jan 17 12:03:20.529051 update_engine[2045]: Jan 17 12:03:20.529051 update_engine[2045]: Jan 17 12:03:20.529051 update_engine[2045]: Jan 17 12:03:20.529051 update_engine[2045]: Jan 17 12:03:20.529051 update_engine[2045]: Jan 17 12:03:20.529051 update_engine[2045]: I20250117 12:03:20.528879 2045 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:03:20.529479 update_engine[2045]: I20250117 12:03:20.529131 2045 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:03:20.529479 update_engine[2045]: I20250117 12:03:20.529435 2045 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 17 12:03:20.530031 update_engine[2045]: E20250117 12:03:20.529750 2045 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:03:20.530031 update_engine[2045]: I20250117 12:03:20.529959 2045 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 12:03:20.530031 update_engine[2045]: I20250117 12:03:20.529982 2045 omaha_request_action.cc:617] Omaha request response: Jan 17 12:03:20.530031 update_engine[2045]: I20250117 12:03:20.530000 2045 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 12:03:20.530585 locksmithd[2085]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 17 12:03:20.530585 locksmithd[2085]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 17 12:03:20.531230 update_engine[2045]: I20250117 12:03:20.530047 2045 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 12:03:20.531230 update_engine[2045]: I20250117 12:03:20.530066 2045 update_attempter.cc:306] Processing Done. Jan 17 12:03:20.531230 update_engine[2045]: I20250117 12:03:20.530083 2045 update_attempter.cc:310] Error event sent. Jan 17 12:03:20.531230 update_engine[2045]: I20250117 12:03:20.530105 2045 update_check_scheduler.cc:74] Next update check in 49m51s Jan 17 12:03:23.644372 systemd[1]: Started sshd@23-172.31.23.128:22-139.178.68.195:40796.service - OpenSSH per-connection server daemon (139.178.68.195:40796). Jan 17 12:03:23.832499 sshd[6554]: Accepted publickey for core from 139.178.68.195 port 40796 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:23.835348 sshd[6554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:23.843960 systemd-logind[2042]: New session 24 of user core. Jan 17 12:03:23.850262 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 12:03:24.105211 sshd[6554]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:24.114409 systemd[1]: sshd@23-172.31.23.128:22-139.178.68.195:40796.service: Deactivated successfully. Jan 17 12:03:24.122093 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 12:03:24.124448 systemd-logind[2042]: Session 24 logged out. Waiting for processes to exit. Jan 17 12:03:24.127540 systemd-logind[2042]: Removed session 24. Jan 17 12:03:29.137274 systemd[1]: Started sshd@24-172.31.23.128:22-139.178.68.195:57884.service - OpenSSH per-connection server daemon (139.178.68.195:57884). Jan 17 12:03:29.315584 sshd[6569]: Accepted publickey for core from 139.178.68.195 port 57884 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:29.318479 sshd[6569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:29.328863 systemd-logind[2042]: New session 25 of user core. Jan 17 12:03:29.334283 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 12:03:29.580338 sshd[6569]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:29.590388 systemd[1]: sshd@24-172.31.23.128:22-139.178.68.195:57884.service: Deactivated successfully. Jan 17 12:03:29.595920 systemd-logind[2042]: Session 25 logged out. Waiting for processes to exit. Jan 17 12:03:29.596992 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 12:03:29.600711 systemd-logind[2042]: Removed session 25. 
Jan 17 12:03:34.615335 systemd[1]: Started sshd@25-172.31.23.128:22-139.178.68.195:57886.service - OpenSSH per-connection server daemon (139.178.68.195:57886). Jan 17 12:03:34.789561 sshd[6585]: Accepted publickey for core from 139.178.68.195 port 57886 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:34.792294 sshd[6585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:34.801857 systemd-logind[2042]: New session 26 of user core. Jan 17 12:03:34.806880 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 12:03:35.047599 sshd[6585]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:35.055288 systemd[1]: sshd@25-172.31.23.128:22-139.178.68.195:57886.service: Deactivated successfully. Jan 17 12:03:35.064408 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 12:03:35.066393 systemd-logind[2042]: Session 26 logged out. Waiting for processes to exit. Jan 17 12:03:35.069409 systemd-logind[2042]: Removed session 26. Jan 17 12:03:42.863240 systemd[1]: run-containerd-runc-k8s.io-cc32c115a5530a4fa51a5f6c6c46d8b0ae07f16c0ed698cd4d8838a3c7e2436d-runc.XeWDFL.mount: Deactivated successfully. Jan 17 12:03:48.523437 containerd[2073]: time="2025-01-17T12:03:48.523335985Z" level=info msg="shim disconnected" id=f149a7949b17b1503ee37fba96d3a317d0ee39ae160518b52ba3ec439a91183a namespace=k8s.io Jan 17 12:03:48.524282 containerd[2073]: time="2025-01-17T12:03:48.523431109Z" level=warning msg="cleaning up after shim disconnected" id=f149a7949b17b1503ee37fba96d3a317d0ee39ae160518b52ba3ec439a91183a namespace=k8s.io Jan 17 12:03:48.524282 containerd[2073]: time="2025-01-17T12:03:48.524247949Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:03:48.528691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f149a7949b17b1503ee37fba96d3a317d0ee39ae160518b52ba3ec439a91183a-rootfs.mount: Deactivated successfully. 
Jan 17 12:03:49.120304 kubelet[3567]: I0117 12:03:49.120229 3567 scope.go:117] "RemoveContainer" containerID="f149a7949b17b1503ee37fba96d3a317d0ee39ae160518b52ba3ec439a91183a" Jan 17 12:03:49.124393 containerd[2073]: time="2025-01-17T12:03:49.124103112Z" level=info msg="CreateContainer within sandbox \"1559873c9eb486440c7ce38a914361044bb5692ddb4fa204bcc645ede33398fe\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 17 12:03:49.154572 containerd[2073]: time="2025-01-17T12:03:49.153932124Z" level=info msg="CreateContainer within sandbox \"1559873c9eb486440c7ce38a914361044bb5692ddb4fa204bcc645ede33398fe\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"73eb46d6ca689bfa44608f3cd88acca94a83c9f2dcbebbe7a57cf43a0d5e68f8\"" Jan 17 12:03:49.155114 containerd[2073]: time="2025-01-17T12:03:49.155055852Z" level=info msg="StartContainer for \"73eb46d6ca689bfa44608f3cd88acca94a83c9f2dcbebbe7a57cf43a0d5e68f8\"" Jan 17 12:03:49.257397 containerd[2073]: time="2025-01-17T12:03:49.257223697Z" level=info msg="StartContainer for \"73eb46d6ca689bfa44608f3cd88acca94a83c9f2dcbebbe7a57cf43a0d5e68f8\" returns successfully" Jan 17 12:03:49.385969 containerd[2073]: time="2025-01-17T12:03:49.385430617Z" level=info msg="shim disconnected" id=9400088a79d9ca874756593857d5b0f6c1e026bd0e386aa6c560108373fa5c39 namespace=k8s.io Jan 17 12:03:49.385969 containerd[2073]: time="2025-01-17T12:03:49.385624057Z" level=warning msg="cleaning up after shim disconnected" id=9400088a79d9ca874756593857d5b0f6c1e026bd0e386aa6c560108373fa5c39 namespace=k8s.io Jan 17 12:03:49.385969 containerd[2073]: time="2025-01-17T12:03:49.385678081Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:03:49.525719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9400088a79d9ca874756593857d5b0f6c1e026bd0e386aa6c560108373fa5c39-rootfs.mount: Deactivated successfully. 
Jan 17 12:03:50.129481 kubelet[3567]: I0117 12:03:50.129336 3567 scope.go:117] "RemoveContainer" containerID="9400088a79d9ca874756593857d5b0f6c1e026bd0e386aa6c560108373fa5c39" Jan 17 12:03:50.135690 containerd[2073]: time="2025-01-17T12:03:50.135626281Z" level=info msg="CreateContainer within sandbox \"8821a08a45aa9986bf0d0f9f9540802dbfd840ef0b8c3410417149bbc79657e7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 17 12:03:50.168018 containerd[2073]: time="2025-01-17T12:03:50.167758321Z" level=info msg="CreateContainer within sandbox \"8821a08a45aa9986bf0d0f9f9540802dbfd840ef0b8c3410417149bbc79657e7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6d8ece6b9fd3d77994fd377a15877c232cc5b57c88bc4a7ec2575f4dd77a8480\"" Jan 17 12:03:50.169689 containerd[2073]: time="2025-01-17T12:03:50.169607221Z" level=info msg="StartContainer for \"6d8ece6b9fd3d77994fd377a15877c232cc5b57c88bc4a7ec2575f4dd77a8480\"" Jan 17 12:03:50.311376 containerd[2073]: time="2025-01-17T12:03:50.311289914Z" level=info msg="StartContainer for \"6d8ece6b9fd3d77994fd377a15877c232cc5b57c88bc4a7ec2575f4dd77a8480\" returns successfully" Jan 17 12:03:53.317675 containerd[2073]: time="2025-01-17T12:03:53.317586125Z" level=info msg="shim disconnected" id=78084d7c1af4db9ef2940a5a45e5be68eb9bbb96b86b938bf7042e0d3040c8c0 namespace=k8s.io Jan 17 12:03:53.317675 containerd[2073]: time="2025-01-17T12:03:53.317671469Z" level=warning msg="cleaning up after shim disconnected" id=78084d7c1af4db9ef2940a5a45e5be68eb9bbb96b86b938bf7042e0d3040c8c0 namespace=k8s.io Jan 17 12:03:53.318384 containerd[2073]: time="2025-01-17T12:03:53.317698301Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:03:53.320024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78084d7c1af4db9ef2940a5a45e5be68eb9bbb96b86b938bf7042e0d3040c8c0-rootfs.mount: Deactivated successfully. 
Jan 17 12:03:54.149586 kubelet[3567]: I0117 12:03:54.149526 3567 scope.go:117] "RemoveContainer" containerID="78084d7c1af4db9ef2940a5a45e5be68eb9bbb96b86b938bf7042e0d3040c8c0" Jan 17 12:03:54.153683 containerd[2073]: time="2025-01-17T12:03:54.153559769Z" level=info msg="CreateContainer within sandbox \"9f81b0100e119e8b90b6b78d225498078cdcb0bd6d469f0902286fc677e5f9d8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 17 12:03:54.186353 containerd[2073]: time="2025-01-17T12:03:54.186247865Z" level=info msg="CreateContainer within sandbox \"9f81b0100e119e8b90b6b78d225498078cdcb0bd6d469f0902286fc677e5f9d8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d52e176816a864d85c49d36d8ca314c33b14273f9f4472ec6473f1b35c777737\"" Jan 17 12:03:54.187422 containerd[2073]: time="2025-01-17T12:03:54.187375841Z" level=info msg="StartContainer for \"d52e176816a864d85c49d36d8ca314c33b14273f9f4472ec6473f1b35c777737\"" Jan 17 12:03:54.308811 containerd[2073]: time="2025-01-17T12:03:54.308066466Z" level=info msg="StartContainer for \"d52e176816a864d85c49d36d8ca314c33b14273f9f4472ec6473f1b35c777737\" returns successfully" Jan 17 12:03:57.088061 kubelet[3567]: E0117 12:03:57.087991 3567 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-128?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 17 12:04:07.088661 kubelet[3567]: E0117 12:04:07.088595 3567 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-128?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"