Jan 17 12:01:14.191103 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 17 12:01:14.191149 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 17 10:42:25 -00 2025
Jan 17 12:01:14.191174 kernel: KASLR disabled due to lack of seed
Jan 17 12:01:14.191190 kernel: efi: EFI v2.7 by EDK II
Jan 17 12:01:14.191225 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Jan 17 12:01:14.191259 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:01:14.191283 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 17 12:01:14.191299 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 17 12:01:14.191316 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 17 12:01:14.191331 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 17 12:01:14.191353 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 17 12:01:14.191369 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 17 12:01:14.191384 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 17 12:01:14.191400 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 17 12:01:14.191418 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 17 12:01:14.191439 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 17 12:01:14.191456 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 17 12:01:14.191472 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 17 12:01:14.191489 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 17 12:01:14.191505 kernel: printk: bootconsole [uart0] enabled
Jan 17 12:01:14.191521 kernel: NUMA: Failed to initialise from firmware
Jan 17 12:01:14.191538 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 17 12:01:14.191554 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 17 12:01:14.191571 kernel: Zone ranges:
Jan 17 12:01:14.191587 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 17 12:01:14.191603 kernel: DMA32 empty
Jan 17 12:01:14.191624 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 17 12:01:14.191641 kernel: Movable zone start for each node
Jan 17 12:01:14.191657 kernel: Early memory node ranges
Jan 17 12:01:14.191673 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 17 12:01:14.191689 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 17 12:01:14.191706 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 17 12:01:14.191722 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 17 12:01:14.191738 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 17 12:01:14.191754 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 17 12:01:14.191771 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 17 12:01:14.191787 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 17 12:01:14.191803 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 17 12:01:14.191824 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 17 12:01:14.191841 kernel: psci: probing for conduit method from ACPI.
Jan 17 12:01:14.191865 kernel: psci: PSCIv1.0 detected in firmware.
Jan 17 12:01:14.191883 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 17 12:01:14.191901 kernel: psci: Trusted OS migration not required
Jan 17 12:01:14.191923 kernel: psci: SMC Calling Convention v1.1
Jan 17 12:01:14.191940 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 17 12:01:14.191957 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 17 12:01:14.191975 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 17 12:01:14.191992 kernel: Detected PIPT I-cache on CPU0
Jan 17 12:01:14.192009 kernel: CPU features: detected: GIC system register CPU interface
Jan 17 12:01:14.192026 kernel: CPU features: detected: Spectre-v2
Jan 17 12:01:14.192043 kernel: CPU features: detected: Spectre-v3a
Jan 17 12:01:14.192060 kernel: CPU features: detected: Spectre-BHB
Jan 17 12:01:14.192077 kernel: CPU features: detected: ARM erratum 1742098
Jan 17 12:01:14.192094 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 17 12:01:14.192116 kernel: alternatives: applying boot alternatives
Jan 17 12:01:14.192136 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3
Jan 17 12:01:14.192154 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:01:14.192172 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 12:01:14.192189 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 12:01:14.192206 kernel: Fallback order for Node 0: 0
Jan 17 12:01:14.192223 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 17 12:01:14.192240 kernel: Policy zone: Normal
Jan 17 12:01:14.192277 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:01:14.192295 kernel: software IO TLB: area num 2.
Jan 17 12:01:14.192313 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 17 12:01:14.192338 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Jan 17 12:01:14.192355 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 12:01:14.192372 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:01:14.192390 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:01:14.192408 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 12:01:14.192425 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:01:14.192443 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:01:14.192460 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:01:14.192477 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 12:01:14.192494 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 17 12:01:14.192511 kernel: GICv3: 96 SPIs implemented
Jan 17 12:01:14.192533 kernel: GICv3: 0 Extended SPIs implemented
Jan 17 12:01:14.192551 kernel: Root IRQ handler: gic_handle_irq
Jan 17 12:01:14.192568 kernel: GICv3: GICv3 features: 16 PPIs
Jan 17 12:01:14.193535 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 17 12:01:14.193555 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 17 12:01:14.193574 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 17 12:01:14.193592 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 17 12:01:14.193609 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 17 12:01:14.193627 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 17 12:01:14.193645 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 17 12:01:14.193662 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:01:14.193680 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 17 12:01:14.193706 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 17 12:01:14.193724 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 17 12:01:14.193742 kernel: Console: colour dummy device 80x25
Jan 17 12:01:14.193760 kernel: printk: console [tty1] enabled
Jan 17 12:01:14.193777 kernel: ACPI: Core revision 20230628
Jan 17 12:01:14.193795 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 17 12:01:14.193813 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:01:14.193831 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:01:14.193848 kernel: landlock: Up and running.
Jan 17 12:01:14.193871 kernel: SELinux: Initializing.
Jan 17 12:01:14.193889 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:01:14.193907 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:01:14.193925 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:01:14.193943 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:01:14.193961 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:01:14.193979 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:01:14.193997 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 17 12:01:14.194015 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 17 12:01:14.194038 kernel: Remapping and enabling EFI services.
Jan 17 12:01:14.194056 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:01:14.194074 kernel: Detected PIPT I-cache on CPU1
Jan 17 12:01:14.194091 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 17 12:01:14.194110 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 17 12:01:14.194127 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 17 12:01:14.194145 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 12:01:14.194162 kernel: SMP: Total of 2 processors activated.
Jan 17 12:01:14.194179 kernel: CPU features: detected: 32-bit EL0 Support
Jan 17 12:01:14.194202 kernel: CPU features: detected: 32-bit EL1 Support
Jan 17 12:01:14.194220 kernel: CPU features: detected: CRC32 instructions
Jan 17 12:01:14.194238 kernel: CPU: All CPU(s) started at EL1
Jan 17 12:01:14.194417 kernel: alternatives: applying system-wide alternatives
Jan 17 12:01:14.194443 kernel: devtmpfs: initialized
Jan 17 12:01:14.194462 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:01:14.194481 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 12:01:14.194499 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:01:14.194518 kernel: SMBIOS 3.0.0 present.
Jan 17 12:01:14.194536 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 17 12:01:14.194560 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:01:14.194578 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 17 12:01:14.194597 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 17 12:01:14.194615 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 17 12:01:14.194633 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:01:14.194652 kernel: audit: type=2000 audit(0.289:1): state=initialized audit_enabled=0 res=1
Jan 17 12:01:14.194670 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:01:14.194693 kernel: cpuidle: using governor menu
Jan 17 12:01:14.194712 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 17 12:01:14.194731 kernel: ASID allocator initialised with 65536 entries
Jan 17 12:01:14.194749 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:01:14.194767 kernel: Serial: AMBA PL011 UART driver
Jan 17 12:01:14.194785 kernel: Modules: 17520 pages in range for non-PLT usage
Jan 17 12:01:14.194804 kernel: Modules: 509040 pages in range for PLT usage
Jan 17 12:01:14.194822 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 12:01:14.194841 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 12:01:14.194864 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 17 12:01:14.194883 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 17 12:01:14.194901 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:01:14.194919 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:01:14.194938 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 17 12:01:14.194956 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 17 12:01:14.194974 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:01:14.194993 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:01:14.195011 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:01:14.195034 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:01:14.195053 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 12:01:14.195071 kernel: ACPI: Interpreter enabled
Jan 17 12:01:14.195089 kernel: ACPI: Using GIC for interrupt routing
Jan 17 12:01:14.195108 kernel: ACPI: MCFG table detected, 1 entries
Jan 17 12:01:14.195126 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 17 12:01:14.195480 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:01:14.195702 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 17 12:01:14.195916 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 17 12:01:14.196121 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 17 12:01:14.197152 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 17 12:01:14.197195 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 17 12:01:14.197215 kernel: acpiphp: Slot [1] registered
Jan 17 12:01:14.197234 kernel: acpiphp: Slot [2] registered
Jan 17 12:01:14.199516 kernel: acpiphp: Slot [3] registered
Jan 17 12:01:14.199542 kernel: acpiphp: Slot [4] registered
Jan 17 12:01:14.199571 kernel: acpiphp: Slot [5] registered
Jan 17 12:01:14.199591 kernel: acpiphp: Slot [6] registered
Jan 17 12:01:14.199610 kernel: acpiphp: Slot [7] registered
Jan 17 12:01:14.199628 kernel: acpiphp: Slot [8] registered
Jan 17 12:01:14.199647 kernel: acpiphp: Slot [9] registered
Jan 17 12:01:14.199666 kernel: acpiphp: Slot [10] registered
Jan 17 12:01:14.199684 kernel: acpiphp: Slot [11] registered
Jan 17 12:01:14.199703 kernel: acpiphp: Slot [12] registered
Jan 17 12:01:14.199734 kernel: acpiphp: Slot [13] registered
Jan 17 12:01:14.199759 kernel: acpiphp: Slot [14] registered
Jan 17 12:01:14.199786 kernel: acpiphp: Slot [15] registered
Jan 17 12:01:14.199805 kernel: acpiphp: Slot [16] registered
Jan 17 12:01:14.199825 kernel: acpiphp: Slot [17] registered
Jan 17 12:01:14.199845 kernel: acpiphp: Slot [18] registered
Jan 17 12:01:14.199864 kernel: acpiphp: Slot [19] registered
Jan 17 12:01:14.199882 kernel: acpiphp: Slot [20] registered
Jan 17 12:01:14.199902 kernel: acpiphp: Slot [21] registered
Jan 17 12:01:14.199921 kernel: acpiphp: Slot [22] registered
Jan 17 12:01:14.199939 kernel: acpiphp: Slot [23] registered
Jan 17 12:01:14.199963 kernel: acpiphp: Slot [24] registered
Jan 17 12:01:14.199984 kernel: acpiphp: Slot [25] registered
Jan 17 12:01:14.200003 kernel: acpiphp: Slot [26] registered
Jan 17 12:01:14.200021 kernel: acpiphp: Slot [27] registered
Jan 17 12:01:14.200040 kernel: acpiphp: Slot [28] registered
Jan 17 12:01:14.200058 kernel: acpiphp: Slot [29] registered
Jan 17 12:01:14.200078 kernel: acpiphp: Slot [30] registered
Jan 17 12:01:14.200097 kernel: acpiphp: Slot [31] registered
Jan 17 12:01:14.200115 kernel: PCI host bridge to bus 0000:00
Jan 17 12:01:14.200457 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 17 12:01:14.200666 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 17 12:01:14.200895 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 17 12:01:14.203488 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 17 12:01:14.203780 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 17 12:01:14.204005 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 17 12:01:14.204216 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 17 12:01:14.204544 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 17 12:01:14.204759 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 17 12:01:14.204967 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 17 12:01:14.205193 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 17 12:01:14.205500 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 17 12:01:14.205717 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 17 12:01:14.205934 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 17 12:01:14.206145 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 17 12:01:14.206443 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 17 12:01:14.206762 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 17 12:01:14.207015 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 17 12:01:14.209572 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 17 12:01:14.209784 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 17 12:01:14.209986 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 17 12:01:14.210170 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 17 12:01:14.210402 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 17 12:01:14.210430 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 17 12:01:14.210450 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 17 12:01:14.210469 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 17 12:01:14.210488 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 17 12:01:14.210507 kernel: iommu: Default domain type: Translated
Jan 17 12:01:14.210526 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 17 12:01:14.210551 kernel: efivars: Registered efivars operations
Jan 17 12:01:14.210569 kernel: vgaarb: loaded
Jan 17 12:01:14.210588 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 17 12:01:14.210606 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:01:14.210624 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:01:14.210643 kernel: pnp: PnP ACPI init
Jan 17 12:01:14.213044 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 17 12:01:14.213086 kernel: pnp: PnP ACPI: found 1 devices
Jan 17 12:01:14.213117 kernel: NET: Registered PF_INET protocol family
Jan 17 12:01:14.213136 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 12:01:14.213155 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 12:01:14.213174 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:01:14.213193 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:01:14.213212 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 12:01:14.213231 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 12:01:14.213270 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:01:14.213292 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:01:14.213318 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:01:14.213337 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:01:14.213356 kernel: kvm [1]: HYP mode not available
Jan 17 12:01:14.213374 kernel: Initialise system trusted keyrings
Jan 17 12:01:14.213393 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 12:01:14.213412 kernel: Key type asymmetric registered
Jan 17 12:01:14.213431 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:01:14.213450 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 17 12:01:14.213468 kernel: io scheduler mq-deadline registered
Jan 17 12:01:14.213492 kernel: io scheduler kyber registered
Jan 17 12:01:14.213511 kernel: io scheduler bfq registered
Jan 17 12:01:14.213753 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 17 12:01:14.213783 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 17 12:01:14.213802 kernel: ACPI: button: Power Button [PWRB]
Jan 17 12:01:14.213821 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 17 12:01:14.213839 kernel: ACPI: button: Sleep Button [SLPB]
Jan 17 12:01:14.213858 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:01:14.213883 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 17 12:01:14.214094 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 17 12:01:14.214124 kernel: printk: console [ttyS0] disabled
Jan 17 12:01:14.214145 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 17 12:01:14.214164 kernel: printk: console [ttyS0] enabled
Jan 17 12:01:14.214183 kernel: printk: bootconsole [uart0] disabled
Jan 17 12:01:14.214203 kernel: thunder_xcv, ver 1.0
Jan 17 12:01:14.214222 kernel: thunder_bgx, ver 1.0
Jan 17 12:01:14.214241 kernel: nicpf, ver 1.0
Jan 17 12:01:14.216364 kernel: nicvf, ver 1.0
Jan 17 12:01:14.216698 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 17 12:01:14.216899 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-17T12:01:13 UTC (1737115273)
Jan 17 12:01:14.216926 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 12:01:14.216946 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 17 12:01:14.216965 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 17 12:01:14.216984 kernel: watchdog: Hard watchdog permanently disabled
Jan 17 12:01:14.217003 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:01:14.217028 kernel: Segment Routing with IPv6
Jan 17 12:01:14.217048 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:01:14.217066 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:01:14.217085 kernel: Key type dns_resolver registered
Jan 17 12:01:14.217103 kernel: registered taskstats version 1
Jan 17 12:01:14.217122 kernel: Loading compiled-in X.509 certificates
Jan 17 12:01:14.217142 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e5b890cba32c3e1c766d9a9b821ee4d2154ffee7'
Jan 17 12:01:14.217161 kernel: Key type .fscrypt registered
Jan 17 12:01:14.217179 kernel: Key type fscrypt-provisioning registered
Jan 17 12:01:14.217204 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:01:14.217224 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:01:14.217242 kernel: ima: No architecture policies found
Jan 17 12:01:14.217290 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 17 12:01:14.217310 kernel: clk: Disabling unused clocks
Jan 17 12:01:14.217329 kernel: Freeing unused kernel memory: 39360K
Jan 17 12:01:14.217348 kernel: Run /init as init process
Jan 17 12:01:14.217366 kernel: with arguments:
Jan 17 12:01:14.217385 kernel: /init
Jan 17 12:01:14.217403 kernel: with environment:
Jan 17 12:01:14.217427 kernel: HOME=/
Jan 17 12:01:14.217446 kernel: TERM=linux
Jan 17 12:01:14.217464 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:01:14.217487 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:01:14.217511 systemd[1]: Detected virtualization amazon.
Jan 17 12:01:14.217532 systemd[1]: Detected architecture arm64.
Jan 17 12:01:14.217552 systemd[1]: Running in initrd.
Jan 17 12:01:14.217576 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:01:14.217597 systemd[1]: Hostname set to <localhost>.
Jan 17 12:01:14.217618 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:01:14.217638 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:01:14.217659 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:01:14.217681 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:01:14.217704 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:01:14.217725 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:01:14.217750 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:01:14.217772 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:01:14.217796 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:01:14.217817 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:01:14.217837 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:01:14.217857 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:01:14.217878 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:01:14.217904 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:01:14.217924 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:01:14.217944 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:01:14.217965 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:01:14.217986 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:01:14.218007 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:01:14.218027 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:01:14.218048 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:01:14.218068 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:01:14.218094 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:01:14.218114 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:01:14.218134 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:01:14.218155 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:01:14.218175 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:01:14.218195 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:01:14.218216 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:01:14.218236 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:01:14.220347 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:01:14.220378 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:01:14.220400 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:01:14.220469 systemd-journald[251]: Collecting audit messages is disabled.
Jan 17 12:01:14.220518 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:01:14.220542 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:01:14.220563 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:01:14.220584 kernel: Bridge firewalling registered
Jan 17 12:01:14.220609 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:01:14.220631 systemd-journald[251]: Journal started
Jan 17 12:01:14.220670 systemd-journald[251]: Runtime Journal (/run/log/journal/ec25379a1a96d8e72093beeee1e4e26f) is 8.0M, max 75.3M, 67.3M free.
Jan 17 12:01:14.170374 systemd-modules-load[252]: Inserted module 'overlay'
Jan 17 12:01:14.205334 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 17 12:01:14.243288 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:01:14.243345 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:01:14.246492 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:01:14.253433 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:01:14.259721 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:01:14.274644 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:01:14.284672 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:01:14.291692 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:01:14.313960 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:01:14.327671 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:01:14.333348 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:01:14.349263 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:01:14.357529 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:01:14.394277 dracut-cmdline[289]: dracut-dracut-053
Jan 17 12:01:14.397895 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3
Jan 17 12:01:14.439688 systemd-resolved[283]: Positive Trust Anchors:
Jan 17 12:01:14.441309 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:01:14.441377 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:01:14.586290 kernel: SCSI subsystem initialized
Jan 17 12:01:14.594288 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:01:14.606280 kernel: iscsi: registered transport (tcp)
Jan 17 12:01:14.628853 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:01:14.628944 kernel: QLogic iSCSI HBA Driver
Jan 17 12:01:14.694403 kernel: random: crng init done
Jan 17 12:01:14.694743 systemd-resolved[283]: Defaulting to hostname 'linux'.
Jan 17 12:01:14.696868 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:01:14.700432 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:01:14.728341 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:01:14.739548 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:01:14.779629 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:01:14.779745 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:01:14.779779 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:01:14.862298 kernel: raid6: neonx8 gen() 6726 MB/s
Jan 17 12:01:14.864279 kernel: raid6: neonx4 gen() 6568 MB/s
Jan 17 12:01:14.881284 kernel: raid6: neonx2 gen() 5463 MB/s
Jan 17 12:01:14.898280 kernel: raid6: neonx1 gen() 3960 MB/s
Jan 17 12:01:14.915281 kernel: raid6: int64x8 gen() 3689 MB/s
Jan 17 12:01:14.932279 kernel: raid6: int64x4 gen() 3706 MB/s
Jan 17 12:01:14.949282 kernel: raid6: int64x2 gen() 3581 MB/s
Jan 17 12:01:14.967016 kernel: raid6: int64x1 gen() 2742 MB/s
Jan 17 12:01:14.967060 kernel: raid6: using algorithm neonx8 gen() 6726 MB/s
Jan 17 12:01:14.984998 kernel: raid6: .... xor() 4852 MB/s, rmw enabled
Jan 17 12:01:14.985054 kernel: raid6: using neon recovery algorithm
Jan 17 12:01:14.993462 kernel: xor: measuring software checksum speed
Jan 17 12:01:14.993532 kernel: 8regs : 10973 MB/sec
Jan 17 12:01:14.994532 kernel: 32regs : 11920 MB/sec
Jan 17 12:01:14.995688 kernel: arm64_neon : 9580 MB/sec
Jan 17 12:01:14.995722 kernel: xor: using function: 32regs (11920 MB/sec)
Jan 17 12:01:15.080293 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:01:15.099411 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:01:15.109597 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:01:15.150689 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Jan 17 12:01:15.160120 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:01:15.174856 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:01:15.213696 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Jan 17 12:01:15.272919 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:01:15.281585 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:01:15.407742 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:01:15.420286 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:01:15.466929 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:01:15.471722 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:01:15.474150 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:01:15.474916 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:01:15.488123 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:01:15.532274 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:01:15.620453 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:01:15.621894 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:01:15.635839 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 17 12:01:15.635879 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 17 12:01:15.671910 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 17 12:01:15.672219 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 17 12:01:15.673231 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:01:cc:f8:0f:83
Jan 17 12:01:15.623783 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:01:15.623832 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:01:15.624100 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:01:15.624205 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:01:15.654950 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:01:15.682064 (udev-worker)[527]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:01:15.702974 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 17 12:01:15.703036 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 17 12:01:15.712273 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 17 12:01:15.712435 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:01:15.722981 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 12:01:15.723056 kernel: GPT:9289727 != 16777215
Jan 17 12:01:15.723083 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 12:01:15.725339 kernel: GPT:9289727 != 16777215
Jan 17 12:01:15.725420 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 12:01:15.726030 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:01:15.731319 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:01:15.762063 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:01:15.853551 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (545)
Jan 17 12:01:15.869294 kernel: BTRFS: device fsid 8c8354db-e4b6-4022-87e4-d06cc74d2d9f devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (521)
Jan 17 12:01:15.955736 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 17 12:01:15.973686 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 17 12:01:15.990715 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 17 12:01:16.004846 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 17 12:01:16.007431 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 17 12:01:16.035631 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:01:16.048769 disk-uuid[664]: Primary Header is updated.
Jan 17 12:01:16.048769 disk-uuid[664]: Secondary Entries is updated.
Jan 17 12:01:16.048769 disk-uuid[664]: Secondary Header is updated.
Jan 17 12:01:16.058288 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:01:16.068322 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:01:16.076278 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:01:17.083424 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:01:17.084181 disk-uuid[665]: The operation has completed successfully.
Jan 17 12:01:17.257048 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:01:17.257293 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:01:17.314601 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:01:17.328220 sh[1008]: Success
Jan 17 12:01:17.353302 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 17 12:01:17.470825 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:01:17.483460 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:01:17.489640 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:01:17.533265 kernel: BTRFS info (device dm-0): first mount of filesystem 8c8354db-e4b6-4022-87e4-d06cc74d2d9f
Jan 17 12:01:17.533329 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:01:17.533357 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:01:17.533859 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:01:17.535041 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:01:17.680294 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 12:01:17.698302 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:01:17.702218 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 12:01:17.715500 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:01:17.722553 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:01:17.753269 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:01:17.753359 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:01:17.753395 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 12:01:17.761294 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 12:01:17.779424 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:01:17.778876 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:01:17.788687 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:01:17.800627 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:01:17.908296 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:01:17.920585 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:01:17.981825 systemd-networkd[1201]: lo: Link UP
Jan 17 12:01:17.981842 systemd-networkd[1201]: lo: Gained carrier
Jan 17 12:01:17.987521 systemd-networkd[1201]: Enumeration completed
Jan 17 12:01:17.988983 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:01:17.991488 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:01:17.991495 systemd-networkd[1201]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:01:17.994649 systemd[1]: Reached target network.target - Network.
Jan 17 12:01:18.000855 systemd-networkd[1201]: eth0: Link UP
Jan 17 12:01:18.000863 systemd-networkd[1201]: eth0: Gained carrier
Jan 17 12:01:18.000880 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:01:18.033354 systemd-networkd[1201]: eth0: DHCPv4 address 172.31.30.222/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 17 12:01:18.195100 ignition[1108]: Ignition 2.19.0
Jan 17 12:01:18.195129 ignition[1108]: Stage: fetch-offline
Jan 17 12:01:18.196757 ignition[1108]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:01:18.196790 ignition[1108]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:01:18.198521 ignition[1108]: Ignition finished successfully
Jan 17 12:01:18.206338 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:01:18.215595 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 12:01:18.248462 ignition[1210]: Ignition 2.19.0
Jan 17 12:01:18.248491 ignition[1210]: Stage: fetch
Jan 17 12:01:18.250101 ignition[1210]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:01:18.250127 ignition[1210]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:01:18.250331 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:01:18.273389 ignition[1210]: PUT result: OK
Jan 17 12:01:18.276788 ignition[1210]: parsed url from cmdline: ""
Jan 17 12:01:18.276804 ignition[1210]: no config URL provided
Jan 17 12:01:18.276821 ignition[1210]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:01:18.276847 ignition[1210]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:01:18.276910 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:01:18.282488 ignition[1210]: PUT result: OK
Jan 17 12:01:18.283733 ignition[1210]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 17 12:01:18.287941 ignition[1210]: GET result: OK
Jan 17 12:01:18.288167 ignition[1210]: parsing config with SHA512: 24750e1875fab11b87754f560fac24408a8074b5b0bb01f616036aee346efb84c71a979fef87e79833049b7866b20c7ad51fa4c8104e1a010d6e8949224e9ab3
Jan 17 12:01:18.297286 unknown[1210]: fetched base config from "system"
Jan 17 12:01:18.297551 unknown[1210]: fetched base config from "system"
Jan 17 12:01:18.297578 unknown[1210]: fetched user config from "aws"
Jan 17 12:01:18.303143 ignition[1210]: fetch: fetch complete
Jan 17 12:01:18.303167 ignition[1210]: fetch: fetch passed
Jan 17 12:01:18.303315 ignition[1210]: Ignition finished successfully
Jan 17 12:01:18.312629 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 12:01:18.328463 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:01:18.352694 ignition[1216]: Ignition 2.19.0
Jan 17 12:01:18.352723 ignition[1216]: Stage: kargs
Jan 17 12:01:18.354405 ignition[1216]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:01:18.354432 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:01:18.355511 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:01:18.359850 ignition[1216]: PUT result: OK
Jan 17 12:01:18.366440 ignition[1216]: kargs: kargs passed
Jan 17 12:01:18.366620 ignition[1216]: Ignition finished successfully
Jan 17 12:01:18.371733 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:01:18.381535 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:01:18.408579 ignition[1222]: Ignition 2.19.0
Jan 17 12:01:18.408608 ignition[1222]: Stage: disks
Jan 17 12:01:18.410225 ignition[1222]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:01:18.410270 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:01:18.410801 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:01:18.417627 ignition[1222]: PUT result: OK
Jan 17 12:01:18.422486 ignition[1222]: disks: disks passed
Jan 17 12:01:18.422642 ignition[1222]: Ignition finished successfully
Jan 17 12:01:18.428319 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:01:18.430962 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:01:18.433712 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:01:18.434229 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:01:18.434817 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:01:18.435114 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:01:18.459635 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:01:18.502035 systemd-fsck[1230]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 12:01:18.509415 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:01:18.520487 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:01:18.617287 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 5d516319-3144-49e6-9760-d0f29faba535 r/w with ordered data mode. Quota mode: none.
Jan 17 12:01:18.618207 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:01:18.621995 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:01:18.635426 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:01:18.641995 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:01:18.644203 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 12:01:18.644348 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:01:18.644401 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:01:18.671288 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1249)
Jan 17 12:01:18.675819 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:01:18.675890 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:01:18.677787 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:01:18.680961 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 12:01:18.690617 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 12:01:18.699268 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 12:01:18.702788 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:01:19.162932 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 12:01:19.171136 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory
Jan 17 12:01:19.180056 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 12:01:19.198814 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 12:01:19.548607 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 12:01:19.564449 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 12:01:19.569509 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 12:01:19.588299 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 12:01:19.594333 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:01:19.640696 ignition[1362]: INFO : Ignition 2.19.0
Jan 17 12:01:19.640696 ignition[1362]: INFO : Stage: mount
Jan 17 12:01:19.640470 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 12:01:19.648240 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:01:19.648240 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:01:19.648240 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:01:19.654611 ignition[1362]: INFO : PUT result: OK
Jan 17 12:01:19.660673 ignition[1362]: INFO : mount: mount passed
Jan 17 12:01:19.660673 ignition[1362]: INFO : Ignition finished successfully
Jan 17 12:01:19.667305 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 12:01:19.677434 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 12:01:19.708668 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:01:19.732290 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1374)
Jan 17 12:01:19.736225 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:01:19.736293 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:01:19.736322 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 12:01:19.743291 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 12:01:19.746956 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:01:19.792338 ignition[1391]: INFO : Ignition 2.19.0
Jan 17 12:01:19.792338 ignition[1391]: INFO : Stage: files
Jan 17 12:01:19.795612 ignition[1391]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:01:19.795612 ignition[1391]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:01:19.795612 ignition[1391]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:01:19.802536 ignition[1391]: INFO : PUT result: OK
Jan 17 12:01:19.806781 ignition[1391]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 12:01:19.810818 ignition[1391]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 12:01:19.810818 ignition[1391]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 12:01:19.855936 ignition[1391]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 12:01:19.858574 ignition[1391]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 12:01:19.861665 unknown[1391]: wrote ssh authorized keys file for user: core
Jan 17 12:01:19.864850 ignition[1391]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 12:01:19.867277 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 17 12:01:19.867277 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 17 12:01:19.960519 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 12:01:20.030389 systemd-networkd[1201]: eth0: Gained IPv6LL
Jan 17 12:01:20.100091 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 17 12:01:20.103827 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 12:01:20.107216 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 12:01:20.110470 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:01:20.113847 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:01:20.117207 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:01:20.121124 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:01:20.121124 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:01:20.121124 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:01:20.130897 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:01:20.130897 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:01:20.130897 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 17 12:01:20.130897 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 17 12:01:20.130897 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 17 12:01:20.130897 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jan 17 12:01:20.490068 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 17 12:01:20.838378 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 17 12:01:20.838378 ignition[1391]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 17 12:01:20.844696 ignition[1391]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:01:20.844696 ignition[1391]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:01:20.844696 ignition[1391]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 17 12:01:20.844696 ignition[1391]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 12:01:20.844696 ignition[1391]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 12:01:20.844696 ignition[1391]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:01:20.844696 ignition[1391]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:01:20.844696 ignition[1391]: INFO : files: files passed
Jan 17 12:01:20.844696 ignition[1391]: INFO : Ignition finished successfully
Jan 17 12:01:20.870182 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 12:01:20.893617 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 12:01:20.900858 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 12:01:20.907853 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 12:01:20.909762 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 12:01:20.950650 initrd-setup-root-after-ignition[1419]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:01:20.950650 initrd-setup-root-after-ignition[1419]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:01:20.957914 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:01:20.964273 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:01:20.966954 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 12:01:20.981554 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 12:01:21.047066 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:01:21.048940 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:01:21.052151 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:01:21.057236 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:01:21.059189 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:01:21.076964 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:01:21.103318 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:01:21.123690 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:01:21.145514 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:01:21.149780 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:01:21.152206 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:01:21.154034 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:01:21.154291 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:01:21.156963 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:01:21.159058 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:01:21.160921 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:01:21.163136 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:01:21.165489 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:01:21.167772 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:01:21.169895 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:01:21.172786 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:01:21.194238 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:01:21.196351 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:01:21.198041 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:01:21.198298 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:01:21.200830 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:01:21.215600 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:01:21.217904 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:01:21.222017 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:01:21.224529 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:01:21.224765 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:01:21.227270 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:01:21.227534 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:01:21.239969 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:01:21.240193 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:01:21.262745 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 17 12:01:21.270481 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:01:21.274496 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:01:21.292334 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:01:21.297033 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:01:21.299442 ignition[1443]: INFO : Ignition 2.19.0 Jan 17 12:01:21.299442 ignition[1443]: INFO : Stage: umount Jan 17 12:01:21.307701 ignition[1443]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:01:21.307701 ignition[1443]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:01:21.307701 ignition[1443]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:01:21.299768 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:01:21.307717 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:01:21.317855 ignition[1443]: INFO : PUT result: OK Jan 17 12:01:21.319269 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:01:21.333779 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:01:21.336562 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:01:21.344373 ignition[1443]: INFO : umount: umount passed Jan 17 12:01:21.346289 ignition[1443]: INFO : Ignition finished successfully Jan 17 12:01:21.349878 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:01:21.352052 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:01:21.359241 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:01:21.361225 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:01:21.366585 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:01:21.366703 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:01:21.371700 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:01:21.371811 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:01:21.375997 systemd[1]: Stopped target network.target - Network. Jan 17 12:01:21.387421 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:01:21.389198 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:01:21.393241 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:01:21.394965 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:01:21.402750 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:01:21.405155 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:01:21.410936 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:01:21.412753 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:01:21.412841 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:01:21.415549 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:01:21.415625 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:01:21.426101 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:01:21.426210 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:01:21.431260 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Jan 17 12:01:21.431356 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:01:21.433623 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:01:21.435930 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:01:21.447546 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:01:21.448612 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:01:21.448796 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:01:21.452367 systemd-networkd[1201]: eth0: DHCPv6 lease lost Jan 17 12:01:21.456970 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:01:21.457130 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:01:21.462145 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:01:21.462412 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:01:21.465322 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:01:21.465453 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:01:21.476600 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:01:21.480444 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:01:21.480583 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:01:21.483084 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:01:21.499386 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:01:21.502687 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:01:21.515977 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:01:21.516157 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:01:21.523441 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:01:21.523562 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:01:21.525631 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:01:21.525720 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:01:21.542866 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:01:21.543457 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:01:21.554362 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:01:21.554549 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:01:21.556780 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:01:21.556880 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:01:21.559669 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:01:21.559773 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:01:21.569135 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:01:21.569530 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:01:21.579685 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:01:21.579798 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:01:21.602365 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Jan 17 12:01:21.604555 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:01:21.604676 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:01:21.609695 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:01:21.609810 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:01:21.615196 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:01:21.616312 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:01:21.624158 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:01:21.624393 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:01:21.631305 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:01:21.647704 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:01:21.665909 systemd[1]: Switching root. Jan 17 12:01:21.713797 systemd-journald[251]: Journal stopped Jan 17 12:01:24.315770 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Jan 17 12:01:24.315917 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:01:24.315961 kernel: SELinux: policy capability open_perms=1 Jan 17 12:01:24.315994 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:01:24.316032 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:01:24.316079 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:01:24.316112 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:01:24.316143 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:01:24.316180 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:01:24.316212 kernel: audit: type=1403 audit(1737115282.428:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:01:24.316269 systemd[1]: Successfully loaded SELinux policy in 68.156ms. Jan 17 12:01:24.316324 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.671ms. Jan 17 12:01:24.316360 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:01:24.316393 systemd[1]: Detected virtualization amazon. Jan 17 12:01:24.316426 systemd[1]: Detected architecture arm64. Jan 17 12:01:24.316459 systemd[1]: Detected first boot. Jan 17 12:01:24.316492 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:01:24.316617 zram_generator::config[1485]: No configuration found. Jan 17 12:01:24.317231 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:01:24.319358 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 12:01:24.319398 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 12:01:24.319429 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:01:24.319464 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:01:24.319497 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:01:24.319530 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:01:24.319571 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Jan 17 12:01:24.319607 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:01:24.319640 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:01:24.319673 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:01:24.319706 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:01:24.319740 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:01:24.319774 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:01:24.319804 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:01:24.319839 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:01:24.319877 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:01:24.319921 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:01:24.319952 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:01:24.319984 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:01:24.320017 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 12:01:24.320050 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:01:24.320083 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:01:24.320119 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:01:24.320151 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:01:24.320184 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:01:24.320215 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:01:24.320270 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:01:24.320311 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:01:24.320345 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:01:24.320387 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:01:24.320420 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:01:24.320452 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:01:24.320493 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:01:24.320525 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:01:24.320557 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:01:24.320590 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:01:24.320620 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:01:24.320660 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:01:24.320692 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:01:24.320726 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:01:24.320764 systemd[1]: Reached target machines.target - Containers. 
Jan 17 12:01:24.320795 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:01:24.320825 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:01:24.320856 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:01:24.320889 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:01:24.320919 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:01:24.320949 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:01:24.320983 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:01:24.321015 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:01:24.321051 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:01:24.321082 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:01:24.321112 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:01:24.321142 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:01:24.321175 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:01:24.321206 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 12:01:24.321238 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:01:24.325378 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:01:24.325425 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:01:24.325466 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:01:24.325497 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:01:24.325530 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:01:24.325560 systemd[1]: Stopped verity-setup.service. Jan 17 12:01:24.325591 kernel: fuse: init (API version 7.39) Jan 17 12:01:24.325624 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:01:24.325657 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:01:24.325687 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:01:24.325722 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:01:24.325752 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:01:24.325783 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:01:24.325815 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:01:24.325846 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:01:24.325887 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:01:24.325918 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:01:24.325948 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:01:24.325978 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:01:24.326008 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:01:24.326039 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 17 12:01:24.326088 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:01:24.326124 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:01:24.326159 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:01:24.326194 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:01:24.326227 kernel: loop: module loaded Jan 17 12:01:24.326364 systemd-journald[1563]: Collecting audit messages is disabled. Jan 17 12:01:24.326421 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:01:24.326457 systemd-journald[1563]: Journal started Jan 17 12:01:24.326507 systemd-journald[1563]: Runtime Journal (/run/log/journal/ec25379a1a96d8e72093beeee1e4e26f) is 8.0M, max 75.3M, 67.3M free. Jan 17 12:01:24.336416 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:01:23.735039 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:01:23.807078 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 17 12:01:23.807953 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 12:01:24.361313 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:01:24.361389 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:01:24.362585 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:01:24.362941 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:01:24.366373 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:01:24.369157 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:01:24.371915 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:01:24.420281 kernel: ACPI: bus type drm_connector registered Jan 17 12:01:24.420997 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:01:24.423458 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:01:24.431037 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:01:24.433370 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:01:24.439375 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:01:24.450809 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:01:24.463757 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:01:24.466798 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:01:24.483799 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:01:24.490991 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:01:24.493432 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:01:24.496671 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:01:24.498876 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 17 12:01:24.504597 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:01:24.509444 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:01:24.522740 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:01:24.525328 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:01:24.546658 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:01:24.572144 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:01:24.577090 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:01:24.585902 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:01:24.596211 systemd-journald[1563]: Time spent on flushing to /var/log/journal/ec25379a1a96d8e72093beeee1e4e26f is 121.933ms for 910 entries. Jan 17 12:01:24.596211 systemd-journald[1563]: System Journal (/var/log/journal/ec25379a1a96d8e72093beeee1e4e26f) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:01:24.752993 systemd-journald[1563]: Received client request to flush runtime journal. Jan 17 12:01:24.753098 kernel: loop0: detected capacity change from 0 to 52536 Jan 17 12:01:24.753147 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:01:24.724387 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:01:24.747367 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:01:24.769351 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:01:24.775923 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:01:24.782166 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:01:24.809340 kernel: loop1: detected capacity change from 0 to 114328 Jan 17 12:01:24.813954 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:01:24.831668 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:01:24.867650 udevadm[1635]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 12:01:24.876277 systemd-tmpfiles[1630]: ACLs are not supported, ignoring. Jan 17 12:01:24.876319 systemd-tmpfiles[1630]: ACLs are not supported, ignoring. Jan 17 12:01:24.897404 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:01:24.932821 kernel: loop2: detected capacity change from 0 to 194512 Jan 17 12:01:25.000298 kernel: loop3: detected capacity change from 0 to 114432 Jan 17 12:01:25.126324 kernel: loop4: detected capacity change from 0 to 52536 Jan 17 12:01:25.144423 kernel: loop5: detected capacity change from 0 to 114328 Jan 17 12:01:25.163306 kernel: loop6: detected capacity change from 0 to 194512 Jan 17 12:01:25.199382 kernel: loop7: detected capacity change from 0 to 114432 Jan 17 12:01:25.212601 (sd-merge)[1640]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 17 12:01:25.214134 (sd-merge)[1640]: Merged extensions into '/usr'. Jan 17 12:01:25.222155 systemd[1]: Reloading requested from client PID 1617 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:01:25.222421 systemd[1]: Reloading... 
Jan 17 12:01:25.425599 zram_generator::config[1666]: No configuration found. Jan 17 12:01:25.784899 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:01:25.901973 systemd[1]: Reloading finished in 678 ms. Jan 17 12:01:25.940194 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:01:25.943556 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:01:25.957610 systemd[1]: Starting ensure-sysext.service... Jan 17 12:01:25.966751 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:01:25.984912 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:01:26.004572 systemd[1]: Reloading requested from client PID 1718 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:01:26.004600 systemd[1]: Reloading... Jan 17 12:01:26.061507 systemd-udevd[1720]: Using default interface naming scheme 'v255'. Jan 17 12:01:26.064625 systemd-tmpfiles[1719]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:01:26.071691 systemd-tmpfiles[1719]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:01:26.079687 systemd-tmpfiles[1719]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:01:26.081635 systemd-tmpfiles[1719]: ACLs are not supported, ignoring. Jan 17 12:01:26.082485 systemd-tmpfiles[1719]: ACLs are not supported, ignoring. Jan 17 12:01:26.099619 systemd-tmpfiles[1719]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:01:26.100310 systemd-tmpfiles[1719]: Skipping /boot Jan 17 12:01:26.142385 systemd-tmpfiles[1719]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:01:26.142550 systemd-tmpfiles[1719]: Skipping /boot Jan 17 12:01:26.230297 ldconfig[1612]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:01:26.246537 zram_generator::config[1755]: No configuration found. Jan 17 12:01:26.436239 (udev-worker)[1748]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:01:26.633802 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:01:26.675296 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1753) Jan 17 12:01:26.796072 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 12:01:26.796777 systemd[1]: Reloading finished in 789 ms. Jan 17 12:01:26.825614 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:01:26.830340 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:01:26.843430 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:01:26.937356 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:01:26.961196 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
Jan 17 12:01:26.964503 systemd[1]: Finished ensure-sysext.service. Jan 17 12:01:26.976610 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:01:26.987656 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:01:26.991753 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:01:27.005283 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:01:27.011435 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:01:27.018505 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:01:27.025170 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:01:27.039555 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:01:27.043701 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:01:27.050776 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:01:27.057604 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:01:27.066616 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:01:27.077580 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:01:27.079702 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:01:27.088386 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:01:27.097567 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:01:27.103975 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:01:27.104308 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:01:27.109341 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:01:27.111641 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:01:27.156289 lvm[1920]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:01:27.171286 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:01:27.175980 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:01:27.176319 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:01:27.179179 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:01:27.183386 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:01:27.190605 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:01:27.217369 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:01:27.224771 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:01:27.225182 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:01:27.229824 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:01:27.244424 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Jan 17 12:01:27.247831 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:01:27.255596 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:01:27.293044 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:01:27.305438 lvm[1955]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:01:27.311109 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:01:27.316824 augenrules[1959]: No rules Jan 17 12:01:27.322585 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:01:27.333492 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:01:27.336829 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:01:27.357018 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:01:27.379219 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:01:27.431184 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:01:27.511824 systemd-networkd[1933]: lo: Link UP Jan 17 12:01:27.511850 systemd-networkd[1933]: lo: Gained carrier Jan 17 12:01:27.514571 systemd-resolved[1934]: Positive Trust Anchors: Jan 17 12:01:27.514789 systemd-networkd[1933]: Enumeration completed Jan 17 12:01:27.515187 systemd-resolved[1934]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:01:27.515380 systemd-resolved[1934]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:01:27.515381 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:01:27.517727 systemd-networkd[1933]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:01:27.517747 systemd-networkd[1933]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:01:27.522240 systemd-networkd[1933]: eth0: Link UP Jan 17 12:01:27.522562 systemd-networkd[1933]: eth0: Gained carrier Jan 17 12:01:27.522600 systemd-networkd[1933]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:01:27.529499 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:01:27.535403 systemd-networkd[1933]: eth0: DHCPv4 address 172.31.30.222/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 17 12:01:27.536051 systemd-resolved[1934]: Defaulting to hostname 'linux'. Jan 17 12:01:27.541417 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:01:27.543661 systemd[1]: Reached target network.target - Network. 
Jan 17 12:01:27.545416 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:01:27.547672 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:01:27.549801 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:01:27.552141 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:01:27.554690 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:01:27.556860 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:01:27.559185 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:01:27.561430 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:01:27.561487 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:01:27.563213 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:01:27.566193 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:01:27.571011 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:01:27.578641 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:01:27.581996 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:01:27.584690 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:01:27.586593 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:01:27.588409 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:01:27.588462 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:01:27.592490 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:01:27.599729 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 12:01:27.609590 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:01:27.622706 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:01:27.627333 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:01:27.629427 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:01:27.636658 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:01:27.650560 systemd[1]: Started ntpd.service - Network Time Service. Jan 17 12:01:27.660516 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:01:27.669508 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 17 12:01:27.675780 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:01:27.683072 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:01:27.692923 jq[1984]: false Jan 17 12:01:27.697473 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:01:27.701635 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:01:27.702596 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:01:27.741419 extend-filesystems[1985]: Found loop4 Jan 17 12:01:27.741419 extend-filesystems[1985]: Found loop5 Jan 17 12:01:27.741419 extend-filesystems[1985]: Found loop6 Jan 17 12:01:27.741419 extend-filesystems[1985]: Found loop7 Jan 17 12:01:27.741419 extend-filesystems[1985]: Found nvme0n1 Jan 17 12:01:27.741419 extend-filesystems[1985]: Found nvme0n1p1 Jan 17 12:01:27.741419 extend-filesystems[1985]: Found nvme0n1p2 Jan 17 12:01:27.741419 extend-filesystems[1985]: Found nvme0n1p3 Jan 17 12:01:27.741419 extend-filesystems[1985]: Found usr Jan 17 12:01:27.741419 extend-filesystems[1985]: Found nvme0n1p4 Jan 17 12:01:27.741419 extend-filesystems[1985]: Found nvme0n1p6 Jan 17 12:01:27.741419 extend-filesystems[1985]: Found nvme0n1p7 Jan 17 12:01:27.741419 extend-filesystems[1985]: Found nvme0n1p9 Jan 17 12:01:27.741419 extend-filesystems[1985]: Checking size of /dev/nvme0n1p9 Jan 17 12:01:27.842874 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 17 12:01:27.720408 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:01:27.857631 extend-filesystems[1985]: Resized partition /dev/nvme0n1p9 Jan 17 12:01:27.728491 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:01:27.871720 extend-filesystems[2013]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:01:27.735917 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:01:27.739371 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:01:27.801812 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:01:27.802182 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:01:27.825968 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:01:27.827352 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:01:27.906533 jq[1999]: true Jan 17 12:01:27.909133 dbus-daemon[1983]: [system] SELinux support is enabled Jan 17 12:01:27.909472 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:01:27.922760 dbus-daemon[1983]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1933 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 12:01:27.916473 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:01:27.916582 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:01:27.919239 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:01:27.919802 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 17 12:01:27.931827 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 17 12:01:27.954357 update_engine[1996]: I20250117 12:01:27.932090 1996 main.cc:92] Flatcar Update Engine starting Jan 17 12:01:27.978563 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:43 UTC 2025 (1): Starting Jan 17 12:01:27.978563 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 12:01:27.978563 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: ---------------------------------------------------- Jan 17 12:01:27.978563 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Jan 17 12:01:27.978563 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 12:01:27.978563 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: corporation. Support and training for ntp-4 are Jan 17 12:01:27.978563 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: available at https://www.nwtime.org/support Jan 17 12:01:27.978563 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: ---------------------------------------------------- Jan 17 12:01:27.978563 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: proto: precision = 0.120 usec (-23) Jan 17 12:01:27.978563 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: basedate set to 2025-01-05 Jan 17 12:01:27.978563 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: gps base set to 2025-01-05 (week 2348) Jan 17 12:01:27.978563 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 12:01:27.978563 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 12:01:27.933460 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 12:01:27.962719 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 12:01:28.001465 update_engine[1996]: I20250117 12:01:27.960458 1996 update_check_scheduler.cc:74] Next update check in 9m44s Jan 17 12:01:28.001537 extend-filesystems[2013]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 17 12:01:28.001537 extend-filesystems[2013]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:01:28.001537 extend-filesystems[2013]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 17 12:01:28.016277 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 12:01:28.016277 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: Listen normally on 3 eth0 172.31.30.222:123 Jan 17 12:01:28.016277 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: Listen normally on 4 lo [::1]:123 Jan 17 12:01:28.016277 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: bind(21) AF_INET6 fe80::401:ccff:fef8:f83%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 12:01:28.016277 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: unable to create socket on eth0 (5) for fe80::401:ccff:fef8:f83%2#123 Jan 17 12:01:28.016277 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: failed to init interface for address fe80::401:ccff:fef8:f83%2 Jan 17 12:01:28.016277 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Jan 17 12:01:28.016277 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:01:28.016277 ntpd[1987]: 17 Jan 12:01:27 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:01:27.953360 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:43 UTC 2025 (1): Starting Jan 17 12:01:27.966434 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jan 17 12:01:28.017333 extend-filesystems[1985]: Resized filesystem in /dev/nvme0n1p9 Jan 17 12:01:27.953407 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 12:01:28.031663 jq[2020]: true Jan 17 12:01:27.970581 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:01:27.953429 ntpd[1987]: ---------------------------------------------------- Jan 17 12:01:27.982107 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:01:27.953448 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Jan 17 12:01:27.993602 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:01:27.953469 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 12:01:28.038211 (ntainerd)[2021]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:01:27.953488 ntpd[1987]: corporation. Support and training for ntp-4 are Jan 17 12:01:28.052550 tar[2007]: linux-arm64/helm Jan 17 12:01:27.953506 ntpd[1987]: available at https://www.nwtime.org/support Jan 17 12:01:27.953524 ntpd[1987]: ---------------------------------------------------- Jan 17 12:01:27.967027 ntpd[1987]: proto: precision = 0.120 usec (-23) Jan 17 12:01:27.970391 ntpd[1987]: basedate set to 2025-01-05 Jan 17 12:01:27.970427 ntpd[1987]: gps base set to 2025-01-05 (week 2348) Jan 17 12:01:27.977173 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 12:01:27.977284 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 12:01:27.979495 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 12:01:27.979575 ntpd[1987]: Listen normally on 3 eth0 172.31.30.222:123 Jan 17 12:01:27.979642 ntpd[1987]: Listen normally on 4 lo [::1]:123 Jan 17 12:01:27.979721 ntpd[1987]: bind(21) AF_INET6 fe80::401:ccff:fef8:f83%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 12:01:27.979766 ntpd[1987]: unable to create socket on eth0 (5) for fe80::401:ccff:fef8:f83%2#123 Jan 17 12:01:27.979796 ntpd[1987]: failed to init interface for address fe80::401:ccff:fef8:f83%2 Jan 17 12:01:27.979854 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Jan 17 12:01:27.989920 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:01:27.989971 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:01:28.078917 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 17 12:01:28.085140 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 17 12:01:28.108223 coreos-metadata[1982]: Jan 17 12:01:28.105 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 12:01:28.108223 coreos-metadata[1982]: Jan 17 12:01:28.105 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 17 12:01:28.112575 coreos-metadata[1982]: Jan 17 12:01:28.112 INFO Fetch successful Jan 17 12:01:28.112575 coreos-metadata[1982]: Jan 17 12:01:28.112 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 17 12:01:28.115685 coreos-metadata[1982]: Jan 17 12:01:28.115 INFO Fetch successful Jan 17 12:01:28.115685 coreos-metadata[1982]: Jan 17 12:01:28.115 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 17 12:01:28.119727 coreos-metadata[1982]: Jan 17 12:01:28.119 INFO Fetch successful Jan 17 12:01:28.119727 coreos-metadata[1982]: Jan 17 12:01:28.119 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 17 12:01:28.119727 coreos-metadata[1982]: Jan 17 12:01:28.119 INFO Fetch successful Jan 17 12:01:28.119727 coreos-metadata[1982]: Jan 17 12:01:28.119 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 17 12:01:28.125133 coreos-metadata[1982]: Jan 17 12:01:28.123 INFO Fetch failed with 404: resource not found Jan 17 12:01:28.125133 coreos-metadata[1982]: Jan 17 12:01:28.123 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 17 12:01:28.125133 coreos-metadata[1982]: Jan 17 12:01:28.124 INFO Fetch successful Jan 17 12:01:28.125133 coreos-metadata[1982]: Jan 17 12:01:28.124 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 17 12:01:28.126360 coreos-metadata[1982]: Jan 17 12:01:28.126 INFO Fetch successful Jan 17 12:01:28.126432 coreos-metadata[1982]: Jan 17 12:01:28.126 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 17 12:01:28.127892 coreos-metadata[1982]: Jan 17 12:01:28.127 INFO Fetch successful Jan 17 12:01:28.127892 coreos-metadata[1982]: Jan 17 12:01:28.127 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 17 12:01:28.129914 coreos-metadata[1982]: Jan 17 12:01:28.129 INFO Fetch successful Jan 17 12:01:28.129914 coreos-metadata[1982]: Jan 17 12:01:28.129 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 17 12:01:28.136317 coreos-metadata[1982]: Jan 17 12:01:28.133 INFO Fetch successful Jan 17 12:01:28.181297 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1787) Jan 17 12:01:28.263764 bash[2063]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:01:28.268188 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:01:28.273137 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:01:28.280005 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:01:28.301664 systemd[1]: Starting sshkeys.service... Jan 17 12:01:28.364507 systemd-logind[1994]: Watching system buttons on /dev/input/event0 (Power Button) Jan 17 12:01:28.364577 systemd-logind[1994]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 17 12:01:28.368413 systemd-logind[1994]: New seat seat0. 
Jan 17 12:01:28.378311 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 12:01:28.400372 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 12:01:28.402944 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:01:28.666452 coreos-metadata[2115]: Jan 17 12:01:28.666 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 12:01:28.669502 coreos-metadata[2115]: Jan 17 12:01:28.667 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 17 12:01:28.670421 systemd-networkd[1933]: eth0: Gained IPv6LL Jan 17 12:01:28.678705 coreos-metadata[2115]: Jan 17 12:01:28.676 INFO Fetch successful Jan 17 12:01:28.678705 coreos-metadata[2115]: Jan 17 12:01:28.678 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 17 12:01:28.682100 coreos-metadata[2115]: Jan 17 12:01:28.681 INFO Fetch successful Jan 17 12:01:28.685573 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:01:28.688138 unknown[2115]: wrote ssh authorized keys file for user: core Jan 17 12:01:28.690003 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:01:28.709591 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 12:01:28.714006 dbus-daemon[1983]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2031 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 12:01:28.735399 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 17 12:01:28.743350 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:01:28.751887 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:01:28.771596 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 12:01:28.831893 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 12:01:28.935475 amazon-ssm-agent[2147]: Initializing new seelog logger Jan 17 12:01:28.935475 amazon-ssm-agent[2147]: New Seelog Logger Creation Complete Jan 17 12:01:28.935475 amazon-ssm-agent[2147]: 2025/01/17 12:01:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:28.935475 amazon-ssm-agent[2147]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:28.936140 amazon-ssm-agent[2147]: 2025/01/17 12:01:28 processing appconfig overrides Jan 17 12:01:28.939281 amazon-ssm-agent[2147]: 2025/01/17 12:01:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:28.939281 amazon-ssm-agent[2147]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:28.939281 amazon-ssm-agent[2147]: 2025/01/17 12:01:28 processing appconfig overrides Jan 17 12:01:28.939281 amazon-ssm-agent[2147]: 2025/01/17 12:01:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:28.939281 amazon-ssm-agent[2147]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:28.941564 amazon-ssm-agent[2147]: 2025/01/17 12:01:28 processing appconfig overrides Jan 17 12:01:28.942224 update-ssh-keys[2151]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:01:28.945875 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Jan 17 12:01:28.951338 systemd[1]: Finished sshkeys.service. Jan 17 12:01:28.965119 amazon-ssm-agent[2147]: 2025-01-17 12:01:28 INFO Proxy environment variables: Jan 17 12:01:28.967515 amazon-ssm-agent[2147]: 2025/01/17 12:01:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:28.967515 amazon-ssm-agent[2147]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:28.967669 amazon-ssm-agent[2147]: 2025/01/17 12:01:28 processing appconfig overrides Jan 17 12:01:28.983747 polkitd[2159]: Started polkitd version 121 Jan 17 12:01:28.986767 containerd[2021]: time="2025-01-17T12:01:28.985752000Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:01:29.028923 polkitd[2159]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 12:01:29.029072 polkitd[2159]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 12:01:29.040217 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:01:29.045695 polkitd[2159]: Finished loading, compiling and executing 2 rules Jan 17 12:01:29.055200 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 12:01:29.055507 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 12:01:29.064279 amazon-ssm-agent[2147]: 2025-01-17 12:01:28 INFO https_proxy: Jan 17 12:01:29.064969 polkitd[2159]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 12:01:29.158330 systemd-hostnamed[2031]: Hostname set to (transient) Jan 17 12:01:29.163395 systemd-resolved[1934]: System hostname changed to 'ip-172-31-30-222'. Jan 17 12:01:29.168091 amazon-ssm-agent[2147]: 2025-01-17 12:01:28 INFO http_proxy: Jan 17 12:01:29.166633 locksmithd[2035]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:01:29.257917 sshd_keygen[2044]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:01:29.269132 amazon-ssm-agent[2147]: 2025-01-17 12:01:28 INFO no_proxy: Jan 17 12:01:29.269320 containerd[2021]: time="2025-01-17T12:01:29.267764614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:01:29.281156 containerd[2021]: time="2025-01-17T12:01:29.279578806Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:01:29.281156 containerd[2021]: time="2025-01-17T12:01:29.279689482Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:01:29.281156 containerd[2021]: time="2025-01-17T12:01:29.279735790Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:01:29.281156 containerd[2021]: time="2025-01-17T12:01:29.280099306Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:01:29.281156 containerd[2021]: time="2025-01-17T12:01:29.280141642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:01:29.281156 containerd[2021]: time="2025-01-17T12:01:29.280312078Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:01:29.281156 containerd[2021]: time="2025-01-17T12:01:29.280346254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:01:29.285509 containerd[2021]: time="2025-01-17T12:01:29.285398578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:01:29.285725 containerd[2021]: time="2025-01-17T12:01:29.285694354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:01:29.286017 containerd[2021]: time="2025-01-17T12:01:29.285983146Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:01:29.286208 containerd[2021]: time="2025-01-17T12:01:29.286128670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:01:29.290513 containerd[2021]: time="2025-01-17T12:01:29.286882846Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:01:29.290513 containerd[2021]: time="2025-01-17T12:01:29.289091218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:01:29.290513 containerd[2021]: time="2025-01-17T12:01:29.289593958Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:01:29.290513 containerd[2021]: time="2025-01-17T12:01:29.289657690Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:01:29.290513 containerd[2021]: time="2025-01-17T12:01:29.290002918Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:01:29.290513 containerd[2021]: time="2025-01-17T12:01:29.290161822Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:01:29.297844 containerd[2021]: time="2025-01-17T12:01:29.297038062Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:01:29.297844 containerd[2021]: time="2025-01-17T12:01:29.297186742Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:01:29.297844 containerd[2021]: time="2025-01-17T12:01:29.297335386Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:01:29.297844 containerd[2021]: time="2025-01-17T12:01:29.297384274Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:01:29.297844 containerd[2021]: time="2025-01-17T12:01:29.297419386Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:01:29.297844 containerd[2021]: time="2025-01-17T12:01:29.297734926Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 17 12:01:29.302961 containerd[2021]: time="2025-01-17T12:01:29.300909418Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:01:29.302961 containerd[2021]: time="2025-01-17T12:01:29.301317550Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:01:29.302961 containerd[2021]: time="2025-01-17T12:01:29.301374142Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:01:29.302961 containerd[2021]: time="2025-01-17T12:01:29.301420786Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:01:29.302961 containerd[2021]: time="2025-01-17T12:01:29.301456426Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:01:29.302961 containerd[2021]: time="2025-01-17T12:01:29.301490026Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:01:29.302961 containerd[2021]: time="2025-01-17T12:01:29.301521514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:01:29.302961 containerd[2021]: time="2025-01-17T12:01:29.301556470Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:01:29.302961 containerd[2021]: time="2025-01-17T12:01:29.301593334Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:01:29.302961 containerd[2021]: time="2025-01-17T12:01:29.301631062Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:01:29.302961 containerd[2021]: time="2025-01-17T12:01:29.301668814Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:01:29.302961 containerd[2021]: time="2025-01-17T12:01:29.301698922Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:01:29.302961 containerd[2021]: time="2025-01-17T12:01:29.301751674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:01:29.302961 containerd[2021]: time="2025-01-17T12:01:29.301790242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:01:29.303716 containerd[2021]: time="2025-01-17T12:01:29.301847710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:01:29.303716 containerd[2021]: time="2025-01-17T12:01:29.301886662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:01:29.303716 containerd[2021]: time="2025-01-17T12:01:29.301918846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:01:29.303716 containerd[2021]: time="2025-01-17T12:01:29.301954738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:01:29.303716 containerd[2021]: time="2025-01-17T12:01:29.301987786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 17 12:01:29.303716 containerd[2021]: time="2025-01-17T12:01:29.302024398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:01:29.303716 containerd[2021]: time="2025-01-17T12:01:29.302058790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:01:29.303716 containerd[2021]: time="2025-01-17T12:01:29.302094130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:01:29.303716 containerd[2021]: time="2025-01-17T12:01:29.302124070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:01:29.303716 containerd[2021]: time="2025-01-17T12:01:29.302159446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:01:29.303716 containerd[2021]: time="2025-01-17T12:01:29.302195026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:01:29.303716 containerd[2021]: time="2025-01-17T12:01:29.302275390Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:01:29.303716 containerd[2021]: time="2025-01-17T12:01:29.302327698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:01:29.303716 containerd[2021]: time="2025-01-17T12:01:29.302358406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:01:29.303716 containerd[2021]: time="2025-01-17T12:01:29.302386150Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:01:29.306407 containerd[2021]: time="2025-01-17T12:01:29.304326670Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:01:29.306407 containerd[2021]: time="2025-01-17T12:01:29.304545322Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:01:29.306407 containerd[2021]: time="2025-01-17T12:01:29.304576114Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:01:29.306407 containerd[2021]: time="2025-01-17T12:01:29.304605766Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:01:29.306407 containerd[2021]: time="2025-01-17T12:01:29.304631482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:01:29.306407 containerd[2021]: time="2025-01-17T12:01:29.304663114Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:01:29.306407 containerd[2021]: time="2025-01-17T12:01:29.304690426Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:01:29.306407 containerd[2021]: time="2025-01-17T12:01:29.304725322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 12:01:29.306899 containerd[2021]: time="2025-01-17T12:01:29.305998102Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:01:29.306899 containerd[2021]: time="2025-01-17T12:01:29.306131542Z" level=info msg="Connect containerd service" Jan 17 12:01:29.306899 containerd[2021]: time="2025-01-17T12:01:29.306204094Z" level=info msg="using legacy CRI server" Jan 17 12:01:29.306899 containerd[2021]: time="2025-01-17T12:01:29.306222238Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:01:29.306899 containerd[2021]: time="2025-01-17T12:01:29.306459658Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:01:29.310111 containerd[2021]: time="2025-01-17T12:01:29.310033990Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:01:29.313208 
containerd[2021]: time="2025-01-17T12:01:29.312435694Z" level=info msg="Start subscribing containerd event" Jan 17 12:01:29.313208 containerd[2021]: time="2025-01-17T12:01:29.312571126Z" level=info msg="Start recovering state" Jan 17 12:01:29.313208 containerd[2021]: time="2025-01-17T12:01:29.312728278Z" level=info msg="Start event monitor" Jan 17 12:01:29.313208 containerd[2021]: time="2025-01-17T12:01:29.312762850Z" level=info msg="Start snapshots syncer" Jan 17 12:01:29.313208 containerd[2021]: time="2025-01-17T12:01:29.312785842Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:01:29.313208 containerd[2021]: time="2025-01-17T12:01:29.312806638Z" level=info msg="Start streaming server" Jan 17 12:01:29.321317 containerd[2021]: time="2025-01-17T12:01:29.315672274Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:01:29.321317 containerd[2021]: time="2025-01-17T12:01:29.315795814Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:01:29.321317 containerd[2021]: time="2025-01-17T12:01:29.315917614Z" level=info msg="containerd successfully booted in 0.341833s" Jan 17 12:01:29.316095 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:01:29.365836 amazon-ssm-agent[2147]: 2025-01-17 12:01:28 INFO Checking if agent identity type OnPrem can be assumed Jan 17 12:01:29.424906 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:01:29.440222 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:01:29.456991 systemd[1]: Started sshd@0-172.31.30.222:22-139.178.68.195:41944.service - OpenSSH per-connection server daemon (139.178.68.195:41944). Jan 17 12:01:29.464669 amazon-ssm-agent[2147]: 2025-01-17 12:01:28 INFO Checking if agent identity type EC2 can be assumed Jan 17 12:01:29.501779 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:01:29.502262 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:01:29.520043 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:01:29.565047 amazon-ssm-agent[2147]: 2025-01-17 12:01:29 INFO Agent will take identity from EC2 Jan 17 12:01:29.609522 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:01:29.622132 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:01:29.632989 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:01:29.636902 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:01:29.663155 amazon-ssm-agent[2147]: 2025-01-17 12:01:29 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:01:29.762051 amazon-ssm-agent[2147]: 2025-01-17 12:01:29 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:01:29.784635 sshd[2213]: Accepted publickey for core from 139.178.68.195 port 41944 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:29.795604 sshd[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:29.822094 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:01:29.838070 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:01:29.858368 systemd-logind[1994]: New session 1 of user core. 
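One thing worth flagging in the containerd startup above: the CRI plugin logged "failed to load cni during init ... no network config found in /etc/cni/net.d". That is normal at this stage; a network add-on is expected to install a config there later. For illustration only, a minimal single-node bridge conflist of the kind that would satisfy the plugin (the network name and pod subnet here are assumptions, not taken from this system):

    # The CRI plugin warned above that /etc/cni/net.d is empty. A network
    # add-on normally installs this file; a minimal bridge config might be:
    import json
    import pathlib

    conflist = {
        "cniVersion": "0.4.0",
        "name": "mynet",  # hypothetical network name
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "subnet": "10.88.0.0/16",  # assumed pod CIDR
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    path = pathlib.Path("/etc/cni/net.d/10-mynet.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conflist, indent=2))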
Jan 17 12:01:29.862423 amazon-ssm-agent[2147]: 2025-01-17 12:01:29 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:01:29.886628 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:01:29.905985 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:01:29.928058 (systemd)[2226]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:01:29.961662 amazon-ssm-agent[2147]: 2025-01-17 12:01:29 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 17 12:01:30.062447 amazon-ssm-agent[2147]: 2025-01-17 12:01:29 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 17 12:01:30.165848 amazon-ssm-agent[2147]: 2025-01-17 12:01:29 INFO [amazon-ssm-agent] Starting Core Agent Jan 17 12:01:30.198850 systemd[2226]: Queued start job for default target default.target. Jan 17 12:01:30.206118 systemd[2226]: Created slice app.slice - User Application Slice. Jan 17 12:01:30.207212 systemd[2226]: Reached target paths.target - Paths. Jan 17 12:01:30.207272 systemd[2226]: Reached target timers.target - Timers. Jan 17 12:01:30.212032 systemd[2226]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:01:30.255323 systemd[2226]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:01:30.255700 systemd[2226]: Reached target sockets.target - Sockets. Jan 17 12:01:30.255732 systemd[2226]: Reached target basic.target - Basic System. Jan 17 12:01:30.256122 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:01:30.256694 systemd[2226]: Reached target default.target - Main User Target. Jan 17 12:01:30.256782 systemd[2226]: Startup finished in 309ms. Jan 17 12:01:30.266829 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:01:30.270167 amazon-ssm-agent[2147]: 2025-01-17 12:01:29 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 17 12:01:30.347818 amazon-ssm-agent[2147]: 2025-01-17 12:01:29 INFO [Registrar] Starting registrar module Jan 17 12:01:30.347818 amazon-ssm-agent[2147]: 2025-01-17 12:01:29 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 17 12:01:30.347818 amazon-ssm-agent[2147]: 2025-01-17 12:01:30 INFO [EC2Identity] EC2 registration was successful. Jan 17 12:01:30.347818 amazon-ssm-agent[2147]: 2025-01-17 12:01:30 INFO [CredentialRefresher] credentialRefresher has started Jan 17 12:01:30.347818 amazon-ssm-agent[2147]: 2025-01-17 12:01:30 INFO [CredentialRefresher] Starting credentials refresher loop Jan 17 12:01:30.347818 amazon-ssm-agent[2147]: 2025-01-17 12:01:30 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 17 12:01:30.355995 tar[2007]: linux-arm64/LICENSE Jan 17 12:01:30.355995 tar[2007]: linux-arm64/README.md Jan 17 12:01:30.370264 amazon-ssm-agent[2147]: 2025-01-17 12:01:30 INFO [CredentialRefresher] Next credential rotation will be in 31.3999898392 minutes Jan 17 12:01:30.391082 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:01:30.438882 systemd[1]: Started sshd@1-172.31.30.222:22-139.178.68.195:41302.service - OpenSSH per-connection server daemon (139.178.68.195:41302). 
Jan 17 12:01:30.628188 sshd[2240]: Accepted publickey for core from 139.178.68.195 port 41302 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:30.631528 sshd[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:30.639950 systemd-logind[1994]: New session 2 of user core. Jan 17 12:01:30.657563 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:01:30.788046 sshd[2240]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:30.795769 systemd-logind[1994]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:01:30.797562 systemd[1]: sshd@1-172.31.30.222:22-139.178.68.195:41302.service: Deactivated successfully. Jan 17 12:01:30.803852 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:01:30.805964 systemd-logind[1994]: Removed session 2. Jan 17 12:01:30.824010 systemd[1]: Started sshd@2-172.31.30.222:22-139.178.68.195:41314.service - OpenSSH per-connection server daemon (139.178.68.195:41314). Jan 17 12:01:30.876547 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:01:30.880130 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:01:30.884817 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:01:30.885550 systemd[1]: Startup finished in 1.173s (kernel) + 8.613s (initrd) + 8.523s (userspace) = 18.310s. Jan 17 12:01:30.954176 ntpd[1987]: Listen normally on 6 eth0 [fe80::401:ccff:fef8:f83%2]:123 Jan 17 12:01:30.954677 ntpd[1987]: 17 Jan 12:01:30 ntpd[1987]: Listen normally on 6 eth0 [fe80::401:ccff:fef8:f83%2]:123 Jan 17 12:01:31.008311 sshd[2249]: Accepted publickey for core from 139.178.68.195 port 41314 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:31.011316 sshd[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:31.023868 systemd-logind[1994]: New session 3 of user core. Jan 17 12:01:31.032954 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:01:31.161589 sshd[2249]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:31.169994 systemd[1]: sshd@2-172.31.30.222:22-139.178.68.195:41314.service: Deactivated successfully. Jan 17 12:01:31.174493 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:01:31.177706 systemd-logind[1994]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:01:31.180713 systemd-logind[1994]: Removed session 3. 
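The "Startup finished in 1.173s (kernel) + 8.613s (initrd) + 8.523s (userspace) = 18.310s" line above is the same breakdown systemd-analyze reports; the parts sum to 18.309s only because each figure is rounded separately from the raw microsecond counters. For finer attribution after boot, something like this can be run on the host:

    # Re-query the boot-time breakdown logged above and attribute the
    # userspace portion to individual units.
    import subprocess

    for cmd in (["systemd-analyze"], ["systemd-analyze", "blame"]):
        print(subprocess.run(cmd, capture_output=True, text=True).stdout)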
Jan 17 12:01:31.377643 amazon-ssm-agent[2147]: 2025-01-17 12:01:31 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 17 12:01:31.479432 amazon-ssm-agent[2147]: 2025-01-17 12:01:31 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2268) started Jan 17 12:01:31.578967 amazon-ssm-agent[2147]: 2025-01-17 12:01:31 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 17 12:01:31.971162 kubelet[2254]: E0117 12:01:31.970891 2254 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:01:31.976441 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:01:31.976807 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:01:31.977609 systemd[1]: kubelet.service: Consumed 1.358s CPU time. Jan 17 12:01:34.502932 systemd-resolved[1934]: Clock change detected. Flushing caches. Jan 17 12:01:40.749085 systemd[1]: Started sshd@3-172.31.30.222:22-139.178.68.195:50212.service - OpenSSH per-connection server daemon (139.178.68.195:50212). Jan 17 12:01:40.924221 sshd[2282]: Accepted publickey for core from 139.178.68.195 port 50212 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:40.926872 sshd[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:40.933987 systemd-logind[1994]: New session 4 of user core. Jan 17 12:01:40.946290 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:01:41.071377 sshd[2282]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:41.077995 systemd[1]: sshd@3-172.31.30.222:22-139.178.68.195:50212.service: Deactivated successfully. Jan 17 12:01:41.082016 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:01:41.084891 systemd-logind[1994]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:01:41.086591 systemd-logind[1994]: Removed session 4. Jan 17 12:01:41.108574 systemd[1]: Started sshd@4-172.31.30.222:22-139.178.68.195:50220.service - OpenSSH per-connection server daemon (139.178.68.195:50220). Jan 17 12:01:41.283385 sshd[2289]: Accepted publickey for core from 139.178.68.195 port 50220 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:41.286103 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:41.295399 systemd-logind[1994]: New session 5 of user core. Jan 17 12:01:41.302295 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:01:41.417685 sshd[2289]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:41.424419 systemd-logind[1994]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:01:41.426200 systemd[1]: sshd@4-172.31.30.222:22-139.178.68.195:50220.service: Deactivated successfully. Jan 17 12:01:41.430604 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:01:41.432261 systemd-logind[1994]: Removed session 5. Jan 17 12:01:41.458511 systemd[1]: Started sshd@5-172.31.30.222:22-139.178.68.195:50224.service - OpenSSH per-connection server daemon (139.178.68.195:50224). 
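The kubelet failure above ("/var/lib/kubelet/config.yaml: no such file or directory") is the expected pre-bootstrap state: on a kubeadm-provisioned node that file is written by kubeadm init/join, so the unit simply crash-loops until that happens. For orientation, the file holds a KubeletConfiguration object; an illustrative (deliberately minimal, not complete) sketch of writing one:

    # /var/lib/kubelet/config.yaml is normally produced by `kubeadm init|join`;
    # the crash-loop above just means that has not happened yet. Minimal,
    # illustrative content only -- not a working cluster configuration:
    import pathlib
    import textwrap

    config = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd      # matches SystemdCgroup=true in the CRI dump
        containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
        authentication:
          anonymous:
            enabled: false
    """)

    pathlib.Path("/var/lib/kubelet/config.yaml").write_text(config)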
Jan 17 12:01:41.594439 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:01:41.603418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:01:41.625881 sshd[2296]: Accepted publickey for core from 139.178.68.195 port 50224 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:41.628915 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:41.640768 systemd-logind[1994]: New session 6 of user core. Jan 17 12:01:41.646340 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:01:41.780312 sshd[2296]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:41.787268 systemd[1]: sshd@5-172.31.30.222:22-139.178.68.195:50224.service: Deactivated successfully. Jan 17 12:01:41.792877 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:01:41.798011 systemd-logind[1994]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:01:41.801216 systemd-logind[1994]: Removed session 6. Jan 17 12:01:41.825283 systemd[1]: Started sshd@6-172.31.30.222:22-139.178.68.195:50228.service - OpenSSH per-connection server daemon (139.178.68.195:50228). Jan 17 12:01:41.918289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:01:41.918739 (kubelet)[2313]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:01:42.007069 sshd[2306]: Accepted publickey for core from 139.178.68.195 port 50228 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:42.008782 sshd[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:42.009868 kubelet[2313]: E0117 12:01:42.009812 2313 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:01:42.016720 systemd-logind[1994]: New session 7 of user core. Jan 17 12:01:42.018241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:01:42.018559 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:01:42.026330 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:01:42.156246 sudo[2322]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:01:42.156905 sudo[2322]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:01:42.172729 sudo[2322]: pam_unix(sudo:session): session closed for user root Jan 17 12:01:42.195973 sshd[2306]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:42.201734 systemd[1]: sshd@6-172.31.30.222:22-139.178.68.195:50228.service: Deactivated successfully. Jan 17 12:01:42.205173 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:01:42.208618 systemd-logind[1994]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:01:42.210884 systemd-logind[1994]: Removed session 7. Jan 17 12:01:42.234580 systemd[1]: Started sshd@7-172.31.30.222:22-139.178.68.195:50236.service - OpenSSH per-connection server daemon (139.178.68.195:50236). 
Jan 17 12:01:42.408420 sshd[2327]: Accepted publickey for core from 139.178.68.195 port 50236 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:42.411546 sshd[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:42.419096 systemd-logind[1994]: New session 8 of user core. Jan 17 12:01:42.430336 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:01:42.534157 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:01:42.535321 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:01:42.541890 sudo[2331]: pam_unix(sudo:session): session closed for user root Jan 17 12:01:42.552052 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:01:42.552669 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:01:42.576574 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:01:42.583396 auditctl[2334]: No rules Jan 17 12:01:42.583001 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:01:42.584117 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:01:42.594739 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:01:42.646785 augenrules[2352]: No rules Jan 17 12:01:42.649221 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:01:42.651544 sudo[2330]: pam_unix(sudo:session): session closed for user root Jan 17 12:01:42.675368 sshd[2327]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:42.681349 systemd[1]: sshd@7-172.31.30.222:22-139.178.68.195:50236.service: Deactivated successfully. Jan 17 12:01:42.684676 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:01:42.688265 systemd-logind[1994]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:01:42.690271 systemd-logind[1994]: Removed session 8. Jan 17 12:01:42.713582 systemd[1]: Started sshd@8-172.31.30.222:22-139.178.68.195:50242.service - OpenSSH per-connection server daemon (139.178.68.195:50242). Jan 17 12:01:42.876262 sshd[2360]: Accepted publickey for core from 139.178.68.195 port 50242 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:42.878861 sshd[2360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:42.887327 systemd-logind[1994]: New session 9 of user core. Jan 17 12:01:42.897298 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:01:42.998721 sudo[2363]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:01:42.999412 sudo[2363]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:01:43.588530 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:01:43.588674 (dockerd)[2379]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:01:44.075238 dockerd[2379]: time="2025-01-17T12:01:44.075119455Z" level=info msg="Starting up" Jan 17 12:01:44.273093 dockerd[2379]: time="2025-01-17T12:01:44.272835080Z" level=info msg="Loading containers: start." 
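dockerd is starting up here and, as the following lines show, finishes with "API listen on /run/docker.sock". The daemon speaks plain HTTP over that unix socket, so it can be probed without the Docker SDK; UnixHTTPConnection below is a small hypothetical helper, not part of any library:

    # Probe the Docker API over /run/docker.sock using only the stdlib.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    info = json.loads(conn.getresponse().read())
    print(info["Version"], info["ApiVersion"])  # e.g. 26.1.0, per the log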
Jan 17 12:01:44.487083 kernel: Initializing XFRM netlink socket Jan 17 12:01:44.559955 (udev-worker)[2403]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:01:44.656876 systemd-networkd[1933]: docker0: Link UP Jan 17 12:01:44.680786 dockerd[2379]: time="2025-01-17T12:01:44.680715838Z" level=info msg="Loading containers: done." Jan 17 12:01:44.701474 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3520070826-merged.mount: Deactivated successfully. Jan 17 12:01:44.708400 dockerd[2379]: time="2025-01-17T12:01:44.708316354Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:01:44.708731 dockerd[2379]: time="2025-01-17T12:01:44.708490186Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:01:44.708731 dockerd[2379]: time="2025-01-17T12:01:44.708679126Z" level=info msg="Daemon has completed initialization" Jan 17 12:01:44.762162 dockerd[2379]: time="2025-01-17T12:01:44.761513518Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:01:44.761848 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:01:45.912292 containerd[2021]: time="2025-01-17T12:01:45.911795808Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 17 12:01:46.531912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1432189959.mount: Deactivated successfully. Jan 17 12:01:47.987632 containerd[2021]: time="2025-01-17T12:01:47.987570134Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=32202457" Jan 17 12:01:47.988809 containerd[2021]: time="2025-01-17T12:01:47.988744130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:47.993860 containerd[2021]: time="2025-01-17T12:01:47.993800691Z" level=info msg="ImageCreate event name:\"sha256:5c8d3b261565d9e15723d572fb33e6ec92ceb342312c9418457857eb57d1ae9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:47.996159 containerd[2021]: time="2025-01-17T12:01:47.996099495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:47.998766 containerd[2021]: time="2025-01-17T12:01:47.998697123Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:5c8d3b261565d9e15723d572fb33e6ec92ceb342312c9418457857eb57d1ae9a\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"32199257\" in 2.086816355s" Jan 17 12:01:47.998895 containerd[2021]: time="2025-01-17T12:01:47.998766003Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:5c8d3b261565d9e15723d572fb33e6ec92ceb342312c9418457857eb57d1ae9a\"" Jan 17 12:01:48.037635 containerd[2021]: time="2025-01-17T12:01:48.037586831Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 17 12:01:49.801064 containerd[2021]: time="2025-01-17T12:01:49.800271963Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:49.804014 containerd[2021]: time="2025-01-17T12:01:49.803942884Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=29381102" Jan 17 12:01:49.805321 containerd[2021]: time="2025-01-17T12:01:49.805269784Z" level=info msg="ImageCreate event name:\"sha256:bcc4e3c2095eb1aad9487d6679a8871f05390959aaf5091f391510033742cf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:49.813007 containerd[2021]: time="2025-01-17T12:01:49.812929120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:49.816995 containerd[2021]: time="2025-01-17T12:01:49.816501412Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:bcc4e3c2095eb1aad9487d6679a8871f05390959aaf5091f391510033742cf7c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"30784892\" in 1.778595993s" Jan 17 12:01:49.816995 containerd[2021]: time="2025-01-17T12:01:49.816562960Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:bcc4e3c2095eb1aad9487d6679a8871f05390959aaf5091f391510033742cf7c\"" Jan 17 12:01:49.856456 containerd[2021]: time="2025-01-17T12:01:49.856387408Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 17 12:01:50.957515 containerd[2021]: time="2025-01-17T12:01:50.957040865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:50.959144 containerd[2021]: time="2025-01-17T12:01:50.959087465Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=15765672" Jan 17 12:01:50.959821 containerd[2021]: time="2025-01-17T12:01:50.959720225Z" level=info msg="ImageCreate event name:\"sha256:09e2786faf24867b706964cc8c35c296f197dc7a57806a75388efa13868bf50c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:50.965493 containerd[2021]: time="2025-01-17T12:01:50.965388773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:50.968019 containerd[2021]: time="2025-01-17T12:01:50.967752557Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:09e2786faf24867b706964cc8c35c296f197dc7a57806a75388efa13868bf50c\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"17169480\" in 1.111299713s" Jan 17 12:01:50.968019 containerd[2021]: time="2025-01-17T12:01:50.967828697Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:09e2786faf24867b706964cc8c35c296f197dc7a57806a75388efa13868bf50c\"" Jan 17 12:01:51.010712 containerd[2021]: time="2025-01-17T12:01:51.010650241Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:01:52.120293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:01:52.129237 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:01:52.336896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2939238651.mount: Deactivated successfully. Jan 17 12:01:52.491361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:01:52.502742 (kubelet)[2614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:01:52.607079 kubelet[2614]: E0117 12:01:52.606572 2614 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:01:52.612547 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:01:52.613841 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:01:52.952019 containerd[2021]: time="2025-01-17T12:01:52.950684095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:52.953131 containerd[2021]: time="2025-01-17T12:01:52.953079007Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=25274682" Jan 17 12:01:52.954644 containerd[2021]: time="2025-01-17T12:01:52.954566599Z" level=info msg="ImageCreate event name:\"sha256:e3bc26919d7c787204f912c4bc2584bac5686761ae4da96585475c68dcc57181\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:52.958236 containerd[2021]: time="2025-01-17T12:01:52.958122511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:52.959928 containerd[2021]: time="2025-01-17T12:01:52.959708011Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:e3bc26919d7c787204f912c4bc2584bac5686761ae4da96585475c68dcc57181\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"25273701\" in 1.94898895s" Jan 17 12:01:52.959928 containerd[2021]: time="2025-01-17T12:01:52.959781943Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:e3bc26919d7c787204f912c4bc2584bac5686761ae4da96585475c68dcc57181\"" Jan 17 12:01:52.999469 containerd[2021]: time="2025-01-17T12:01:52.999079975Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:01:53.559110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1735256700.mount: Deactivated successfully. 
Jan 17 12:01:54.694828 containerd[2021]: time="2025-01-17T12:01:54.694471940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:54.697565 containerd[2021]: time="2025-01-17T12:01:54.697436048Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 17 12:01:54.699320 containerd[2021]: time="2025-01-17T12:01:54.699214928Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:54.705829 containerd[2021]: time="2025-01-17T12:01:54.705761204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:54.709438 containerd[2021]: time="2025-01-17T12:01:54.709150700Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.710005229s" Jan 17 12:01:54.709438 containerd[2021]: time="2025-01-17T12:01:54.709306220Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 17 12:01:54.749506 containerd[2021]: time="2025-01-17T12:01:54.749199020Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:01:55.250274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3487070807.mount: Deactivated successfully. 
Jan 17 12:01:55.260068 containerd[2021]: time="2025-01-17T12:01:55.258413731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:55.260633 containerd[2021]: time="2025-01-17T12:01:55.260591683Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jan 17 12:01:55.262072 containerd[2021]: time="2025-01-17T12:01:55.262006831Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:55.268375 containerd[2021]: time="2025-01-17T12:01:55.268307755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:55.270176 containerd[2021]: time="2025-01-17T12:01:55.270129787Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 520.873395ms" Jan 17 12:01:55.270363 containerd[2021]: time="2025-01-17T12:01:55.270330487Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 17 12:01:55.309513 containerd[2021]: time="2025-01-17T12:01:55.309458335Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 17 12:01:55.878876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3921087434.mount: Deactivated successfully. Jan 17 12:01:58.337656 containerd[2021]: time="2025-01-17T12:01:58.337585846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:58.365817 containerd[2021]: time="2025-01-17T12:01:58.365736694Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jan 17 12:01:58.407338 containerd[2021]: time="2025-01-17T12:01:58.407236270Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:58.448512 containerd[2021]: time="2025-01-17T12:01:58.445844194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:58.448674 containerd[2021]: time="2025-01-17T12:01:58.448569910Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.139050891s" Jan 17 12:01:58.448674 containerd[2021]: time="2025-01-17T12:01:58.448625998Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 17 12:01:58.720940 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
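A small inconsistency visible across these entries: the CRI config dump earlier shows SandboxImage registry.k8s.io/pause:3.8, while pause:3.9 was just pulled. Both can coexist on disk; which one containerd actually uses for new sandboxes can be checked through the CRI introspection endpoint (crictl is assumed to be installed, and the JSON field name is taken to be containerd's camelCase tag):

    # Ask containerd's CRI plugin which sandbox (pause) image it is configured
    # to use; compare with the pause:3.9 pull above.
    import json
    import subprocess

    out = subprocess.run(
        ["crictl", "--runtime-endpoint",
         "unix:///run/containerd/containerd.sock", "info"],
        capture_output=True, text=True, check=True).stdout
    print(json.loads(out).get("config", {}).get("sandboxImage"))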
Jan 17 12:02:02.620342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 12:02:02.629431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:02.941318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:02.956922 (kubelet)[2797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:02:03.045069 kubelet[2797]: E0117 12:02:03.044105 2797 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:02:03.049406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:02:03.049970 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:02:05.374446 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:05.382552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:05.424958 systemd[1]: Reloading requested from client PID 2811 ('systemctl') (unit session-9.scope)... Jan 17 12:02:05.424994 systemd[1]: Reloading... Jan 17 12:02:05.674177 zram_generator::config[2855]: No configuration found. Jan 17 12:02:05.907468 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:02:06.078822 systemd[1]: Reloading finished in 653 ms. Jan 17 12:02:06.168297 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:02:06.168851 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:02:06.170189 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:06.176549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:06.460147 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:06.475588 (kubelet)[2914]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:02:06.561161 kubelet[2914]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:02:06.561161 kubelet[2914]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:02:06.561161 kubelet[2914]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
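By this point kubelet.service has reached "restart counter is at 3": systemd keeps re-launching it per the unit's Restart=/RestartSec= settings, and the loop will continue until the missing config file appears. The counter and the policy driving it are queryable from the unit state, for example:

    # Inspect the kubelet crash-loop state that systemd is reporting above.
    import subprocess

    for prop in ("NRestarts", "Restart", "RestartUSec", "Result"):
        print(subprocess.run(
            ["systemctl", "show", "kubelet.service", f"--property={prop}"],
            capture_output=True, text=True).stdout.strip())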
Jan 17 12:02:06.563062 kubelet[2914]: I0117 12:02:06.562083 2914 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:02:07.339909 kubelet[2914]: I0117 12:02:07.339867 2914 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:02:07.340154 kubelet[2914]: I0117 12:02:07.340133 2914 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:02:07.340576 kubelet[2914]: I0117 12:02:07.340552 2914 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:02:07.370948 kubelet[2914]: E0117 12:02:07.370895 2914 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.222:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.222:6443: connect: connection refused Jan 17 12:02:07.371163 kubelet[2914]: I0117 12:02:07.370984 2914 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:02:07.384964 kubelet[2914]: I0117 12:02:07.384920 2914 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:02:07.385451 kubelet[2914]: I0117 12:02:07.385422 2914 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:02:07.385767 kubelet[2914]: I0117 12:02:07.385735 2914 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:02:07.385914 kubelet[2914]: I0117 12:02:07.385780 2914 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:02:07.385914 kubelet[2914]: I0117 12:02:07.385802 2914 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:02:07.386047 kubelet[2914]: I0117 12:02:07.385981 2914 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:02:07.391003 kubelet[2914]: I0117 12:02:07.390935 2914 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:02:07.391003 
kubelet[2914]: I0117 12:02:07.390990 2914 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:02:07.393095 kubelet[2914]: I0117 12:02:07.391068 2914 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:02:07.393095 kubelet[2914]: I0117 12:02:07.391105 2914 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:02:07.397699 kubelet[2914]: I0117 12:02:07.397648 2914 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:02:07.398640 kubelet[2914]: I0117 12:02:07.398601 2914 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:02:07.400741 kubelet[2914]: W0117 12:02:07.400696 2914 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 12:02:07.402381 kubelet[2914]: I0117 12:02:07.402340 2914 server.go:1256] "Started kubelet" Jan 17 12:02:07.402864 kubelet[2914]: W0117 12:02:07.402798 2914 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.30.222:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jan 17 12:02:07.403060 kubelet[2914]: E0117 12:02:07.403013 2914 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.222:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jan 17 12:02:07.403364 kubelet[2914]: W0117 12:02:07.403316 2914 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.30.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-222&limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jan 17 12:02:07.403510 kubelet[2914]: E0117 12:02:07.403488 2914 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-222&limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jan 17 12:02:07.415535 kubelet[2914]: I0117 12:02:07.415498 2914 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:02:07.416250 kubelet[2914]: I0117 12:02:07.416198 2914 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:02:07.416719 kubelet[2914]: I0117 12:02:07.416686 2914 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:02:07.419436 kubelet[2914]: I0117 12:02:07.418355 2914 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:02:07.419953 kubelet[2914]: E0117 12:02:07.419921 2914 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.222:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.222:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-222.181b792fc7b3366b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-222,UID:ip-172-31-30-222,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-222,},FirstTimestamp:2025-01-17 12:02:07.402292843 +0000 UTC m=+0.919550058,LastTimestamp:2025-01-17 12:02:07.402292843 +0000 UTC m=+0.919550058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-222,}" Jan 17 12:02:07.425086 kubelet[2914]: I0117 12:02:07.424960 2914 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:02:07.431066 kubelet[2914]: I0117 12:02:07.429084 2914 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:02:07.431066 kubelet[2914]: E0117 12:02:07.430723 2914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-222?timeout=10s\": dial tcp 172.31.30.222:6443: connect: connection refused" interval="200ms" Jan 17 12:02:07.431066 kubelet[2914]: I0117 12:02:07.430780 2914 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:02:07.432573 kubelet[2914]: W0117 12:02:07.432503 2914 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.30.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jan 17 12:02:07.432783 kubelet[2914]: E0117 12:02:07.432761 2914 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jan 17 12:02:07.433475 kubelet[2914]: I0117 12:02:07.433439 2914 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:02:07.433781 kubelet[2914]: I0117 12:02:07.433748 2914 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:02:07.435467 kubelet[2914]: I0117 12:02:07.435413 2914 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:02:07.437859 kubelet[2914]: E0117 12:02:07.437820 2914 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:02:07.438987 kubelet[2914]: I0117 12:02:07.438951 2914 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:02:07.449075 kubelet[2914]: I0117 12:02:07.448978 2914 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:02:07.452331 kubelet[2914]: I0117 12:02:07.452275 2914 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:02:07.452331 kubelet[2914]: I0117 12:02:07.452324 2914 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:02:07.452516 kubelet[2914]: I0117 12:02:07.452357 2914 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:02:07.452516 kubelet[2914]: E0117 12:02:07.452432 2914 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:02:07.467017 kubelet[2914]: W0117 12:02:07.466927 2914 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.30.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jan 17 12:02:07.467017 kubelet[2914]: E0117 12:02:07.467045 2914 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jan 17 12:02:07.480654 kubelet[2914]: I0117 12:02:07.480576 2914 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:02:07.480654 kubelet[2914]: I0117 12:02:07.480654 2914 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:02:07.480848 kubelet[2914]: I0117 12:02:07.480692 2914 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:02:07.485897 kubelet[2914]: I0117 12:02:07.485840 2914 policy_none.go:49] "None policy: Start" Jan 17 12:02:07.487280 kubelet[2914]: I0117 12:02:07.487248 2914 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:02:07.488070 kubelet[2914]: I0117 12:02:07.487572 2914 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:02:07.500170 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:02:07.513359 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:02:07.520104 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 17 12:02:07.533845 kubelet[2914]: I0117 12:02:07.532723 2914 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-222" Jan 17 12:02:07.533845 kubelet[2914]: I0117 12:02:07.533104 2914 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:02:07.533845 kubelet[2914]: E0117 12:02:07.533270 2914 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.222:6443/api/v1/nodes\": dial tcp 172.31.30.222:6443: connect: connection refused" node="ip-172-31-30-222" Jan 17 12:02:07.533845 kubelet[2914]: I0117 12:02:07.533475 2914 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:02:07.537582 kubelet[2914]: E0117 12:02:07.537508 2914 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-222\" not found" Jan 17 12:02:07.553263 kubelet[2914]: I0117 12:02:07.553202 2914 topology_manager.go:215] "Topology Admit Handler" podUID="22a0b082dd3cb703042c91d2d6b0ccf9" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-222" Jan 17 12:02:07.555789 kubelet[2914]: I0117 12:02:07.555735 2914 topology_manager.go:215] "Topology Admit Handler" podUID="cd24f0ebcdcc8e9143a7e29b61a39b26" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-222" Jan 17 12:02:07.558329 kubelet[2914]: I0117 12:02:07.558085 2914 topology_manager.go:215] "Topology Admit Handler" podUID="f1d63cfb597b95656153f67786ac16c6" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-222" Jan 17 12:02:07.572531 systemd[1]: Created slice kubepods-burstable-pod22a0b082dd3cb703042c91d2d6b0ccf9.slice - libcontainer container kubepods-burstable-pod22a0b082dd3cb703042c91d2d6b0ccf9.slice. Jan 17 12:02:07.596367 systemd[1]: Created slice kubepods-burstable-podcd24f0ebcdcc8e9143a7e29b61a39b26.slice - libcontainer container kubepods-burstable-podcd24f0ebcdcc8e9143a7e29b61a39b26.slice. Jan 17 12:02:07.616969 systemd[1]: Created slice kubepods-burstable-podf1d63cfb597b95656153f67786ac16c6.slice - libcontainer container kubepods-burstable-podf1d63cfb597b95656153f67786ac16c6.slice. 
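This is the control-plane bootstrap chicken-and-egg resolving itself: node registration fails because nothing listens on 172.31.30.222:6443 yet, so the kubelet admits the static pod manifests for that same API server, the controller manager, and the scheduler locally, giving each its own burstable slice. Progress is observable from the node even while 6443 refuses connections, for example (a sketch using standard CRI tooling):

```sh
# The kubelet drives containerd directly, so static pods show up here
# long before the API server answers:
crictl pods
crictl ps --name kube-apiserver
# Once the apiserver container is serving, the health endpoint responds:
curl -k https://172.31.30.222:6443/healthz
```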
Jan 17 12:02:07.631352 kubelet[2914]: E0117 12:02:07.631299 2914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-222?timeout=10s\": dial tcp 172.31.30.222:6443: connect: connection refused" interval="400ms" Jan 17 12:02:07.636649 kubelet[2914]: I0117 12:02:07.636586 2914 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1d63cfb597b95656153f67786ac16c6-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-222\" (UID: \"f1d63cfb597b95656153f67786ac16c6\") " pod="kube-system/kube-apiserver-ip-172-31-30-222" Jan 17 12:02:07.636753 kubelet[2914]: I0117 12:02:07.636669 2914 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22a0b082dd3cb703042c91d2d6b0ccf9-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-222\" (UID: \"22a0b082dd3cb703042c91d2d6b0ccf9\") " pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jan 17 12:02:07.636753 kubelet[2914]: I0117 12:02:07.636727 2914 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22a0b082dd3cb703042c91d2d6b0ccf9-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-222\" (UID: \"22a0b082dd3cb703042c91d2d6b0ccf9\") " pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jan 17 12:02:07.636868 kubelet[2914]: I0117 12:02:07.636771 2914 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cd24f0ebcdcc8e9143a7e29b61a39b26-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-222\" (UID: \"cd24f0ebcdcc8e9143a7e29b61a39b26\") " pod="kube-system/kube-scheduler-ip-172-31-30-222" Jan 17 12:02:07.636868 kubelet[2914]: I0117 12:02:07.636814 2914 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1d63cfb597b95656153f67786ac16c6-ca-certs\") pod \"kube-apiserver-ip-172-31-30-222\" (UID: \"f1d63cfb597b95656153f67786ac16c6\") " pod="kube-system/kube-apiserver-ip-172-31-30-222" Jan 17 12:02:07.636868 kubelet[2914]: I0117 12:02:07.636856 2914 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1d63cfb597b95656153f67786ac16c6-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-222\" (UID: \"f1d63cfb597b95656153f67786ac16c6\") " pod="kube-system/kube-apiserver-ip-172-31-30-222" Jan 17 12:02:07.637059 kubelet[2914]: I0117 12:02:07.636905 2914 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22a0b082dd3cb703042c91d2d6b0ccf9-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-222\" (UID: \"22a0b082dd3cb703042c91d2d6b0ccf9\") " pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jan 17 12:02:07.637059 kubelet[2914]: I0117 12:02:07.636970 2914 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/22a0b082dd3cb703042c91d2d6b0ccf9-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-222\" (UID: 
\"22a0b082dd3cb703042c91d2d6b0ccf9\") " pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jan 17 12:02:07.637159 kubelet[2914]: I0117 12:02:07.637060 2914 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22a0b082dd3cb703042c91d2d6b0ccf9-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-222\" (UID: \"22a0b082dd3cb703042c91d2d6b0ccf9\") " pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jan 17 12:02:07.736198 kubelet[2914]: I0117 12:02:07.736135 2914 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-222" Jan 17 12:02:07.736640 kubelet[2914]: E0117 12:02:07.736605 2914 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.222:6443/api/v1/nodes\": dial tcp 172.31.30.222:6443: connect: connection refused" node="ip-172-31-30-222" Jan 17 12:02:07.891897 containerd[2021]: time="2025-01-17T12:02:07.891827157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-222,Uid:22a0b082dd3cb703042c91d2d6b0ccf9,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:07.912860 containerd[2021]: time="2025-01-17T12:02:07.912584661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-222,Uid:cd24f0ebcdcc8e9143a7e29b61a39b26,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:07.922530 containerd[2021]: time="2025-01-17T12:02:07.922456041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-222,Uid:f1d63cfb597b95656153f67786ac16c6,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:08.032237 kubelet[2914]: E0117 12:02:08.032183 2914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-222?timeout=10s\": dial tcp 172.31.30.222:6443: connect: connection refused" interval="800ms" Jan 17 12:02:08.139213 kubelet[2914]: I0117 12:02:08.139132 2914 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-222" Jan 17 12:02:08.139701 kubelet[2914]: E0117 12:02:08.139650 2914 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.222:6443/api/v1/nodes\": dial tcp 172.31.30.222:6443: connect: connection refused" node="ip-172-31-30-222" Jan 17 12:02:08.442217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3522462811.mount: Deactivated successfully. 
Jan 17 12:02:08.458203 containerd[2021]: time="2025-01-17T12:02:08.458123768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:02:08.460361 containerd[2021]: time="2025-01-17T12:02:08.460288748Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:02:08.462477 containerd[2021]: time="2025-01-17T12:02:08.462381788Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 17 12:02:08.464460 containerd[2021]: time="2025-01-17T12:02:08.464409500Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:02:08.466614 containerd[2021]: time="2025-01-17T12:02:08.466541228Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:02:08.468885 containerd[2021]: time="2025-01-17T12:02:08.468611300Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:02:08.473214 containerd[2021]: time="2025-01-17T12:02:08.472568612Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:02:08.481995 containerd[2021]: time="2025-01-17T12:02:08.481919132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:02:08.486421 containerd[2021]: time="2025-01-17T12:02:08.486360932Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 573.620307ms" Jan 17 12:02:08.492537 containerd[2021]: time="2025-01-17T12:02:08.492476660Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 600.530475ms" Jan 17 12:02:08.506407 containerd[2021]: time="2025-01-17T12:02:08.506339936Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 583.766259ms" Jan 17 12:02:08.675877 kubelet[2914]: W0117 12:02:08.675711 2914 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.30.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-222&limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused 
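The ImageCreate/Pulled events above are the sandbox (pause) image, resolved once per RunPodSandbox call; at roughly 268 kB it is the idle process that holds each pod's namespaces, and the io.cri-containerd.pinned label keeps it out of image garbage collection. Checking what containerd has cached, as a sketch:

```sh
# List the pinned pause image recorded in the events above:
crictl images --digests registry.k8s.io/pause
# It can also be pre-pulled explicitly:
crictl pull registry.k8s.io/pause:3.8
```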
Jan 17 12:02:08.675877 kubelet[2914]: E0117 12:02:08.675782 2914 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-222&limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jan 17 12:02:08.683479 containerd[2021]: time="2025-01-17T12:02:08.683060109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:08.684408 containerd[2021]: time="2025-01-17T12:02:08.683962605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:08.684408 containerd[2021]: time="2025-01-17T12:02:08.684003369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:08.684408 containerd[2021]: time="2025-01-17T12:02:08.684216549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:08.691254 containerd[2021]: time="2025-01-17T12:02:08.690925713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:08.691608 containerd[2021]: time="2025-01-17T12:02:08.691396365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:08.691608 containerd[2021]: time="2025-01-17T12:02:08.691457037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:08.693290 containerd[2021]: time="2025-01-17T12:02:08.692111997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:08.694606 containerd[2021]: time="2025-01-17T12:02:08.694444233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:08.694606 containerd[2021]: time="2025-01-17T12:02:08.694547133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:08.696732 containerd[2021]: time="2025-01-17T12:02:08.695014293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:08.698017 containerd[2021]: time="2025-01-17T12:02:08.697495401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:08.724057 kubelet[2914]: W0117 12:02:08.723968 2914 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.30.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jan 17 12:02:08.724057 kubelet[2914]: E0117 12:02:08.724056 2914 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jan 17 12:02:08.736234 systemd[1]: Started cri-containerd-9937e39789a58028067faa801c30a352fb243cb9c5036eed34a4ca92466d95af.scope - libcontainer container 9937e39789a58028067faa801c30a352fb243cb9c5036eed34a4ca92466d95af. Jan 17 12:02:08.753360 systemd[1]: Started cri-containerd-36fbf827bdbf33454b165bd8f66cc76c403358b570039b0049199ceb191d185b.scope - libcontainer container 36fbf827bdbf33454b165bd8f66cc76c403358b570039b0049199ceb191d185b. Jan 17 12:02:08.766104 systemd[1]: Started cri-containerd-509b60fd501727a3a52c663eb0cae84783aa50fe06c5aa2bb5b0a2a9c6bee16c.scope - libcontainer container 509b60fd501727a3a52c663eb0cae84783aa50fe06c5aa2bb5b0a2a9c6bee16c. Jan 17 12:02:08.817082 kubelet[2914]: W0117 12:02:08.816239 2914 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.30.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jan 17 12:02:08.817467 kubelet[2914]: E0117 12:02:08.817361 2914 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jan 17 12:02:08.833389 kubelet[2914]: E0117 12:02:08.833320 2914 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-222?timeout=10s\": dial tcp 172.31.30.222:6443: connect: connection refused" interval="1.6s" Jan 17 12:02:08.879937 containerd[2021]: time="2025-01-17T12:02:08.879839806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-222,Uid:22a0b082dd3cb703042c91d2d6b0ccf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"509b60fd501727a3a52c663eb0cae84783aa50fe06c5aa2bb5b0a2a9c6bee16c\"" Jan 17 12:02:08.885061 containerd[2021]: time="2025-01-17T12:02:08.884474434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-222,Uid:f1d63cfb597b95656153f67786ac16c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"36fbf827bdbf33454b165bd8f66cc76c403358b570039b0049199ceb191d185b\"" Jan 17 12:02:08.894103 containerd[2021]: time="2025-01-17T12:02:08.893795830Z" level=info msg="CreateContainer within sandbox \"509b60fd501727a3a52c663eb0cae84783aa50fe06c5aa2bb5b0a2a9c6bee16c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:02:08.894685 kubelet[2914]: W0117 12:02:08.893914 2914 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get 
"https://172.31.30.222:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jan 17 12:02:08.894685 kubelet[2914]: E0117 12:02:08.893995 2914 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.222:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.222:6443: connect: connection refused Jan 17 12:02:08.899711 containerd[2021]: time="2025-01-17T12:02:08.898930006Z" level=info msg="CreateContainer within sandbox \"36fbf827bdbf33454b165bd8f66cc76c403358b570039b0049199ceb191d185b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:02:08.910214 containerd[2021]: time="2025-01-17T12:02:08.910144162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-222,Uid:cd24f0ebcdcc8e9143a7e29b61a39b26,Namespace:kube-system,Attempt:0,} returns sandbox id \"9937e39789a58028067faa801c30a352fb243cb9c5036eed34a4ca92466d95af\"" Jan 17 12:02:08.918380 containerd[2021]: time="2025-01-17T12:02:08.918296338Z" level=info msg="CreateContainer within sandbox \"9937e39789a58028067faa801c30a352fb243cb9c5036eed34a4ca92466d95af\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:02:08.944861 kubelet[2914]: I0117 12:02:08.944730 2914 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-222" Jan 17 12:02:08.946814 kubelet[2914]: E0117 12:02:08.946710 2914 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.222:6443/api/v1/nodes\": dial tcp 172.31.30.222:6443: connect: connection refused" node="ip-172-31-30-222" Jan 17 12:02:08.969317 containerd[2021]: time="2025-01-17T12:02:08.969251303Z" level=info msg="CreateContainer within sandbox \"509b60fd501727a3a52c663eb0cae84783aa50fe06c5aa2bb5b0a2a9c6bee16c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"36bfa6cc6c499817827a45faaa49e2b90c4d1ae4ccab8508c35496821ed234e8\"" Jan 17 12:02:08.970599 containerd[2021]: time="2025-01-17T12:02:08.970514135Z" level=info msg="StartContainer for \"36bfa6cc6c499817827a45faaa49e2b90c4d1ae4ccab8508c35496821ed234e8\"" Jan 17 12:02:09.011424 containerd[2021]: time="2025-01-17T12:02:09.011331775Z" level=info msg="CreateContainer within sandbox \"9937e39789a58028067faa801c30a352fb243cb9c5036eed34a4ca92466d95af\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"34244c422ddbd76ab16f58259eac63407fc0805b15769780693f2a9df60c03a7\"" Jan 17 12:02:09.013936 containerd[2021]: time="2025-01-17T12:02:09.013872811Z" level=info msg="StartContainer for \"34244c422ddbd76ab16f58259eac63407fc0805b15769780693f2a9df60c03a7\"" Jan 17 12:02:09.024477 containerd[2021]: time="2025-01-17T12:02:09.024216847Z" level=info msg="CreateContainer within sandbox \"36fbf827bdbf33454b165bd8f66cc76c403358b570039b0049199ceb191d185b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2d1afd32bf03bc8e0e6b78dd5378b260e2d5ebcf99fb79240aa92bee4a150423\"" Jan 17 12:02:09.026901 containerd[2021]: time="2025-01-17T12:02:09.026850019Z" level=info msg="StartContainer for \"2d1afd32bf03bc8e0e6b78dd5378b260e2d5ebcf99fb79240aa92bee4a150423\"" Jan 17 12:02:09.042424 systemd[1]: Started cri-containerd-36bfa6cc6c499817827a45faaa49e2b90c4d1ae4ccab8508c35496821ed234e8.scope - libcontainer container 36bfa6cc6c499817827a45faaa49e2b90c4d1ae4ccab8508c35496821ed234e8. 
Jan 17 12:02:09.116337 systemd[1]: Started cri-containerd-34244c422ddbd76ab16f58259eac63407fc0805b15769780693f2a9df60c03a7.scope - libcontainer container 34244c422ddbd76ab16f58259eac63407fc0805b15769780693f2a9df60c03a7. Jan 17 12:02:09.127368 systemd[1]: Started cri-containerd-2d1afd32bf03bc8e0e6b78dd5378b260e2d5ebcf99fb79240aa92bee4a150423.scope - libcontainer container 2d1afd32bf03bc8e0e6b78dd5378b260e2d5ebcf99fb79240aa92bee4a150423. Jan 17 12:02:09.180613 containerd[2021]: time="2025-01-17T12:02:09.180383696Z" level=info msg="StartContainer for \"36bfa6cc6c499817827a45faaa49e2b90c4d1ae4ccab8508c35496821ed234e8\" returns successfully" Jan 17 12:02:09.233999 containerd[2021]: time="2025-01-17T12:02:09.233729984Z" level=info msg="StartContainer for \"2d1afd32bf03bc8e0e6b78dd5378b260e2d5ebcf99fb79240aa92bee4a150423\" returns successfully" Jan 17 12:02:09.274719 containerd[2021]: time="2025-01-17T12:02:09.274639592Z" level=info msg="StartContainer for \"34244c422ddbd76ab16f58259eac63407fc0805b15769780693f2a9df60c03a7\" returns successfully" Jan 17 12:02:10.552368 kubelet[2914]: I0117 12:02:10.552314 2914 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-222" Jan 17 12:02:12.829395 kubelet[2914]: I0117 12:02:12.829327 2914 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-222" Jan 17 12:02:13.128797 update_engine[1996]: I20250117 12:02:13.128065 1996 update_attempter.cc:509] Updating boot flags... Jan 17 12:02:13.259054 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3205) Jan 17 12:02:13.396997 kubelet[2914]: I0117 12:02:13.396783 2914 apiserver.go:52] "Watching apiserver" Jan 17 12:02:13.432151 kubelet[2914]: I0117 12:02:13.431974 2914 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:02:15.710199 systemd[1]: Reloading requested from client PID 3290 ('systemctl') (unit session-9.scope)... Jan 17 12:02:15.710565 systemd[1]: Reloading... Jan 17 12:02:15.898105 zram_generator::config[3333]: No configuration found. Jan 17 12:02:16.150557 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:02:16.354757 systemd[1]: Reloading finished in 643 ms. Jan 17 12:02:16.436247 kubelet[2914]: I0117 12:02:16.436085 2914 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:02:16.438430 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:16.453213 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:02:16.453710 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:16.453805 systemd[1]: kubelet.service: Consumed 1.631s CPU time, 114.6M memory peak, 0B memory swap peak. Jan 17 12:02:16.469113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:16.772349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:16.784636 (kubelet)[3390]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:02:16.899504 kubelet[3390]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:02:16.899504 kubelet[3390]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:02:16.899504 kubelet[3390]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:02:16.899504 kubelet[3390]: I0117 12:02:16.898960 3390 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:02:16.914876 kubelet[3390]: I0117 12:02:16.913712 3390 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:02:16.914876 kubelet[3390]: I0117 12:02:16.913762 3390 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:02:16.914876 kubelet[3390]: I0117 12:02:16.914150 3390 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:02:16.922658 kubelet[3390]: I0117 12:02:16.921380 3390 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:02:16.928446 kubelet[3390]: I0117 12:02:16.927115 3390 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:02:16.944067 kubelet[3390]: I0117 12:02:16.942860 3390 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:02:16.944067 kubelet[3390]: I0117 12:02:16.943371 3390 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:02:16.944067 kubelet[3390]: I0117 12:02:16.943893 3390 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:02:16.944067 kubelet[3390]: I0117 12:02:16.943959 3390 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:02:16.944067 kubelet[3390]: I0117 12:02:16.943980 3390 
container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:02:16.944480 kubelet[3390]: I0117 12:02:16.944093 3390 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:02:16.944480 kubelet[3390]: I0117 12:02:16.944274 3390 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:02:16.944480 kubelet[3390]: I0117 12:02:16.944303 3390 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:02:16.944480 kubelet[3390]: I0117 12:02:16.944371 3390 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:02:16.944480 kubelet[3390]: I0117 12:02:16.944403 3390 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:02:16.951287 kubelet[3390]: I0117 12:02:16.948691 3390 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:02:16.951287 kubelet[3390]: I0117 12:02:16.949593 3390 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:02:16.953434 kubelet[3390]: I0117 12:02:16.953395 3390 server.go:1256] "Started kubelet" Jan 17 12:02:16.960315 kubelet[3390]: I0117 12:02:16.959241 3390 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:02:16.960612 kubelet[3390]: I0117 12:02:16.960556 3390 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:02:16.960734 kubelet[3390]: I0117 12:02:16.960702 3390 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:02:16.963172 kubelet[3390]: I0117 12:02:16.962176 3390 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:02:16.969136 kubelet[3390]: E0117 12:02:16.968799 3390 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:02:16.970618 kubelet[3390]: I0117 12:02:16.970116 3390 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:02:16.970618 kubelet[3390]: I0117 12:02:16.970366 3390 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:02:16.981307 kubelet[3390]: I0117 12:02:16.980931 3390 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:02:16.982053 kubelet[3390]: I0117 12:02:16.981497 3390 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:02:17.028641 kubelet[3390]: I0117 12:02:17.028485 3390 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:02:17.030602 kubelet[3390]: I0117 12:02:17.030142 3390 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:02:17.048174 kubelet[3390]: I0117 12:02:17.048126 3390 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:02:17.050488 kubelet[3390]: I0117 12:02:17.050403 3390 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:02:17.050488 kubelet[3390]: I0117 12:02:17.050450 3390 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:02:17.050674 kubelet[3390]: I0117 12:02:17.050513 3390 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:02:17.050674 kubelet[3390]: E0117 12:02:17.050614 3390 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:02:17.089801 kubelet[3390]: I0117 12:02:17.088741 3390 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:02:17.093379 kubelet[3390]: E0117 12:02:17.093329 3390 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Jan 17 12:02:17.094472 kubelet[3390]: I0117 12:02:17.094427 3390 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-222" Jan 17 12:02:17.120413 kubelet[3390]: I0117 12:02:17.120263 3390 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-30-222" Jan 17 12:02:17.123585 kubelet[3390]: I0117 12:02:17.123454 3390 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-222" Jan 17 12:02:17.153560 kubelet[3390]: E0117 12:02:17.150992 3390 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:02:17.215288 kubelet[3390]: I0117 12:02:17.215252 3390 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:02:17.215523 kubelet[3390]: I0117 12:02:17.215505 3390 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:02:17.215688 kubelet[3390]: I0117 12:02:17.215667 3390 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:02:17.216395 kubelet[3390]: I0117 12:02:17.216355 3390 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:02:17.216560 kubelet[3390]: I0117 12:02:17.216541 3390 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:02:17.216829 kubelet[3390]: I0117 12:02:17.216805 3390 policy_none.go:49] "None policy: Start" Jan 17 12:02:17.221199 kubelet[3390]: I0117 12:02:17.221166 3390 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:02:17.221461 kubelet[3390]: I0117 12:02:17.221442 3390 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:02:17.222104 kubelet[3390]: I0117 12:02:17.222071 3390 state_mem.go:75] "Updated machine memory state" Jan 17 12:02:17.237414 kubelet[3390]: I0117 12:02:17.237379 3390 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:02:17.240829 kubelet[3390]: I0117 12:02:17.240711 3390 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:02:17.352150 kubelet[3390]: I0117 12:02:17.351963 3390 topology_manager.go:215] "Topology Admit Handler" podUID="f1d63cfb597b95656153f67786ac16c6" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-222" Jan 17 12:02:17.357067 kubelet[3390]: I0117 12:02:17.353564 3390 topology_manager.go:215] "Topology Admit Handler" podUID="22a0b082dd3cb703042c91d2d6b0ccf9" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-222" Jan 17 12:02:17.357067 kubelet[3390]: I0117 12:02:17.353761 3390 topology_manager.go:215] "Topology Admit Handler" podUID="cd24f0ebcdcc8e9143a7e29b61a39b26" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-222" Jan 17 12:02:17.386564 
kubelet[3390]: I0117 12:02:17.386518 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22a0b082dd3cb703042c91d2d6b0ccf9-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-222\" (UID: \"22a0b082dd3cb703042c91d2d6b0ccf9\") " pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jan 17 12:02:17.386925 kubelet[3390]: I0117 12:02:17.386848 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cd24f0ebcdcc8e9143a7e29b61a39b26-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-222\" (UID: \"cd24f0ebcdcc8e9143a7e29b61a39b26\") " pod="kube-system/kube-scheduler-ip-172-31-30-222" Jan 17 12:02:17.387147 kubelet[3390]: I0117 12:02:17.387125 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1d63cfb597b95656153f67786ac16c6-ca-certs\") pod \"kube-apiserver-ip-172-31-30-222\" (UID: \"f1d63cfb597b95656153f67786ac16c6\") " pod="kube-system/kube-apiserver-ip-172-31-30-222" Jan 17 12:02:17.387353 kubelet[3390]: I0117 12:02:17.387320 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22a0b082dd3cb703042c91d2d6b0ccf9-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-222\" (UID: \"22a0b082dd3cb703042c91d2d6b0ccf9\") " pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jan 17 12:02:17.387509 kubelet[3390]: I0117 12:02:17.387477 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/22a0b082dd3cb703042c91d2d6b0ccf9-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-222\" (UID: \"22a0b082dd3cb703042c91d2d6b0ccf9\") " pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jan 17 12:02:17.387693 kubelet[3390]: I0117 12:02:17.387673 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22a0b082dd3cb703042c91d2d6b0ccf9-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-222\" (UID: \"22a0b082dd3cb703042c91d2d6b0ccf9\") " pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jan 17 12:02:17.387843 kubelet[3390]: I0117 12:02:17.387825 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1d63cfb597b95656153f67786ac16c6-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-222\" (UID: \"f1d63cfb597b95656153f67786ac16c6\") " pod="kube-system/kube-apiserver-ip-172-31-30-222" Jan 17 12:02:17.388033 kubelet[3390]: I0117 12:02:17.387990 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1d63cfb597b95656153f67786ac16c6-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-222\" (UID: \"f1d63cfb597b95656153f67786ac16c6\") " pod="kube-system/kube-apiserver-ip-172-31-30-222" Jan 17 12:02:17.388214 kubelet[3390]: I0117 12:02:17.388180 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22a0b082dd3cb703042c91d2d6b0ccf9-k8s-certs\") pod 
\"kube-controller-manager-ip-172-31-30-222\" (UID: \"22a0b082dd3cb703042c91d2d6b0ccf9\") " pod="kube-system/kube-controller-manager-ip-172-31-30-222" Jan 17 12:02:17.955677 kubelet[3390]: I0117 12:02:17.955554 3390 apiserver.go:52] "Watching apiserver" Jan 17 12:02:17.981534 kubelet[3390]: I0117 12:02:17.981420 3390 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:02:18.191787 kubelet[3390]: E0117 12:02:18.188689 3390 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-222\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-222" Jan 17 12:02:18.246655 kubelet[3390]: I0117 12:02:18.246507 3390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-222" podStartSLOduration=1.246436529 podStartE2EDuration="1.246436529s" podCreationTimestamp="2025-01-17 12:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:18.223830401 +0000 UTC m=+1.428608744" watchObservedRunningTime="2025-01-17 12:02:18.246436529 +0000 UTC m=+1.451214872" Jan 17 12:02:18.282295 kubelet[3390]: I0117 12:02:18.282245 3390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-222" podStartSLOduration=1.282169745 podStartE2EDuration="1.282169745s" podCreationTimestamp="2025-01-17 12:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:18.248488157 +0000 UTC m=+1.453266500" watchObservedRunningTime="2025-01-17 12:02:18.282169745 +0000 UTC m=+1.486948112" Jan 17 12:02:18.333778 kubelet[3390]: I0117 12:02:18.333716 3390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-222" podStartSLOduration=1.333657209 podStartE2EDuration="1.333657209s" podCreationTimestamp="2025-01-17 12:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:18.284746997 +0000 UTC m=+1.489525364" watchObservedRunningTime="2025-01-17 12:02:18.333657209 +0000 UTC m=+1.538435540" Jan 17 12:02:23.757822 sudo[2363]: pam_unix(sudo:session): session closed for user root Jan 17 12:02:23.781267 sshd[2360]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:23.786799 systemd[1]: sshd@8-172.31.30.222:22-139.178.68.195:50242.service: Deactivated successfully. Jan 17 12:02:23.791485 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:02:23.791831 systemd[1]: session-9.scope: Consumed 10.393s CPU time, 186.8M memory peak, 0B memory swap peak. Jan 17 12:02:23.795364 systemd-logind[1994]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:02:23.797814 systemd-logind[1994]: Removed session 9. Jan 17 12:02:28.652087 kubelet[3390]: I0117 12:02:28.651921 3390 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:02:28.653507 kubelet[3390]: I0117 12:02:28.653249 3390 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:02:28.653647 containerd[2021]: time="2025-01-17T12:02:28.652823524Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 17 12:02:29.248328 kubelet[3390]: I0117 12:02:29.247192 3390 topology_manager.go:215] "Topology Admit Handler" podUID="8c5a8e31-8587-4a65-977a-a75e5542b99b" podNamespace="kube-system" podName="kube-proxy-pvvx5" Jan 17 12:02:29.260327 kubelet[3390]: I0117 12:02:29.260274 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c5a8e31-8587-4a65-977a-a75e5542b99b-xtables-lock\") pod \"kube-proxy-pvvx5\" (UID: \"8c5a8e31-8587-4a65-977a-a75e5542b99b\") " pod="kube-system/kube-proxy-pvvx5" Jan 17 12:02:29.260725 kubelet[3390]: I0117 12:02:29.260698 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c5a8e31-8587-4a65-977a-a75e5542b99b-lib-modules\") pod \"kube-proxy-pvvx5\" (UID: \"8c5a8e31-8587-4a65-977a-a75e5542b99b\") " pod="kube-system/kube-proxy-pvvx5" Jan 17 12:02:29.261083 kubelet[3390]: I0117 12:02:29.260897 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7jpm\" (UniqueName: \"kubernetes.io/projected/8c5a8e31-8587-4a65-977a-a75e5542b99b-kube-api-access-d7jpm\") pod \"kube-proxy-pvvx5\" (UID: \"8c5a8e31-8587-4a65-977a-a75e5542b99b\") " pod="kube-system/kube-proxy-pvvx5" Jan 17 12:02:29.261083 kubelet[3390]: I0117 12:02:29.261018 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8c5a8e31-8587-4a65-977a-a75e5542b99b-kube-proxy\") pod \"kube-proxy-pvvx5\" (UID: \"8c5a8e31-8587-4a65-977a-a75e5542b99b\") " pod="kube-system/kube-proxy-pvvx5" Jan 17 12:02:29.272970 systemd[1]: Created slice kubepods-besteffort-pod8c5a8e31_8587_4a65_977a_a75e5542b99b.slice - libcontainer container kubepods-besteffort-pod8c5a8e31_8587_4a65_977a_a75e5542b99b.slice. Jan 17 12:02:29.375086 kubelet[3390]: E0117 12:02:29.375003 3390 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 17 12:02:29.375332 kubelet[3390]: E0117 12:02:29.375310 3390 projected.go:200] Error preparing data for projected volume kube-api-access-d7jpm for pod kube-system/kube-proxy-pvvx5: configmap "kube-root-ca.crt" not found Jan 17 12:02:29.375615 kubelet[3390]: E0117 12:02:29.375574 3390 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8c5a8e31-8587-4a65-977a-a75e5542b99b-kube-api-access-d7jpm podName:8c5a8e31-8587-4a65-977a-a75e5542b99b nodeName:}" failed. No retries permitted until 2025-01-17 12:02:29.875523324 +0000 UTC m=+13.080301667 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d7jpm" (UniqueName: "kubernetes.io/projected/8c5a8e31-8587-4a65-977a-a75e5542b99b-kube-api-access-d7jpm") pod "kube-proxy-pvvx5" (UID: "8c5a8e31-8587-4a65-977a-a75e5542b99b") : configmap "kube-root-ca.crt" not found Jan 17 12:02:29.781019 kubelet[3390]: I0117 12:02:29.779915 3390 topology_manager.go:215] "Topology Admit Handler" podUID="e1401b0f-f8c5-4722-b467-77c23f20e73e" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-842pk" Jan 17 12:02:29.800868 systemd[1]: Created slice kubepods-besteffort-pode1401b0f_f8c5_4722_b467_77c23f20e73e.slice - libcontainer container kubepods-besteffort-pode1401b0f_f8c5_4722_b467_77c23f20e73e.slice. 
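The MountVolume.SetUp failure above is an ordering artifact rather than a fault: every pod's kube-api-access-* projected volume bundles the cluster root CA from the kube-root-ca.crt ConfigMap, which kube-controller-manager's root-ca-cert-publisher only creates after it starts, so the kubelet backs off 500 ms and retries. Checking that the ConfigMap has landed, as a sketch:

```sh
# Published into every namespace by kube-controller-manager:
kubectl -n kube-system get configmap kube-root-ca.crt
# The projected volume that referenced it, from the pod spec:
kubectl -n kube-system get pod kube-proxy-pvvx5 \
  -o jsonpath='{.spec.volumes[?(@.name=="kube-api-access-d7jpm")].projected.sources}'
```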
Jan 17 12:02:29.865908 kubelet[3390]: I0117 12:02:29.865757 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kstcz\" (UniqueName: \"kubernetes.io/projected/e1401b0f-f8c5-4722-b467-77c23f20e73e-kube-api-access-kstcz\") pod \"tigera-operator-c7ccbd65-842pk\" (UID: \"e1401b0f-f8c5-4722-b467-77c23f20e73e\") " pod="tigera-operator/tigera-operator-c7ccbd65-842pk" Jan 17 12:02:29.866438 kubelet[3390]: I0117 12:02:29.866343 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e1401b0f-f8c5-4722-b467-77c23f20e73e-var-lib-calico\") pod \"tigera-operator-c7ccbd65-842pk\" (UID: \"e1401b0f-f8c5-4722-b467-77c23f20e73e\") " pod="tigera-operator/tigera-operator-c7ccbd65-842pk" Jan 17 12:02:30.111175 containerd[2021]: time="2025-01-17T12:02:30.110850568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-842pk,Uid:e1401b0f-f8c5-4722-b467-77c23f20e73e,Namespace:tigera-operator,Attempt:0,}" Jan 17 12:02:30.159820 containerd[2021]: time="2025-01-17T12:02:30.158858140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:30.159820 containerd[2021]: time="2025-01-17T12:02:30.159773248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:30.159820 containerd[2021]: time="2025-01-17T12:02:30.159806836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:30.160464 containerd[2021]: time="2025-01-17T12:02:30.160185772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:30.185397 containerd[2021]: time="2025-01-17T12:02:30.185172508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pvvx5,Uid:8c5a8e31-8587-4a65-977a-a75e5542b99b,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:30.204381 systemd[1]: Started cri-containerd-567562f5cbe33788cf6e08e86b27ec40ce3397dbdbe88f6bb9efd5f60b8ea27c.scope - libcontainer container 567562f5cbe33788cf6e08e86b27ec40ce3397dbdbe88f6bb9efd5f60b8ea27c. Jan 17 12:02:30.243836 containerd[2021]: time="2025-01-17T12:02:30.243635392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:30.243836 containerd[2021]: time="2025-01-17T12:02:30.243731692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:30.243836 containerd[2021]: time="2025-01-17T12:02:30.243759016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:30.244870 containerd[2021]: time="2025-01-17T12:02:30.244505836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:30.279376 systemd[1]: Started cri-containerd-bd1132155fa8f52eb74199cc59fcad2c4e3e5dcf17f43c8799b4e059555ee44e.scope - libcontainer container bd1132155fa8f52eb74199cc59fcad2c4e3e5dcf17f43c8799b4e059555ee44e. 
Jan 17 12:02:30.297458 containerd[2021]: time="2025-01-17T12:02:30.297380993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-842pk,Uid:e1401b0f-f8c5-4722-b467-77c23f20e73e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"567562f5cbe33788cf6e08e86b27ec40ce3397dbdbe88f6bb9efd5f60b8ea27c\""
Jan 17 12:02:30.303082 containerd[2021]: time="2025-01-17T12:02:30.302984633Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 17 12:02:30.339935 containerd[2021]: time="2025-01-17T12:02:30.339878945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pvvx5,Uid:8c5a8e31-8587-4a65-977a-a75e5542b99b,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd1132155fa8f52eb74199cc59fcad2c4e3e5dcf17f43c8799b4e059555ee44e\""
Jan 17 12:02:30.345678 containerd[2021]: time="2025-01-17T12:02:30.345480581Z" level=info msg="CreateContainer within sandbox \"bd1132155fa8f52eb74199cc59fcad2c4e3e5dcf17f43c8799b4e059555ee44e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 17 12:02:30.377861 containerd[2021]: time="2025-01-17T12:02:30.377783897Z" level=info msg="CreateContainer within sandbox \"bd1132155fa8f52eb74199cc59fcad2c4e3e5dcf17f43c8799b4e059555ee44e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bd9bb082fa2cfdbb78e3956e62e26645a5ab8e203f61e3affa0b4e04d61507e4\""
Jan 17 12:02:30.379192 containerd[2021]: time="2025-01-17T12:02:30.378972149Z" level=info msg="StartContainer for \"bd9bb082fa2cfdbb78e3956e62e26645a5ab8e203f61e3affa0b4e04d61507e4\""
Jan 17 12:02:30.439363 systemd[1]: Started cri-containerd-bd9bb082fa2cfdbb78e3956e62e26645a5ab8e203f61e3affa0b4e04d61507e4.scope - libcontainer container bd9bb082fa2cfdbb78e3956e62e26645a5ab8e203f61e3affa0b4e04d61507e4.
Jan 17 12:02:30.496324 containerd[2021]: time="2025-01-17T12:02:30.495951918Z" level=info msg="StartContainer for \"bd9bb082fa2cfdbb78e3956e62e26645a5ab8e203f61e3affa0b4e04d61507e4\" returns successfully"
Jan 17 12:02:34.044360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1541945184.mount: Deactivated successfully.
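The containerd lines above trace the standard CRI lifecycle for kube-proxy-pvvx5: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer runs the result. A rough Go sketch of the same three calls against containerd's CRI v1 gRPC surface; the socket path, image tag, and error handling are simplifying assumptions, not taken from this log:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	cri "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// RunPodSandbox -> CreateContainer -> StartContainer, as logged above.
// Metadata values are copied from the log; everything else is illustrative.
func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := cri.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &cri.PodSandboxConfig{
		Metadata: &cri.PodSandboxMetadata{
			Name:      "kube-proxy-pvvx5",
			Uid:       "8c5a8e31-8587-4a65-977a-a75e5542b99b",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &cri.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	ctr, err := rt.CreateContainer(ctx, &cri.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		SandboxConfig: sandboxCfg,
		Config: &cri.ContainerConfig{
			Metadata: &cri.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			Image:    &cri.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.29.0"}, // assumed tag
		},
	})
	if err != nil {
		panic(err)
	}

	if _, err := rt.StartContainer(ctx, &cri.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("started", ctr.ContainerId, "in sandbox", sb.PodSandboxId)
}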
Jan 17 12:02:34.681661 containerd[2021]: time="2025-01-17T12:02:34.681362410Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:02:34.683099 containerd[2021]: time="2025-01-17T12:02:34.682438642Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125976"
Jan 17 12:02:34.684391 containerd[2021]: time="2025-01-17T12:02:34.684242038Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:02:34.688111 containerd[2021]: time="2025-01-17T12:02:34.687965050Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:02:34.689871 containerd[2021]: time="2025-01-17T12:02:34.689683402Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 4.386373185s"
Jan 17 12:02:34.689871 containerd[2021]: time="2025-01-17T12:02:34.689737618Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Jan 17 12:02:34.693364 containerd[2021]: time="2025-01-17T12:02:34.693294550Z" level=info msg="CreateContainer within sandbox \"567562f5cbe33788cf6e08e86b27ec40ce3397dbdbe88f6bb9efd5f60b8ea27c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 17 12:02:34.713581 containerd[2021]: time="2025-01-17T12:02:34.713503487Z" level=info msg="CreateContainer within sandbox \"567562f5cbe33788cf6e08e86b27ec40ce3397dbdbe88f6bb9efd5f60b8ea27c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"125278f397b877b9b680c898fe96e9c3b1e853cb76bfa003bfeacef19f30907f\""
Jan 17 12:02:34.716305 containerd[2021]: time="2025-01-17T12:02:34.715662983Z" level=info msg="StartContainer for \"125278f397b877b9b680c898fe96e9c3b1e853cb76bfa003bfeacef19f30907f\""
Jan 17 12:02:34.766379 systemd[1]: Started cri-containerd-125278f397b877b9b680c898fe96e9c3b1e853cb76bfa003bfeacef19f30907f.scope - libcontainer container 125278f397b877b9b680c898fe96e9c3b1e853cb76bfa003bfeacef19f30907f.
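For a sense of scale, the pull above reports 19,125,976 bytes read over 4.386373185s, about 4.16 MiB/s. A trivial check of that rate from the two logged numbers:

package main

import "fmt"

// Effective pull throughput for quay.io/tigera/operator:v1.36.2, computed
// from the "bytes read" and wall-clock figures logged above.
func main() {
	const bytesRead = 19125976  // from "stop pulling image ... bytes read=19125976"
	const seconds = 4.386373185 // from "Pulled image ... in 4.386373185s"
	fmt.Printf("%.2f MiB/s\n", bytesRead/seconds/(1024*1024))
}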
Jan 17 12:02:34.811093 containerd[2021]: time="2025-01-17T12:02:34.810888491Z" level=info msg="StartContainer for \"125278f397b877b9b680c898fe96e9c3b1e853cb76bfa003bfeacef19f30907f\" returns successfully"
Jan 17 12:02:35.215864 kubelet[3390]: I0117 12:02:35.214334 3390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pvvx5" podStartSLOduration=6.214270617 podStartE2EDuration="6.214270617s" podCreationTimestamp="2025-01-17 12:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:31.205546541 +0000 UTC m=+14.410324884" watchObservedRunningTime="2025-01-17 12:02:35.214270617 +0000 UTC m=+18.419048960"
Jan 17 12:02:39.617800 kubelet[3390]: I0117 12:02:39.617728 3390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-842pk" podStartSLOduration=6.2294624259999996 podStartE2EDuration="10.617661171s" podCreationTimestamp="2025-01-17 12:02:29 +0000 UTC" firstStartedPulling="2025-01-17 12:02:30.302207669 +0000 UTC m=+13.506986000" lastFinishedPulling="2025-01-17 12:02:34.690406414 +0000 UTC m=+17.895184745" observedRunningTime="2025-01-17 12:02:35.216309885 +0000 UTC m=+18.421088204" watchObservedRunningTime="2025-01-17 12:02:39.617661171 +0000 UTC m=+22.822439514"
Jan 17 12:02:39.619957 kubelet[3390]: I0117 12:02:39.618264 3390 topology_manager.go:215] "Topology Admit Handler" podUID="54e5e250-ec74-4100-a00a-34542ad3ed8d" podNamespace="calico-system" podName="calico-typha-68ddf5ddfc-qtz6m"
Jan 17 12:02:39.630898 kubelet[3390]: I0117 12:02:39.630828 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54e5e250-ec74-4100-a00a-34542ad3ed8d-tigera-ca-bundle\") pod \"calico-typha-68ddf5ddfc-qtz6m\" (UID: \"54e5e250-ec74-4100-a00a-34542ad3ed8d\") " pod="calico-system/calico-typha-68ddf5ddfc-qtz6m"
Jan 17 12:02:39.631055 kubelet[3390]: I0117 12:02:39.630915 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsp5z\" (UniqueName: \"kubernetes.io/projected/54e5e250-ec74-4100-a00a-34542ad3ed8d-kube-api-access-dsp5z\") pod \"calico-typha-68ddf5ddfc-qtz6m\" (UID: \"54e5e250-ec74-4100-a00a-34542ad3ed8d\") " pod="calico-system/calico-typha-68ddf5ddfc-qtz6m"
Jan 17 12:02:39.631055 kubelet[3390]: I0117 12:02:39.630980 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/54e5e250-ec74-4100-a00a-34542ad3ed8d-typha-certs\") pod \"calico-typha-68ddf5ddfc-qtz6m\" (UID: \"54e5e250-ec74-4100-a00a-34542ad3ed8d\") " pod="calico-system/calico-typha-68ddf5ddfc-qtz6m"
Jan 17 12:02:39.640214 systemd[1]: Created slice kubepods-besteffort-pod54e5e250_ec74_4100_a00a_34542ad3ed8d.slice - libcontainer container kubepods-besteffort-pod54e5e250_ec74_4100_a00a_34542ad3ed8d.slice.
Jan 17 12:02:39.865109 kubelet[3390]: I0117 12:02:39.863231 3390 topology_manager.go:215] "Topology Admit Handler" podUID="fc6c1725-f96f-4f53-8c46-40b2fb55b50d" podNamespace="calico-system" podName="calico-node-w4xzf"
Jan 17 12:02:39.883257 systemd[1]: Created slice kubepods-besteffort-podfc6c1725_f96f_4f53_8c46_40b2fb55b50d.slice - libcontainer container kubepods-besteffort-podfc6c1725_f96f_4f53_8c46_40b2fb55b50d.slice.
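The two pod_startup_latency_tracker entries above decompose cleanly: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window from that (for kube-proxy the pull timestamps are zero values, meaning no pull happened, so its two durations coincide). A sketch reconstructing the tigera-operator figures from the logged timestamps; the field semantics here are inferred from the values, not read out of kubelet source:

package main

import (
	"fmt"
	"time"
)

// Rebuilds the tigera-operator startup figures from the timestamps above.
func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse(time.RFC3339Nano, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-01-17T12:02:29Z")             // podCreationTimestamp
	firstPull := parse("2025-01-17T12:02:30.302207669Z") // firstStartedPulling
	lastPull := parse("2025-01-17T12:02:34.690406414Z")  // lastFinishedPulling
	observed := parse("2025-01-17T12:02:39.617661171Z")  // watchObservedRunningTime

	e2e := observed.Sub(created)         // 10.617661171s, as logged
	slo := e2e - lastPull.Sub(firstPull) // 6.229462426s, matching podStartSLOduration
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}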
Jan 17 12:02:39.933425 kubelet[3390]: I0117 12:02:39.933327 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc6c1725-f96f-4f53-8c46-40b2fb55b50d-lib-modules\") pod \"calico-node-w4xzf\" (UID: \"fc6c1725-f96f-4f53-8c46-40b2fb55b50d\") " pod="calico-system/calico-node-w4xzf"
Jan 17 12:02:39.933561 kubelet[3390]: I0117 12:02:39.933483 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fc6c1725-f96f-4f53-8c46-40b2fb55b50d-cni-log-dir\") pod \"calico-node-w4xzf\" (UID: \"fc6c1725-f96f-4f53-8c46-40b2fb55b50d\") " pod="calico-system/calico-node-w4xzf"
Jan 17 12:02:39.933650 kubelet[3390]: I0117 12:02:39.933561 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fc6c1725-f96f-4f53-8c46-40b2fb55b50d-node-certs\") pod \"calico-node-w4xzf\" (UID: \"fc6c1725-f96f-4f53-8c46-40b2fb55b50d\") " pod="calico-system/calico-node-w4xzf"
Jan 17 12:02:39.933650 kubelet[3390]: I0117 12:02:39.933620 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fc6c1725-f96f-4f53-8c46-40b2fb55b50d-flexvol-driver-host\") pod \"calico-node-w4xzf\" (UID: \"fc6c1725-f96f-4f53-8c46-40b2fb55b50d\") " pod="calico-system/calico-node-w4xzf"
Jan 17 12:02:39.933764 kubelet[3390]: I0117 12:02:39.933667 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2hdl\" (UniqueName: \"kubernetes.io/projected/fc6c1725-f96f-4f53-8c46-40b2fb55b50d-kube-api-access-m2hdl\") pod \"calico-node-w4xzf\" (UID: \"fc6c1725-f96f-4f53-8c46-40b2fb55b50d\") " pod="calico-system/calico-node-w4xzf"
Jan 17 12:02:39.933764 kubelet[3390]: I0117 12:02:39.933712 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc6c1725-f96f-4f53-8c46-40b2fb55b50d-xtables-lock\") pod \"calico-node-w4xzf\" (UID: \"fc6c1725-f96f-4f53-8c46-40b2fb55b50d\") " pod="calico-system/calico-node-w4xzf"
Jan 17 12:02:39.933764 kubelet[3390]: I0117 12:02:39.933756 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc6c1725-f96f-4f53-8c46-40b2fb55b50d-tigera-ca-bundle\") pod \"calico-node-w4xzf\" (UID: \"fc6c1725-f96f-4f53-8c46-40b2fb55b50d\") " pod="calico-system/calico-node-w4xzf"
Jan 17 12:02:39.933937 kubelet[3390]: I0117 12:02:39.933801 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fc6c1725-f96f-4f53-8c46-40b2fb55b50d-cni-net-dir\") pod \"calico-node-w4xzf\" (UID: \"fc6c1725-f96f-4f53-8c46-40b2fb55b50d\") " pod="calico-system/calico-node-w4xzf"
Jan 17 12:02:39.933937 kubelet[3390]: I0117 12:02:39.933853 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fc6c1725-f96f-4f53-8c46-40b2fb55b50d-var-lib-calico\") pod \"calico-node-w4xzf\" (UID: \"fc6c1725-f96f-4f53-8c46-40b2fb55b50d\") " pod="calico-system/calico-node-w4xzf"
Jan 17 12:02:39.933937 kubelet[3390]: I0117 12:02:39.933896 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fc6c1725-f96f-4f53-8c46-40b2fb55b50d-policysync\") pod \"calico-node-w4xzf\" (UID: \"fc6c1725-f96f-4f53-8c46-40b2fb55b50d\") " pod="calico-system/calico-node-w4xzf"
Jan 17 12:02:39.934156 kubelet[3390]: I0117 12:02:39.933938 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fc6c1725-f96f-4f53-8c46-40b2fb55b50d-cni-bin-dir\") pod \"calico-node-w4xzf\" (UID: \"fc6c1725-f96f-4f53-8c46-40b2fb55b50d\") " pod="calico-system/calico-node-w4xzf"
Jan 17 12:02:39.934156 kubelet[3390]: I0117 12:02:39.933988 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fc6c1725-f96f-4f53-8c46-40b2fb55b50d-var-run-calico\") pod \"calico-node-w4xzf\" (UID: \"fc6c1725-f96f-4f53-8c46-40b2fb55b50d\") " pod="calico-system/calico-node-w4xzf"
Jan 17 12:02:39.952275 containerd[2021]: time="2025-01-17T12:02:39.952180613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68ddf5ddfc-qtz6m,Uid:54e5e250-ec74-4100-a00a-34542ad3ed8d,Namespace:calico-system,Attempt:0,}"
Jan 17 12:02:40.027584 containerd[2021]: time="2025-01-17T12:02:40.027135445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:02:40.027584 containerd[2021]: time="2025-01-17T12:02:40.027252073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:02:40.027584 containerd[2021]: time="2025-01-17T12:02:40.027291685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:02:40.030687 containerd[2021]: time="2025-01-17T12:02:40.027474841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:02:40.045624 kubelet[3390]: E0117 12:02:40.045480 3390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:02:40.050102 kubelet[3390]: W0117 12:02:40.046774 3390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:02:40.050394 kubelet[3390]: E0117 12:02:40.050210 3390 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The FlexVolume probe triplet above (driver-call.go:262 unmarshal failure, driver-call.go:149 warning, plugins.go:730 plugin-probe error) repeats verbatim at 12:02:40.061 through 12:02:40.072 as the kubelet rescans the plugin directory; duplicates elided.]
Jan 17 12:02:40.072920 kubelet[3390]: I0117 12:02:40.072647 3390 topology_manager.go:215] "Topology Admit Handler" podUID="0b585308-aaed-4559-ba72-c781c44b8b0e" podNamespace="calico-system" podName="csi-node-driver-bgqbn"
Jan 17 12:02:40.075246 kubelet[3390]: E0117 12:02:40.074744 3390 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgqbn" podUID="0b585308-aaed-4559-ba72-c781c44b8b0e"
[Further identical FlexVolume probe failures, 12:02:40.075 through 12:02:40.089, elided.]
Jan 17 12:02:40.121322 systemd[1]: Started cri-containerd-3d75cff938576a78d5f510d52b052841f04b5f374dfe8d9703f5809b804e10cd.scope - libcontainer container 3d75cff938576a78d5f510d52b052841f04b5f374dfe8d9703f5809b804e10cd.
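Every one of these triplets has the same root cause: the kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ for a FlexVolume driver, the uds binary is not installed, so the init call produces no stdout and the JSON decode fails with "unexpected end of JSON input". For reference, a minimal sketch of what a driver binary at that path would have to answer; the payload follows the general FlexVolume convention and is illustrative, not this cluster's actual driver:

package main

import (
	"fmt"
	"os"
)

// A FlexVolume driver is any executable under
// .../kubelet-plugins/volume/exec/<vendor~driver>/<driver> that answers
// subcommands ("init", "mount", ...) with a JSON status on stdout.
func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// "attach": false tells the kubelet no attach/detach phase is needed.
		fmt.Println(`{"status":"Success","capabilities":{"attach":false}}`)
		return
	}
	fmt.Println(`{"status":"Not supported"}`)
	os.Exit(1)
}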
[Identical FlexVolume probe failures, 12:02:40.134 through 12:02:40.192, elided.]
Jan 17 12:02:40.192793 kubelet[3390]: I0117 12:02:40.192558 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b585308-aaed-4559-ba72-c781c44b8b0e-kubelet-dir\") pod \"csi-node-driver-bgqbn\" (UID: \"0b585308-aaed-4559-ba72-c781c44b8b0e\") " pod="calico-system/csi-node-driver-bgqbn"
Jan 17 12:02:40.194383 containerd[2021]: time="2025-01-17T12:02:40.193286966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w4xzf,Uid:fc6c1725-f96f-4f53-8c46-40b2fb55b50d,Namespace:calico-system,Attempt:0,}"
Jan 17 12:02:40.196940 kubelet[3390]: I0117 12:02:40.196643 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0b585308-aaed-4559-ba72-c781c44b8b0e-varrun\") pod \"csi-node-driver-bgqbn\" (UID: \"0b585308-aaed-4559-ba72-c781c44b8b0e\") " pod="calico-system/csi-node-driver-bgqbn"
Jan 17 12:02:40.199095 kubelet[3390]: I0117 12:02:40.197983 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0b585308-aaed-4559-ba72-c781c44b8b0e-socket-dir\") pod \"csi-node-driver-bgqbn\" (UID: \"0b585308-aaed-4559-ba72-c781c44b8b0e\") " pod="calico-system/csi-node-driver-bgqbn"
Jan 17 12:02:40.199095 kubelet[3390]: I0117 12:02:40.198507 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcdww\" (UniqueName: \"kubernetes.io/projected/0b585308-aaed-4559-ba72-c781c44b8b0e-kube-api-access-rcdww\") pod \"csi-node-driver-bgqbn\" (UID: \"0b585308-aaed-4559-ba72-c781c44b8b0e\") " pod="calico-system/csi-node-driver-bgqbn"
Jan 17 12:02:40.201843 kubelet[3390]: I0117 12:02:40.200327 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0b585308-aaed-4559-ba72-c781c44b8b0e-registration-dir\") pod \"csi-node-driver-bgqbn\" (UID: \"0b585308-aaed-4559-ba72-c781c44b8b0e\") " pod="calico-system/csi-node-driver-bgqbn"
[Identical FlexVolume probe failures interleaved with the volume entries above, 12:02:40.194 through 12:02:40.207, elided.]
Jan 17 12:02:40.292087 containerd[2021]: time="2025-01-17T12:02:40.290688434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:02:40.292087 containerd[2021]: time="2025-01-17T12:02:40.290790422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:02:40.292087 containerd[2021]: time="2025-01-17T12:02:40.290840282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:02:40.293308 containerd[2021]: time="2025-01-17T12:02:40.291942374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
[Identical FlexVolume probe failures, 12:02:40.306 through 12:02:40.338, elided.]
Error: unexpected end of JSON input" Jan 17 12:02:40.339858 kubelet[3390]: E0117 12:02:40.339346 3390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:40.339858 kubelet[3390]: W0117 12:02:40.339379 3390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:40.339858 kubelet[3390]: E0117 12:02:40.339455 3390 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:40.343956 kubelet[3390]: E0117 12:02:40.343374 3390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:40.343956 kubelet[3390]: W0117 12:02:40.343406 3390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:40.345052 kubelet[3390]: E0117 12:02:40.344987 3390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:40.345463 kubelet[3390]: W0117 12:02:40.345246 3390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:40.345860 kubelet[3390]: E0117 12:02:40.345834 3390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:40.346009 kubelet[3390]: W0117 12:02:40.345985 3390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:40.346625 kubelet[3390]: E0117 12:02:40.346208 3390 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:40.348780 kubelet[3390]: E0117 12:02:40.345862 3390 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:40.348780 kubelet[3390]: E0117 12:02:40.345887 3390 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:40.348780 kubelet[3390]: E0117 12:02:40.348384 3390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:40.348780 kubelet[3390]: W0117 12:02:40.348404 3390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:40.348780 kubelet[3390]: E0117 12:02:40.348441 3390 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:40.352336 kubelet[3390]: E0117 12:02:40.352277 3390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:40.352336 kubelet[3390]: W0117 12:02:40.352326 3390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:40.352537 kubelet[3390]: E0117 12:02:40.352369 3390 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:40.364437 systemd[1]: Started cri-containerd-ada70273c1f74574caf83776c98af8d15475f7d86c8434c4dc2d8d751cb686b6.scope - libcontainer container ada70273c1f74574caf83776c98af8d15475f7d86c8434c4dc2d8d751cb686b6. Jan 17 12:02:40.397937 kubelet[3390]: E0117 12:02:40.397712 3390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:40.397937 kubelet[3390]: W0117 12:02:40.397747 3390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:40.397937 kubelet[3390]: E0117 12:02:40.397785 3390 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:40.449175 containerd[2021]: time="2025-01-17T12:02:40.449121603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w4xzf,Uid:fc6c1725-f96f-4f53-8c46-40b2fb55b50d,Namespace:calico-system,Attempt:0,} returns sandbox id \"ada70273c1f74574caf83776c98af8d15475f7d86c8434c4dc2d8d751cb686b6\"" Jan 17 12:02:40.467894 containerd[2021]: time="2025-01-17T12:02:40.466270395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 12:02:40.486561 containerd[2021]: time="2025-01-17T12:02:40.486492699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68ddf5ddfc-qtz6m,Uid:54e5e250-ec74-4100-a00a-34542ad3ed8d,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d75cff938576a78d5f510d52b052841f04b5f374dfe8d9703f5809b804e10cd\"" Jan 17 12:02:41.746409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3250116175.mount: Deactivated successfully. 
Jan 17 12:02:41.934189 containerd[2021]: time="2025-01-17T12:02:41.932604318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:41.937325 containerd[2021]: time="2025-01-17T12:02:41.937267386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Jan 17 12:02:41.941425 containerd[2021]: time="2025-01-17T12:02:41.941361702Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:41.948082 containerd[2021]: time="2025-01-17T12:02:41.947007222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:41.949690 containerd[2021]: time="2025-01-17T12:02:41.949594303Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.483251656s" Jan 17 12:02:41.949690 containerd[2021]: time="2025-01-17T12:02:41.949667275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 17 12:02:41.953377 containerd[2021]: time="2025-01-17T12:02:41.953309335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 17 12:02:41.956520 containerd[2021]: time="2025-01-17T12:02:41.956444875Z" level=info msg="CreateContainer within sandbox \"ada70273c1f74574caf83776c98af8d15475f7d86c8434c4dc2d8d751cb686b6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:02:41.997245 containerd[2021]: time="2025-01-17T12:02:41.996983347Z" level=info msg="CreateContainer within sandbox \"ada70273c1f74574caf83776c98af8d15475f7d86c8434c4dc2d8d751cb686b6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5a84aeaf8cb8d7aaf91e78a76f437eb6bdaf651bee4bde0235571af41e116a7c\"" Jan 17 12:02:42.000092 containerd[2021]: time="2025-01-17T12:02:41.998611183Z" level=info msg="StartContainer for \"5a84aeaf8cb8d7aaf91e78a76f437eb6bdaf651bee4bde0235571af41e116a7c\"" Jan 17 12:02:42.053809 kubelet[3390]: E0117 12:02:42.051884 3390 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgqbn" podUID="0b585308-aaed-4559-ba72-c781c44b8b0e" Jan 17 12:02:42.084359 systemd[1]: Started cri-containerd-5a84aeaf8cb8d7aaf91e78a76f437eb6bdaf651bee4bde0235571af41e116a7c.scope - libcontainer container 5a84aeaf8cb8d7aaf91e78a76f437eb6bdaf651bee4bde0235571af41e116a7c. 
Jan 17 12:02:42.147740 containerd[2021]: time="2025-01-17T12:02:42.147680811Z" level=info msg="StartContainer for \"5a84aeaf8cb8d7aaf91e78a76f437eb6bdaf651bee4bde0235571af41e116a7c\" returns successfully" Jan 17 12:02:42.183821 systemd[1]: cri-containerd-5a84aeaf8cb8d7aaf91e78a76f437eb6bdaf651bee4bde0235571af41e116a7c.scope: Deactivated successfully. Jan 17 12:02:42.268740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a84aeaf8cb8d7aaf91e78a76f437eb6bdaf651bee4bde0235571af41e116a7c-rootfs.mount: Deactivated successfully. Jan 17 12:02:42.390789 containerd[2021]: time="2025-01-17T12:02:42.390699029Z" level=info msg="shim disconnected" id=5a84aeaf8cb8d7aaf91e78a76f437eb6bdaf651bee4bde0235571af41e116a7c namespace=k8s.io Jan 17 12:02:42.390789 containerd[2021]: time="2025-01-17T12:02:42.390782753Z" level=warning msg="cleaning up after shim disconnected" id=5a84aeaf8cb8d7aaf91e78a76f437eb6bdaf651bee4bde0235571af41e116a7c namespace=k8s.io Jan 17 12:02:42.392539 containerd[2021]: time="2025-01-17T12:02:42.390808541Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:02:42.426283 containerd[2021]: time="2025-01-17T12:02:42.425448725Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:02:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:02:43.993505 containerd[2021]: time="2025-01-17T12:02:43.993438225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:43.995065 containerd[2021]: time="2025-01-17T12:02:43.994949589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=27861516" Jan 17 12:02:43.996194 containerd[2021]: time="2025-01-17T12:02:43.996121809Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:43.999937 containerd[2021]: time="2025-01-17T12:02:43.999783597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:44.001663 containerd[2021]: time="2025-01-17T12:02:44.001492457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.048108554s" Jan 17 12:02:44.001663 containerd[2021]: time="2025-01-17T12:02:44.001546961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 17 12:02:44.004279 containerd[2021]: time="2025-01-17T12:02:44.003787121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:02:44.040373 containerd[2021]: time="2025-01-17T12:02:44.040298429Z" level=info msg="CreateContainer within sandbox \"3d75cff938576a78d5f510d52b052841f04b5f374dfe8d9703f5809b804e10cd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 12:02:44.054007 kubelet[3390]: E0117 12:02:44.052496 3390 pod_workers.go:1298] "Error 
syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgqbn" podUID="0b585308-aaed-4559-ba72-c781c44b8b0e" Jan 17 12:02:44.068477 containerd[2021]: time="2025-01-17T12:02:44.067158533Z" level=info msg="CreateContainer within sandbox \"3d75cff938576a78d5f510d52b052841f04b5f374dfe8d9703f5809b804e10cd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dd46e72cbef1132d62be01d3754bd0e5e5e1f32b97d165c0ae2623affa1525d4\"" Jan 17 12:02:44.073951 containerd[2021]: time="2025-01-17T12:02:44.071276489Z" level=info msg="StartContainer for \"dd46e72cbef1132d62be01d3754bd0e5e5e1f32b97d165c0ae2623affa1525d4\"" Jan 17 12:02:44.134364 systemd[1]: Started cri-containerd-dd46e72cbef1132d62be01d3754bd0e5e5e1f32b97d165c0ae2623affa1525d4.scope - libcontainer container dd46e72cbef1132d62be01d3754bd0e5e5e1f32b97d165c0ae2623affa1525d4. Jan 17 12:02:44.200345 containerd[2021]: time="2025-01-17T12:02:44.200201526Z" level=info msg="StartContainer for \"dd46e72cbef1132d62be01d3754bd0e5e5e1f32b97d165c0ae2623affa1525d4\" returns successfully" Jan 17 12:02:45.254675 kubelet[3390]: I0117 12:02:45.254604 3390 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:02:46.053712 kubelet[3390]: E0117 12:02:46.051891 3390 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgqbn" podUID="0b585308-aaed-4559-ba72-c781c44b8b0e" Jan 17 12:02:48.051918 kubelet[3390]: E0117 12:02:48.051865 3390 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgqbn" podUID="0b585308-aaed-4559-ba72-c781c44b8b0e" Jan 17 12:02:48.780919 containerd[2021]: time="2025-01-17T12:02:48.780474672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:48.783658 containerd[2021]: time="2025-01-17T12:02:48.783594216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 17 12:02:48.784882 containerd[2021]: time="2025-01-17T12:02:48.783794292Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:48.792993 containerd[2021]: time="2025-01-17T12:02:48.792084372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:48.794887 containerd[2021]: time="2025-01-17T12:02:48.794827693Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.790941764s" Jan 17 12:02:48.795179 
containerd[2021]: time="2025-01-17T12:02:48.795144409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 17 12:02:48.799506 containerd[2021]: time="2025-01-17T12:02:48.799420993Z" level=info msg="CreateContainer within sandbox \"ada70273c1f74574caf83776c98af8d15475f7d86c8434c4dc2d8d751cb686b6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:02:48.821589 containerd[2021]: time="2025-01-17T12:02:48.821526313Z" level=info msg="CreateContainer within sandbox \"ada70273c1f74574caf83776c98af8d15475f7d86c8434c4dc2d8d751cb686b6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fb4dd977b0b6b4cf08be3203f2893dac9b5005721907339b86641276c861d410\"" Jan 17 12:02:48.824253 containerd[2021]: time="2025-01-17T12:02:48.824178061Z" level=info msg="StartContainer for \"fb4dd977b0b6b4cf08be3203f2893dac9b5005721907339b86641276c861d410\"" Jan 17 12:02:48.895343 systemd[1]: Started cri-containerd-fb4dd977b0b6b4cf08be3203f2893dac9b5005721907339b86641276c861d410.scope - libcontainer container fb4dd977b0b6b4cf08be3203f2893dac9b5005721907339b86641276c861d410. Jan 17 12:02:48.948163 containerd[2021]: time="2025-01-17T12:02:48.947722717Z" level=info msg="StartContainer for \"fb4dd977b0b6b4cf08be3203f2893dac9b5005721907339b86641276c861d410\" returns successfully" Jan 17 12:02:49.307379 kubelet[3390]: I0117 12:02:49.307251 3390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-68ddf5ddfc-qtz6m" podStartSLOduration=6.795008193 podStartE2EDuration="10.307160699s" podCreationTimestamp="2025-01-17 12:02:39 +0000 UTC" firstStartedPulling="2025-01-17 12:02:40.490349307 +0000 UTC m=+23.695127638" lastFinishedPulling="2025-01-17 12:02:44.002501801 +0000 UTC m=+27.207280144" observedRunningTime="2025-01-17 12:02:44.274080654 +0000 UTC m=+27.478859009" watchObservedRunningTime="2025-01-17 12:02:49.307160699 +0000 UTC m=+32.511939054" Jan 17 12:02:49.813842 containerd[2021]: time="2025-01-17T12:02:49.813616766Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:02:49.817785 systemd[1]: cri-containerd-fb4dd977b0b6b4cf08be3203f2893dac9b5005721907339b86641276c861d410.scope: Deactivated successfully. Jan 17 12:02:49.836085 kubelet[3390]: I0117 12:02:49.834748 3390 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:02:49.884155 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb4dd977b0b6b4cf08be3203f2893dac9b5005721907339b86641276c861d410-rootfs.mount: Deactivated successfully. 
Jan 17 12:02:49.892614 kubelet[3390]: I0117 12:02:49.892539 3390 topology_manager.go:215] "Topology Admit Handler" podUID="238a61a5-b6b5-4a74-b87d-37070ed73575" podNamespace="kube-system" podName="coredns-76f75df574-m26cv" Jan 17 12:02:49.913561 kubelet[3390]: I0117 12:02:49.911267 3390 topology_manager.go:215] "Topology Admit Handler" podUID="f06e3531-08e0-4afd-9376-50b984ff63bd" podNamespace="kube-system" podName="coredns-76f75df574-wmqjg" Jan 17 12:02:49.916657 kubelet[3390]: I0117 12:02:49.915692 3390 topology_manager.go:215] "Topology Admit Handler" podUID="dfc7f148-4d51-48e8-9fb2-faa63e0fdc30" podNamespace="calico-apiserver" podName="calico-apiserver-5dd85d45b4-n44qp" Jan 17 12:02:49.929078 kubelet[3390]: I0117 12:02:49.928607 3390 topology_manager.go:215] "Topology Admit Handler" podUID="51ad13b0-e571-4bda-9060-30f841760976" podNamespace="calico-system" podName="calico-kube-controllers-7bb958bfbb-cc2sj" Jan 17 12:02:49.935927 kubelet[3390]: I0117 12:02:49.935649 3390 topology_manager.go:215] "Topology Admit Handler" podUID="cf2b9539-4e3c-4e81-a355-422ae8f49174" podNamespace="calico-apiserver" podName="calico-apiserver-5dd85d45b4-xw927" Jan 17 12:02:49.937586 systemd[1]: Created slice kubepods-burstable-pod238a61a5_b6b5_4a74_b87d_37070ed73575.slice - libcontainer container kubepods-burstable-pod238a61a5_b6b5_4a74_b87d_37070ed73575.slice. Jan 17 12:02:49.968395 systemd[1]: Created slice kubepods-besteffort-poddfc7f148_4d51_48e8_9fb2_faa63e0fdc30.slice - libcontainer container kubepods-besteffort-poddfc7f148_4d51_48e8_9fb2_faa63e0fdc30.slice. Jan 17 12:02:50.002153 systemd[1]: Created slice kubepods-burstable-podf06e3531_08e0_4afd_9376_50b984ff63bd.slice - libcontainer container kubepods-burstable-podf06e3531_08e0_4afd_9376_50b984ff63bd.slice. Jan 17 12:02:50.006234 kubelet[3390]: I0117 12:02:50.004252 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szvx2\" (UniqueName: \"kubernetes.io/projected/51ad13b0-e571-4bda-9060-30f841760976-kube-api-access-szvx2\") pod \"calico-kube-controllers-7bb958bfbb-cc2sj\" (UID: \"51ad13b0-e571-4bda-9060-30f841760976\") " pod="calico-system/calico-kube-controllers-7bb958bfbb-cc2sj" Jan 17 12:02:50.006234 kubelet[3390]: I0117 12:02:50.004348 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccjg5\" (UniqueName: \"kubernetes.io/projected/dfc7f148-4d51-48e8-9fb2-faa63e0fdc30-kube-api-access-ccjg5\") pod \"calico-apiserver-5dd85d45b4-n44qp\" (UID: \"dfc7f148-4d51-48e8-9fb2-faa63e0fdc30\") " pod="calico-apiserver/calico-apiserver-5dd85d45b4-n44qp" Jan 17 12:02:50.006234 kubelet[3390]: I0117 12:02:50.004414 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cf2b9539-4e3c-4e81-a355-422ae8f49174-calico-apiserver-certs\") pod \"calico-apiserver-5dd85d45b4-xw927\" (UID: \"cf2b9539-4e3c-4e81-a355-422ae8f49174\") " pod="calico-apiserver/calico-apiserver-5dd85d45b4-xw927" Jan 17 12:02:50.006234 kubelet[3390]: I0117 12:02:50.004485 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f06e3531-08e0-4afd-9376-50b984ff63bd-config-volume\") pod \"coredns-76f75df574-wmqjg\" (UID: \"f06e3531-08e0-4afd-9376-50b984ff63bd\") " pod="kube-system/coredns-76f75df574-wmqjg" Jan 17 12:02:50.006234 kubelet[3390]: I0117 12:02:50.004541 
3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/238a61a5-b6b5-4a74-b87d-37070ed73575-config-volume\") pod \"coredns-76f75df574-m26cv\" (UID: \"238a61a5-b6b5-4a74-b87d-37070ed73575\") " pod="kube-system/coredns-76f75df574-m26cv" Jan 17 12:02:50.007412 kubelet[3390]: I0117 12:02:50.004588 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79ps7\" (UniqueName: \"kubernetes.io/projected/238a61a5-b6b5-4a74-b87d-37070ed73575-kube-api-access-79ps7\") pod \"coredns-76f75df574-m26cv\" (UID: \"238a61a5-b6b5-4a74-b87d-37070ed73575\") " pod="kube-system/coredns-76f75df574-m26cv" Jan 17 12:02:50.007412 kubelet[3390]: I0117 12:02:50.004651 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dfc7f148-4d51-48e8-9fb2-faa63e0fdc30-calico-apiserver-certs\") pod \"calico-apiserver-5dd85d45b4-n44qp\" (UID: \"dfc7f148-4d51-48e8-9fb2-faa63e0fdc30\") " pod="calico-apiserver/calico-apiserver-5dd85d45b4-n44qp" Jan 17 12:02:50.007412 kubelet[3390]: I0117 12:02:50.004718 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2wjs\" (UniqueName: \"kubernetes.io/projected/f06e3531-08e0-4afd-9376-50b984ff63bd-kube-api-access-k2wjs\") pod \"coredns-76f75df574-wmqjg\" (UID: \"f06e3531-08e0-4afd-9376-50b984ff63bd\") " pod="kube-system/coredns-76f75df574-wmqjg" Jan 17 12:02:50.007412 kubelet[3390]: I0117 12:02:50.004781 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wthz5\" (UniqueName: \"kubernetes.io/projected/cf2b9539-4e3c-4e81-a355-422ae8f49174-kube-api-access-wthz5\") pod \"calico-apiserver-5dd85d45b4-xw927\" (UID: \"cf2b9539-4e3c-4e81-a355-422ae8f49174\") " pod="calico-apiserver/calico-apiserver-5dd85d45b4-xw927" Jan 17 12:02:50.007412 kubelet[3390]: I0117 12:02:50.004834 3390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51ad13b0-e571-4bda-9060-30f841760976-tigera-ca-bundle\") pod \"calico-kube-controllers-7bb958bfbb-cc2sj\" (UID: \"51ad13b0-e571-4bda-9060-30f841760976\") " pod="calico-system/calico-kube-controllers-7bb958bfbb-cc2sj" Jan 17 12:02:50.033447 systemd[1]: Created slice kubepods-besteffort-pod51ad13b0_e571_4bda_9060_30f841760976.slice - libcontainer container kubepods-besteffort-pod51ad13b0_e571_4bda_9060_30f841760976.slice. Jan 17 12:02:50.056003 systemd[1]: Created slice kubepods-besteffort-podcf2b9539_4e3c_4e81_a355_422ae8f49174.slice - libcontainer container kubepods-besteffort-podcf2b9539_4e3c_4e81_a355_422ae8f49174.slice. Jan 17 12:02:50.082271 systemd[1]: Created slice kubepods-besteffort-pod0b585308_aaed_4559_ba72_c781c44b8b0e.slice - libcontainer container kubepods-besteffort-pod0b585308_aaed_4559_ba72_c781c44b8b0e.slice. 
Jan 17 12:02:50.087686 containerd[2021]: time="2025-01-17T12:02:50.087116951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bgqbn,Uid:0b585308-aaed-4559-ba72-c781c44b8b0e,Namespace:calico-system,Attempt:0,}" Jan 17 12:02:50.255814 containerd[2021]: time="2025-01-17T12:02:50.255328056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m26cv,Uid:238a61a5-b6b5-4a74-b87d-37070ed73575,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:50.298360 containerd[2021]: time="2025-01-17T12:02:50.298286472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd85d45b4-n44qp,Uid:dfc7f148-4d51-48e8-9fb2-faa63e0fdc30,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:02:50.319055 containerd[2021]: time="2025-01-17T12:02:50.318983880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wmqjg,Uid:f06e3531-08e0-4afd-9376-50b984ff63bd,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:50.349710 containerd[2021]: time="2025-01-17T12:02:50.349019748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bb958bfbb-cc2sj,Uid:51ad13b0-e571-4bda-9060-30f841760976,Namespace:calico-system,Attempt:0,}" Jan 17 12:02:50.367264 containerd[2021]: time="2025-01-17T12:02:50.367191720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd85d45b4-xw927,Uid:cf2b9539-4e3c-4e81-a355-422ae8f49174,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:02:51.212523 containerd[2021]: time="2025-01-17T12:02:51.212426041Z" level=info msg="shim disconnected" id=fb4dd977b0b6b4cf08be3203f2893dac9b5005721907339b86641276c861d410 namespace=k8s.io Jan 17 12:02:51.212523 containerd[2021]: time="2025-01-17T12:02:51.212504497Z" level=warning msg="cleaning up after shim disconnected" id=fb4dd977b0b6b4cf08be3203f2893dac9b5005721907339b86641276c861d410 namespace=k8s.io Jan 17 12:02:51.212523 containerd[2021]: time="2025-01-17T12:02:51.212524969Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:02:51.566229 containerd[2021]: time="2025-01-17T12:02:51.565125734Z" level=error msg="Failed to destroy network for sandbox \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.573485 containerd[2021]: time="2025-01-17T12:02:51.572755718Z" level=error msg="encountered an error cleaning up failed sandbox \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.573485 containerd[2021]: time="2025-01-17T12:02:51.572890814Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bgqbn,Uid:0b585308-aaed-4559-ba72-c781c44b8b0e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.573712 kubelet[3390]: E0117 12:02:51.573301 3390 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.573712 kubelet[3390]: E0117 12:02:51.573388 3390 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bgqbn" Jan 17 12:02:51.573712 kubelet[3390]: E0117 12:02:51.573449 3390 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bgqbn" Jan 17 12:02:51.576475 kubelet[3390]: E0117 12:02:51.573544 3390 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bgqbn_calico-system(0b585308-aaed-4559-ba72-c781c44b8b0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bgqbn_calico-system(0b585308-aaed-4559-ba72-c781c44b8b0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bgqbn" podUID="0b585308-aaed-4559-ba72-c781c44b8b0e" Jan 17 12:02:51.599492 containerd[2021]: time="2025-01-17T12:02:51.598491002Z" level=error msg="Failed to destroy network for sandbox \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.601517 containerd[2021]: time="2025-01-17T12:02:51.601443986Z" level=error msg="encountered an error cleaning up failed sandbox \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.601974 containerd[2021]: time="2025-01-17T12:02:51.601720538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd85d45b4-xw927,Uid:cf2b9539-4e3c-4e81-a355-422ae8f49174,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.602339 kubelet[3390]: E0117 12:02:51.602250 3390 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.602339 kubelet[3390]: E0117 12:02:51.602334 3390 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd85d45b4-xw927" Jan 17 12:02:51.602571 kubelet[3390]: E0117 12:02:51.602373 3390 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd85d45b4-xw927" Jan 17 12:02:51.602571 kubelet[3390]: E0117 12:02:51.602470 3390 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dd85d45b4-xw927_calico-apiserver(cf2b9539-4e3c-4e81-a355-422ae8f49174)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dd85d45b4-xw927_calico-apiserver(cf2b9539-4e3c-4e81-a355-422ae8f49174)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dd85d45b4-xw927" podUID="cf2b9539-4e3c-4e81-a355-422ae8f49174" Jan 17 12:02:51.659400 containerd[2021]: time="2025-01-17T12:02:51.659335203Z" level=error msg="Failed to destroy network for sandbox \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.660270 containerd[2021]: time="2025-01-17T12:02:51.660213951Z" level=error msg="encountered an error cleaning up failed sandbox \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.660729 containerd[2021]: time="2025-01-17T12:02:51.660553935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd85d45b4-n44qp,Uid:dfc7f148-4d51-48e8-9fb2-faa63e0fdc30,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.661726 kubelet[3390]: E0117 12:02:51.661152 3390 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.661726 kubelet[3390]: E0117 12:02:51.661245 3390 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd85d45b4-n44qp" Jan 17 12:02:51.661726 kubelet[3390]: E0117 12:02:51.661285 3390 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd85d45b4-n44qp" Jan 17 12:02:51.662021 kubelet[3390]: E0117 12:02:51.661367 3390 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dd85d45b4-n44qp_calico-apiserver(dfc7f148-4d51-48e8-9fb2-faa63e0fdc30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dd85d45b4-n44qp_calico-apiserver(dfc7f148-4d51-48e8-9fb2-faa63e0fdc30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dd85d45b4-n44qp" podUID="dfc7f148-4d51-48e8-9fb2-faa63e0fdc30" Jan 17 12:02:51.662155 containerd[2021]: time="2025-01-17T12:02:51.661763859Z" level=error msg="Failed to destroy network for sandbox \"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.665511 containerd[2021]: time="2025-01-17T12:02:51.664238991Z" level=error msg="encountered an error cleaning up failed sandbox \"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.665511 containerd[2021]: time="2025-01-17T12:02:51.664451007Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wmqjg,Uid:f06e3531-08e0-4afd-9376-50b984ff63bd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.665753 kubelet[3390]: E0117 12:02:51.665176 3390 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.665753 kubelet[3390]: E0117 12:02:51.665301 3390 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-wmqjg" Jan 17 12:02:51.665753 kubelet[3390]: E0117 12:02:51.665346 3390 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-wmqjg" Jan 17 12:02:51.665998 kubelet[3390]: E0117 12:02:51.665446 3390 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-wmqjg_kube-system(f06e3531-08e0-4afd-9376-50b984ff63bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-wmqjg_kube-system(f06e3531-08e0-4afd-9376-50b984ff63bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-wmqjg" podUID="f06e3531-08e0-4afd-9376-50b984ff63bd" Jan 17 12:02:51.674300 containerd[2021]: time="2025-01-17T12:02:51.674124735Z" level=error msg="Failed to destroy network for sandbox \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.675166 containerd[2021]: time="2025-01-17T12:02:51.674952747Z" level=error msg="encountered an error cleaning up failed sandbox \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.676118 containerd[2021]: time="2025-01-17T12:02:51.675082959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m26cv,Uid:238a61a5-b6b5-4a74-b87d-37070ed73575,Namespace:kube-system,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.676118 containerd[2021]: time="2025-01-17T12:02:51.675416583Z" level=error msg="Failed to destroy network for sandbox \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.676282 kubelet[3390]: E0117 12:02:51.675652 3390 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.676282 kubelet[3390]: E0117 12:02:51.675732 3390 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m26cv" Jan 17 12:02:51.676282 kubelet[3390]: E0117 12:02:51.675773 3390 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m26cv" Jan 17 12:02:51.676466 kubelet[3390]: E0117 12:02:51.675864 3390 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-m26cv_kube-system(238a61a5-b6b5-4a74-b87d-37070ed73575)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-m26cv_kube-system(238a61a5-b6b5-4a74-b87d-37070ed73575)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-m26cv" podUID="238a61a5-b6b5-4a74-b87d-37070ed73575" Jan 17 12:02:51.677401 containerd[2021]: time="2025-01-17T12:02:51.676966683Z" level=error msg="encountered an error cleaning up failed sandbox \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.677401 containerd[2021]: time="2025-01-17T12:02:51.677143827Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7bb958bfbb-cc2sj,Uid:51ad13b0-e571-4bda-9060-30f841760976,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.677652 kubelet[3390]: E0117 12:02:51.677576 3390 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:51.677761 kubelet[3390]: E0117 12:02:51.677713 3390 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bb958bfbb-cc2sj" Jan 17 12:02:51.677861 kubelet[3390]: E0117 12:02:51.677792 3390 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bb958bfbb-cc2sj" Jan 17 12:02:51.677972 kubelet[3390]: E0117 12:02:51.677909 3390 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bb958bfbb-cc2sj_calico-system(51ad13b0-e571-4bda-9060-30f841760976)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7bb958bfbb-cc2sj_calico-system(51ad13b0-e571-4bda-9060-30f841760976)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bb958bfbb-cc2sj" podUID="51ad13b0-e571-4bda-9060-30f841760976" Jan 17 12:02:51.878394 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c-shm.mount: Deactivated successfully. Jan 17 12:02:51.878580 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8-shm.mount: Deactivated successfully. 
Jan 17 12:02:52.286575 kubelet[3390]: I0117 12:02:52.285872 3390 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Jan 17 12:02:52.287806 containerd[2021]: time="2025-01-17T12:02:52.287617826Z" level=info msg="StopPodSandbox for \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\"" Jan 17 12:02:52.288385 containerd[2021]: time="2025-01-17T12:02:52.287941646Z" level=info msg="Ensure that sandbox 4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8 in task-service has been cleanup successfully" Jan 17 12:02:52.290447 kubelet[3390]: I0117 12:02:52.290310 3390 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Jan 17 12:02:52.293979 containerd[2021]: time="2025-01-17T12:02:52.293248706Z" level=info msg="StopPodSandbox for \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\"" Jan 17 12:02:52.293979 containerd[2021]: time="2025-01-17T12:02:52.293556650Z" level=info msg="Ensure that sandbox 22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05 in task-service has been cleanup successfully" Jan 17 12:02:52.299185 kubelet[3390]: I0117 12:02:52.296431 3390 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Jan 17 12:02:52.299893 containerd[2021]: time="2025-01-17T12:02:52.299764370Z" level=info msg="StopPodSandbox for \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\"" Jan 17 12:02:52.302875 containerd[2021]: time="2025-01-17T12:02:52.302800622Z" level=info msg="Ensure that sandbox 96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c in task-service has been cleanup successfully" Jan 17 12:02:52.307817 kubelet[3390]: I0117 12:02:52.307769 3390 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Jan 17 12:02:52.313230 containerd[2021]: time="2025-01-17T12:02:52.313141730Z" level=info msg="StopPodSandbox for \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\"" Jan 17 12:02:52.317389 kubelet[3390]: I0117 12:02:52.314965 3390 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Jan 17 12:02:52.317588 containerd[2021]: time="2025-01-17T12:02:52.315340034Z" level=info msg="Ensure that sandbox 0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4 in task-service has been cleanup successfully" Jan 17 12:02:52.321715 containerd[2021]: time="2025-01-17T12:02:52.320997350Z" level=info msg="StopPodSandbox for \"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\"" Jan 17 12:02:52.322849 containerd[2021]: time="2025-01-17T12:02:52.322544426Z" level=info msg="Ensure that sandbox 7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be in task-service has been cleanup successfully" Jan 17 12:02:52.328246 kubelet[3390]: I0117 12:02:52.328148 3390 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Jan 17 12:02:52.341004 containerd[2021]: time="2025-01-17T12:02:52.340529102Z" level=info msg="StopPodSandbox for \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\"" Jan 17 12:02:52.344726 
containerd[2021]: time="2025-01-17T12:02:52.344665826Z" level=info msg="Ensure that sandbox 499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6 in task-service has been cleanup successfully" Jan 17 12:02:52.420369 containerd[2021]: time="2025-01-17T12:02:52.420304323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:02:52.479986 containerd[2021]: time="2025-01-17T12:02:52.479906163Z" level=error msg="StopPodSandbox for \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\" failed" error="failed to destroy network for sandbox \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:52.480504 kubelet[3390]: E0117 12:02:52.480463 3390 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Jan 17 12:02:52.480691 kubelet[3390]: E0117 12:02:52.480571 3390 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05"} Jan 17 12:02:52.480691 kubelet[3390]: E0117 12:02:52.480636 3390 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf2b9539-4e3c-4e81-a355-422ae8f49174\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:02:52.480955 kubelet[3390]: E0117 12:02:52.480700 3390 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf2b9539-4e3c-4e81-a355-422ae8f49174\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dd85d45b4-xw927" podUID="cf2b9539-4e3c-4e81-a355-422ae8f49174" Jan 17 12:02:52.502234 containerd[2021]: time="2025-01-17T12:02:52.501983163Z" level=error msg="StopPodSandbox for \"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\" failed" error="failed to destroy network for sandbox \"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:52.503578 kubelet[3390]: E0117 12:02:52.503320 3390 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Jan 17 12:02:52.503578 kubelet[3390]: E0117 12:02:52.503389 3390 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be"} Jan 17 12:02:52.503578 kubelet[3390]: E0117 12:02:52.503470 3390 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f06e3531-08e0-4afd-9376-50b984ff63bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:02:52.503578 kubelet[3390]: E0117 12:02:52.503532 3390 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f06e3531-08e0-4afd-9376-50b984ff63bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-wmqjg" podUID="f06e3531-08e0-4afd-9376-50b984ff63bd" Jan 17 12:02:52.528133 containerd[2021]: time="2025-01-17T12:02:52.527752191Z" level=error msg="StopPodSandbox for \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\" failed" error="failed to destroy network for sandbox \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:52.529086 kubelet[3390]: E0117 12:02:52.528619 3390 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Jan 17 12:02:52.529086 kubelet[3390]: E0117 12:02:52.528686 3390 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6"} Jan 17 12:02:52.529086 kubelet[3390]: E0117 12:02:52.528749 3390 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dfc7f148-4d51-48e8-9fb2-faa63e0fdc30\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Jan 17 12:02:52.529086 kubelet[3390]: E0117 12:02:52.528803 3390 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dfc7f148-4d51-48e8-9fb2-faa63e0fdc30\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dd85d45b4-n44qp" podUID="dfc7f148-4d51-48e8-9fb2-faa63e0fdc30" Jan 17 12:02:52.529692 containerd[2021]: time="2025-01-17T12:02:52.528979227Z" level=error msg="StopPodSandbox for \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\" failed" error="failed to destroy network for sandbox \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:52.529873 kubelet[3390]: E0117 12:02:52.529812 3390 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Jan 17 12:02:52.529968 kubelet[3390]: E0117 12:02:52.529874 3390 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8"} Jan 17 12:02:52.529968 kubelet[3390]: E0117 12:02:52.529944 3390 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0b585308-aaed-4559-ba72-c781c44b8b0e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:02:52.531085 kubelet[3390]: E0117 12:02:52.529995 3390 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0b585308-aaed-4559-ba72-c781c44b8b0e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bgqbn" podUID="0b585308-aaed-4559-ba72-c781c44b8b0e" Jan 17 12:02:52.546338 containerd[2021]: time="2025-01-17T12:02:52.546126795Z" level=error msg="StopPodSandbox for \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\" failed" error="failed to destroy network for sandbox \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:52.546992 kubelet[3390]: E0117 12:02:52.546819 3390 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Jan 17 12:02:52.548589 kubelet[3390]: E0117 12:02:52.548175 3390 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4"} Jan 17 12:02:52.548589 kubelet[3390]: E0117 12:02:52.548309 3390 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"51ad13b0-e571-4bda-9060-30f841760976\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:02:52.548589 kubelet[3390]: E0117 12:02:52.548467 3390 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"51ad13b0-e571-4bda-9060-30f841760976\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bb958bfbb-cc2sj" podUID="51ad13b0-e571-4bda-9060-30f841760976" Jan 17 12:02:52.549397 containerd[2021]: time="2025-01-17T12:02:52.549319167Z" level=error msg="StopPodSandbox for \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\" failed" error="failed to destroy network for sandbox \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:52.549930 kubelet[3390]: E0117 12:02:52.549714 3390 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Jan 17 12:02:52.549930 kubelet[3390]: E0117 12:02:52.549778 3390 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c"} Jan 17 12:02:52.549930 kubelet[3390]: E0117 12:02:52.549841 3390 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"238a61a5-b6b5-4a74-b87d-37070ed73575\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:02:52.549930 kubelet[3390]: E0117 12:02:52.549897 3390 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"238a61a5-b6b5-4a74-b87d-37070ed73575\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-m26cv" podUID="238a61a5-b6b5-4a74-b87d-37070ed73575" Jan 17 12:02:59.124316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount632948922.mount: Deactivated successfully. Jan 17 12:02:59.201450 containerd[2021]: time="2025-01-17T12:02:59.201224648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:59.203361 containerd[2021]: time="2025-01-17T12:02:59.203290940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 17 12:02:59.205347 containerd[2021]: time="2025-01-17T12:02:59.205243172Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:59.215101 containerd[2021]: time="2025-01-17T12:02:59.213613688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:59.215328 containerd[2021]: time="2025-01-17T12:02:59.215276396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.794634429s" Jan 17 12:02:59.215482 containerd[2021]: time="2025-01-17T12:02:59.215450432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 17 12:02:59.255228 containerd[2021]: time="2025-01-17T12:02:59.252723224Z" level=info msg="CreateContainer within sandbox \"ada70273c1f74574caf83776c98af8d15475f7d86c8434c4dc2d8d751cb686b6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:02:59.296354 containerd[2021]: time="2025-01-17T12:02:59.294097437Z" level=info msg="CreateContainer within sandbox \"ada70273c1f74574caf83776c98af8d15475f7d86c8434c4dc2d8d751cb686b6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"883fae09c1291c91af1e132dcfd8fcda31c4bc5138c081b9a9006c216e474e6f\"" Jan 17 12:02:59.296354 containerd[2021]: time="2025-01-17T12:02:59.295920057Z" level=info msg="StartContainer for \"883fae09c1291c91af1e132dcfd8fcda31c4bc5138c081b9a9006c216e474e6f\"" Jan 17 12:02:59.357115 systemd[1]: Started 
cri-containerd-883fae09c1291c91af1e132dcfd8fcda31c4bc5138c081b9a9006c216e474e6f.scope - libcontainer container 883fae09c1291c91af1e132dcfd8fcda31c4bc5138c081b9a9006c216e474e6f. Jan 17 12:02:59.423146 containerd[2021]: time="2025-01-17T12:02:59.422393205Z" level=info msg="StartContainer for \"883fae09c1291c91af1e132dcfd8fcda31c4bc5138c081b9a9006c216e474e6f\" returns successfully" Jan 17 12:02:59.550832 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:02:59.550980 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 17 12:03:03.053419 containerd[2021]: time="2025-01-17T12:03:03.052909187Z" level=info msg="StopPodSandbox for \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\"" Jan 17 12:03:03.189957 kubelet[3390]: I0117 12:03:03.189893 3390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-w4xzf" podStartSLOduration=5.438252835 podStartE2EDuration="24.189829512s" podCreationTimestamp="2025-01-17 12:02:39 +0000 UTC" firstStartedPulling="2025-01-17 12:02:40.464262099 +0000 UTC m=+23.669040430" lastFinishedPulling="2025-01-17 12:02:59.215838776 +0000 UTC m=+42.420617107" observedRunningTime="2025-01-17 12:03:00.472459331 +0000 UTC m=+43.677237674" watchObservedRunningTime="2025-01-17 12:03:03.189829512 +0000 UTC m=+46.394607867" Jan 17 12:03:03.266850 containerd[2021]: 2025-01-17 12:03:03.189 [INFO][4690] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Jan 17 12:03:03.266850 containerd[2021]: 2025-01-17 12:03:03.192 [INFO][4690] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" iface="eth0" netns="/var/run/netns/cni-00822ccc-288c-823c-3e9c-5ad105a48293" Jan 17 12:03:03.266850 containerd[2021]: 2025-01-17 12:03:03.194 [INFO][4690] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" iface="eth0" netns="/var/run/netns/cni-00822ccc-288c-823c-3e9c-5ad105a48293" Jan 17 12:03:03.266850 containerd[2021]: 2025-01-17 12:03:03.196 [INFO][4690] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" iface="eth0" netns="/var/run/netns/cni-00822ccc-288c-823c-3e9c-5ad105a48293" Jan 17 12:03:03.266850 containerd[2021]: 2025-01-17 12:03:03.197 [INFO][4690] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Jan 17 12:03:03.266850 containerd[2021]: 2025-01-17 12:03:03.197 [INFO][4690] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Jan 17 12:03:03.266850 containerd[2021]: 2025-01-17 12:03:03.240 [INFO][4698] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" HandleID="k8s-pod-network.22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0" Jan 17 12:03:03.266850 containerd[2021]: 2025-01-17 12:03:03.240 [INFO][4698] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:03:03.266850 containerd[2021]: 2025-01-17 12:03:03.240 [INFO][4698] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:03.266850 containerd[2021]: 2025-01-17 12:03:03.252 [WARNING][4698] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" HandleID="k8s-pod-network.22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0" Jan 17 12:03:03.266850 containerd[2021]: 2025-01-17 12:03:03.252 [INFO][4698] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" HandleID="k8s-pod-network.22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0" Jan 17 12:03:03.266850 containerd[2021]: 2025-01-17 12:03:03.257 [INFO][4698] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:03.266850 containerd[2021]: 2025-01-17 12:03:03.263 [INFO][4690] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Jan 17 12:03:03.270290 containerd[2021]: time="2025-01-17T12:03:03.270132828Z" level=info msg="TearDown network for sandbox \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\" successfully" Jan 17 12:03:03.270290 containerd[2021]: time="2025-01-17T12:03:03.270189540Z" level=info msg="StopPodSandbox for \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\" returns successfully" Jan 17 12:03:03.272718 containerd[2021]: time="2025-01-17T12:03:03.271670880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd85d45b4-xw927,Uid:cf2b9539-4e3c-4e81-a355-422ae8f49174,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:03:03.273835 systemd[1]: run-netns-cni\x2d00822ccc\x2d288c\x2d823c\x2d3e9c\x2d5ad105a48293.mount: Deactivated successfully. Jan 17 12:03:03.508472 systemd-networkd[1933]: calic359e3322c7: Link UP Jan 17 12:03:03.508943 systemd-networkd[1933]: calic359e3322c7: Gained carrier Jan 17 12:03:03.515527 (udev-worker)[4728]: Network interface NamePolicy= disabled on kernel command line. 
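With calico-node now Running (StartContainer returns successfully, followed by the kernel loading the wireguard module, likely Calico probing for WireGuard support), the nodename file exists and the queued teardowns start succeeding: the DEL for sandbox 22501b22a992… enters the netns, finds the veth already gone, and releases its IP by handle under the node-wide IPAM lock. The "Asked to release address but it doesn't exist. Ignoring" warning is expected, since the failed ADDs never got far enough to claim an address. The startup-latency entry above is plain timestamp arithmetic:

    podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
                        = 12:03:03.189829512 - 12:02:39 = 24.189829512s
    image pull          = lastFinishedPulling - firstStartedPulling
                        = 12:02:59.215838776 - 12:02:40.464262099 = 18.751576677s
    podStartSLOduration = 24.189829512s - 18.751576677s = 5.438252835s

consistent with the kubelet's pod_startup_latency_tracker excluding image-pull time from the SLO figure: of the 24s end-to-end start, almost 19s was the roughly 137 MB calico/node pull. The retried RunPodSandbox (Attempt:1) then proceeds through a clean CNI ADD, traced below.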
Jan 17 12:03:03.582350 containerd[2021]: 2025-01-17 12:03:03.345 [INFO][4705] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:03:03.582350 containerd[2021]: 2025-01-17 12:03:03.368 [INFO][4705] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0 calico-apiserver-5dd85d45b4- calico-apiserver cf2b9539-4e3c-4e81-a355-422ae8f49174 739 0 2025-01-17 12:02:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dd85d45b4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-222 calico-apiserver-5dd85d45b4-xw927 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic359e3322c7 [] []}} ContainerID="5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a" Namespace="calico-apiserver" Pod="calico-apiserver-5dd85d45b4-xw927" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-" Jan 17 12:03:03.582350 containerd[2021]: 2025-01-17 12:03:03.368 [INFO][4705] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a" Namespace="calico-apiserver" Pod="calico-apiserver-5dd85d45b4-xw927" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0" Jan 17 12:03:03.582350 containerd[2021]: 2025-01-17 12:03:03.426 [INFO][4717] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a" HandleID="k8s-pod-network.5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0" Jan 17 12:03:03.582350 containerd[2021]: 2025-01-17 12:03:03.444 [INFO][4717] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a" HandleID="k8s-pod-network.5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000222b70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-222", "pod":"calico-apiserver-5dd85d45b4-xw927", "timestamp":"2025-01-17 12:03:03.426643297 +0000 UTC"}, Hostname:"ip-172-31-30-222", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:03:03.582350 containerd[2021]: 2025-01-17 12:03:03.444 [INFO][4717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:03.582350 containerd[2021]: 2025-01-17 12:03:03.444 [INFO][4717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:03:03.582350 containerd[2021]: 2025-01-17 12:03:03.444 [INFO][4717] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-222' Jan 17 12:03:03.582350 containerd[2021]: 2025-01-17 12:03:03.447 [INFO][4717] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a" host="ip-172-31-30-222" Jan 17 12:03:03.582350 containerd[2021]: 2025-01-17 12:03:03.455 [INFO][4717] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-222" Jan 17 12:03:03.582350 containerd[2021]: 2025-01-17 12:03:03.462 [INFO][4717] ipam/ipam.go 489: Trying affinity for 192.168.94.64/26 host="ip-172-31-30-222" Jan 17 12:03:03.582350 containerd[2021]: 2025-01-17 12:03:03.465 [INFO][4717] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.64/26 host="ip-172-31-30-222" Jan 17 12:03:03.582350 containerd[2021]: 2025-01-17 12:03:03.469 [INFO][4717] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ip-172-31-30-222" Jan 17 12:03:03.582350 containerd[2021]: 2025-01-17 12:03:03.469 [INFO][4717] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a" host="ip-172-31-30-222" Jan 17 12:03:03.582350 containerd[2021]: 2025-01-17 12:03:03.471 [INFO][4717] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a Jan 17 12:03:03.582350 containerd[2021]: 2025-01-17 12:03:03.478 [INFO][4717] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a" host="ip-172-31-30-222" Jan 17 12:03:03.582350 containerd[2021]: 2025-01-17 12:03:03.486 [INFO][4717] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.65/26] block=192.168.94.64/26 handle="k8s-pod-network.5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a" host="ip-172-31-30-222" Jan 17 12:03:03.582350 containerd[2021]: 2025-01-17 12:03:03.486 [INFO][4717] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.65/26] handle="k8s-pod-network.5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a" host="ip-172-31-30-222" Jan 17 12:03:03.582350 containerd[2021]: 2025-01-17 12:03:03.486 [INFO][4717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
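The IPAM trace above is Calico's block-affinity allocation end to end: take the node-wide lock so concurrent CNI invocations on this host cannot race, look up the blocks affine to ip-172-31-30-222, try 192.168.94.64/26, confirm the affinity, then write the block back to claim the first free address, 192.168.94.65. A toy reproduction of just the /26 arithmetic with net/netip (not Calico's allocator, only the address math behind the block it is carving up):

    // Toy /26 arithmetic only; Calico's real allocator (ipam.go in the
    // trace above) claims addresses by writing the updated block back to
    // its datastore under the node-wide lock.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.94.64/26")
        first, last := block.Addr(), block.Addr()
        for next := last.Next(); block.Contains(next); next = next.Next() {
            last = next
        }
        // 2^(32-26) = 64 addresses: 192.168.94.64 .. 192.168.94.127
        fmt.Printf("block %s spans %s..%s (%d addrs)\n",
            block, first, last, 1<<(32-block.Bits()))
        // The first claim in the trace above was 192.168.94.65, the
        // address right after the block's base.
        fmt.Println("first claim seen in the log:", first.Next())
    }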
Jan 17 12:03:03.582350 containerd[2021]: 2025-01-17 12:03:03.486 [INFO][4717] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.65/26] IPv6=[] ContainerID="5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a" HandleID="k8s-pod-network.5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0" Jan 17 12:03:03.583623 containerd[2021]: 2025-01-17 12:03:03.491 [INFO][4705] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a" Namespace="calico-apiserver" Pod="calico-apiserver-5dd85d45b4-xw927" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0", GenerateName:"calico-apiserver-5dd85d45b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf2b9539-4e3c-4e81-a355-422ae8f49174", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd85d45b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"", Pod:"calico-apiserver-5dd85d45b4-xw927", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic359e3322c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:03.583623 containerd[2021]: 2025-01-17 12:03:03.491 [INFO][4705] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.65/32] ContainerID="5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a" Namespace="calico-apiserver" Pod="calico-apiserver-5dd85d45b4-xw927" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0" Jan 17 12:03:03.583623 containerd[2021]: 2025-01-17 12:03:03.491 [INFO][4705] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic359e3322c7 ContainerID="5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a" Namespace="calico-apiserver" Pod="calico-apiserver-5dd85d45b4-xw927" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0" Jan 17 12:03:03.583623 containerd[2021]: 2025-01-17 12:03:03.511 [INFO][4705] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a" Namespace="calico-apiserver" Pod="calico-apiserver-5dd85d45b4-xw927" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0" Jan 17 12:03:03.583623 containerd[2021]: 2025-01-17 12:03:03.511 [INFO][4705] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to
endpoint ContainerID="5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a" Namespace="calico-apiserver" Pod="calico-apiserver-5dd85d45b4-xw927" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0", GenerateName:"calico-apiserver-5dd85d45b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf2b9539-4e3c-4e81-a355-422ae8f49174", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd85d45b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a", Pod:"calico-apiserver-5dd85d45b4-xw927", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic359e3322c7", MAC:"56:5d:18:b4:8b:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:03.583623 containerd[2021]: 2025-01-17 12:03:03.546 [INFO][4705] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a" Namespace="calico-apiserver" Pod="calico-apiserver-5dd85d45b4-xw927" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0" Jan 17 12:03:03.650630 containerd[2021]: time="2025-01-17T12:03:03.650455574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:03:03.650630 containerd[2021]: time="2025-01-17T12:03:03.650558006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:03:03.651010 containerd[2021]: time="2025-01-17T12:03:03.650585414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:03.651010 containerd[2021]: time="2025-01-17T12:03:03.650742854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:03.742502 systemd[1]: Started cri-containerd-5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a.scope - libcontainer container 5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a. 
Jan 17 12:03:03.873654 containerd[2021]: time="2025-01-17T12:03:03.873114687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd85d45b4-xw927,Uid:cf2b9539-4e3c-4e81-a355-422ae8f49174,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a\"" Jan 17 12:03:03.881973 containerd[2021]: time="2025-01-17T12:03:03.881902227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:03:04.053070 containerd[2021]: time="2025-01-17T12:03:04.052991460Z" level=info msg="StopPodSandbox for \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\"" Jan 17 12:03:04.053987 containerd[2021]: time="2025-01-17T12:03:04.053765184Z" level=info msg="StopPodSandbox for \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\"" Jan 17 12:03:04.059560 containerd[2021]: time="2025-01-17T12:03:04.058232904Z" level=info msg="StopPodSandbox for \"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\"" Jan 17 12:03:04.275478 systemd[1]: run-containerd-runc-k8s.io-5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a-runc.Vn3cYt.mount: Deactivated successfully. Jan 17 12:03:04.391124 containerd[2021]: 2025-01-17 12:03:04.258 [INFO][4840] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Jan 17 12:03:04.391124 containerd[2021]: 2025-01-17 12:03:04.260 [INFO][4840] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" iface="eth0" netns="/var/run/netns/cni-4480d540-e42f-f9c7-cf68-bec7c2935f37" Jan 17 12:03:04.391124 containerd[2021]: 2025-01-17 12:03:04.263 [INFO][4840] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" iface="eth0" netns="/var/run/netns/cni-4480d540-e42f-f9c7-cf68-bec7c2935f37" Jan 17 12:03:04.391124 containerd[2021]: 2025-01-17 12:03:04.265 [INFO][4840] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" iface="eth0" netns="/var/run/netns/cni-4480d540-e42f-f9c7-cf68-bec7c2935f37" Jan 17 12:03:04.391124 containerd[2021]: 2025-01-17 12:03:04.265 [INFO][4840] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Jan 17 12:03:04.391124 containerd[2021]: 2025-01-17 12:03:04.265 [INFO][4840] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Jan 17 12:03:04.391124 containerd[2021]: 2025-01-17 12:03:04.344 [INFO][4860] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" HandleID="k8s-pod-network.4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Workload="ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0" Jan 17 12:03:04.391124 containerd[2021]: 2025-01-17 12:03:04.344 [INFO][4860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:04.391124 containerd[2021]: 2025-01-17 12:03:04.344 [INFO][4860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:03:04.391124 containerd[2021]: 2025-01-17 12:03:04.374 [WARNING][4860] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" HandleID="k8s-pod-network.4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Workload="ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0" Jan 17 12:03:04.391124 containerd[2021]: 2025-01-17 12:03:04.375 [INFO][4860] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" HandleID="k8s-pod-network.4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Workload="ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0" Jan 17 12:03:04.391124 containerd[2021]: 2025-01-17 12:03:04.379 [INFO][4860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:04.391124 containerd[2021]: 2025-01-17 12:03:04.381 [INFO][4840] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Jan 17 12:03:04.391124 containerd[2021]: time="2025-01-17T12:03:04.389175518Z" level=info msg="TearDown network for sandbox \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\" successfully" Jan 17 12:03:04.391124 containerd[2021]: time="2025-01-17T12:03:04.389249282Z" level=info msg="StopPodSandbox for \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\" returns successfully" Jan 17 12:03:04.395308 systemd[1]: run-netns-cni\x2d4480d540\x2de42f\x2df9c7\x2dcf68\x2dbec7c2935f37.mount: Deactivated successfully. Jan 17 12:03:04.403049 containerd[2021]: time="2025-01-17T12:03:04.400816982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bgqbn,Uid:0b585308-aaed-4559-ba72-c781c44b8b0e,Namespace:calico-system,Attempt:1,}" Jan 17 12:03:04.472460 containerd[2021]: 2025-01-17 12:03:04.247 [INFO][4839] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Jan 17 12:03:04.472460 containerd[2021]: 2025-01-17 12:03:04.248 [INFO][4839] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" iface="eth0" netns="/var/run/netns/cni-81bd9d9d-acce-a01d-7777-4d8a893a085c" Jan 17 12:03:04.472460 containerd[2021]: 2025-01-17 12:03:04.249 [INFO][4839] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" iface="eth0" netns="/var/run/netns/cni-81bd9d9d-acce-a01d-7777-4d8a893a085c" Jan 17 12:03:04.472460 containerd[2021]: 2025-01-17 12:03:04.250 [INFO][4839] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" iface="eth0" netns="/var/run/netns/cni-81bd9d9d-acce-a01d-7777-4d8a893a085c" Jan 17 12:03:04.472460 containerd[2021]: 2025-01-17 12:03:04.250 [INFO][4839] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Jan 17 12:03:04.472460 containerd[2021]: 2025-01-17 12:03:04.251 [INFO][4839] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Jan 17 12:03:04.472460 containerd[2021]: 2025-01-17 12:03:04.418 [INFO][4859] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" HandleID="k8s-pod-network.7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0" Jan 17 12:03:04.472460 containerd[2021]: 2025-01-17 12:03:04.428 [INFO][4859] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:04.472460 containerd[2021]: 2025-01-17 12:03:04.428 [INFO][4859] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:04.472460 containerd[2021]: 2025-01-17 12:03:04.454 [WARNING][4859] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" HandleID="k8s-pod-network.7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0" Jan 17 12:03:04.472460 containerd[2021]: 2025-01-17 12:03:04.454 [INFO][4859] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" HandleID="k8s-pod-network.7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0" Jan 17 12:03:04.472460 containerd[2021]: 2025-01-17 12:03:04.458 [INFO][4859] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:04.472460 containerd[2021]: 2025-01-17 12:03:04.466 [INFO][4839] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Jan 17 12:03:04.478079 containerd[2021]: time="2025-01-17T12:03:04.476192306Z" level=info msg="TearDown network for sandbox \"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\" successfully" Jan 17 12:03:04.478079 containerd[2021]: time="2025-01-17T12:03:04.476250338Z" level=info msg="StopPodSandbox for \"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\" returns successfully" Jan 17 12:03:04.480271 systemd[1]: run-netns-cni\x2d81bd9d9d\x2dacce\x2da01d\x2d7777\x2d4d8a893a085c.mount: Deactivated successfully. Jan 17 12:03:04.484998 containerd[2021]: time="2025-01-17T12:03:04.484908326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wmqjg,Uid:f06e3531-08e0-4afd-9376-50b984ff63bd,Namespace:kube-system,Attempt:1,}" Jan 17 12:03:04.502814 containerd[2021]: 2025-01-17 12:03:04.288 [INFO][4834] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Jan 17 12:03:04.502814 containerd[2021]: 2025-01-17 12:03:04.289 [INFO][4834] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" iface="eth0" netns="/var/run/netns/cni-0089ddf1-f1de-8514-a9d1-1b384429ee54" Jan 17 12:03:04.502814 containerd[2021]: 2025-01-17 12:03:04.290 [INFO][4834] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" iface="eth0" netns="/var/run/netns/cni-0089ddf1-f1de-8514-a9d1-1b384429ee54" Jan 17 12:03:04.502814 containerd[2021]: 2025-01-17 12:03:04.291 [INFO][4834] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" iface="eth0" netns="/var/run/netns/cni-0089ddf1-f1de-8514-a9d1-1b384429ee54" Jan 17 12:03:04.502814 containerd[2021]: 2025-01-17 12:03:04.291 [INFO][4834] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Jan 17 12:03:04.502814 containerd[2021]: 2025-01-17 12:03:04.291 [INFO][4834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Jan 17 12:03:04.502814 containerd[2021]: 2025-01-17 12:03:04.449 [INFO][4866] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" HandleID="k8s-pod-network.499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0" Jan 17 12:03:04.502814 containerd[2021]: 2025-01-17 12:03:04.449 [INFO][4866] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:04.502814 containerd[2021]: 2025-01-17 12:03:04.459 [INFO][4866] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:04.502814 containerd[2021]: 2025-01-17 12:03:04.487 [WARNING][4866] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" HandleID="k8s-pod-network.499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0" Jan 17 12:03:04.502814 containerd[2021]: 2025-01-17 12:03:04.487 [INFO][4866] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" HandleID="k8s-pod-network.499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0" Jan 17 12:03:04.502814 containerd[2021]: 2025-01-17 12:03:04.491 [INFO][4866] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:04.502814 containerd[2021]: 2025-01-17 12:03:04.497 [INFO][4834] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Jan 17 12:03:04.504612 containerd[2021]: time="2025-01-17T12:03:04.504164643Z" level=info msg="TearDown network for sandbox \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\" successfully" Jan 17 12:03:04.504612 containerd[2021]: time="2025-01-17T12:03:04.504218979Z" level=info msg="StopPodSandbox for \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\" returns successfully" Jan 17 12:03:04.505897 containerd[2021]: time="2025-01-17T12:03:04.505847391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd85d45b4-n44qp,Uid:dfc7f148-4d51-48e8-9fb2-faa63e0fdc30,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:03:04.833805 systemd-networkd[1933]: calid7e22fe52c5: Link UP Jan 17 12:03:04.841314 systemd-networkd[1933]: calid7e22fe52c5: Gained carrier Jan 17 12:03:04.884474 containerd[2021]: 2025-01-17 12:03:04.528 [INFO][4877] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:03:04.884474 containerd[2021]: 2025-01-17 12:03:04.562 [INFO][4877] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0 csi-node-driver- calico-system 0b585308-aaed-4559-ba72-c781c44b8b0e 752 0 2025-01-17 12:02:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-30-222 csi-node-driver-bgqbn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid7e22fe52c5 [] []}} ContainerID="429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335" Namespace="calico-system" Pod="csi-node-driver-bgqbn" WorkloadEndpoint="ip--172--31--30--222-k8s-csi--node--driver--bgqbn-" Jan 17 12:03:04.884474 containerd[2021]: 2025-01-17 12:03:04.562 [INFO][4877] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335" Namespace="calico-system" Pod="csi-node-driver-bgqbn" WorkloadEndpoint="ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0" Jan 17 12:03:04.884474 containerd[2021]: 2025-01-17 12:03:04.682 [INFO][4911] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335" HandleID="k8s-pod-network.429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335" Workload="ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0" Jan 17 12:03:04.884474 containerd[2021]: 2025-01-17 12:03:04.721 [INFO][4911] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335" HandleID="k8s-pod-network.429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335" Workload="ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003179f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-222", "pod":"csi-node-driver-bgqbn", "timestamp":"2025-01-17 12:03:04.682273563 +0000 UTC"}, Hostname:"ip-172-31-30-222", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:03:04.884474 containerd[2021]: 2025-01-17 12:03:04.721 [INFO][4911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:04.884474 containerd[2021]: 2025-01-17 12:03:04.721 [INFO][4911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:04.884474 containerd[2021]: 2025-01-17 12:03:04.722 [INFO][4911] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-222' Jan 17 12:03:04.884474 containerd[2021]: 2025-01-17 12:03:04.726 [INFO][4911] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335" host="ip-172-31-30-222" Jan 17 12:03:04.884474 containerd[2021]: 2025-01-17 12:03:04.735 [INFO][4911] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-222" Jan 17 12:03:04.884474 containerd[2021]: 2025-01-17 12:03:04.747 [INFO][4911] ipam/ipam.go 489: Trying affinity for 192.168.94.64/26 host="ip-172-31-30-222" Jan 17 12:03:04.884474 containerd[2021]: 2025-01-17 12:03:04.751 [INFO][4911] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.64/26 host="ip-172-31-30-222" Jan 17 12:03:04.884474 containerd[2021]: 2025-01-17 12:03:04.758 [INFO][4911] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ip-172-31-30-222" Jan 17 12:03:04.884474 containerd[2021]: 2025-01-17 12:03:04.758 [INFO][4911] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335" host="ip-172-31-30-222" Jan 17 12:03:04.884474 containerd[2021]: 2025-01-17 12:03:04.763 [INFO][4911] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335 Jan 17 12:03:04.884474 containerd[2021]: 2025-01-17 12:03:04.774 [INFO][4911] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335" host="ip-172-31-30-222" Jan 17 12:03:04.884474 containerd[2021]: 2025-01-17 12:03:04.799 [INFO][4911] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.66/26] block=192.168.94.64/26 handle="k8s-pod-network.429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335" host="ip-172-31-30-222" Jan 17 12:03:04.884474 containerd[2021]: 2025-01-17 12:03:04.801 [INFO][4911] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.66/26] handle="k8s-pod-network.429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335" host="ip-172-31-30-222" Jan 17 12:03:04.884474 containerd[2021]: 2025-01-17 12:03:04.801 [INFO][4911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
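The second allocation walks the same path with the result shifted by one: the csi-node-driver pod hits the same affine block and claims the next free address. Accounting so far:

    block 192.168.94.64/26 -> 2^(32-26) = 64 addresses
    claimed: 192.168.94.65 (calico-apiserver-5dd85d45b4-xw927)
             192.168.94.66 (csi-node-driver-bgqbn)

When a node's affine block fills, Calico IPAM claims another free /26 from the pool (or, depending on strict-affinity settings, borrows from non-affine blocks), so the /26 bounds how many pods share one contiguous range, not how many the node can run.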
Jan 17 12:03:04.884474 containerd[2021]: 2025-01-17 12:03:04.802 [INFO][4911] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.66/26] IPv6=[] ContainerID="429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335" HandleID="k8s-pod-network.429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335" Workload="ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0" Jan 17 12:03:04.885920 containerd[2021]: 2025-01-17 12:03:04.815 [INFO][4877] cni-plugin/k8s.go 386: Populated endpoint ContainerID="429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335" Namespace="calico-system" Pod="csi-node-driver-bgqbn" WorkloadEndpoint="ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0b585308-aaed-4559-ba72-c781c44b8b0e", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"", Pod:"csi-node-driver-bgqbn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid7e22fe52c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:04.885920 containerd[2021]: 2025-01-17 12:03:04.815 [INFO][4877] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.66/32] ContainerID="429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335" Namespace="calico-system" Pod="csi-node-driver-bgqbn" WorkloadEndpoint="ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0" Jan 17 12:03:04.885920 containerd[2021]: 2025-01-17 12:03:04.815 [INFO][4877] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7e22fe52c5 ContainerID="429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335" Namespace="calico-system" Pod="csi-node-driver-bgqbn" WorkloadEndpoint="ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0" Jan 17 12:03:04.885920 containerd[2021]: 2025-01-17 12:03:04.845 [INFO][4877] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335" Namespace="calico-system" Pod="csi-node-driver-bgqbn" WorkloadEndpoint="ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0" Jan 17 12:03:04.885920 containerd[2021]: 2025-01-17 12:03:04.846 [INFO][4877] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335" Namespace="calico-system" 
Pod="csi-node-driver-bgqbn" WorkloadEndpoint="ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0b585308-aaed-4559-ba72-c781c44b8b0e", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335", Pod:"csi-node-driver-bgqbn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid7e22fe52c5", MAC:"d2:f2:78:3d:f8:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:04.885920 containerd[2021]: 2025-01-17 12:03:04.874 [INFO][4877] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335" Namespace="calico-system" Pod="csi-node-driver-bgqbn" WorkloadEndpoint="ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0" Jan 17 12:03:04.948011 systemd-networkd[1933]: calie488db81b5a: Link UP Jan 17 12:03:04.953120 systemd-networkd[1933]: calie488db81b5a: Gained carrier Jan 17 12:03:04.966055 containerd[2021]: time="2025-01-17T12:03:04.962296757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:03:04.966055 containerd[2021]: time="2025-01-17T12:03:04.962406809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:03:04.966055 containerd[2021]: time="2025-01-17T12:03:04.962444585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:04.966055 containerd[2021]: time="2025-01-17T12:03:04.962609513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:04.988596 systemd-networkd[1933]: calic359e3322c7: Gained IPv6LL Jan 17 12:03:05.011501 containerd[2021]: 2025-01-17 12:03:04.629 [INFO][4900] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:03:05.011501 containerd[2021]: 2025-01-17 12:03:04.680 [INFO][4900] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0 calico-apiserver-5dd85d45b4- calico-apiserver dfc7f148-4d51-48e8-9fb2-faa63e0fdc30 753 0 2025-01-17 12:02:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dd85d45b4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-222 calico-apiserver-5dd85d45b4-n44qp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie488db81b5a [] []}} ContainerID="a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54" Namespace="calico-apiserver" Pod="calico-apiserver-5dd85d45b4-n44qp" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-" Jan 17 12:03:05.011501 containerd[2021]: 2025-01-17 12:03:04.681 [INFO][4900] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54" Namespace="calico-apiserver" Pod="calico-apiserver-5dd85d45b4-n44qp" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0" Jan 17 12:03:05.011501 containerd[2021]: 2025-01-17 12:03:04.775 [INFO][4922] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54" HandleID="k8s-pod-network.a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0" Jan 17 12:03:05.011501 containerd[2021]: 2025-01-17 12:03:04.826 [INFO][4922] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54" HandleID="k8s-pod-network.a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004da70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-222", "pod":"calico-apiserver-5dd85d45b4-n44qp", "timestamp":"2025-01-17 12:03:04.775605328 +0000 UTC"}, Hostname:"ip-172-31-30-222", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:03:05.011501 containerd[2021]: 2025-01-17 12:03:04.826 [INFO][4922] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:05.011501 containerd[2021]: 2025-01-17 12:03:04.828 [INFO][4922] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:03:05.011501 containerd[2021]: 2025-01-17 12:03:04.829 [INFO][4922] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-222' Jan 17 12:03:05.011501 containerd[2021]: 2025-01-17 12:03:04.841 [INFO][4922] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54" host="ip-172-31-30-222" Jan 17 12:03:05.011501 containerd[2021]: 2025-01-17 12:03:04.864 [INFO][4922] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-222" Jan 17 12:03:05.011501 containerd[2021]: 2025-01-17 12:03:04.884 [INFO][4922] ipam/ipam.go 489: Trying affinity for 192.168.94.64/26 host="ip-172-31-30-222" Jan 17 12:03:05.011501 containerd[2021]: 2025-01-17 12:03:04.890 [INFO][4922] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.64/26 host="ip-172-31-30-222" Jan 17 12:03:05.011501 containerd[2021]: 2025-01-17 12:03:04.896 [INFO][4922] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ip-172-31-30-222" Jan 17 12:03:05.011501 containerd[2021]: 2025-01-17 12:03:04.896 [INFO][4922] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54" host="ip-172-31-30-222" Jan 17 12:03:05.011501 containerd[2021]: 2025-01-17 12:03:04.899 [INFO][4922] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54 Jan 17 12:03:05.011501 containerd[2021]: 2025-01-17 12:03:04.910 [INFO][4922] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54" host="ip-172-31-30-222" Jan 17 12:03:05.011501 containerd[2021]: 2025-01-17 12:03:04.922 [INFO][4922] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.67/26] block=192.168.94.64/26 handle="k8s-pod-network.a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54" host="ip-172-31-30-222" Jan 17 12:03:05.011501 containerd[2021]: 2025-01-17 12:03:04.922 [INFO][4922] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.67/26] handle="k8s-pod-network.a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54" host="ip-172-31-30-222" Jan 17 12:03:05.011501 containerd[2021]: 2025-01-17 12:03:04.923 [INFO][4922] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
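
Three CNI ADD handlers ([4911], [4917], [4922]) run concurrently here, and the host-wide lock serializes their claims: [4911] holds it from 12:03:04.721 to 12:03:04.801, [4922] from 12:03:04.828 to 12:03:04.923, while [4917] (further down) logs "About to acquire" at 12:03:04.840 but only acquires at 12:03:04.924, immediately after [4922] releases. That serialization is why the claims come out as distinct consecutive addresses. A tiny sketch of the same shape, with the lock modeled as an in-process mutex (the real lock's implementation is not shown in the log):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var (
            mu   sync.Mutex // stands in for the host-wide IPAM lock
            next = 66
            wg   sync.WaitGroup
        )
        for _, id := range []string{"[4911]", "[4917]", "[4922]"} {
            wg.Add(1)
            go func(id string) {
                defer wg.Done()
                mu.Lock() // "About to acquire..." -> "Acquired host-wide IPAM lock."
                ip := fmt.Sprintf("192.168.94.%d/26", next)
                next++
                mu.Unlock() // "Released host-wide IPAM lock."
                fmt.Println(id, "claimed", ip)
            }(id)
        }
        wg.Wait() // which handler gets which address depends on scheduling
    }
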
Jan 17 12:03:05.011501 containerd[2021]: 2025-01-17 12:03:04.923 [INFO][4922] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.67/26] IPv6=[] ContainerID="a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54" HandleID="k8s-pod-network.a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0" Jan 17 12:03:05.014267 containerd[2021]: 2025-01-17 12:03:04.935 [INFO][4900] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54" Namespace="calico-apiserver" Pod="calico-apiserver-5dd85d45b4-n44qp" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0", GenerateName:"calico-apiserver-5dd85d45b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"dfc7f148-4d51-48e8-9fb2-faa63e0fdc30", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd85d45b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"", Pod:"calico-apiserver-5dd85d45b4-n44qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie488db81b5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:05.014267 containerd[2021]: 2025-01-17 12:03:04.936 [INFO][4900] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.67/32] ContainerID="a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54" Namespace="calico-apiserver" Pod="calico-apiserver-5dd85d45b4-n44qp" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0" Jan 17 12:03:05.014267 containerd[2021]: 2025-01-17 12:03:04.936 [INFO][4900] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie488db81b5a ContainerID="a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54" Namespace="calico-apiserver" Pod="calico-apiserver-5dd85d45b4-n44qp" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0" Jan 17 12:03:05.014267 containerd[2021]: 2025-01-17 12:03:04.956 [INFO][4900] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54" Namespace="calico-apiserver" Pod="calico-apiserver-5dd85d45b4-n44qp" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0" Jan 17 12:03:05.014267 containerd[2021]: 2025-01-17 12:03:04.957 [INFO][4900] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54" Namespace="calico-apiserver" Pod="calico-apiserver-5dd85d45b4-n44qp" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0", GenerateName:"calico-apiserver-5dd85d45b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"dfc7f148-4d51-48e8-9fb2-faa63e0fdc30", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd85d45b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54", Pod:"calico-apiserver-5dd85d45b4-n44qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie488db81b5a", MAC:"a2:cf:61:23:34:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:05.014267 containerd[2021]: 2025-01-17 12:03:05.003 [INFO][4900] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54" Namespace="calico-apiserver" Pod="calico-apiserver-5dd85d45b4-n44qp" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0" Jan 17 12:03:05.029641 systemd[1]: Started cri-containerd-429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335.scope - libcontainer container 429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335. 
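
The systemd line above ("Started cri-containerd-<id>.scope - libcontainer container <id>") is the cgroup scope being created for the sandbox's runc shim; the "loading plugin io.containerd.*" messages nearby are that shim bootstrapping. Kubelet drives this through the CRI, but the same create-and-start lifecycle is visible through containerd's public Go client. A sketch under stated assumptions only: the standard socket path, the "k8s.io" namespace that CRI uses, a "demo" container ID, and an image reference borrowed from the pull entries later in this log:

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        // Kubernetes-managed containers live in containerd's "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        image, err := client.GetImage(ctx, "ghcr.io/flatcar/calico/apiserver:v3.29.1")
        if err != nil {
            log.Fatal(err) // assumes the image is already present on the node
        }

        // NewContainer + NewTask + Start is the client-side equivalent of the
        // "Started cri-containerd-<id>.scope - libcontainer container" step.
        container, err := client.NewContainer(ctx, "demo",
            containerd.WithNewSnapshot("demo-snap", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)))
        if err != nil {
            log.Fatal(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
    }
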
Jan 17 12:03:05.068101 containerd[2021]: time="2025-01-17T12:03:05.067408705Z" level=info msg="StopPodSandbox for \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\"" Jan 17 12:03:05.110720 systemd-networkd[1933]: cali4020e8bed3c: Link UP Jan 17 12:03:05.117810 systemd-networkd[1933]: cali4020e8bed3c: Gained carrier Jan 17 12:03:05.174129 containerd[2021]: 2025-01-17 12:03:04.580 [INFO][4889] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:03:05.174129 containerd[2021]: 2025-01-17 12:03:04.617 [INFO][4889] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0 coredns-76f75df574- kube-system f06e3531-08e0-4afd-9376-50b984ff63bd 751 0 2025-01-17 12:02:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-222 coredns-76f75df574-wmqjg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4020e8bed3c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49" Namespace="kube-system" Pod="coredns-76f75df574-wmqjg" WorkloadEndpoint="ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-" Jan 17 12:03:05.174129 containerd[2021]: 2025-01-17 12:03:04.617 [INFO][4889] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49" Namespace="kube-system" Pod="coredns-76f75df574-wmqjg" WorkloadEndpoint="ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0" Jan 17 12:03:05.174129 containerd[2021]: 2025-01-17 12:03:04.796 [INFO][4917] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49" HandleID="k8s-pod-network.27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0" Jan 17 12:03:05.174129 containerd[2021]: 2025-01-17 12:03:04.840 [INFO][4917] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49" HandleID="k8s-pod-network.27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003eba00), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-222", "pod":"coredns-76f75df574-wmqjg", "timestamp":"2025-01-17 12:03:04.795083248 +0000 UTC"}, Hostname:"ip-172-31-30-222", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:03:05.174129 containerd[2021]: 2025-01-17 12:03:04.840 [INFO][4917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:05.174129 containerd[2021]: 2025-01-17 12:03:04.924 [INFO][4917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:03:05.174129 containerd[2021]: 2025-01-17 12:03:04.925 [INFO][4917] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-222' Jan 17 12:03:05.174129 containerd[2021]: 2025-01-17 12:03:04.930 [INFO][4917] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49" host="ip-172-31-30-222" Jan 17 12:03:05.174129 containerd[2021]: 2025-01-17 12:03:04.948 [INFO][4917] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-222" Jan 17 12:03:05.174129 containerd[2021]: 2025-01-17 12:03:04.978 [INFO][4917] ipam/ipam.go 489: Trying affinity for 192.168.94.64/26 host="ip-172-31-30-222" Jan 17 12:03:05.174129 containerd[2021]: 2025-01-17 12:03:04.993 [INFO][4917] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.64/26 host="ip-172-31-30-222" Jan 17 12:03:05.174129 containerd[2021]: 2025-01-17 12:03:05.005 [INFO][4917] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ip-172-31-30-222" Jan 17 12:03:05.174129 containerd[2021]: 2025-01-17 12:03:05.005 [INFO][4917] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49" host="ip-172-31-30-222" Jan 17 12:03:05.174129 containerd[2021]: 2025-01-17 12:03:05.015 [INFO][4917] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49 Jan 17 12:03:05.174129 containerd[2021]: 2025-01-17 12:03:05.040 [INFO][4917] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49" host="ip-172-31-30-222" Jan 17 12:03:05.174129 containerd[2021]: 2025-01-17 12:03:05.070 [INFO][4917] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.68/26] block=192.168.94.64/26 handle="k8s-pod-network.27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49" host="ip-172-31-30-222" Jan 17 12:03:05.174129 containerd[2021]: 2025-01-17 12:03:05.070 [INFO][4917] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.68/26] handle="k8s-pod-network.27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49" host="ip-172-31-30-222" Jan 17 12:03:05.174129 containerd[2021]: 2025-01-17 12:03:05.070 [INFO][4917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
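
In the coredns endpoint dumps just below, the WorkloadEndpointPort values are printed in hex: Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the metrics port). A one-line check:

    package main

    import "fmt"

    func main() {
        fmt.Println(0x35, 0x23c1) // 53 9153
    }
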
Jan 17 12:03:05.174129 containerd[2021]: 2025-01-17 12:03:05.071 [INFO][4917] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.68/26] IPv6=[] ContainerID="27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49" HandleID="k8s-pod-network.27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0" Jan 17 12:03:05.177554 containerd[2021]: 2025-01-17 12:03:05.093 [INFO][4889] cni-plugin/k8s.go 386: Populated endpoint ContainerID="27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49" Namespace="kube-system" Pod="coredns-76f75df574-wmqjg" WorkloadEndpoint="ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f06e3531-08e0-4afd-9376-50b984ff63bd", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"", Pod:"coredns-76f75df574-wmqjg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4020e8bed3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:05.177554 containerd[2021]: 2025-01-17 12:03:05.094 [INFO][4889] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.68/32] ContainerID="27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49" Namespace="kube-system" Pod="coredns-76f75df574-wmqjg" WorkloadEndpoint="ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0" Jan 17 12:03:05.177554 containerd[2021]: 2025-01-17 12:03:05.094 [INFO][4889] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4020e8bed3c ContainerID="27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49" Namespace="kube-system" Pod="coredns-76f75df574-wmqjg" WorkloadEndpoint="ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0" Jan 17 12:03:05.177554 containerd[2021]: 2025-01-17 12:03:05.131 [INFO][4889] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49" Namespace="kube-system" Pod="coredns-76f75df574-wmqjg" 
WorkloadEndpoint="ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0" Jan 17 12:03:05.177554 containerd[2021]: 2025-01-17 12:03:05.142 [INFO][4889] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49" Namespace="kube-system" Pod="coredns-76f75df574-wmqjg" WorkloadEndpoint="ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f06e3531-08e0-4afd-9376-50b984ff63bd", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49", Pod:"coredns-76f75df574-wmqjg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4020e8bed3c", MAC:"de:6e:99:33:40:12", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:05.177554 containerd[2021]: 2025-01-17 12:03:05.162 [INFO][4889] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49" Namespace="kube-system" Pod="coredns-76f75df574-wmqjg" WorkloadEndpoint="ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0" Jan 17 12:03:05.232316 containerd[2021]: time="2025-01-17T12:03:05.232114814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:03:05.233800 containerd[2021]: time="2025-01-17T12:03:05.233706218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:03:05.234099 containerd[2021]: time="2025-01-17T12:03:05.234013226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:05.237215 containerd[2021]: time="2025-01-17T12:03:05.235980938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:05.299691 systemd[1]: run-netns-cni\x2d0089ddf1\x2df1de\x2d8514\x2da9d1\x2d1b384429ee54.mount: Deactivated successfully. Jan 17 12:03:05.343561 containerd[2021]: time="2025-01-17T12:03:05.342744315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bgqbn,Uid:0b585308-aaed-4559-ba72-c781c44b8b0e,Namespace:calico-system,Attempt:1,} returns sandbox id \"429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335\"" Jan 17 12:03:05.367246 containerd[2021]: time="2025-01-17T12:03:05.365513379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:03:05.367246 containerd[2021]: time="2025-01-17T12:03:05.365643807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:03:05.367246 containerd[2021]: time="2025-01-17T12:03:05.365680995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:05.367246 containerd[2021]: time="2025-01-17T12:03:05.365866935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:05.456550 systemd[1]: Started cri-containerd-a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54.scope - libcontainer container a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54. Jan 17 12:03:05.484467 systemd[1]: Started cri-containerd-27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49.scope - libcontainer container 27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49. Jan 17 12:03:05.589958 containerd[2021]: time="2025-01-17T12:03:05.589898692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wmqjg,Uid:f06e3531-08e0-4afd-9376-50b984ff63bd,Namespace:kube-system,Attempt:1,} returns sandbox id \"27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49\"" Jan 17 12:03:05.616605 containerd[2021]: time="2025-01-17T12:03:05.614625532Z" level=info msg="CreateContainer within sandbox \"27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:03:05.644163 containerd[2021]: time="2025-01-17T12:03:05.644082856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd85d45b4-n44qp,Uid:dfc7f148-4d51-48e8-9fb2-faa63e0fdc30,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54\"" Jan 17 12:03:05.644881 containerd[2021]: 2025-01-17 12:03:05.510 [INFO][5018] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Jan 17 12:03:05.644881 containerd[2021]: 2025-01-17 12:03:05.510 [INFO][5018] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" iface="eth0" netns="/var/run/netns/cni-37702a38-f52d-457d-de89-95996304217f" Jan 17 12:03:05.644881 containerd[2021]: 2025-01-17 12:03:05.511 [INFO][5018] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" iface="eth0" netns="/var/run/netns/cni-37702a38-f52d-457d-de89-95996304217f" Jan 17 12:03:05.644881 containerd[2021]: 2025-01-17 12:03:05.513 [INFO][5018] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" iface="eth0" netns="/var/run/netns/cni-37702a38-f52d-457d-de89-95996304217f" Jan 17 12:03:05.644881 containerd[2021]: 2025-01-17 12:03:05.513 [INFO][5018] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Jan 17 12:03:05.644881 containerd[2021]: 2025-01-17 12:03:05.519 [INFO][5018] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Jan 17 12:03:05.644881 containerd[2021]: 2025-01-17 12:03:05.603 [INFO][5111] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" HandleID="k8s-pod-network.0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Workload="ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0" Jan 17 12:03:05.644881 containerd[2021]: 2025-01-17 12:03:05.603 [INFO][5111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:05.644881 containerd[2021]: 2025-01-17 12:03:05.604 [INFO][5111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:05.644881 containerd[2021]: 2025-01-17 12:03:05.632 [WARNING][5111] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" HandleID="k8s-pod-network.0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Workload="ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0" Jan 17 12:03:05.644881 containerd[2021]: 2025-01-17 12:03:05.632 [INFO][5111] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" HandleID="k8s-pod-network.0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Workload="ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0" Jan 17 12:03:05.644881 containerd[2021]: 2025-01-17 12:03:05.635 [INFO][5111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:05.644881 containerd[2021]: 2025-01-17 12:03:05.638 [INFO][5018] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Jan 17 12:03:05.650213 containerd[2021]: time="2025-01-17T12:03:05.650145556Z" level=info msg="TearDown network for sandbox \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\" successfully" Jan 17 12:03:05.650514 containerd[2021]: time="2025-01-17T12:03:05.650200972Z" level=info msg="StopPodSandbox for \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\" returns successfully" Jan 17 12:03:05.652124 systemd[1]: run-netns-cni\x2d37702a38\x2df52d\x2d457d\x2dde89\x2d95996304217f.mount: Deactivated successfully. 
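
Note how tolerant the teardown above is: the workload's veth was already gone ("Nothing to do") and the IPAM handle had no allocation ("Asked to release address but it doesn't exist. Ignoring"). CNI DEL gets retried by the runtime, so every step has to treat already-cleaned-up state as success. A sketch of that error-swallowing shape, using a plain map as a hypothetical stand-in for the IPAM datastore:

    package main

    import "fmt"

    // releaseByHandle is a hypothetical stand-in for the IPAM release step.
    // Releasing an absent handle succeeds: the log's WARNING is emitted and
    // then ignored, never surfaced as a CNI DEL failure.
    func releaseByHandle(store map[string]string, handleID string) error {
        if _, ok := store[handleID]; !ok {
            return nil // already released: idempotent no-op
        }
        delete(store, handleID)
        return nil
    }

    func main() {
        store := map[string]string{"h1": "192.168.94.65"}
        fmt.Println(releaseByHandle(store, "h1")) // <nil> (released)
        fmt.Println(releaseByHandle(store, "h1")) // <nil> (repeat is a no-op)
    }
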
Jan 17 12:03:05.654841 containerd[2021]: time="2025-01-17T12:03:05.652996240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bb958bfbb-cc2sj,Uid:51ad13b0-e571-4bda-9060-30f841760976,Namespace:calico-system,Attempt:1,}" Jan 17 12:03:05.669851 containerd[2021]: time="2025-01-17T12:03:05.669736516Z" level=info msg="CreateContainer within sandbox \"27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"67a67914f6ec53e656926b069cb4557136503bb731914f20f597d65cca0f9781\"" Jan 17 12:03:05.671135 containerd[2021]: time="2025-01-17T12:03:05.670757896Z" level=info msg="StartContainer for \"67a67914f6ec53e656926b069cb4557136503bb731914f20f597d65cca0f9781\"" Jan 17 12:03:05.737377 systemd[1]: Started cri-containerd-67a67914f6ec53e656926b069cb4557136503bb731914f20f597d65cca0f9781.scope - libcontainer container 67a67914f6ec53e656926b069cb4557136503bb731914f20f597d65cca0f9781. Jan 17 12:03:05.824535 containerd[2021]: time="2025-01-17T12:03:05.824274461Z" level=info msg="StartContainer for \"67a67914f6ec53e656926b069cb4557136503bb731914f20f597d65cca0f9781\" returns successfully" Jan 17 12:03:05.971106 systemd-networkd[1933]: caliefd0b1742a9: Link UP Jan 17 12:03:05.973944 systemd-networkd[1933]: caliefd0b1742a9: Gained carrier Jan 17 12:03:06.005014 containerd[2021]: 2025-01-17 12:03:05.758 [INFO][5138] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 12:03:06.005014 containerd[2021]: 2025-01-17 12:03:05.784 [INFO][5138] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0 calico-kube-controllers-7bb958bfbb- calico-system 51ad13b0-e571-4bda-9060-30f841760976 767 0 2025-01-17 12:02:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bb958bfbb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-30-222 calico-kube-controllers-7bb958bfbb-cc2sj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliefd0b1742a9 [] []}} ContainerID="f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7" Namespace="calico-system" Pod="calico-kube-controllers-7bb958bfbb-cc2sj" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-" Jan 17 12:03:06.005014 containerd[2021]: 2025-01-17 12:03:05.784 [INFO][5138] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7" Namespace="calico-system" Pod="calico-kube-controllers-7bb958bfbb-cc2sj" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0" Jan 17 12:03:06.005014 containerd[2021]: 2025-01-17 12:03:05.872 [INFO][5171] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7" HandleID="k8s-pod-network.f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7" Workload="ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0" Jan 17 12:03:06.005014 containerd[2021]: 2025-01-17 12:03:05.900 [INFO][5171] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7" 
HandleID="k8s-pod-network.f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7" Workload="ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400029ee10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-222", "pod":"calico-kube-controllers-7bb958bfbb-cc2sj", "timestamp":"2025-01-17 12:03:05.872662673 +0000 UTC"}, Hostname:"ip-172-31-30-222", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:03:06.005014 containerd[2021]: 2025-01-17 12:03:05.901 [INFO][5171] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:06.005014 containerd[2021]: 2025-01-17 12:03:05.901 [INFO][5171] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:06.005014 containerd[2021]: 2025-01-17 12:03:05.901 [INFO][5171] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-222' Jan 17 12:03:06.005014 containerd[2021]: 2025-01-17 12:03:05.905 [INFO][5171] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7" host="ip-172-31-30-222" Jan 17 12:03:06.005014 containerd[2021]: 2025-01-17 12:03:05.925 [INFO][5171] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-222" Jan 17 12:03:06.005014 containerd[2021]: 2025-01-17 12:03:05.933 [INFO][5171] ipam/ipam.go 489: Trying affinity for 192.168.94.64/26 host="ip-172-31-30-222" Jan 17 12:03:06.005014 containerd[2021]: 2025-01-17 12:03:05.937 [INFO][5171] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.64/26 host="ip-172-31-30-222" Jan 17 12:03:06.005014 containerd[2021]: 2025-01-17 12:03:05.941 [INFO][5171] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ip-172-31-30-222" Jan 17 12:03:06.005014 containerd[2021]: 2025-01-17 12:03:05.941 [INFO][5171] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7" host="ip-172-31-30-222" Jan 17 12:03:06.005014 containerd[2021]: 2025-01-17 12:03:05.944 [INFO][5171] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7 Jan 17 12:03:06.005014 containerd[2021]: 2025-01-17 12:03:05.950 [INFO][5171] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7" host="ip-172-31-30-222" Jan 17 12:03:06.005014 containerd[2021]: 2025-01-17 12:03:05.961 [INFO][5171] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.69/26] block=192.168.94.64/26 handle="k8s-pod-network.f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7" host="ip-172-31-30-222" Jan 17 12:03:06.005014 containerd[2021]: 2025-01-17 12:03:05.961 [INFO][5171] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.69/26] handle="k8s-pod-network.f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7" host="ip-172-31-30-222" Jan 17 12:03:06.005014 containerd[2021]: 2025-01-17 12:03:05.961 [INFO][5171] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
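
Every handler in this section resolves the same affinity, 192.168.94.64/26, because that block is affine to ip-172-31-30-222; the claims so far are the consecutive addresses .66, .67, .68 and now .69. A /26 block gives the node room for 64 pod addresses:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, block, err := net.ParseCIDR("192.168.94.64/26")
        if err != nil {
            panic(err)
        }
        ones, bits := block.Mask.Size()
        fmt.Printf("%v holds %d addresses\n", block, 1<<(bits-ones)) // 64
    }
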
Jan 17 12:03:06.005014 containerd[2021]: 2025-01-17 12:03:05.961 [INFO][5171] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.69/26] IPv6=[] ContainerID="f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7" HandleID="k8s-pod-network.f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7" Workload="ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0" Jan 17 12:03:06.006764 containerd[2021]: 2025-01-17 12:03:05.965 [INFO][5138] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7" Namespace="calico-system" Pod="calico-kube-controllers-7bb958bfbb-cc2sj" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0", GenerateName:"calico-kube-controllers-7bb958bfbb-", Namespace:"calico-system", SelfLink:"", UID:"51ad13b0-e571-4bda-9060-30f841760976", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bb958bfbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"", Pod:"calico-kube-controllers-7bb958bfbb-cc2sj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefd0b1742a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:06.006764 containerd[2021]: 2025-01-17 12:03:05.965 [INFO][5138] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.69/32] ContainerID="f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7" Namespace="calico-system" Pod="calico-kube-controllers-7bb958bfbb-cc2sj" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0" Jan 17 12:03:06.006764 containerd[2021]: 2025-01-17 12:03:05.965 [INFO][5138] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefd0b1742a9 ContainerID="f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7" Namespace="calico-system" Pod="calico-kube-controllers-7bb958bfbb-cc2sj" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0" Jan 17 12:03:06.006764 containerd[2021]: 2025-01-17 12:03:05.973 [INFO][5138] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7" Namespace="calico-system" Pod="calico-kube-controllers-7bb958bfbb-cc2sj" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0" Jan 17 12:03:06.006764 containerd[2021]: 2025-01-17 12:03:05.975 [INFO][5138] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7" Namespace="calico-system" Pod="calico-kube-controllers-7bb958bfbb-cc2sj" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0", GenerateName:"calico-kube-controllers-7bb958bfbb-", Namespace:"calico-system", SelfLink:"", UID:"51ad13b0-e571-4bda-9060-30f841760976", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bb958bfbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7", Pod:"calico-kube-controllers-7bb958bfbb-cc2sj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefd0b1742a9", MAC:"1e:12:3e:b2:05:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:06.006764 containerd[2021]: 2025-01-17 12:03:06.000 [INFO][5138] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7" Namespace="calico-system" Pod="calico-kube-controllers-7bb958bfbb-cc2sj" WorkloadEndpoint="ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0" Jan 17 12:03:06.043248 containerd[2021]: time="2025-01-17T12:03:06.042806822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:03:06.043248 containerd[2021]: time="2025-01-17T12:03:06.042985334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:03:06.043248 containerd[2021]: time="2025-01-17T12:03:06.043064234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:06.043248 containerd[2021]: time="2025-01-17T12:03:06.043264034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:06.075355 systemd-networkd[1933]: calid7e22fe52c5: Gained IPv6LL Jan 17 12:03:06.075848 systemd-networkd[1933]: calie488db81b5a: Gained IPv6LL Jan 17 12:03:06.077306 systemd[1]: Started cri-containerd-f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7.scope - libcontainer container f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7. 
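
The "Gained IPv6LL" entries mean systemd-networkd observed a link-local IPv6 address on the host-side veth. The address itself is never logged, and the interface may be configured for a different interface-identifier mode, but the kernel's classic derivation is EUI-64 from the MAC: flip the universal/local bit of the first octet and splice ff:fe into the middle. Worked through for calid7e22fe52c5's MAC d2:f2:78:3d:f8:50 from the earlier endpoint dump, that would give fe80::d0f2:78ff:fe3d:f850:

    package main

    import (
        "fmt"
        "net"
        "net/netip"
    )

    // eui64LinkLocal derives the classic EUI-64 link-local address from a MAC.
    func eui64LinkLocal(mac net.HardwareAddr) netip.Addr {
        var b [16]byte
        b[0], b[1] = 0xfe, 0x80 // fe80::/64 link-local prefix
        b[8] = mac[0] ^ 0x02    // flip the universal/local bit
        b[9], b[10] = mac[1], mac[2]
        b[11], b[12] = 0xff, 0xfe // EUI-64 filler
        b[13], b[14], b[15] = mac[3], mac[4], mac[5]
        return netip.AddrFrom16(b)
    }

    func main() {
        mac, _ := net.ParseMAC("d2:f2:78:3d:f8:50") // calid7e22fe52c5, from the log
        fmt.Println(eui64LinkLocal(mac))            // fe80::d0f2:78ff:fe3d:f850
    }
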
Jan 17 12:03:06.153455 containerd[2021]: time="2025-01-17T12:03:06.153368859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bb958bfbb-cc2sj,Uid:51ad13b0-e571-4bda-9060-30f841760976,Namespace:calico-system,Attempt:1,} returns sandbox id \"f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7\"" Jan 17 12:03:06.553385 kubelet[3390]: I0117 12:03:06.553279 3390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-wmqjg" podStartSLOduration=37.553218449 podStartE2EDuration="37.553218449s" podCreationTimestamp="2025-01-17 12:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:03:06.541688201 +0000 UTC m=+49.746466556" watchObservedRunningTime="2025-01-17 12:03:06.553218449 +0000 UTC m=+49.757996780" Jan 17 12:03:06.870856 kubelet[3390]: I0117 12:03:06.870170 3390 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:03:07.101138 systemd-networkd[1933]: cali4020e8bed3c: Gained IPv6LL Jan 17 12:03:07.164726 systemd-networkd[1933]: caliefd0b1742a9: Gained IPv6LL Jan 17 12:03:07.735601 systemd[1]: Started sshd@9-172.31.30.222:22-139.178.68.195:50624.service - OpenSSH per-connection server daemon (139.178.68.195:50624). Jan 17 12:03:07.935793 sshd[5288]: Accepted publickey for core from 139.178.68.195 port 50624 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:07.939729 sshd[5288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:07.961678 systemd-logind[1994]: New session 10 of user core. Jan 17 12:03:07.968672 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:03:08.054270 containerd[2021]: time="2025-01-17T12:03:08.053170120Z" level=info msg="StopPodSandbox for \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\"" Jan 17 12:03:08.498202 sshd[5288]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:08.516124 systemd-logind[1994]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:03:08.517213 systemd[1]: sshd@9-172.31.30.222:22-139.178.68.195:50624.service: Deactivated successfully. Jan 17 12:03:08.525818 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:03:08.533302 systemd-logind[1994]: Removed session 10. 
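
The kubelet line above reports podStartSLOduration=37.553218449 for coredns-76f75df574-wmqjg, which is exactly watchObservedRunningTime (12:03:06.553218449) minus podCreationTimestamp (12:02:29, recorded at whole-second granularity); the zero-valued firstStartedPulling/lastFinishedPulling timestamps suggest no image pull was needed for this pod. The arithmetic, checked with the values from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05 -0700 MST" // Go's default time.String() layout
        created, _ := time.Parse(layout, "2025-01-17 12:02:29 +0000 UTC")
        observed, _ := time.Parse(layout, "2025-01-17 12:03:06.553218449 +0000 UTC")
        fmt.Println(observed.Sub(created)) // 37.553218449s
    }
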
Jan 17 12:03:08.564159 containerd[2021]: time="2025-01-17T12:03:08.564103879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:08.570241 containerd[2021]: time="2025-01-17T12:03:08.568511491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 17 12:03:08.570241 containerd[2021]: time="2025-01-17T12:03:08.568670911Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:08.579605 containerd[2021]: time="2025-01-17T12:03:08.579534055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:08.582111 containerd[2021]: time="2025-01-17T12:03:08.581961571Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 4.69999014s" Jan 17 12:03:08.582111 containerd[2021]: time="2025-01-17T12:03:08.582106831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 17 12:03:08.583203 containerd[2021]: time="2025-01-17T12:03:08.583134883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:03:08.593000 containerd[2021]: time="2025-01-17T12:03:08.592762015Z" level=info msg="CreateContainer within sandbox \"5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:03:08.654745 containerd[2021]: 2025-01-17 12:03:08.419 [INFO][5311] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Jan 17 12:03:08.654745 containerd[2021]: 2025-01-17 12:03:08.419 [INFO][5311] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" iface="eth0" netns="/var/run/netns/cni-a29c62dd-1005-b653-1c66-74b70f998295" Jan 17 12:03:08.654745 containerd[2021]: 2025-01-17 12:03:08.419 [INFO][5311] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" iface="eth0" netns="/var/run/netns/cni-a29c62dd-1005-b653-1c66-74b70f998295" Jan 17 12:03:08.654745 containerd[2021]: 2025-01-17 12:03:08.429 [INFO][5311] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" iface="eth0" netns="/var/run/netns/cni-a29c62dd-1005-b653-1c66-74b70f998295" Jan 17 12:03:08.654745 containerd[2021]: 2025-01-17 12:03:08.431 [INFO][5311] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Jan 17 12:03:08.654745 containerd[2021]: 2025-01-17 12:03:08.432 [INFO][5311] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Jan 17 12:03:08.654745 containerd[2021]: 2025-01-17 12:03:08.581 [INFO][5332] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" HandleID="k8s-pod-network.96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0" Jan 17 12:03:08.654745 containerd[2021]: 2025-01-17 12:03:08.582 [INFO][5332] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:08.654745 containerd[2021]: 2025-01-17 12:03:08.582 [INFO][5332] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:08.654745 containerd[2021]: 2025-01-17 12:03:08.617 [WARNING][5332] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" HandleID="k8s-pod-network.96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0" Jan 17 12:03:08.654745 containerd[2021]: 2025-01-17 12:03:08.617 [INFO][5332] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" HandleID="k8s-pod-network.96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0" Jan 17 12:03:08.654745 containerd[2021]: 2025-01-17 12:03:08.625 [INFO][5332] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:08.654745 containerd[2021]: 2025-01-17 12:03:08.641 [INFO][5311] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Jan 17 12:03:08.657807 containerd[2021]: time="2025-01-17T12:03:08.655006939Z" level=info msg="TearDown network for sandbox \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\" successfully" Jan 17 12:03:08.657807 containerd[2021]: time="2025-01-17T12:03:08.655133587Z" level=info msg="StopPodSandbox for \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\" returns successfully" Jan 17 12:03:08.659560 containerd[2021]: time="2025-01-17T12:03:08.659376739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m26cv,Uid:238a61a5-b6b5-4a74-b87d-37070ed73575,Namespace:kube-system,Attempt:1,}" Jan 17 12:03:08.675604 systemd[1]: run-netns-cni\x2da29c62dd\x2d1005\x2db653\x2d1c66\x2d74b70f998295.mount: Deactivated successfully. 
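
The ImageCreate/Pulled entries above record ghcr.io/flatcar/calico/apiserver:v3.29.1 being fetched (39298409 bytes read) and resolved to its repo digest in roughly 4.7 s. Through containerd's Go client the whole operation is a single Pull call; a sketch, assuming the standard socket and the "k8s.io" namespace:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        start := time.Now()
        // WithPullUnpack also unpacks the layers into a snapshot, ready to run.
        image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.29.1",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(image.Name(), "pulled in", time.Since(start)) // log shows ~4.7s
    }
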
Jan 17 12:03:08.706478 containerd[2021]: time="2025-01-17T12:03:08.706291891Z" level=info msg="CreateContainer within sandbox \"5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"383e1ca4c3f4c88b7ac3b9e6501ad76f306cc368882b4f326a6bae2ac5700244\"" Jan 17 12:03:08.709456 containerd[2021]: time="2025-01-17T12:03:08.707795155Z" level=info msg="StartContainer for \"383e1ca4c3f4c88b7ac3b9e6501ad76f306cc368882b4f326a6bae2ac5700244\"" Jan 17 12:03:08.737079 kernel: bpftool[5360]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:03:08.856344 systemd[1]: Started cri-containerd-383e1ca4c3f4c88b7ac3b9e6501ad76f306cc368882b4f326a6bae2ac5700244.scope - libcontainer container 383e1ca4c3f4c88b7ac3b9e6501ad76f306cc368882b4f326a6bae2ac5700244. Jan 17 12:03:09.089340 containerd[2021]: time="2025-01-17T12:03:09.086971205Z" level=info msg="StartContainer for \"383e1ca4c3f4c88b7ac3b9e6501ad76f306cc368882b4f326a6bae2ac5700244\" returns successfully" Jan 17 12:03:09.219672 systemd-networkd[1933]: cali5befe37be77: Link UP Jan 17 12:03:09.223965 systemd-networkd[1933]: cali5befe37be77: Gained carrier Jan 17 12:03:09.233794 (udev-worker)[5415]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:03:09.261275 containerd[2021]: 2025-01-17 12:03:08.909 [INFO][5361] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0 coredns-76f75df574- kube-system 238a61a5-b6b5-4a74-b87d-37070ed73575 834 0 2025-01-17 12:02:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-222 coredns-76f75df574-m26cv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5befe37be77 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f" Namespace="kube-system" Pod="coredns-76f75df574-m26cv" WorkloadEndpoint="ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-" Jan 17 12:03:09.261275 containerd[2021]: 2025-01-17 12:03:08.911 [INFO][5361] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f" Namespace="kube-system" Pod="coredns-76f75df574-m26cv" WorkloadEndpoint="ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0" Jan 17 12:03:09.261275 containerd[2021]: 2025-01-17 12:03:09.110 [INFO][5395] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f" HandleID="k8s-pod-network.c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0" Jan 17 12:03:09.261275 containerd[2021]: 2025-01-17 12:03:09.141 [INFO][5395] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f" HandleID="k8s-pod-network.c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a3c50), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-222", "pod":"coredns-76f75df574-m26cv", "timestamp":"2025-01-17 
12:03:09.110547449 +0000 UTC"}, Hostname:"ip-172-31-30-222", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:03:09.261275 containerd[2021]: 2025-01-17 12:03:09.142 [INFO][5395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:09.261275 containerd[2021]: 2025-01-17 12:03:09.142 [INFO][5395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:09.261275 containerd[2021]: 2025-01-17 12:03:09.142 [INFO][5395] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-222' Jan 17 12:03:09.261275 containerd[2021]: 2025-01-17 12:03:09.145 [INFO][5395] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f" host="ip-172-31-30-222" Jan 17 12:03:09.261275 containerd[2021]: 2025-01-17 12:03:09.155 [INFO][5395] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-222" Jan 17 12:03:09.261275 containerd[2021]: 2025-01-17 12:03:09.166 [INFO][5395] ipam/ipam.go 489: Trying affinity for 192.168.94.64/26 host="ip-172-31-30-222" Jan 17 12:03:09.261275 containerd[2021]: 2025-01-17 12:03:09.171 [INFO][5395] ipam/ipam.go 155: Attempting to load block cidr=192.168.94.64/26 host="ip-172-31-30-222" Jan 17 12:03:09.261275 containerd[2021]: 2025-01-17 12:03:09.177 [INFO][5395] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ip-172-31-30-222" Jan 17 12:03:09.261275 containerd[2021]: 2025-01-17 12:03:09.178 [INFO][5395] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f" host="ip-172-31-30-222" Jan 17 12:03:09.261275 containerd[2021]: 2025-01-17 12:03:09.182 [INFO][5395] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f Jan 17 12:03:09.261275 containerd[2021]: 2025-01-17 12:03:09.190 [INFO][5395] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f" host="ip-172-31-30-222" Jan 17 12:03:09.261275 containerd[2021]: 2025-01-17 12:03:09.205 [INFO][5395] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.94.70/26] block=192.168.94.64/26 handle="k8s-pod-network.c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f" host="ip-172-31-30-222" Jan 17 12:03:09.261275 containerd[2021]: 2025-01-17 12:03:09.205 [INFO][5395] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.70/26] handle="k8s-pod-network.c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f" host="ip-172-31-30-222" Jan 17 12:03:09.261275 containerd[2021]: 2025-01-17 12:03:09.206 [INFO][5395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:03:09.261275 containerd[2021]: 2025-01-17 12:03:09.206 [INFO][5395] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.70/26] IPv6=[] ContainerID="c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f" HandleID="k8s-pod-network.c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0" Jan 17 12:03:09.264314 containerd[2021]: 2025-01-17 12:03:09.211 [INFO][5361] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f" Namespace="kube-system" Pod="coredns-76f75df574-m26cv" WorkloadEndpoint="ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"238a61a5-b6b5-4a74-b87d-37070ed73575", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"", Pod:"coredns-76f75df574-m26cv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5befe37be77", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:09.264314 containerd[2021]: 2025-01-17 12:03:09.212 [INFO][5361] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.94.70/32] ContainerID="c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f" Namespace="kube-system" Pod="coredns-76f75df574-m26cv" WorkloadEndpoint="ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0" Jan 17 12:03:09.264314 containerd[2021]: 2025-01-17 12:03:09.212 [INFO][5361] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5befe37be77 ContainerID="c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f" Namespace="kube-system" Pod="coredns-76f75df574-m26cv" WorkloadEndpoint="ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0" Jan 17 12:03:09.264314 containerd[2021]: 2025-01-17 12:03:09.222 [INFO][5361] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f" Namespace="kube-system" Pod="coredns-76f75df574-m26cv" 
WorkloadEndpoint="ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0" Jan 17 12:03:09.264314 containerd[2021]: 2025-01-17 12:03:09.223 [INFO][5361] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f" Namespace="kube-system" Pod="coredns-76f75df574-m26cv" WorkloadEndpoint="ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"238a61a5-b6b5-4a74-b87d-37070ed73575", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f", Pod:"coredns-76f75df574-m26cv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5befe37be77", MAC:"6e:1a:bf:42:25:df", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:09.264314 containerd[2021]: 2025-01-17 12:03:09.250 [INFO][5361] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f" Namespace="kube-system" Pod="coredns-76f75df574-m26cv" WorkloadEndpoint="ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0" Jan 17 12:03:09.324252 containerd[2021]: time="2025-01-17T12:03:09.324000366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:03:09.324967 containerd[2021]: time="2025-01-17T12:03:09.324260778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:03:09.324967 containerd[2021]: time="2025-01-17T12:03:09.324441942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:09.325661 containerd[2021]: time="2025-01-17T12:03:09.325406730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:09.399390 systemd[1]: Started cri-containerd-c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f.scope - libcontainer container c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f. Jan 17 12:03:09.508617 containerd[2021]: time="2025-01-17T12:03:09.508448995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m26cv,Uid:238a61a5-b6b5-4a74-b87d-37070ed73575,Namespace:kube-system,Attempt:1,} returns sandbox id \"c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f\"" Jan 17 12:03:09.517129 containerd[2021]: time="2025-01-17T12:03:09.516924151Z" level=info msg="CreateContainer within sandbox \"c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:03:09.547389 (udev-worker)[5419]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:03:09.558086 systemd-networkd[1933]: vxlan.calico: Link UP Jan 17 12:03:09.558102 systemd-networkd[1933]: vxlan.calico: Gained carrier Jan 17 12:03:09.567138 containerd[2021]: time="2025-01-17T12:03:09.562850804Z" level=info msg="CreateContainer within sandbox \"c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9c269de769fd7bd26adff642d239a80f14dc0230a9f1cf51ecdb845392a5c19\"" Jan 17 12:03:09.570843 containerd[2021]: time="2025-01-17T12:03:09.570680360Z" level=info msg="StartContainer for \"f9c269de769fd7bd26adff642d239a80f14dc0230a9f1cf51ecdb845392a5c19\"" Jan 17 12:03:09.726375 systemd[1]: Started cri-containerd-f9c269de769fd7bd26adff642d239a80f14dc0230a9f1cf51ecdb845392a5c19.scope - libcontainer container f9c269de769fd7bd26adff642d239a80f14dc0230a9f1cf51ecdb845392a5c19. 
Jan 17 12:03:09.848476 containerd[2021]: time="2025-01-17T12:03:09.846782925Z" level=info msg="StartContainer for \"f9c269de769fd7bd26adff642d239a80f14dc0230a9f1cf51ecdb845392a5c19\" returns successfully" Jan 17 12:03:10.591235 kubelet[3390]: I0117 12:03:10.591197 3390 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:03:10.615003 kubelet[3390]: I0117 12:03:10.614924 3390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-m26cv" podStartSLOduration=41.614856645 podStartE2EDuration="41.614856645s" podCreationTimestamp="2025-01-17 12:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:03:10.613509837 +0000 UTC m=+53.818288192" watchObservedRunningTime="2025-01-17 12:03:10.614856645 +0000 UTC m=+53.819634964" Jan 17 12:03:10.615003 kubelet[3390]: I0117 12:03:10.615155 3390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dd85d45b4-xw927" podStartSLOduration=26.913713965 podStartE2EDuration="31.615098493s" podCreationTimestamp="2025-01-17 12:02:39 +0000 UTC" firstStartedPulling="2025-01-17 12:03:03.881175831 +0000 UTC m=+47.085954162" lastFinishedPulling="2025-01-17 12:03:08.582560299 +0000 UTC m=+51.787338690" observedRunningTime="2025-01-17 12:03:09.614401436 +0000 UTC m=+52.819179779" watchObservedRunningTime="2025-01-17 12:03:10.615098493 +0000 UTC m=+53.819876824" Jan 17 12:03:11.131326 systemd-networkd[1933]: cali5befe37be77: Gained IPv6LL Jan 17 12:03:11.276089 containerd[2021]: time="2025-01-17T12:03:11.275430800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:11.278652 containerd[2021]: time="2025-01-17T12:03:11.278594192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 17 12:03:11.280526 containerd[2021]: time="2025-01-17T12:03:11.280464860Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:11.293079 containerd[2021]: time="2025-01-17T12:03:11.289350200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:11.293079 containerd[2021]: time="2025-01-17T12:03:11.290942120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 2.707741741s" Jan 17 12:03:11.293079 containerd[2021]: time="2025-01-17T12:03:11.290997416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 17 12:03:11.295355 containerd[2021]: time="2025-01-17T12:03:11.295283288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:03:11.298620 containerd[2021]: time="2025-01-17T12:03:11.298527848Z" level=info msg="CreateContainer within sandbox 
\"429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:03:11.335084 containerd[2021]: time="2025-01-17T12:03:11.331960232Z" level=info msg="CreateContainer within sandbox \"429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bf8e1b845f3d472c7fb2603872bfe9d7c690056a87f0793f8db0666939752645\"" Jan 17 12:03:11.336386 containerd[2021]: time="2025-01-17T12:03:11.336322484Z" level=info msg="StartContainer for \"bf8e1b845f3d472c7fb2603872bfe9d7c690056a87f0793f8db0666939752645\"" Jan 17 12:03:11.444960 systemd[1]: Started cri-containerd-bf8e1b845f3d472c7fb2603872bfe9d7c690056a87f0793f8db0666939752645.scope - libcontainer container bf8e1b845f3d472c7fb2603872bfe9d7c690056a87f0793f8db0666939752645. Jan 17 12:03:11.579865 systemd-networkd[1933]: vxlan.calico: Gained IPv6LL Jan 17 12:03:11.581531 containerd[2021]: time="2025-01-17T12:03:11.580361578Z" level=info msg="StartContainer for \"bf8e1b845f3d472c7fb2603872bfe9d7c690056a87f0793f8db0666939752645\" returns successfully" Jan 17 12:03:11.632281 containerd[2021]: time="2025-01-17T12:03:11.632056618Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:11.634009 containerd[2021]: time="2025-01-17T12:03:11.633358150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:03:11.645303 containerd[2021]: time="2025-01-17T12:03:11.645174490Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 349.80917ms" Jan 17 12:03:11.645303 containerd[2021]: time="2025-01-17T12:03:11.645240958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 17 12:03:11.647047 containerd[2021]: time="2025-01-17T12:03:11.646228738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:03:11.650824 containerd[2021]: time="2025-01-17T12:03:11.650734534Z" level=info msg="CreateContainer within sandbox \"a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:03:11.683857 containerd[2021]: time="2025-01-17T12:03:11.683680606Z" level=info msg="CreateContainer within sandbox \"a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c6ee352814fcf4011df7ac07ab1d31be807bc5f679f82cc0ae1ac238b4ddb575\"" Jan 17 12:03:11.687264 containerd[2021]: time="2025-01-17T12:03:11.687183910Z" level=info msg="StartContainer for \"c6ee352814fcf4011df7ac07ab1d31be807bc5f679f82cc0ae1ac238b4ddb575\"" Jan 17 12:03:11.776402 systemd[1]: Started cri-containerd-c6ee352814fcf4011df7ac07ab1d31be807bc5f679f82cc0ae1ac238b4ddb575.scope - libcontainer container c6ee352814fcf4011df7ac07ab1d31be807bc5f679f82cc0ae1ac238b4ddb575. 
Jan 17 12:03:11.877543 containerd[2021]: time="2025-01-17T12:03:11.876391415Z" level=info msg="StartContainer for \"c6ee352814fcf4011df7ac07ab1d31be807bc5f679f82cc0ae1ac238b4ddb575\" returns successfully" Jan 17 12:03:13.548506 systemd[1]: Started sshd@10-172.31.30.222:22-139.178.68.195:50634.service - OpenSSH per-connection server daemon (139.178.68.195:50634). Jan 17 12:03:13.611931 kubelet[3390]: I0117 12:03:13.611874 3390 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:03:13.736652 sshd[5669]: Accepted publickey for core from 139.178.68.195 port 50634 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:13.741953 sshd[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:13.754459 systemd-logind[1994]: New session 11 of user core. Jan 17 12:03:13.760323 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:03:14.056421 sshd[5669]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:14.061899 systemd-logind[1994]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:03:14.063792 systemd[1]: sshd@10-172.31.30.222:22-139.178.68.195:50634.service: Deactivated successfully. Jan 17 12:03:14.068630 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:03:14.074260 systemd-logind[1994]: Removed session 11. Jan 17 12:03:14.503197 ntpd[1987]: Listen normally on 7 vxlan.calico 192.168.94.64:123 Jan 17 12:03:14.504091 ntpd[1987]: 17 Jan 12:03:14 ntpd[1987]: Listen normally on 7 vxlan.calico 192.168.94.64:123 Jan 17 12:03:14.504091 ntpd[1987]: 17 Jan 12:03:14 ntpd[1987]: Listen normally on 8 calic359e3322c7 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 17 12:03:14.504091 ntpd[1987]: 17 Jan 12:03:14 ntpd[1987]: Listen normally on 9 calid7e22fe52c5 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 17 12:03:14.504091 ntpd[1987]: 17 Jan 12:03:14 ntpd[1987]: Listen normally on 10 calie488db81b5a [fe80::ecee:eeff:feee:eeee%6]:123 Jan 17 12:03:14.504091 ntpd[1987]: 17 Jan 12:03:14 ntpd[1987]: Listen normally on 11 cali4020e8bed3c [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 12:03:14.504091 ntpd[1987]: 17 Jan 12:03:14 ntpd[1987]: Listen normally on 12 caliefd0b1742a9 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 12:03:14.503659 ntpd[1987]: Listen normally on 8 calic359e3322c7 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 17 12:03:14.504738 ntpd[1987]: 17 Jan 12:03:14 ntpd[1987]: Listen normally on 13 cali5befe37be77 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 12:03:14.504738 ntpd[1987]: 17 Jan 12:03:14 ntpd[1987]: Listen normally on 14 vxlan.calico [fe80::649b:9aff:fe76:4717%10]:123 Jan 17 12:03:14.503750 ntpd[1987]: Listen normally on 9 calid7e22fe52c5 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 17 12:03:14.503819 ntpd[1987]: Listen normally on 10 calie488db81b5a [fe80::ecee:eeff:feee:eeee%6]:123 Jan 17 12:03:14.503906 ntpd[1987]: Listen normally on 11 cali4020e8bed3c [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 12:03:14.503986 ntpd[1987]: Listen normally on 12 caliefd0b1742a9 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 12:03:14.504094 ntpd[1987]: Listen normally on 13 cali5befe37be77 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 12:03:14.504187 ntpd[1987]: Listen normally on 14 vxlan.calico [fe80::649b:9aff:fe76:4717%10]:123 Jan 17 12:03:15.227183 containerd[2021]: time="2025-01-17T12:03:15.226333788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 17 12:03:15.227183 containerd[2021]: time="2025-01-17T12:03:15.226364712Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:15.230533 containerd[2021]: time="2025-01-17T12:03:15.230457756Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:15.231940 containerd[2021]: time="2025-01-17T12:03:15.231821748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:15.234467 containerd[2021]: time="2025-01-17T12:03:15.233504844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 3.587214378s" Jan 17 12:03:15.234467 containerd[2021]: time="2025-01-17T12:03:15.233561880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 17 12:03:15.236389 containerd[2021]: time="2025-01-17T12:03:15.236312760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:03:15.276256 containerd[2021]: time="2025-01-17T12:03:15.276187272Z" level=info msg="CreateContainer within sandbox \"f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:03:15.310185 containerd[2021]: time="2025-01-17T12:03:15.310089780Z" level=info msg="CreateContainer within sandbox \"f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f3dd846c6ea2fec54f0d1bd5e4e3cea309925649ad5233c0d88ea9defeed5f2c\"" Jan 17 12:03:15.312290 containerd[2021]: time="2025-01-17T12:03:15.311384496Z" level=info msg="StartContainer for \"f3dd846c6ea2fec54f0d1bd5e4e3cea309925649ad5233c0d88ea9defeed5f2c\"" Jan 17 12:03:15.379797 systemd[1]: Started cri-containerd-f3dd846c6ea2fec54f0d1bd5e4e3cea309925649ad5233c0d88ea9defeed5f2c.scope - libcontainer container f3dd846c6ea2fec54f0d1bd5e4e3cea309925649ad5233c0d88ea9defeed5f2c. 
Jan 17 12:03:15.489272 containerd[2021]: time="2025-01-17T12:03:15.489086053Z" level=info msg="StartContainer for \"f3dd846c6ea2fec54f0d1bd5e4e3cea309925649ad5233c0d88ea9defeed5f2c\" returns successfully" Jan 17 12:03:15.677880 kubelet[3390]: I0117 12:03:15.677775 3390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dd85d45b4-n44qp" podStartSLOduration=30.6878612 podStartE2EDuration="36.677708114s" podCreationTimestamp="2025-01-17 12:02:39 +0000 UTC" firstStartedPulling="2025-01-17 12:03:05.656092456 +0000 UTC m=+48.860870787" lastFinishedPulling="2025-01-17 12:03:11.645939358 +0000 UTC m=+54.850717701" observedRunningTime="2025-01-17 12:03:12.632138963 +0000 UTC m=+55.836917330" watchObservedRunningTime="2025-01-17 12:03:15.677708114 +0000 UTC m=+58.882486433" Jan 17 12:03:15.759296 kubelet[3390]: I0117 12:03:15.759101 3390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7bb958bfbb-cc2sj" podStartSLOduration=26.682111913 podStartE2EDuration="35.758991782s" podCreationTimestamp="2025-01-17 12:02:40 +0000 UTC" firstStartedPulling="2025-01-17 12:03:06.157102479 +0000 UTC m=+49.361880810" lastFinishedPulling="2025-01-17 12:03:15.233982348 +0000 UTC m=+58.438760679" observedRunningTime="2025-01-17 12:03:15.681880358 +0000 UTC m=+58.886658701" watchObservedRunningTime="2025-01-17 12:03:15.758991782 +0000 UTC m=+58.963770113" Jan 17 12:03:16.729617 containerd[2021]: time="2025-01-17T12:03:16.729549771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:16.731049 containerd[2021]: time="2025-01-17T12:03:16.730964835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 17 12:03:16.731986 containerd[2021]: time="2025-01-17T12:03:16.731890839Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:16.737228 containerd[2021]: time="2025-01-17T12:03:16.737135343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:16.738771 containerd[2021]: time="2025-01-17T12:03:16.738539151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.502154979s" Jan 17 12:03:16.738771 containerd[2021]: time="2025-01-17T12:03:16.738601299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 17 12:03:16.742958 containerd[2021]: time="2025-01-17T12:03:16.742868727Z" level=info msg="CreateContainer within sandbox \"429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:03:16.768670 containerd[2021]: 
time="2025-01-17T12:03:16.768342327Z" level=info msg="CreateContainer within sandbox \"429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d1e0ef507ab0ff291ddb8377ccf5d4d55fda1c609b8542e53c66680ac6c47b13\"" Jan 17 12:03:16.772261 containerd[2021]: time="2025-01-17T12:03:16.772118631Z" level=info msg="StartContainer for \"d1e0ef507ab0ff291ddb8377ccf5d4d55fda1c609b8542e53c66680ac6c47b13\"" Jan 17 12:03:16.836348 systemd[1]: Started cri-containerd-d1e0ef507ab0ff291ddb8377ccf5d4d55fda1c609b8542e53c66680ac6c47b13.scope - libcontainer container d1e0ef507ab0ff291ddb8377ccf5d4d55fda1c609b8542e53c66680ac6c47b13. Jan 17 12:03:16.890557 containerd[2021]: time="2025-01-17T12:03:16.889887016Z" level=info msg="StartContainer for \"d1e0ef507ab0ff291ddb8377ccf5d4d55fda1c609b8542e53c66680ac6c47b13\" returns successfully" Jan 17 12:03:16.971978 containerd[2021]: time="2025-01-17T12:03:16.971873824Z" level=info msg="StopPodSandbox for \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\"" Jan 17 12:03:17.118240 containerd[2021]: 2025-01-17 12:03:17.045 [WARNING][5802] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0b585308-aaed-4559-ba72-c781c44b8b0e", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335", Pod:"csi-node-driver-bgqbn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid7e22fe52c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:17.118240 containerd[2021]: 2025-01-17 12:03:17.045 [INFO][5802] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Jan 17 12:03:17.118240 containerd[2021]: 2025-01-17 12:03:17.045 [INFO][5802] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" iface="eth0" netns="" Jan 17 12:03:17.118240 containerd[2021]: 2025-01-17 12:03:17.045 [INFO][5802] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Jan 17 12:03:17.118240 containerd[2021]: 2025-01-17 12:03:17.045 [INFO][5802] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Jan 17 12:03:17.118240 containerd[2021]: 2025-01-17 12:03:17.096 [INFO][5808] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" HandleID="k8s-pod-network.4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Workload="ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0" Jan 17 12:03:17.118240 containerd[2021]: 2025-01-17 12:03:17.097 [INFO][5808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:17.118240 containerd[2021]: 2025-01-17 12:03:17.097 [INFO][5808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:17.118240 containerd[2021]: 2025-01-17 12:03:17.109 [WARNING][5808] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" HandleID="k8s-pod-network.4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Workload="ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0" Jan 17 12:03:17.118240 containerd[2021]: 2025-01-17 12:03:17.110 [INFO][5808] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" HandleID="k8s-pod-network.4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Workload="ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0" Jan 17 12:03:17.118240 containerd[2021]: 2025-01-17 12:03:17.112 [INFO][5808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:17.118240 containerd[2021]: 2025-01-17 12:03:17.115 [INFO][5802] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Jan 17 12:03:17.119349 containerd[2021]: time="2025-01-17T12:03:17.118197721Z" level=info msg="TearDown network for sandbox \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\" successfully" Jan 17 12:03:17.119349 containerd[2021]: time="2025-01-17T12:03:17.119165797Z" level=info msg="StopPodSandbox for \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\" returns successfully" Jan 17 12:03:17.121188 containerd[2021]: time="2025-01-17T12:03:17.120678781Z" level=info msg="RemovePodSandbox for \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\"" Jan 17 12:03:17.121188 containerd[2021]: time="2025-01-17T12:03:17.120755917Z" level=info msg="Forcibly stopping sandbox \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\"" Jan 17 12:03:17.266010 containerd[2021]: 2025-01-17 12:03:17.204 [WARNING][5828] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0b585308-aaed-4559-ba72-c781c44b8b0e", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"429236ecaf88851018bb0d9d94e94b8d04480cd0c8824a22cdfa70ca793c6335", Pod:"csi-node-driver-bgqbn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid7e22fe52c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:17.266010 containerd[2021]: 2025-01-17 12:03:17.204 [INFO][5828] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Jan 17 12:03:17.266010 containerd[2021]: 2025-01-17 12:03:17.204 [INFO][5828] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" iface="eth0" netns="" Jan 17 12:03:17.266010 containerd[2021]: 2025-01-17 12:03:17.204 [INFO][5828] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Jan 17 12:03:17.266010 containerd[2021]: 2025-01-17 12:03:17.204 [INFO][5828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Jan 17 12:03:17.266010 containerd[2021]: 2025-01-17 12:03:17.243 [INFO][5834] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" HandleID="k8s-pod-network.4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Workload="ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0" Jan 17 12:03:17.266010 containerd[2021]: 2025-01-17 12:03:17.244 [INFO][5834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:17.266010 containerd[2021]: 2025-01-17 12:03:17.244 [INFO][5834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:17.266010 containerd[2021]: 2025-01-17 12:03:17.257 [WARNING][5834] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" HandleID="k8s-pod-network.4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Workload="ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0" Jan 17 12:03:17.266010 containerd[2021]: 2025-01-17 12:03:17.257 [INFO][5834] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" HandleID="k8s-pod-network.4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Workload="ip--172--31--30--222-k8s-csi--node--driver--bgqbn-eth0" Jan 17 12:03:17.266010 containerd[2021]: 2025-01-17 12:03:17.261 [INFO][5834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:17.266010 containerd[2021]: 2025-01-17 12:03:17.263 [INFO][5828] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8" Jan 17 12:03:17.266872 containerd[2021]: time="2025-01-17T12:03:17.266643734Z" level=info msg="TearDown network for sandbox \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\" successfully" Jan 17 12:03:17.271605 containerd[2021]: time="2025-01-17T12:03:17.271542506Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:03:17.271911 containerd[2021]: time="2025-01-17T12:03:17.271651334Z" level=info msg="RemovePodSandbox \"4a0394130f2c2f70878bb54650dc14e2cec96a70e3281ef5271b18f382f213a8\" returns successfully" Jan 17 12:03:17.272973 containerd[2021]: time="2025-01-17T12:03:17.272560430Z" level=info msg="StopPodSandbox for \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\"" Jan 17 12:03:17.290537 kubelet[3390]: I0117 12:03:17.289779 3390 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:03:17.290537 kubelet[3390]: I0117 12:03:17.289847 3390 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:03:17.501079 containerd[2021]: 2025-01-17 12:03:17.402 [WARNING][5852] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0", GenerateName:"calico-apiserver-5dd85d45b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"dfc7f148-4d51-48e8-9fb2-faa63e0fdc30", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd85d45b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54", Pod:"calico-apiserver-5dd85d45b4-n44qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie488db81b5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:17.501079 containerd[2021]: 2025-01-17 12:03:17.403 [INFO][5852] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Jan 17 12:03:17.501079 containerd[2021]: 2025-01-17 12:03:17.403 [INFO][5852] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" iface="eth0" netns="" Jan 17 12:03:17.501079 containerd[2021]: 2025-01-17 12:03:17.403 [INFO][5852] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Jan 17 12:03:17.501079 containerd[2021]: 2025-01-17 12:03:17.403 [INFO][5852] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Jan 17 12:03:17.501079 containerd[2021]: 2025-01-17 12:03:17.478 [INFO][5858] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" HandleID="k8s-pod-network.499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0" Jan 17 12:03:17.501079 containerd[2021]: 2025-01-17 12:03:17.478 [INFO][5858] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:17.501079 containerd[2021]: 2025-01-17 12:03:17.479 [INFO][5858] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:17.501079 containerd[2021]: 2025-01-17 12:03:17.493 [WARNING][5858] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" HandleID="k8s-pod-network.499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0" Jan 17 12:03:17.501079 containerd[2021]: 2025-01-17 12:03:17.493 [INFO][5858] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" HandleID="k8s-pod-network.499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0" Jan 17 12:03:17.501079 containerd[2021]: 2025-01-17 12:03:17.495 [INFO][5858] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:17.501079 containerd[2021]: 2025-01-17 12:03:17.498 [INFO][5852] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Jan 17 12:03:17.501079 containerd[2021]: time="2025-01-17T12:03:17.500918751Z" level=info msg="TearDown network for sandbox \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\" successfully" Jan 17 12:03:17.501079 containerd[2021]: time="2025-01-17T12:03:17.500959275Z" level=info msg="StopPodSandbox for \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\" returns successfully" Jan 17 12:03:17.502985 containerd[2021]: time="2025-01-17T12:03:17.502935987Z" level=info msg="RemovePodSandbox for \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\"" Jan 17 12:03:17.503169 containerd[2021]: time="2025-01-17T12:03:17.502994415Z" level=info msg="Forcibly stopping sandbox \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\"" Jan 17 12:03:17.637555 containerd[2021]: 2025-01-17 12:03:17.572 [WARNING][5876] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0", GenerateName:"calico-apiserver-5dd85d45b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"dfc7f148-4d51-48e8-9fb2-faa63e0fdc30", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd85d45b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"a3bcfb27e73958bd77938fba33718873f03c6a1622b1559b3c41b0b00a700d54", Pod:"calico-apiserver-5dd85d45b4-n44qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie488db81b5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:17.637555 containerd[2021]: 2025-01-17 12:03:17.572 [INFO][5876] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Jan 17 12:03:17.637555 containerd[2021]: 2025-01-17 12:03:17.572 [INFO][5876] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" iface="eth0" netns="" Jan 17 12:03:17.637555 containerd[2021]: 2025-01-17 12:03:17.572 [INFO][5876] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Jan 17 12:03:17.637555 containerd[2021]: 2025-01-17 12:03:17.572 [INFO][5876] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Jan 17 12:03:17.637555 containerd[2021]: 2025-01-17 12:03:17.615 [INFO][5882] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" HandleID="k8s-pod-network.499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0" Jan 17 12:03:17.637555 containerd[2021]: 2025-01-17 12:03:17.616 [INFO][5882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:17.637555 containerd[2021]: 2025-01-17 12:03:17.616 [INFO][5882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:17.637555 containerd[2021]: 2025-01-17 12:03:17.629 [WARNING][5882] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" HandleID="k8s-pod-network.499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0" Jan 17 12:03:17.637555 containerd[2021]: 2025-01-17 12:03:17.629 [INFO][5882] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" HandleID="k8s-pod-network.499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--n44qp-eth0" Jan 17 12:03:17.637555 containerd[2021]: 2025-01-17 12:03:17.632 [INFO][5882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:17.637555 containerd[2021]: 2025-01-17 12:03:17.634 [INFO][5876] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6" Jan 17 12:03:17.638409 containerd[2021]: time="2025-01-17T12:03:17.637599940Z" level=info msg="TearDown network for sandbox \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\" successfully" Jan 17 12:03:17.644966 containerd[2021]: time="2025-01-17T12:03:17.644809396Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:03:17.645762 containerd[2021]: time="2025-01-17T12:03:17.645261364Z" level=info msg="RemovePodSandbox \"499cdb5b08975b28395b52b06d054e14ca7f0d783c1be5ac60e1c86dbff0beb6\" returns successfully" Jan 17 12:03:17.646417 containerd[2021]: time="2025-01-17T12:03:17.646181320Z" level=info msg="StopPodSandbox for \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\"" Jan 17 12:03:17.720329 kubelet[3390]: I0117 12:03:17.718852 3390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-bgqbn" podStartSLOduration=26.327125812 podStartE2EDuration="37.718664224s" podCreationTimestamp="2025-01-17 12:02:40 +0000 UTC" firstStartedPulling="2025-01-17 12:03:05.347798259 +0000 UTC m=+48.552576578" lastFinishedPulling="2025-01-17 12:03:16.739336659 +0000 UTC m=+59.944114990" observedRunningTime="2025-01-17 12:03:17.715626532 +0000 UTC m=+60.920404887" watchObservedRunningTime="2025-01-17 12:03:17.718664224 +0000 UTC m=+60.923442651" Jan 17 12:03:17.941768 containerd[2021]: 2025-01-17 12:03:17.803 [WARNING][5900] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0", GenerateName:"calico-kube-controllers-7bb958bfbb-", Namespace:"calico-system", SelfLink:"", UID:"51ad13b0-e571-4bda-9060-30f841760976", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bb958bfbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7", Pod:"calico-kube-controllers-7bb958bfbb-cc2sj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefd0b1742a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:17.941768 containerd[2021]: 2025-01-17 12:03:17.806 [INFO][5900] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Jan 17 12:03:17.941768 containerd[2021]: 2025-01-17 12:03:17.806 [INFO][5900] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" iface="eth0" netns="" Jan 17 12:03:17.941768 containerd[2021]: 2025-01-17 12:03:17.806 [INFO][5900] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Jan 17 12:03:17.941768 containerd[2021]: 2025-01-17 12:03:17.806 [INFO][5900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Jan 17 12:03:17.941768 containerd[2021]: 2025-01-17 12:03:17.902 [INFO][5911] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" HandleID="k8s-pod-network.0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Workload="ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0" Jan 17 12:03:17.941768 containerd[2021]: 2025-01-17 12:03:17.903 [INFO][5911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:17.941768 containerd[2021]: 2025-01-17 12:03:17.905 [INFO][5911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:17.941768 containerd[2021]: 2025-01-17 12:03:17.931 [WARNING][5911] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" HandleID="k8s-pod-network.0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Workload="ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0" Jan 17 12:03:17.941768 containerd[2021]: 2025-01-17 12:03:17.931 [INFO][5911] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" HandleID="k8s-pod-network.0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Workload="ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0" Jan 17 12:03:17.941768 containerd[2021]: 2025-01-17 12:03:17.934 [INFO][5911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:17.941768 containerd[2021]: 2025-01-17 12:03:17.936 [INFO][5900] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Jan 17 12:03:17.945621 containerd[2021]: time="2025-01-17T12:03:17.945203525Z" level=info msg="TearDown network for sandbox \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\" successfully" Jan 17 12:03:17.945621 containerd[2021]: time="2025-01-17T12:03:17.945279557Z" level=info msg="StopPodSandbox for \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\" returns successfully" Jan 17 12:03:17.946180 containerd[2021]: time="2025-01-17T12:03:17.946111385Z" level=info msg="RemovePodSandbox for \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\"" Jan 17 12:03:17.946180 containerd[2021]: time="2025-01-17T12:03:17.946173269Z" level=info msg="Forcibly stopping sandbox \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\"" Jan 17 12:03:18.103496 containerd[2021]: 2025-01-17 12:03:18.030 [WARNING][5929] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0", GenerateName:"calico-kube-controllers-7bb958bfbb-", Namespace:"calico-system", SelfLink:"", UID:"51ad13b0-e571-4bda-9060-30f841760976", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bb958bfbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"f65f5aa0101bc2b2b6e019a26f6b3e314e1341cbf853c071b64a0fb0b147a2c7", Pod:"calico-kube-controllers-7bb958bfbb-cc2sj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefd0b1742a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:18.103496 containerd[2021]: 2025-01-17 12:03:18.030 [INFO][5929] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Jan 17 12:03:18.103496 containerd[2021]: 2025-01-17 12:03:18.030 [INFO][5929] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" iface="eth0" netns="" Jan 17 12:03:18.103496 containerd[2021]: 2025-01-17 12:03:18.030 [INFO][5929] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Jan 17 12:03:18.103496 containerd[2021]: 2025-01-17 12:03:18.030 [INFO][5929] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Jan 17 12:03:18.103496 containerd[2021]: 2025-01-17 12:03:18.068 [INFO][5935] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" HandleID="k8s-pod-network.0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Workload="ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0" Jan 17 12:03:18.103496 containerd[2021]: 2025-01-17 12:03:18.068 [INFO][5935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:18.103496 containerd[2021]: 2025-01-17 12:03:18.068 [INFO][5935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:18.103496 containerd[2021]: 2025-01-17 12:03:18.087 [WARNING][5935] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" HandleID="k8s-pod-network.0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Workload="ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0" Jan 17 12:03:18.103496 containerd[2021]: 2025-01-17 12:03:18.087 [INFO][5935] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" HandleID="k8s-pod-network.0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Workload="ip--172--31--30--222-k8s-calico--kube--controllers--7bb958bfbb--cc2sj-eth0" Jan 17 12:03:18.103496 containerd[2021]: 2025-01-17 12:03:18.095 [INFO][5935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:18.103496 containerd[2021]: 2025-01-17 12:03:18.099 [INFO][5929] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4" Jan 17 12:03:18.104413 containerd[2021]: time="2025-01-17T12:03:18.103550222Z" level=info msg="TearDown network for sandbox \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\" successfully" Jan 17 12:03:18.108908 containerd[2021]: time="2025-01-17T12:03:18.108776702Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:03:18.108908 containerd[2021]: time="2025-01-17T12:03:18.108901034Z" level=info msg="RemovePodSandbox \"0ce59c5fe7a15b4a3cc5e6129f0a06869556a9ded287d9c262bdb20299b84be4\" returns successfully" Jan 17 12:03:18.109799 containerd[2021]: time="2025-01-17T12:03:18.109738994Z" level=info msg="StopPodSandbox for \"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\"" Jan 17 12:03:18.245206 containerd[2021]: 2025-01-17 12:03:18.179 [WARNING][5953] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f06e3531-08e0-4afd-9376-50b984ff63bd", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49", Pod:"coredns-76f75df574-wmqjg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4020e8bed3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:18.245206 containerd[2021]: 2025-01-17 12:03:18.179 [INFO][5953] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Jan 17 12:03:18.245206 containerd[2021]: 2025-01-17 12:03:18.179 [INFO][5953] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" iface="eth0" netns="" Jan 17 12:03:18.245206 containerd[2021]: 2025-01-17 12:03:18.179 [INFO][5953] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Jan 17 12:03:18.245206 containerd[2021]: 2025-01-17 12:03:18.179 [INFO][5953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Jan 17 12:03:18.245206 containerd[2021]: 2025-01-17 12:03:18.220 [INFO][5959] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" HandleID="k8s-pod-network.7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0" Jan 17 12:03:18.245206 containerd[2021]: 2025-01-17 12:03:18.220 [INFO][5959] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:18.245206 containerd[2021]: 2025-01-17 12:03:18.220 [INFO][5959] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:03:18.245206 containerd[2021]: 2025-01-17 12:03:18.237 [WARNING][5959] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" HandleID="k8s-pod-network.7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0" Jan 17 12:03:18.245206 containerd[2021]: 2025-01-17 12:03:18.237 [INFO][5959] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" HandleID="k8s-pod-network.7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0" Jan 17 12:03:18.245206 containerd[2021]: 2025-01-17 12:03:18.239 [INFO][5959] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:18.245206 containerd[2021]: 2025-01-17 12:03:18.241 [INFO][5953] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Jan 17 12:03:18.245206 containerd[2021]: time="2025-01-17T12:03:18.244480767Z" level=info msg="TearDown network for sandbox \"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\" successfully" Jan 17 12:03:18.245206 containerd[2021]: time="2025-01-17T12:03:18.244518639Z" level=info msg="StopPodSandbox for \"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\" returns successfully" Jan 17 12:03:18.246658 containerd[2021]: time="2025-01-17T12:03:18.246595383Z" level=info msg="RemovePodSandbox for \"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\"" Jan 17 12:03:18.246732 containerd[2021]: time="2025-01-17T12:03:18.246658287Z" level=info msg="Forcibly stopping sandbox \"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\"" Jan 17 12:03:18.385324 containerd[2021]: 2025-01-17 12:03:18.320 [WARNING][5977] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f06e3531-08e0-4afd-9376-50b984ff63bd", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"27e7c4fb6c174f373def17e39557503033ec8a61f82db43ac70d1ec2ed651d49", Pod:"coredns-76f75df574-wmqjg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4020e8bed3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:18.385324 containerd[2021]: 2025-01-17 12:03:18.320 [INFO][5977] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Jan 17 12:03:18.385324 containerd[2021]: 2025-01-17 12:03:18.320 [INFO][5977] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" iface="eth0" netns="" Jan 17 12:03:18.385324 containerd[2021]: 2025-01-17 12:03:18.321 [INFO][5977] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Jan 17 12:03:18.385324 containerd[2021]: 2025-01-17 12:03:18.321 [INFO][5977] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Jan 17 12:03:18.385324 containerd[2021]: 2025-01-17 12:03:18.362 [INFO][5984] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" HandleID="k8s-pod-network.7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0" Jan 17 12:03:18.385324 containerd[2021]: 2025-01-17 12:03:18.362 [INFO][5984] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:18.385324 containerd[2021]: 2025-01-17 12:03:18.363 [INFO][5984] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:03:18.385324 containerd[2021]: 2025-01-17 12:03:18.377 [WARNING][5984] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" HandleID="k8s-pod-network.7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0" Jan 17 12:03:18.385324 containerd[2021]: 2025-01-17 12:03:18.377 [INFO][5984] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" HandleID="k8s-pod-network.7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--wmqjg-eth0" Jan 17 12:03:18.385324 containerd[2021]: 2025-01-17 12:03:18.379 [INFO][5984] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:18.385324 containerd[2021]: 2025-01-17 12:03:18.382 [INFO][5977] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be" Jan 17 12:03:18.386794 containerd[2021]: time="2025-01-17T12:03:18.385356435Z" level=info msg="TearDown network for sandbox \"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\" successfully" Jan 17 12:03:18.390218 containerd[2021]: time="2025-01-17T12:03:18.390107956Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:03:18.390381 containerd[2021]: time="2025-01-17T12:03:18.390311800Z" level=info msg="RemovePodSandbox \"7fad50069df99b7d13f1ff2e6327754f99770d255ff50f8c449d6f9aee1114be\" returns successfully" Jan 17 12:03:18.392967 containerd[2021]: time="2025-01-17T12:03:18.392424448Z" level=info msg="StopPodSandbox for \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\"" Jan 17 12:03:18.536130 containerd[2021]: 2025-01-17 12:03:18.464 [WARNING][6002] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0", GenerateName:"calico-apiserver-5dd85d45b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf2b9539-4e3c-4e81-a355-422ae8f49174", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd85d45b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a", Pod:"calico-apiserver-5dd85d45b4-xw927", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic359e3322c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:18.536130 containerd[2021]: 2025-01-17 12:03:18.464 [INFO][6002] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Jan 17 12:03:18.536130 containerd[2021]: 2025-01-17 12:03:18.464 [INFO][6002] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" iface="eth0" netns="" Jan 17 12:03:18.536130 containerd[2021]: 2025-01-17 12:03:18.464 [INFO][6002] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Jan 17 12:03:18.536130 containerd[2021]: 2025-01-17 12:03:18.464 [INFO][6002] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Jan 17 12:03:18.536130 containerd[2021]: 2025-01-17 12:03:18.508 [INFO][6008] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" HandleID="k8s-pod-network.22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0" Jan 17 12:03:18.536130 containerd[2021]: 2025-01-17 12:03:18.508 [INFO][6008] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:18.536130 containerd[2021]: 2025-01-17 12:03:18.508 [INFO][6008] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:18.536130 containerd[2021]: 2025-01-17 12:03:18.524 [WARNING][6008] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" HandleID="k8s-pod-network.22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0" Jan 17 12:03:18.536130 containerd[2021]: 2025-01-17 12:03:18.524 [INFO][6008] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" HandleID="k8s-pod-network.22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0" Jan 17 12:03:18.536130 containerd[2021]: 2025-01-17 12:03:18.529 [INFO][6008] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:18.536130 containerd[2021]: 2025-01-17 12:03:18.532 [INFO][6002] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Jan 17 12:03:18.537476 containerd[2021]: time="2025-01-17T12:03:18.537301624Z" level=info msg="TearDown network for sandbox \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\" successfully" Jan 17 12:03:18.537476 containerd[2021]: time="2025-01-17T12:03:18.537351076Z" level=info msg="StopPodSandbox for \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\" returns successfully" Jan 17 12:03:18.539214 containerd[2021]: time="2025-01-17T12:03:18.538659412Z" level=info msg="RemovePodSandbox for \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\"" Jan 17 12:03:18.539214 containerd[2021]: time="2025-01-17T12:03:18.538752304Z" level=info msg="Forcibly stopping sandbox \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\"" Jan 17 12:03:18.671077 containerd[2021]: 2025-01-17 12:03:18.604 [WARNING][6026] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0", GenerateName:"calico-apiserver-5dd85d45b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf2b9539-4e3c-4e81-a355-422ae8f49174", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd85d45b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"5c8e4985ab39535cf6ac458240c24fe05ede6bf599204371c973b92972bd426a", Pod:"calico-apiserver-5dd85d45b4-xw927", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic359e3322c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:18.671077 containerd[2021]: 2025-01-17 12:03:18.604 [INFO][6026] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Jan 17 12:03:18.671077 containerd[2021]: 2025-01-17 12:03:18.604 [INFO][6026] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" iface="eth0" netns="" Jan 17 12:03:18.671077 containerd[2021]: 2025-01-17 12:03:18.604 [INFO][6026] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Jan 17 12:03:18.671077 containerd[2021]: 2025-01-17 12:03:18.604 [INFO][6026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Jan 17 12:03:18.671077 containerd[2021]: 2025-01-17 12:03:18.639 [INFO][6032] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" HandleID="k8s-pod-network.22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0" Jan 17 12:03:18.671077 containerd[2021]: 2025-01-17 12:03:18.639 [INFO][6032] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:18.671077 containerd[2021]: 2025-01-17 12:03:18.639 [INFO][6032] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:18.671077 containerd[2021]: 2025-01-17 12:03:18.656 [WARNING][6032] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" HandleID="k8s-pod-network.22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0" Jan 17 12:03:18.671077 containerd[2021]: 2025-01-17 12:03:18.656 [INFO][6032] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" HandleID="k8s-pod-network.22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Workload="ip--172--31--30--222-k8s-calico--apiserver--5dd85d45b4--xw927-eth0" Jan 17 12:03:18.671077 containerd[2021]: 2025-01-17 12:03:18.663 [INFO][6032] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:18.671077 containerd[2021]: 2025-01-17 12:03:18.666 [INFO][6026] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05" Jan 17 12:03:18.671077 containerd[2021]: time="2025-01-17T12:03:18.669535685Z" level=info msg="TearDown network for sandbox \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\" successfully" Jan 17 12:03:18.677262 containerd[2021]: time="2025-01-17T12:03:18.677204489Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:03:18.677558 containerd[2021]: time="2025-01-17T12:03:18.677523233Z" level=info msg="RemovePodSandbox \"22501b22a992bba03ff57044b4eb7d054bb6b32974751fca7c1d78d0f7fdeb05\" returns successfully" Jan 17 12:03:18.679443 containerd[2021]: time="2025-01-17T12:03:18.678734801Z" level=info msg="StopPodSandbox for \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\"" Jan 17 12:03:18.819287 containerd[2021]: 2025-01-17 12:03:18.748 [WARNING][6050] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"238a61a5-b6b5-4a74-b87d-37070ed73575", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f", Pod:"coredns-76f75df574-m26cv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5befe37be77", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:18.819287 containerd[2021]: 2025-01-17 12:03:18.749 [INFO][6050] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Jan 17 12:03:18.819287 containerd[2021]: 2025-01-17 12:03:18.749 [INFO][6050] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" iface="eth0" netns="" Jan 17 12:03:18.819287 containerd[2021]: 2025-01-17 12:03:18.749 [INFO][6050] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Jan 17 12:03:18.819287 containerd[2021]: 2025-01-17 12:03:18.749 [INFO][6050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Jan 17 12:03:18.819287 containerd[2021]: 2025-01-17 12:03:18.793 [INFO][6056] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" HandleID="k8s-pod-network.96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0" Jan 17 12:03:18.819287 containerd[2021]: 2025-01-17 12:03:18.794 [INFO][6056] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:18.819287 containerd[2021]: 2025-01-17 12:03:18.794 [INFO][6056] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:03:18.819287 containerd[2021]: 2025-01-17 12:03:18.811 [WARNING][6056] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" HandleID="k8s-pod-network.96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0" Jan 17 12:03:18.819287 containerd[2021]: 2025-01-17 12:03:18.811 [INFO][6056] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" HandleID="k8s-pod-network.96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0" Jan 17 12:03:18.819287 containerd[2021]: 2025-01-17 12:03:18.814 [INFO][6056] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:18.819287 containerd[2021]: 2025-01-17 12:03:18.816 [INFO][6050] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Jan 17 12:03:18.819287 containerd[2021]: time="2025-01-17T12:03:18.819069786Z" level=info msg="TearDown network for sandbox \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\" successfully" Jan 17 12:03:18.819287 containerd[2021]: time="2025-01-17T12:03:18.819109290Z" level=info msg="StopPodSandbox for \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\" returns successfully" Jan 17 12:03:18.821893 containerd[2021]: time="2025-01-17T12:03:18.820883142Z" level=info msg="RemovePodSandbox for \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\"" Jan 17 12:03:18.821893 containerd[2021]: time="2025-01-17T12:03:18.820956270Z" level=info msg="Forcibly stopping sandbox \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\"" Jan 17 12:03:18.951684 containerd[2021]: 2025-01-17 12:03:18.892 [WARNING][6075] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"238a61a5-b6b5-4a74-b87d-37070ed73575", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-222", ContainerID:"c23bca7cb7ffddbad210d24c00e971daec6a7c67640a1d4b348c61f82430067f", Pod:"coredns-76f75df574-m26cv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5befe37be77", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:18.951684 containerd[2021]: 2025-01-17 12:03:18.893 [INFO][6075] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Jan 17 12:03:18.951684 containerd[2021]: 2025-01-17 12:03:18.893 [INFO][6075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" iface="eth0" netns="" Jan 17 12:03:18.951684 containerd[2021]: 2025-01-17 12:03:18.893 [INFO][6075] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Jan 17 12:03:18.951684 containerd[2021]: 2025-01-17 12:03:18.893 [INFO][6075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Jan 17 12:03:18.951684 containerd[2021]: 2025-01-17 12:03:18.931 [INFO][6081] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" HandleID="k8s-pod-network.96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0" Jan 17 12:03:18.951684 containerd[2021]: 2025-01-17 12:03:18.932 [INFO][6081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:18.951684 containerd[2021]: 2025-01-17 12:03:18.932 [INFO][6081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:03:18.951684 containerd[2021]: 2025-01-17 12:03:18.944 [WARNING][6081] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" HandleID="k8s-pod-network.96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0" Jan 17 12:03:18.951684 containerd[2021]: 2025-01-17 12:03:18.944 [INFO][6081] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" HandleID="k8s-pod-network.96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Workload="ip--172--31--30--222-k8s-coredns--76f75df574--m26cv-eth0" Jan 17 12:03:18.951684 containerd[2021]: 2025-01-17 12:03:18.947 [INFO][6081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:18.951684 containerd[2021]: 2025-01-17 12:03:18.949 [INFO][6075] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c" Jan 17 12:03:18.951684 containerd[2021]: time="2025-01-17T12:03:18.951670710Z" level=info msg="TearDown network for sandbox \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\" successfully" Jan 17 12:03:18.958777 containerd[2021]: time="2025-01-17T12:03:18.958691346Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:03:18.958952 containerd[2021]: time="2025-01-17T12:03:18.958812342Z" level=info msg="RemovePodSandbox \"96094c9e230a1dbd36ab14e66020bc520b79aa220b1747b560596e421f5a787c\" returns successfully" Jan 17 12:03:19.102583 systemd[1]: Started sshd@11-172.31.30.222:22-139.178.68.195:35850.service - OpenSSH per-connection server daemon (139.178.68.195:35850). Jan 17 12:03:19.293772 sshd[6089]: Accepted publickey for core from 139.178.68.195 port 35850 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:19.297235 sshd[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:19.306742 systemd-logind[1994]: New session 12 of user core. Jan 17 12:03:19.313296 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:03:19.571661 sshd[6089]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:19.579475 systemd[1]: sshd@11-172.31.30.222:22-139.178.68.195:35850.service: Deactivated successfully. Jan 17 12:03:19.583816 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:03:19.585987 systemd-logind[1994]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:03:19.588710 systemd-logind[1994]: Removed session 12. Jan 17 12:03:19.610568 systemd[1]: Started sshd@12-172.31.30.222:22-139.178.68.195:35854.service - OpenSSH per-connection server daemon (139.178.68.195:35854). Jan 17 12:03:19.788481 sshd[6103]: Accepted publickey for core from 139.178.68.195 port 35854 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:19.791418 sshd[6103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:19.800254 systemd-logind[1994]: New session 13 of user core. Jan 17 12:03:19.806312 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 17 12:03:20.122470 sshd[6103]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:20.131727 systemd[1]: sshd@12-172.31.30.222:22-139.178.68.195:35854.service: Deactivated successfully. Jan 17 12:03:20.140779 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:03:20.147665 systemd-logind[1994]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:03:20.177201 systemd[1]: Started sshd@13-172.31.30.222:22-139.178.68.195:35870.service - OpenSSH per-connection server daemon (139.178.68.195:35870). Jan 17 12:03:20.181312 systemd-logind[1994]: Removed session 13. Jan 17 12:03:20.363689 sshd[6114]: Accepted publickey for core from 139.178.68.195 port 35870 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:20.366984 sshd[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:20.397499 systemd[1]: run-containerd-runc-k8s.io-f3dd846c6ea2fec54f0d1bd5e4e3cea309925649ad5233c0d88ea9defeed5f2c-runc.TjEr16.mount: Deactivated successfully. Jan 17 12:03:20.424342 systemd-logind[1994]: New session 14 of user core. Jan 17 12:03:20.430258 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:03:20.678569 sshd[6114]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:20.687705 systemd[1]: sshd@13-172.31.30.222:22-139.178.68.195:35870.service: Deactivated successfully. Jan 17 12:03:20.691425 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:03:20.693760 systemd-logind[1994]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:03:20.697664 systemd-logind[1994]: Removed session 14. Jan 17 12:03:25.720652 systemd[1]: Started sshd@14-172.31.30.222:22-139.178.68.195:36958.service - OpenSSH per-connection server daemon (139.178.68.195:36958). Jan 17 12:03:25.896177 sshd[6154]: Accepted publickey for core from 139.178.68.195 port 36958 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:25.898935 sshd[6154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:25.909367 systemd-logind[1994]: New session 15 of user core. Jan 17 12:03:25.914367 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:03:26.161290 sshd[6154]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:26.169775 systemd[1]: sshd@14-172.31.30.222:22-139.178.68.195:36958.service: Deactivated successfully. Jan 17 12:03:26.170175 systemd-logind[1994]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:03:26.175383 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:03:26.183196 systemd-logind[1994]: Removed session 15. Jan 17 12:03:31.199629 systemd[1]: Started sshd@15-172.31.30.222:22-139.178.68.195:36970.service - OpenSSH per-connection server daemon (139.178.68.195:36970). Jan 17 12:03:31.386229 sshd[6202]: Accepted publickey for core from 139.178.68.195 port 36970 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:31.389074 sshd[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:31.402396 systemd-logind[1994]: New session 16 of user core. Jan 17 12:03:31.410341 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:03:31.670742 sshd[6202]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:31.677605 systemd[1]: sshd@15-172.31.30.222:22-139.178.68.195:36970.service: Deactivated successfully. 
Jan 17 12:03:31.682796 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:03:31.684537 systemd-logind[1994]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:03:31.687008 systemd-logind[1994]: Removed session 16. Jan 17 12:03:36.714554 systemd[1]: Started sshd@16-172.31.30.222:22-139.178.68.195:37292.service - OpenSSH per-connection server daemon (139.178.68.195:37292). Jan 17 12:03:36.893704 sshd[6216]: Accepted publickey for core from 139.178.68.195 port 37292 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:36.896574 sshd[6216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:36.905170 systemd-logind[1994]: New session 17 of user core. Jan 17 12:03:36.912283 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:03:37.160938 sshd[6216]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:37.166227 systemd[1]: sshd@16-172.31.30.222:22-139.178.68.195:37292.service: Deactivated successfully. Jan 17 12:03:37.169844 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:03:37.174809 systemd-logind[1994]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:03:37.176953 systemd-logind[1994]: Removed session 17. Jan 17 12:03:38.848108 kubelet[3390]: I0117 12:03:38.847630 3390 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:03:42.202148 systemd[1]: Started sshd@17-172.31.30.222:22-139.178.68.195:37296.service - OpenSSH per-connection server daemon (139.178.68.195:37296). Jan 17 12:03:42.290644 kubelet[3390]: I0117 12:03:42.290599 3390 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:03:42.374293 sshd[6231]: Accepted publickey for core from 139.178.68.195 port 37296 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:42.376242 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:42.388510 systemd-logind[1994]: New session 18 of user core. Jan 17 12:03:42.397288 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:03:42.662229 sshd[6231]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:42.670293 systemd[1]: sshd@17-172.31.30.222:22-139.178.68.195:37296.service: Deactivated successfully. Jan 17 12:03:42.674681 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:03:42.676428 systemd-logind[1994]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:03:42.678171 systemd-logind[1994]: Removed session 18. Jan 17 12:03:47.702508 systemd[1]: Started sshd@18-172.31.30.222:22-139.178.68.195:33480.service - OpenSSH per-connection server daemon (139.178.68.195:33480). Jan 17 12:03:47.889634 sshd[6246]: Accepted publickey for core from 139.178.68.195 port 33480 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:47.892740 sshd[6246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:47.904580 systemd-logind[1994]: New session 19 of user core. Jan 17 12:03:47.910364 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:03:48.208431 sshd[6246]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:48.215588 systemd[1]: sshd@18-172.31.30.222:22-139.178.68.195:33480.service: Deactivated successfully. Jan 17 12:03:48.222385 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:03:48.228858 systemd-logind[1994]: Session 19 logged out. 
Waiting for processes to exit. Jan 17 12:03:48.250669 systemd[1]: Started sshd@19-172.31.30.222:22-139.178.68.195:33494.service - OpenSSH per-connection server daemon (139.178.68.195:33494). Jan 17 12:03:48.254957 systemd-logind[1994]: Removed session 19. Jan 17 12:03:48.439189 sshd[6263]: Accepted publickey for core from 139.178.68.195 port 33494 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:48.442220 sshd[6263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:48.452217 systemd-logind[1994]: New session 20 of user core. Jan 17 12:03:48.457854 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:03:49.010615 sshd[6263]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:49.017720 systemd[1]: sshd@19-172.31.30.222:22-139.178.68.195:33494.service: Deactivated successfully. Jan 17 12:03:49.022794 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:03:49.025292 systemd-logind[1994]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:03:49.028250 systemd-logind[1994]: Removed session 20. Jan 17 12:03:49.050645 systemd[1]: Started sshd@20-172.31.30.222:22-139.178.68.195:33498.service - OpenSSH per-connection server daemon (139.178.68.195:33498). Jan 17 12:03:49.247315 sshd[6273]: Accepted publickey for core from 139.178.68.195 port 33498 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:49.250587 sshd[6273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:49.262781 systemd-logind[1994]: New session 21 of user core. Jan 17 12:03:49.269315 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:03:52.943668 sshd[6273]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:52.951716 systemd[1]: sshd@20-172.31.30.222:22-139.178.68.195:33498.service: Deactivated successfully. Jan 17 12:03:52.960602 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:03:52.963227 systemd[1]: session-21.scope: Consumed 1.085s CPU time. Jan 17 12:03:52.965594 systemd-logind[1994]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:03:52.988880 systemd[1]: Started sshd@21-172.31.30.222:22-139.178.68.195:33504.service - OpenSSH per-connection server daemon (139.178.68.195:33504). Jan 17 12:03:52.992202 systemd-logind[1994]: Removed session 21. Jan 17 12:03:53.178502 sshd[6315]: Accepted publickey for core from 139.178.68.195 port 33504 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:53.182292 sshd[6315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:53.197458 systemd-logind[1994]: New session 22 of user core. Jan 17 12:03:53.202242 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 12:03:53.776068 sshd[6315]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:53.786913 systemd[1]: sshd@21-172.31.30.222:22-139.178.68.195:33504.service: Deactivated successfully. Jan 17 12:03:53.793871 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:03:53.800002 systemd-logind[1994]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:03:53.823347 systemd[1]: Started sshd@22-172.31.30.222:22-139.178.68.195:33516.service - OpenSSH per-connection server daemon (139.178.68.195:33516). Jan 17 12:03:53.828221 systemd-logind[1994]: Removed session 22. 
Jan 17 12:03:54.019609 sshd[6327]: Accepted publickey for core from 139.178.68.195 port 33516 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:54.025239 sshd[6327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:54.035590 systemd-logind[1994]: New session 23 of user core. Jan 17 12:03:54.044359 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:03:54.347488 sshd[6327]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:54.358157 systemd-logind[1994]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:03:54.362533 systemd[1]: sshd@22-172.31.30.222:22-139.178.68.195:33516.service: Deactivated successfully. Jan 17 12:03:54.370813 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:03:54.375489 systemd-logind[1994]: Removed session 23. Jan 17 12:03:57.623191 systemd[1]: run-containerd-runc-k8s.io-f3dd846c6ea2fec54f0d1bd5e4e3cea309925649ad5233c0d88ea9defeed5f2c-runc.EZqYfT.mount: Deactivated successfully. Jan 17 12:03:59.398529 systemd[1]: Started sshd@23-172.31.30.222:22-139.178.68.195:44706.service - OpenSSH per-connection server daemon (139.178.68.195:44706). Jan 17 12:03:59.596171 sshd[6359]: Accepted publickey for core from 139.178.68.195 port 44706 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:59.599909 sshd[6359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:59.611077 systemd-logind[1994]: New session 24 of user core. Jan 17 12:03:59.616328 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 12:03:59.867945 sshd[6359]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:59.873776 systemd[1]: sshd@23-172.31.30.222:22-139.178.68.195:44706.service: Deactivated successfully. Jan 17 12:03:59.877582 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 12:03:59.881408 systemd-logind[1994]: Session 24 logged out. Waiting for processes to exit. Jan 17 12:03:59.883626 systemd-logind[1994]: Removed session 24. Jan 17 12:04:04.909559 systemd[1]: Started sshd@24-172.31.30.222:22-139.178.68.195:40806.service - OpenSSH per-connection server daemon (139.178.68.195:40806). Jan 17 12:04:05.086518 sshd[6401]: Accepted publickey for core from 139.178.68.195 port 40806 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:04:05.089347 sshd[6401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:05.098015 systemd-logind[1994]: New session 25 of user core. Jan 17 12:04:05.105504 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 12:04:05.342529 sshd[6401]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:05.348571 systemd[1]: sshd@24-172.31.30.222:22-139.178.68.195:40806.service: Deactivated successfully. Jan 17 12:04:05.352855 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 12:04:05.357285 systemd-logind[1994]: Session 25 logged out. Waiting for processes to exit. Jan 17 12:04:05.359644 systemd-logind[1994]: Removed session 25. Jan 17 12:04:10.383727 systemd[1]: Started sshd@25-172.31.30.222:22-139.178.68.195:40810.service - OpenSSH per-connection server daemon (139.178.68.195:40810). 
Jan 17 12:04:10.551855 sshd[6414]: Accepted publickey for core from 139.178.68.195 port 40810 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:04:10.554853 sshd[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:10.563081 systemd-logind[1994]: New session 26 of user core. Jan 17 12:04:10.572409 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 12:04:10.858421 sshd[6414]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:10.866617 systemd-logind[1994]: Session 26 logged out. Waiting for processes to exit. Jan 17 12:04:10.867551 systemd[1]: sshd@25-172.31.30.222:22-139.178.68.195:40810.service: Deactivated successfully. Jan 17 12:04:10.876475 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 12:04:10.883628 systemd-logind[1994]: Removed session 26. Jan 17 12:04:15.900821 systemd[1]: Started sshd@26-172.31.30.222:22-139.178.68.195:51498.service - OpenSSH per-connection server daemon (139.178.68.195:51498). Jan 17 12:04:16.076831 sshd[6427]: Accepted publickey for core from 139.178.68.195 port 51498 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:04:16.079616 sshd[6427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:16.088254 systemd-logind[1994]: New session 27 of user core. Jan 17 12:04:16.097312 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 17 12:04:16.341387 sshd[6427]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:16.348447 systemd-logind[1994]: Session 27 logged out. Waiting for processes to exit. Jan 17 12:04:16.349804 systemd[1]: sshd@26-172.31.30.222:22-139.178.68.195:51498.service: Deactivated successfully. Jan 17 12:04:16.356043 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 12:04:16.359275 systemd-logind[1994]: Removed session 27. Jan 17 12:04:21.382563 systemd[1]: Started sshd@27-172.31.30.222:22-139.178.68.195:51502.service - OpenSSH per-connection server daemon (139.178.68.195:51502). Jan 17 12:04:21.557853 sshd[6460]: Accepted publickey for core from 139.178.68.195 port 51502 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:04:21.560683 sshd[6460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:21.568305 systemd-logind[1994]: New session 28 of user core. Jan 17 12:04:21.575348 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 17 12:04:21.815355 sshd[6460]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:21.822298 systemd[1]: sshd@27-172.31.30.222:22-139.178.68.195:51502.service: Deactivated successfully. Jan 17 12:04:21.827646 systemd[1]: session-28.scope: Deactivated successfully. Jan 17 12:04:21.830002 systemd-logind[1994]: Session 28 logged out. Waiting for processes to exit. Jan 17 12:04:21.832005 systemd-logind[1994]: Removed session 28. Jan 17 12:04:26.857583 systemd[1]: Started sshd@28-172.31.30.222:22-139.178.68.195:54970.service - OpenSSH per-connection server daemon (139.178.68.195:54970). Jan 17 12:04:27.035631 sshd[6473]: Accepted publickey for core from 139.178.68.195 port 54970 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:04:27.038424 sshd[6473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:27.046129 systemd-logind[1994]: New session 29 of user core. Jan 17 12:04:27.053993 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 17 12:04:27.292943 sshd[6473]: pam_unix(sshd:session): session closed for user core
Jan 17 12:04:27.301205 systemd-logind[1994]: Session 29 logged out. Waiting for processes to exit.
Jan 17 12:04:27.302690 systemd[1]: sshd@28-172.31.30.222:22-139.178.68.195:54970.service: Deactivated successfully.
Jan 17 12:04:27.307866 systemd[1]: session-29.scope: Deactivated successfully.
Jan 17 12:04:27.310699 systemd-logind[1994]: Removed session 29.
Jan 17 12:04:41.847972 systemd[1]: cri-containerd-125278f397b877b9b680c898fe96e9c3b1e853cb76bfa003bfeacef19f30907f.scope: Deactivated successfully.
Jan 17 12:04:41.848477 systemd[1]: cri-containerd-125278f397b877b9b680c898fe96e9c3b1e853cb76bfa003bfeacef19f30907f.scope: Consumed 6.609s CPU time.
Jan 17 12:04:41.888711 containerd[2021]: time="2025-01-17T12:04:41.888611558Z" level=info msg="shim disconnected" id=125278f397b877b9b680c898fe96e9c3b1e853cb76bfa003bfeacef19f30907f namespace=k8s.io
Jan 17 12:04:41.888711 containerd[2021]: time="2025-01-17T12:04:41.888700082Z" level=warning msg="cleaning up after shim disconnected" id=125278f397b877b9b680c898fe96e9c3b1e853cb76bfa003bfeacef19f30907f namespace=k8s.io
Jan 17 12:04:41.889402 containerd[2021]: time="2025-01-17T12:04:41.888729242Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:04:41.893004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-125278f397b877b9b680c898fe96e9c3b1e853cb76bfa003bfeacef19f30907f-rootfs.mount: Deactivated successfully.
Jan 17 12:04:42.399321 systemd[1]: cri-containerd-36bfa6cc6c499817827a45faaa49e2b90c4d1ae4ccab8508c35496821ed234e8.scope: Deactivated successfully.
Jan 17 12:04:42.400853 systemd[1]: cri-containerd-36bfa6cc6c499817827a45faaa49e2b90c4d1ae4ccab8508c35496821ed234e8.scope: Consumed 4.472s CPU time, 21.9M memory peak, 0B memory swap peak.
Jan 17 12:04:42.448126 containerd[2021]: time="2025-01-17T12:04:42.447638485Z" level=info msg="shim disconnected" id=36bfa6cc6c499817827a45faaa49e2b90c4d1ae4ccab8508c35496821ed234e8 namespace=k8s.io
Jan 17 12:04:42.448126 containerd[2021]: time="2025-01-17T12:04:42.447753373Z" level=warning msg="cleaning up after shim disconnected" id=36bfa6cc6c499817827a45faaa49e2b90c4d1ae4ccab8508c35496821ed234e8 namespace=k8s.io
Jan 17 12:04:42.448126 containerd[2021]: time="2025-01-17T12:04:42.447798925Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:04:42.450016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36bfa6cc6c499817827a45faaa49e2b90c4d1ae4ccab8508c35496821ed234e8-rootfs.mount: Deactivated successfully.
Jan 17 12:04:42.925424 kubelet[3390]: I0117 12:04:42.925372 3390 scope.go:117] "RemoveContainer" containerID="36bfa6cc6c499817827a45faaa49e2b90c4d1ae4ccab8508c35496821ed234e8"
Jan 17 12:04:42.931261 kubelet[3390]: I0117 12:04:42.930780 3390 scope.go:117] "RemoveContainer" containerID="125278f397b877b9b680c898fe96e9c3b1e853cb76bfa003bfeacef19f30907f"
Jan 17 12:04:42.931581 containerd[2021]: time="2025-01-17T12:04:42.930802215Z" level=info msg="CreateContainer within sandbox \"509b60fd501727a3a52c663eb0cae84783aa50fe06c5aa2bb5b0a2a9c6bee16c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 17 12:04:42.944161 containerd[2021]: time="2025-01-17T12:04:42.942835203Z" level=info msg="CreateContainer within sandbox \"567562f5cbe33788cf6e08e86b27ec40ce3397dbdbe88f6bb9efd5f60b8ea27c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 17 12:04:42.978068 containerd[2021]: time="2025-01-17T12:04:42.975810808Z" level=info msg="CreateContainer within sandbox \"509b60fd501727a3a52c663eb0cae84783aa50fe06c5aa2bb5b0a2a9c6bee16c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c3a192a09516ea02dc427dfb7d8c5330bce553e4227953f7fe0b66d4cc6b40cb\""
Jan 17 12:04:42.979499 containerd[2021]: time="2025-01-17T12:04:42.978889252Z" level=info msg="StartContainer for \"c3a192a09516ea02dc427dfb7d8c5330bce553e4227953f7fe0b66d4cc6b40cb\""
Jan 17 12:04:43.011627 containerd[2021]: time="2025-01-17T12:04:43.011553312Z" level=info msg="CreateContainer within sandbox \"567562f5cbe33788cf6e08e86b27ec40ce3397dbdbe88f6bb9efd5f60b8ea27c\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"02dd5e95bbc4290622b5005898ee839ef15b9c2d876736c1a441839a3921d495\""
Jan 17 12:04:43.013318 containerd[2021]: time="2025-01-17T12:04:43.012300660Z" level=info msg="StartContainer for \"02dd5e95bbc4290622b5005898ee839ef15b9c2d876736c1a441839a3921d495\""
Jan 17 12:04:43.037705 systemd[1]: Started cri-containerd-c3a192a09516ea02dc427dfb7d8c5330bce553e4227953f7fe0b66d4cc6b40cb.scope - libcontainer container c3a192a09516ea02dc427dfb7d8c5330bce553e4227953f7fe0b66d4cc6b40cb.
Jan 17 12:04:43.091334 systemd[1]: Started cri-containerd-02dd5e95bbc4290622b5005898ee839ef15b9c2d876736c1a441839a3921d495.scope - libcontainer container 02dd5e95bbc4290622b5005898ee839ef15b9c2d876736c1a441839a3921d495.
Jan 17 12:04:43.137367 containerd[2021]: time="2025-01-17T12:04:43.137295228Z" level=info msg="StartContainer for \"c3a192a09516ea02dc427dfb7d8c5330bce553e4227953f7fe0b66d4cc6b40cb\" returns successfully"
Jan 17 12:04:43.179202 containerd[2021]: time="2025-01-17T12:04:43.178513321Z" level=info msg="StartContainer for \"02dd5e95bbc4290622b5005898ee839ef15b9c2d876736c1a441839a3921d495\" returns successfully"
Jan 17 12:04:47.644846 systemd[1]: cri-containerd-34244c422ddbd76ab16f58259eac63407fc0805b15769780693f2a9df60c03a7.scope: Deactivated successfully.
Jan 17 12:04:47.645362 systemd[1]: cri-containerd-34244c422ddbd76ab16f58259eac63407fc0805b15769780693f2a9df60c03a7.scope: Consumed 3.111s CPU time, 15.9M memory peak, 0B memory swap peak.
Jan 17 12:04:47.689664 containerd[2021]: time="2025-01-17T12:04:47.687710467Z" level=info msg="shim disconnected" id=34244c422ddbd76ab16f58259eac63407fc0805b15769780693f2a9df60c03a7 namespace=k8s.io
Jan 17 12:04:47.689664 containerd[2021]: time="2025-01-17T12:04:47.689376247Z" level=warning msg="cleaning up after shim disconnected" id=34244c422ddbd76ab16f58259eac63407fc0805b15769780693f2a9df60c03a7 namespace=k8s.io
Jan 17 12:04:47.689664 containerd[2021]: time="2025-01-17T12:04:47.689406295Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:04:47.692691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34244c422ddbd76ab16f58259eac63407fc0805b15769780693f2a9df60c03a7-rootfs.mount: Deactivated successfully.
Jan 17 12:04:47.956455 kubelet[3390]: I0117 12:04:47.956317 3390 scope.go:117] "RemoveContainer" containerID="34244c422ddbd76ab16f58259eac63407fc0805b15769780693f2a9df60c03a7"
Jan 17 12:04:47.974524 containerd[2021]: time="2025-01-17T12:04:47.974414852Z" level=info msg="CreateContainer within sandbox \"9937e39789a58028067faa801c30a352fb243cb9c5036eed34a4ca92466d95af\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 17 12:04:48.003785 containerd[2021]: time="2025-01-17T12:04:48.003719609Z" level=info msg="CreateContainer within sandbox \"9937e39789a58028067faa801c30a352fb243cb9c5036eed34a4ca92466d95af\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"043307e08134bc4eb9a267f30cf09c031393295e718b28fb78a2db3c41c2ae8c\""
Jan 17 12:04:48.004399 containerd[2021]: time="2025-01-17T12:04:48.004355237Z" level=info msg="StartContainer for \"043307e08134bc4eb9a267f30cf09c031393295e718b28fb78a2db3c41c2ae8c\""
Jan 17 12:04:48.059334 systemd[1]: Started cri-containerd-043307e08134bc4eb9a267f30cf09c031393295e718b28fb78a2db3c41c2ae8c.scope - libcontainer container 043307e08134bc4eb9a267f30cf09c031393295e718b28fb78a2db3c41c2ae8c.
Jan 17 12:04:48.122456 containerd[2021]: time="2025-01-17T12:04:48.122338505Z" level=info msg="StartContainer for \"043307e08134bc4eb9a267f30cf09c031393295e718b28fb78a2db3c41c2ae8c\" returns successfully"
Jan 17 12:04:49.712897 kubelet[3390]: E0117 12:04:49.711805 3390 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-222?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"