Jan 29 12:02:55.289419 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 29 12:02:55.289478 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025 Jan 29 12:02:55.289505 kernel: KASLR disabled due to lack of seed Jan 29 12:02:55.289522 kernel: efi: EFI v2.7 by EDK II Jan 29 12:02:55.289539 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18 Jan 29 12:02:55.289556 kernel: ACPI: Early table checksum verification disabled Jan 29 12:02:55.289575 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 29 12:02:55.289593 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 29 12:02:55.289610 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 29 12:02:55.289627 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jan 29 12:02:55.289655 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 29 12:02:55.289672 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 29 12:02:55.289690 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 29 12:02:55.289708 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 29 12:02:55.289729 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 29 12:02:55.289755 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 29 12:02:55.289774 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 29 12:02:55.289792 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 29 12:02:55.289810 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 29 12:02:55.289830 kernel: printk: bootconsole [uart0] enabled Jan 29 12:02:55.289847 kernel: NUMA: Failed to initialise from firmware Jan 29 12:02:55.289866 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 29 12:02:55.289885 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jan 29 12:02:55.289904 kernel: Zone ranges: Jan 29 12:02:55.289922 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 29 12:02:55.289939 kernel: DMA32 empty Jan 29 12:02:55.289967 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 29 12:02:55.289985 kernel: Movable zone start for each node Jan 29 12:02:55.290004 kernel: Early memory node ranges Jan 29 12:02:55.290022 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 29 12:02:55.290041 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 29 12:02:55.290059 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 29 12:02:55.290077 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 29 12:02:55.290095 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 29 12:02:55.290156 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 29 12:02:55.290180 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 29 12:02:55.290201 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 29 12:02:55.290220 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jan 29 12:02:55.290251 kernel: On 
node 0, zone Normal: 8192 pages in unavailable ranges Jan 29 12:02:55.290272 kernel: psci: probing for conduit method from ACPI. Jan 29 12:02:55.290303 kernel: psci: PSCIv1.0 detected in firmware. Jan 29 12:02:55.290324 kernel: psci: Using standard PSCI v0.2 function IDs Jan 29 12:02:55.290346 kernel: psci: Trusted OS migration not required Jan 29 12:02:55.290373 kernel: psci: SMC Calling Convention v1.1 Jan 29 12:02:55.290394 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 29 12:02:55.290413 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 29 12:02:55.290434 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 29 12:02:55.290453 kernel: Detected PIPT I-cache on CPU0 Jan 29 12:02:55.290473 kernel: CPU features: detected: GIC system register CPU interface Jan 29 12:02:55.290494 kernel: CPU features: detected: Spectre-v2 Jan 29 12:02:55.290512 kernel: CPU features: detected: Spectre-v3a Jan 29 12:02:55.290532 kernel: CPU features: detected: Spectre-BHB Jan 29 12:02:55.290550 kernel: CPU features: detected: ARM erratum 1742098 Jan 29 12:02:55.290568 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 29 12:02:55.290593 kernel: alternatives: applying boot alternatives Jan 29 12:02:55.290613 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 29 12:02:55.290633 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 12:02:55.290651 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 12:02:55.290669 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 12:02:55.290686 kernel: Fallback order for Node 0: 0 Jan 29 12:02:55.290704 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jan 29 12:02:55.290721 kernel: Policy zone: Normal Jan 29 12:02:55.290739 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 12:02:55.290756 kernel: software IO TLB: area num 2. Jan 29 12:02:55.290774 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jan 29 12:02:55.290797 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved) Jan 29 12:02:55.290815 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 12:02:55.290832 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 12:02:55.290851 kernel: rcu: RCU event tracing is enabled. Jan 29 12:02:55.290870 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 12:02:55.290887 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 12:02:55.290906 kernel: Tracing variant of Tasks RCU enabled. Jan 29 12:02:55.290923 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 29 12:02:55.290941 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 12:02:55.290959 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 29 12:02:55.290976 kernel: GICv3: 96 SPIs implemented Jan 29 12:02:55.290998 kernel: GICv3: 0 Extended SPIs implemented Jan 29 12:02:55.291016 kernel: Root IRQ handler: gic_handle_irq Jan 29 12:02:55.291033 kernel: GICv3: GICv3 features: 16 PPIs Jan 29 12:02:55.291051 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 29 12:02:55.291068 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 29 12:02:55.291086 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Jan 29 12:02:55.291104 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Jan 29 12:02:55.291180 kernel: GICv3: using LPI property table @0x00000004000d0000 Jan 29 12:02:55.291199 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 29 12:02:55.291217 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Jan 29 12:02:55.291234 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 12:02:55.291252 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 29 12:02:55.291278 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 29 12:02:55.291297 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 29 12:02:55.291315 kernel: Console: colour dummy device 80x25 Jan 29 12:02:55.291333 kernel: printk: console [tty1] enabled Jan 29 12:02:55.291351 kernel: ACPI: Core revision 20230628 Jan 29 12:02:55.291370 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 29 12:02:55.291388 kernel: pid_max: default: 32768 minimum: 301 Jan 29 12:02:55.291406 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 12:02:55.291424 kernel: landlock: Up and running. Jan 29 12:02:55.291447 kernel: SELinux: Initializing. Jan 29 12:02:55.291466 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 12:02:55.291484 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 12:02:55.291502 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:02:55.291520 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:02:55.291539 kernel: rcu: Hierarchical SRCU implementation. Jan 29 12:02:55.291557 kernel: rcu: Max phase no-delay instances is 400. Jan 29 12:02:55.291575 kernel: Platform MSI: ITS@0x10080000 domain created Jan 29 12:02:55.291593 kernel: PCI/MSI: ITS@0x10080000 domain created Jan 29 12:02:55.291615 kernel: Remapping and enabling EFI services. Jan 29 12:02:55.291634 kernel: smp: Bringing up secondary CPUs ... Jan 29 12:02:55.291651 kernel: Detected PIPT I-cache on CPU1 Jan 29 12:02:55.291670 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 29 12:02:55.291688 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Jan 29 12:02:55.291706 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 29 12:02:55.291724 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 12:02:55.291742 kernel: SMP: Total of 2 processors activated. 
Jan 29 12:02:55.291760 kernel: CPU features: detected: 32-bit EL0 Support Jan 29 12:02:55.291783 kernel: CPU features: detected: 32-bit EL1 Support Jan 29 12:02:55.291803 kernel: CPU features: detected: CRC32 instructions Jan 29 12:02:55.291822 kernel: CPU: All CPU(s) started at EL1 Jan 29 12:02:55.291856 kernel: alternatives: applying system-wide alternatives Jan 29 12:02:55.291881 kernel: devtmpfs: initialized Jan 29 12:02:55.291926 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 12:02:55.291948 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 12:02:55.291967 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 12:02:55.291987 kernel: SMBIOS 3.0.0 present. Jan 29 12:02:55.292007 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 29 12:02:55.292037 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 12:02:55.292058 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 29 12:02:55.292079 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 29 12:02:55.292098 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 29 12:02:55.292163 kernel: audit: initializing netlink subsys (disabled) Jan 29 12:02:55.292190 kernel: audit: type=2000 audit(0.334:1): state=initialized audit_enabled=0 res=1 Jan 29 12:02:55.292211 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 12:02:55.292242 kernel: cpuidle: using governor menu Jan 29 12:02:55.292265 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 29 12:02:55.292285 kernel: ASID allocator initialised with 65536 entries Jan 29 12:02:55.292306 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 12:02:55.292328 kernel: Serial: AMBA PL011 UART driver Jan 29 12:02:55.292349 kernel: Modules: 17520 pages in range for non-PLT usage Jan 29 12:02:55.292371 kernel: Modules: 509040 pages in range for PLT usage Jan 29 12:02:55.292391 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 12:02:55.292412 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 12:02:55.292442 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 29 12:02:55.292463 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 29 12:02:55.292485 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 12:02:55.292506 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 12:02:55.292527 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 29 12:02:55.292549 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 29 12:02:55.292570 kernel: ACPI: Added _OSI(Module Device) Jan 29 12:02:55.292591 kernel: ACPI: Added _OSI(Processor Device) Jan 29 12:02:55.292612 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 12:02:55.292642 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 12:02:55.292664 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 12:02:55.292685 kernel: ACPI: Interpreter enabled Jan 29 12:02:55.292705 kernel: ACPI: Using GIC for interrupt routing Jan 29 12:02:55.292726 kernel: ACPI: MCFG table detected, 1 entries Jan 29 12:02:55.292747 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jan 29 12:02:55.293182 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 12:02:55.293451 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Jan 29 12:02:55.293673 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 29 12:02:55.293882 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jan 29 12:02:55.294089 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jan 29 12:02:55.294146 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 29 12:02:55.294170 kernel: acpiphp: Slot [1] registered Jan 29 12:02:55.294193 kernel: acpiphp: Slot [2] registered Jan 29 12:02:55.294213 kernel: acpiphp: Slot [3] registered Jan 29 12:02:55.294234 kernel: acpiphp: Slot [4] registered Jan 29 12:02:55.294266 kernel: acpiphp: Slot [5] registered Jan 29 12:02:55.294287 kernel: acpiphp: Slot [6] registered Jan 29 12:02:55.294307 kernel: acpiphp: Slot [7] registered Jan 29 12:02:55.294326 kernel: acpiphp: Slot [8] registered Jan 29 12:02:55.294346 kernel: acpiphp: Slot [9] registered Jan 29 12:02:55.294365 kernel: acpiphp: Slot [10] registered Jan 29 12:02:55.294384 kernel: acpiphp: Slot [11] registered Jan 29 12:02:55.294404 kernel: acpiphp: Slot [12] registered Jan 29 12:02:55.294424 kernel: acpiphp: Slot [13] registered Jan 29 12:02:55.294446 kernel: acpiphp: Slot [14] registered Jan 29 12:02:55.294473 kernel: acpiphp: Slot [15] registered Jan 29 12:02:55.294493 kernel: acpiphp: Slot [16] registered Jan 29 12:02:55.294513 kernel: acpiphp: Slot [17] registered Jan 29 12:02:55.294533 kernel: acpiphp: Slot [18] registered Jan 29 12:02:55.294553 kernel: acpiphp: Slot [19] registered Jan 29 12:02:55.294573 kernel: acpiphp: Slot [20] registered Jan 29 12:02:55.294593 kernel: acpiphp: Slot [21] registered Jan 29 12:02:55.294613 kernel: acpiphp: Slot [22] registered Jan 29 12:02:55.294633 kernel: acpiphp: Slot [23] registered Jan 29 12:02:55.294660 kernel: acpiphp: Slot [24] registered Jan 29 12:02:55.294680 kernel: acpiphp: Slot [25] registered Jan 29 12:02:55.294700 kernel: acpiphp: Slot [26] registered Jan 29 12:02:55.294720 kernel: acpiphp: Slot [27] registered Jan 29 12:02:55.294741 kernel: acpiphp: Slot [28] registered Jan 29 12:02:55.294762 kernel: acpiphp: Slot [29] registered Jan 29 12:02:55.294783 kernel: acpiphp: Slot [30] registered Jan 29 12:02:55.294804 kernel: acpiphp: Slot [31] registered Jan 29 12:02:55.294824 kernel: PCI host bridge to bus 0000:00 Jan 29 12:02:55.295233 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 29 12:02:55.295518 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 29 12:02:55.295713 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 29 12:02:55.295932 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jan 29 12:02:55.296252 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jan 29 12:02:55.296516 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jan 29 12:02:55.296817 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jan 29 12:02:55.297215 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 29 12:02:55.297563 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jan 29 12:02:55.297839 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 12:02:55.298074 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 29 12:02:55.298330 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jan 29 12:02:55.298549 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Jan 29 12:02:55.298780 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Jan 29 12:02:55.299046 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 12:02:55.299414 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jan 29 12:02:55.299728 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jan 29 12:02:55.300076 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jan 29 12:02:55.300491 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jan 29 12:02:55.300708 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jan 29 12:02:55.300912 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 29 12:02:55.301103 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 29 12:02:55.301326 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 29 12:02:55.301355 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 29 12:02:55.301375 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 29 12:02:55.301397 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 29 12:02:55.301418 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 29 12:02:55.301438 kernel: iommu: Default domain type: Translated Jan 29 12:02:55.301459 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 29 12:02:55.301522 kernel: efivars: Registered efivars operations Jan 29 12:02:55.301568 kernel: vgaarb: loaded Jan 29 12:02:55.301638 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 29 12:02:55.301682 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 12:02:55.301705 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 12:02:55.301725 kernel: pnp: PnP ACPI init Jan 29 12:02:55.302090 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 29 12:02:55.302273 kernel: pnp: PnP ACPI: found 1 devices Jan 29 12:02:55.302312 kernel: NET: Registered PF_INET protocol family Jan 29 12:02:55.302335 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 12:02:55.302356 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 12:02:55.302377 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 12:02:55.302398 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 12:02:55.302419 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 12:02:55.302440 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 12:02:55.302460 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 12:02:55.302481 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 12:02:55.302513 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 12:02:55.302534 kernel: PCI: CLS 0 bytes, default 64 Jan 29 12:02:55.302557 kernel: kvm [1]: HYP mode not available Jan 29 12:02:55.302578 kernel: Initialise system trusted keyrings Jan 29 12:02:55.302599 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 12:02:55.302620 kernel: Key type asymmetric registered Jan 29 12:02:55.302642 kernel: Asymmetric key parser 'x509' registered Jan 29 12:02:55.302664 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 29 12:02:55.302685 kernel: io scheduler mq-deadline registered Jan 29 
12:02:55.302716 kernel: io scheduler kyber registered Jan 29 12:02:55.302738 kernel: io scheduler bfq registered Jan 29 12:02:55.303160 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 29 12:02:55.303199 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 29 12:02:55.303222 kernel: ACPI: button: Power Button [PWRB] Jan 29 12:02:55.303241 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 29 12:02:55.303261 kernel: ACPI: button: Sleep Button [SLPB] Jan 29 12:02:55.303280 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 12:02:55.303309 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 29 12:02:55.303552 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 29 12:02:55.303583 kernel: printk: console [ttyS0] disabled Jan 29 12:02:55.303602 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 29 12:02:55.303622 kernel: printk: console [ttyS0] enabled Jan 29 12:02:55.303642 kernel: printk: bootconsole [uart0] disabled Jan 29 12:02:55.303661 kernel: thunder_xcv, ver 1.0 Jan 29 12:02:55.303680 kernel: thunder_bgx, ver 1.0 Jan 29 12:02:55.303699 kernel: nicpf, ver 1.0 Jan 29 12:02:55.303726 kernel: nicvf, ver 1.0 Jan 29 12:02:55.304008 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 29 12:02:55.307025 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T12:02:54 UTC (1738152174) Jan 29 12:02:55.307087 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 12:02:55.307144 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jan 29 12:02:55.307180 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 29 12:02:55.307204 kernel: watchdog: Hard watchdog permanently disabled Jan 29 12:02:55.307225 kernel: NET: Registered PF_INET6 protocol family Jan 29 12:02:55.307261 kernel: Segment Routing with IPv6 Jan 29 12:02:55.307284 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 12:02:55.307306 kernel: NET: Registered PF_PACKET protocol family Jan 29 12:02:55.307329 kernel: Key type dns_resolver registered Jan 29 12:02:55.307351 kernel: registered taskstats version 1 Jan 29 12:02:55.307373 kernel: Loading compiled-in X.509 certificates Jan 29 12:02:55.307395 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415' Jan 29 12:02:55.307417 kernel: Key type .fscrypt registered Jan 29 12:02:55.307439 kernel: Key type fscrypt-provisioning registered Jan 29 12:02:55.307471 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 29 12:02:55.307494 kernel: ima: Allocated hash algorithm: sha1 Jan 29 12:02:55.307516 kernel: ima: No architecture policies found Jan 29 12:02:55.307538 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 29 12:02:55.307556 kernel: clk: Disabling unused clocks Jan 29 12:02:55.307575 kernel: Freeing unused kernel memory: 39360K Jan 29 12:02:55.307594 kernel: Run /init as init process Jan 29 12:02:55.307613 kernel: with arguments: Jan 29 12:02:55.307631 kernel: /init Jan 29 12:02:55.307649 kernel: with environment: Jan 29 12:02:55.307673 kernel: HOME=/ Jan 29 12:02:55.307693 kernel: TERM=linux Jan 29 12:02:55.307711 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 12:02:55.307735 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:02:55.307760 systemd[1]: Detected virtualization amazon. Jan 29 12:02:55.307780 systemd[1]: Detected architecture arm64. Jan 29 12:02:55.307800 systemd[1]: Running in initrd. Jan 29 12:02:55.307826 systemd[1]: No hostname configured, using default hostname. Jan 29 12:02:55.307846 systemd[1]: Hostname set to . Jan 29 12:02:55.307867 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:02:55.307906 systemd[1]: Queued start job for default target initrd.target. Jan 29 12:02:55.307933 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:02:55.307955 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:02:55.307977 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 12:02:55.307998 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:02:55.308025 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 12:02:55.308047 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 12:02:55.308071 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 12:02:55.308092 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 12:02:55.308147 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:02:55.308172 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:02:55.308193 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:02:55.308221 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:02:55.308242 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:02:55.308262 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:02:55.308282 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:02:55.308304 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:02:55.308324 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:02:55.308345 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:02:55.308365 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 29 12:02:55.308386 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:02:55.308411 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:02:55.308432 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:02:55.308453 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 12:02:55.308474 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:02:55.308494 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 12:02:55.308515 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 12:02:55.308535 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:02:55.308556 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:02:55.308581 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:02:55.308602 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 12:02:55.308623 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:02:55.308643 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 12:02:55.308665 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:02:55.308735 systemd-journald[251]: Collecting audit messages is disabled. Jan 29 12:02:55.308781 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 12:02:55.308802 systemd-journald[251]: Journal started Jan 29 12:02:55.308851 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2122cf94b5ee39c329140534f6581a) is 8.0M, max 75.3M, 67.3M free. Jan 29 12:02:55.313730 kernel: Bridge firewalling registered Jan 29 12:02:55.265662 systemd-modules-load[252]: Inserted module 'overlay' Jan 29 12:02:55.312668 systemd-modules-load[252]: Inserted module 'br_netfilter' Jan 29 12:02:55.327461 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:02:55.330213 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:02:55.333543 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:02:55.338219 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:02:55.353682 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:02:55.360513 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:02:55.376630 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:02:55.390616 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:02:55.403224 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:02:55.427244 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:02:55.444511 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:02:55.460872 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 12:02:55.465936 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:02:55.483626 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 29 12:02:55.500947 dracut-cmdline[285]: dracut-dracut-053 Jan 29 12:02:55.508409 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 29 12:02:55.580016 systemd-resolved[290]: Positive Trust Anchors: Jan 29 12:02:55.582376 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:02:55.585210 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:02:55.704148 kernel: SCSI subsystem initialized Jan 29 12:02:55.710147 kernel: Loading iSCSI transport class v2.0-870. Jan 29 12:02:55.723158 kernel: iscsi: registered transport (tcp) Jan 29 12:02:55.746148 kernel: iscsi: registered transport (qla4xxx) Jan 29 12:02:55.746220 kernel: QLogic iSCSI HBA Driver Jan 29 12:02:55.821154 kernel: random: crng init done Jan 29 12:02:55.821583 systemd-resolved[290]: Defaulting to hostname 'linux'. Jan 29 12:02:55.825320 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:02:55.829427 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:02:55.854742 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 12:02:55.865509 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 12:02:55.903915 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 12:02:55.903999 kernel: device-mapper: uevent: version 1.0.3 Jan 29 12:02:55.905716 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 12:02:55.977187 kernel: raid6: neonx8 gen() 6672 MB/s Jan 29 12:02:55.994183 kernel: raid6: neonx4 gen() 6528 MB/s Jan 29 12:02:56.011155 kernel: raid6: neonx2 gen() 5452 MB/s Jan 29 12:02:56.028144 kernel: raid6: neonx1 gen() 3953 MB/s Jan 29 12:02:56.045142 kernel: raid6: int64x8 gen() 3815 MB/s Jan 29 12:02:56.062145 kernel: raid6: int64x4 gen() 3708 MB/s Jan 29 12:02:56.079141 kernel: raid6: int64x2 gen() 3599 MB/s Jan 29 12:02:56.096902 kernel: raid6: int64x1 gen() 2750 MB/s Jan 29 12:02:56.096947 kernel: raid6: using algorithm neonx8 gen() 6672 MB/s Jan 29 12:02:56.114879 kernel: raid6: .... 
xor() 4774 MB/s, rmw enabled Jan 29 12:02:56.114925 kernel: raid6: using neon recovery algorithm Jan 29 12:02:56.123968 kernel: xor: measuring software checksum speed Jan 29 12:02:56.124029 kernel: 8regs : 10965 MB/sec Jan 29 12:02:56.125141 kernel: 32regs : 11099 MB/sec Jan 29 12:02:56.127147 kernel: arm64_neon : 9031 MB/sec Jan 29 12:02:56.127183 kernel: xor: using function: 32regs (11099 MB/sec) Jan 29 12:02:56.210162 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 12:02:56.234088 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:02:56.251651 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:02:56.286283 systemd-udevd[469]: Using default interface naming scheme 'v255'. Jan 29 12:02:56.297542 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:02:56.326871 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 12:02:56.371437 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Jan 29 12:02:56.448300 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:02:56.457433 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:02:56.580213 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:02:56.594774 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 12:02:56.648308 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 12:02:56.653650 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:02:56.656025 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:02:56.658238 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:02:56.683432 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 12:02:56.732988 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:02:56.805095 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 29 12:02:56.805648 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 29 12:02:56.838359 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 29 12:02:56.838638 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 29 12:02:56.838872 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:28:f3:22:e9:39 Jan 29 12:02:56.825267 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:02:56.825509 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:02:56.828282 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:02:56.830440 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:02:56.830748 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:02:56.833316 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:02:56.869875 (udev-worker)[516]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:02:56.872012 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 29 12:02:56.892183 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 29 12:02:56.894473 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 29 12:02:56.906175 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 29 12:02:56.914163 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 12:02:56.914311 kernel: GPT:9289727 != 16777215 Jan 29 12:02:56.914352 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 12:02:56.914382 kernel: GPT:9289727 != 16777215 Jan 29 12:02:56.916068 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 12:02:56.916848 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 12:02:56.928717 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:02:56.941635 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:02:56.997468 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:02:57.044213 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 29 12:02:57.084160 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (529) Jan 29 12:02:57.145209 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 29 12:02:57.165690 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (519) Jan 29 12:02:57.166064 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 12:02:57.220467 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 29 12:02:57.222869 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 29 12:02:57.237439 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 12:02:57.262763 disk-uuid[659]: Primary Header is updated. Jan 29 12:02:57.262763 disk-uuid[659]: Secondary Entries is updated. Jan 29 12:02:57.262763 disk-uuid[659]: Secondary Header is updated. Jan 29 12:02:57.276172 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 12:02:57.284165 kernel: GPT:disk_guids don't match. Jan 29 12:02:57.284264 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 12:02:57.284302 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 12:02:57.295165 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 12:02:58.296168 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 12:02:58.296291 disk-uuid[660]: The operation has completed successfully. Jan 29 12:02:58.493042 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 12:02:58.493379 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 12:02:58.569574 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 12:02:58.590018 sh[1004]: Success Jan 29 12:02:58.664190 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 29 12:02:58.936016 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 12:02:58.948524 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 12:02:58.960414 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 12:02:58.992031 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08 Jan 29 12:02:58.992131 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 29 12:02:58.992174 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 12:02:58.995052 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 12:02:58.995087 kernel: BTRFS info (device dm-0): using free space tree Jan 29 12:02:59.250161 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 12:02:59.300631 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 12:02:59.302945 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 12:02:59.317492 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 12:02:59.322400 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 12:02:59.353173 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 29 12:02:59.353280 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 29 12:02:59.354456 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 12:02:59.363174 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 12:02:59.383580 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 12:02:59.386394 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 29 12:02:59.399571 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 12:02:59.411571 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 12:02:59.535231 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:02:59.561314 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:02:59.621474 systemd-networkd[1196]: lo: Link UP Jan 29 12:02:59.621511 systemd-networkd[1196]: lo: Gained carrier Jan 29 12:02:59.625983 systemd-networkd[1196]: Enumeration completed Jan 29 12:02:59.628050 systemd-networkd[1196]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:02:59.628058 systemd-networkd[1196]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:02:59.628442 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:02:59.636988 systemd-networkd[1196]: eth0: Link UP Jan 29 12:02:59.637035 systemd-networkd[1196]: eth0: Gained carrier Jan 29 12:02:59.637062 systemd-networkd[1196]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:02:59.659279 systemd[1]: Reached target network.target - Network. 
Jan 29 12:02:59.673284 systemd-networkd[1196]: eth0: DHCPv4 address 172.31.16.93/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 12:03:00.628003 ignition[1113]: Ignition 2.19.0 Jan 29 12:03:00.628031 ignition[1113]: Stage: fetch-offline Jan 29 12:03:00.628969 ignition[1113]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:00.629000 ignition[1113]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 12:03:00.629644 ignition[1113]: Ignition finished successfully Jan 29 12:03:00.635612 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:03:00.649417 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 12:03:00.682907 ignition[1205]: Ignition 2.19.0 Jan 29 12:03:00.682937 ignition[1205]: Stage: fetch Jan 29 12:03:00.684345 ignition[1205]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:00.684375 ignition[1205]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 12:03:00.684542 ignition[1205]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 12:03:00.697195 ignition[1205]: PUT result: OK Jan 29 12:03:00.700737 ignition[1205]: parsed url from cmdline: "" Jan 29 12:03:00.700770 ignition[1205]: no config URL provided Jan 29 12:03:00.700790 ignition[1205]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:03:00.700827 ignition[1205]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:03:00.700880 ignition[1205]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 12:03:00.704707 ignition[1205]: PUT result: OK Jan 29 12:03:00.706429 ignition[1205]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 29 12:03:00.712584 ignition[1205]: GET result: OK Jan 29 12:03:00.713889 ignition[1205]: parsing config with SHA512: ef1d0b57551943718f7881d9b830132f1127d0ce544c22eca6efa5bc713cb754e3bd49eb981ce8287b86bd009a8aba2c33903a769ab40c789655f74cf50507eb Jan 29 12:03:00.721791 unknown[1205]: fetched base config from "system" Jan 29 12:03:00.722516 ignition[1205]: fetch: fetch complete Jan 29 12:03:00.721817 unknown[1205]: fetched base config from "system" Jan 29 12:03:00.722530 ignition[1205]: fetch: fetch passed Jan 29 12:03:00.721830 unknown[1205]: fetched user config from "aws" Jan 29 12:03:00.722619 ignition[1205]: Ignition finished successfully Jan 29 12:03:00.728162 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 12:03:00.744448 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 12:03:00.782221 ignition[1211]: Ignition 2.19.0 Jan 29 12:03:00.782263 ignition[1211]: Stage: kargs Jan 29 12:03:00.784338 ignition[1211]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:00.784519 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 12:03:00.784757 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 12:03:00.791910 ignition[1211]: PUT result: OK Jan 29 12:03:00.797038 ignition[1211]: kargs: kargs passed Jan 29 12:03:00.797811 ignition[1211]: Ignition finished successfully Jan 29 12:03:00.802823 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 12:03:00.813437 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 29 12:03:00.843323 ignition[1217]: Ignition 2.19.0 Jan 29 12:03:00.843345 ignition[1217]: Stage: disks Jan 29 12:03:00.843975 ignition[1217]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:00.844000 ignition[1217]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 12:03:00.844824 ignition[1217]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 12:03:00.849183 ignition[1217]: PUT result: OK Jan 29 12:03:00.858478 ignition[1217]: disks: disks passed Jan 29 12:03:00.858653 ignition[1217]: Ignition finished successfully Jan 29 12:03:00.863379 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 12:03:00.870187 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 12:03:00.874935 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:03:00.877225 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:03:00.879020 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:03:00.880872 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:03:00.898996 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 12:03:00.936311 systemd-networkd[1196]: eth0: Gained IPv6LL Jan 29 12:03:00.941990 systemd-fsck[1226]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 12:03:00.948062 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 12:03:00.960363 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 12:03:01.051586 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none. Jan 29 12:03:01.052752 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 12:03:01.056520 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 12:03:01.083392 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:03:01.090339 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 12:03:01.094283 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 12:03:01.094428 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 12:03:01.094501 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:03:01.116156 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1245) Jan 29 12:03:01.121387 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 29 12:03:01.121462 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 29 12:03:01.121491 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 12:03:01.123356 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 12:03:01.133451 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 12:03:01.150174 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 12:03:01.153143 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 12:03:02.607850 initrd-setup-root[1269]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 12:03:02.770947 initrd-setup-root[1276]: cut: /sysroot/etc/group: No such file or directory Jan 29 12:03:02.780509 initrd-setup-root[1283]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 12:03:02.789810 initrd-setup-root[1290]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 12:03:04.787193 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 12:03:04.797341 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 12:03:04.808428 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 12:03:04.829745 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 12:03:04.834172 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 29 12:03:04.872163 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 12:03:04.893356 ignition[1358]: INFO : Ignition 2.19.0 Jan 29 12:03:04.895456 ignition[1358]: INFO : Stage: mount Jan 29 12:03:04.897537 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:04.897537 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 12:03:04.901892 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 12:03:04.904592 ignition[1358]: INFO : PUT result: OK Jan 29 12:03:04.910148 ignition[1358]: INFO : mount: mount passed Jan 29 12:03:04.911845 ignition[1358]: INFO : Ignition finished successfully Jan 29 12:03:04.917254 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 12:03:04.930361 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 12:03:04.965790 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:03:05.002166 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1371) Jan 29 12:03:05.005896 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 29 12:03:05.005975 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 29 12:03:05.006004 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 12:03:05.012160 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 12:03:05.016221 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 12:03:05.060377 ignition[1387]: INFO : Ignition 2.19.0 Jan 29 12:03:05.060377 ignition[1387]: INFO : Stage: files Jan 29 12:03:05.063620 ignition[1387]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:05.063620 ignition[1387]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 12:03:05.063620 ignition[1387]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 12:03:05.070029 ignition[1387]: INFO : PUT result: OK Jan 29 12:03:05.074064 ignition[1387]: DEBUG : files: compiled without relabeling support, skipping Jan 29 12:03:05.194834 ignition[1387]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 12:03:05.194834 ignition[1387]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 12:03:05.561056 ignition[1387]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 12:03:05.564010 ignition[1387]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 12:03:05.566856 unknown[1387]: wrote ssh authorized keys file for user: core Jan 29 12:03:05.569165 ignition[1387]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 12:03:05.572901 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 29 12:03:05.576481 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 29 12:03:05.668889 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 12:03:05.827507 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 29 12:03:05.827507 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 12:03:05.837232 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 12:03:05.837232 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:03:05.837232 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:03:05.837232 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:03:05.837232 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:03:05.837232 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:03:05.837232 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:03:05.837232 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:03:05.837232 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:03:05.837232 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 12:03:05.837232 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 12:03:05.837232 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 12:03:05.837232 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Jan 29 12:03:06.322551 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 12:03:06.717555 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 12:03:06.717555 ignition[1387]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 29 12:03:06.725490 ignition[1387]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:03:06.725490 ignition[1387]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:03:06.725490 ignition[1387]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 29 12:03:06.725490 ignition[1387]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 29 12:03:06.725490 ignition[1387]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 12:03:06.725490 ignition[1387]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:03:06.725490 ignition[1387]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:03:06.725490 ignition[1387]: INFO : files: files passed Jan 29 12:03:06.725490 ignition[1387]: INFO : Ignition finished successfully Jan 29 12:03:06.741101 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 12:03:06.762150 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 12:03:06.770186 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 12:03:06.777735 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 12:03:06.777994 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 12:03:06.807406 initrd-setup-root-after-ignition[1416]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:03:06.807406 initrd-setup-root-after-ignition[1416]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:03:06.814681 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:03:06.820177 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:03:06.825323 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 12:03:06.835402 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Jan 29 12:03:06.897667 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 12:03:06.899452 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 12:03:06.902618 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 12:03:06.904660 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 12:03:06.912648 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 12:03:06.921490 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 12:03:06.954596 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:03:06.966579 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 12:03:06.991289 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:03:06.995546 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:03:06.999948 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 12:03:07.001845 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 12:03:07.002202 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:03:07.007796 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 12:03:07.013239 systemd[1]: Stopped target basic.target - Basic System. Jan 29 12:03:07.015613 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 12:03:07.021255 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:03:07.024640 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 12:03:07.029731 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 12:03:07.032605 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:03:07.039044 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 12:03:07.042586 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 12:03:07.048563 systemd[1]: Stopped target swap.target - Swaps. Jan 29 12:03:07.050333 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 12:03:07.050587 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:03:07.058776 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:03:07.061790 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:03:07.067732 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 12:03:07.071315 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:03:07.076211 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 12:03:07.076474 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 12:03:07.082769 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 12:03:07.083129 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:03:07.087863 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 12:03:07.088148 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 12:03:07.103586 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 29 12:03:07.106680 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 12:03:07.107798 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:03:07.127746 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 12:03:07.136297 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 12:03:07.136870 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:03:07.138372 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 12:03:07.138662 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:03:07.154619 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 12:03:07.155030 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 12:03:07.198306 ignition[1441]: INFO : Ignition 2.19.0 Jan 29 12:03:07.201977 ignition[1441]: INFO : Stage: umount Jan 29 12:03:07.201977 ignition[1441]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:07.201977 ignition[1441]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 12:03:07.201977 ignition[1441]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 12:03:07.199235 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 12:03:07.213149 ignition[1441]: INFO : PUT result: OK Jan 29 12:03:07.220580 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 12:03:07.221259 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 12:03:07.224318 ignition[1441]: INFO : umount: umount passed Jan 29 12:03:07.224318 ignition[1441]: INFO : Ignition finished successfully Jan 29 12:03:07.231497 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 12:03:07.232692 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 12:03:07.235672 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 12:03:07.235775 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 12:03:07.238673 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 12:03:07.239361 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 12:03:07.247582 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 12:03:07.247679 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 12:03:07.250475 systemd[1]: Stopped target network.target - Network. Jan 29 12:03:07.253563 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 12:03:07.253682 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:03:07.256195 systemd[1]: Stopped target paths.target - Path Units. Jan 29 12:03:07.259038 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 12:03:07.262476 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:03:07.262595 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 12:03:07.268170 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 12:03:07.271411 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 12:03:07.271489 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:03:07.275144 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 12:03:07.275226 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:03:07.279536 systemd[1]: ignition-setup.service: Deactivated successfully. 
Jan 29 12:03:07.279620 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 12:03:07.281508 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 12:03:07.281594 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 12:03:07.285068 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 12:03:07.285181 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 12:03:07.289025 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 12:03:07.292580 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 12:03:07.295171 systemd-networkd[1196]: eth0: DHCPv6 lease lost Jan 29 12:03:07.324062 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 12:03:07.324326 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 12:03:07.329540 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 12:03:07.329743 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:03:07.348284 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 12:03:07.353457 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 12:03:07.353578 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:03:07.356263 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:03:07.359189 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 12:03:07.359530 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 12:03:07.385884 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 12:03:07.386193 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:03:07.405920 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 12:03:07.406668 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 12:03:07.409339 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 12:03:07.409412 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:03:07.412590 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 12:03:07.413304 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:03:07.417690 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 12:03:07.417883 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 12:03:07.431969 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:03:07.432069 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:03:07.436470 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 12:03:07.449252 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:03:07.449391 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:03:07.449591 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 12:03:07.449677 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 12:03:07.450283 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 12:03:07.450360 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 29 12:03:07.450965 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 12:03:07.451036 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:03:07.457211 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:03:07.457319 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:03:07.476772 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 12:03:07.478175 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 12:03:07.497795 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 12:03:07.498240 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 12:03:07.505838 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 12:03:07.525591 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 12:03:07.544333 systemd[1]: Switching root. Jan 29 12:03:07.575301 systemd-journald[251]: Journal stopped Jan 29 12:03:13.925084 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Jan 29 12:03:13.925272 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 12:03:13.925321 kernel: SELinux: policy capability open_perms=1 Jan 29 12:03:13.925355 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 12:03:13.925391 kernel: SELinux: policy capability always_check_network=0 Jan 29 12:03:13.925428 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 12:03:13.925464 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 12:03:13.925495 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 12:03:13.925528 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 12:03:13.925568 kernel: audit: type=1403 audit(1738152189.963:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 12:03:13.925618 systemd[1]: Successfully loaded SELinux policy in 172.534ms. Jan 29 12:03:13.925680 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.313ms. Jan 29 12:03:13.925724 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:03:13.925762 systemd[1]: Detected virtualization amazon. Jan 29 12:03:13.925798 systemd[1]: Detected architecture arm64. Jan 29 12:03:13.925829 systemd[1]: Detected first boot. Jan 29 12:03:13.925864 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:03:13.925900 zram_generator::config[1483]: No configuration found. Jan 29 12:03:13.925947 systemd[1]: Populated /etc with preset unit settings. Jan 29 12:03:13.925983 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 12:03:13.926018 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 12:03:13.926051 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 12:03:13.926098 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 12:03:13.926212 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 12:03:13.926247 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 12:03:13.926280 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Jan 29 12:03:13.926317 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 12:03:13.926351 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 12:03:13.926383 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 12:03:13.926415 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 12:03:13.926449 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:03:13.926480 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:03:13.926513 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 12:03:13.926546 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 12:03:13.926582 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 12:03:13.926619 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:03:13.926654 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 12:03:13.926686 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:03:13.926718 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 12:03:13.929263 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 12:03:13.929321 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 12:03:13.929353 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 12:03:13.929394 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:03:13.929427 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:03:13.929461 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:03:13.929493 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:03:13.929523 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 12:03:13.929553 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 12:03:13.929586 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:03:13.929616 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:03:13.929647 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:03:13.929682 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 12:03:13.929719 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 12:03:13.929752 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 12:03:13.929784 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 12:03:13.929819 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 12:03:13.929849 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 12:03:13.929880 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 12:03:13.929915 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 12:03:13.929951 systemd[1]: Reached target machines.target - Containers. 
Jan 29 12:03:13.929988 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 12:03:13.930021 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:03:13.930056 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:03:13.930095 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 12:03:13.932910 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:03:13.932964 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:03:13.932998 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:03:13.933029 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 12:03:13.933060 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:03:13.933100 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 12:03:13.933539 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 12:03:13.933645 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 12:03:13.933684 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 12:03:13.933717 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 12:03:13.933752 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:03:13.933784 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:03:13.933820 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 12:03:13.933862 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 12:03:13.933895 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:03:13.933931 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 12:03:13.933962 systemd[1]: Stopped verity-setup.service. Jan 29 12:03:13.933992 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 12:03:13.934023 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 12:03:13.934058 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 12:03:13.934093 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 12:03:13.934255 systemd-journald[1561]: Collecting audit messages is disabled. Jan 29 12:03:13.934338 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 12:03:13.934369 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 12:03:13.934400 systemd-journald[1561]: Journal started Jan 29 12:03:13.934453 systemd-journald[1561]: Runtime Journal (/run/log/journal/ec2122cf94b5ee39c329140534f6581a) is 8.0M, max 75.3M, 67.3M free. Jan 29 12:03:12.845477 systemd[1]: Queued start job for default target multi-user.target. Jan 29 12:03:13.300693 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 29 12:03:13.301479 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 12:03:13.940477 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:03:13.947093 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 29 12:03:13.951432 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:03:13.951785 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:03:13.954978 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:03:13.955265 kernel: loop: module loaded Jan 29 12:03:13.955402 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:03:13.958465 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:03:13.958880 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:03:13.962268 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 12:03:13.974897 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 12:03:13.996809 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 12:03:14.001527 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 12:03:14.001617 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:03:14.007713 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 12:03:14.019627 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 12:03:14.026503 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 12:03:14.029541 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:03:14.049668 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 12:03:14.057615 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 12:03:14.062454 kernel: ACPI: bus type drm_connector registered Jan 29 12:03:14.061766 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:03:14.071774 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 12:03:14.074073 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:03:14.077472 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 12:03:14.081247 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:03:14.083253 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:03:14.120240 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 12:03:14.124181 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 12:03:14.127950 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 12:03:14.151658 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 12:03:14.172495 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 12:03:14.176746 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 12:03:14.180157 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 12:03:14.183567 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 29 12:03:14.186799 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 12:03:14.218813 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 12:03:14.229961 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:03:14.235678 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 12:03:14.238580 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 12:03:14.244801 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 12:03:14.262171 kernel: fuse: init (API version 7.39) Jan 29 12:03:14.264675 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 12:03:14.267259 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 12:03:14.278321 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 12:03:14.310940 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 12:03:14.344303 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 12:03:14.358467 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:03:14.431177 kernel: loop0: detected capacity change from 0 to 114432 Jan 29 12:03:14.444261 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:03:14.453293 systemd-journald[1561]: Time spent on flushing to /var/log/journal/ec2122cf94b5ee39c329140534f6581a is 48.541ms for 917 entries. Jan 29 12:03:14.453293 systemd-journald[1561]: System Journal (/var/log/journal/ec2122cf94b5ee39c329140534f6581a) is 8.0M, max 195.6M, 187.6M free. Jan 29 12:03:14.519551 systemd-journald[1561]: Received client request to flush runtime journal. Jan 29 12:03:14.447881 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:03:14.462365 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 12:03:14.483042 udevadm[1630]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 12:03:14.486950 systemd-tmpfiles[1625]: ACLs are not supported, ignoring. Jan 29 12:03:14.486975 systemd-tmpfiles[1625]: ACLs are not supported, ignoring. Jan 29 12:03:14.504255 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:03:14.523168 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 12:03:15.185246 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 12:03:15.206187 kernel: loop1: detected capacity change from 0 to 52536 Jan 29 12:03:15.250185 kernel: loop2: detected capacity change from 0 to 189592 Jan 29 12:03:15.428226 kernel: loop3: detected capacity change from 0 to 114328 Jan 29 12:03:16.031148 kernel: loop4: detected capacity change from 0 to 114432 Jan 29 12:03:16.055180 kernel: loop5: detected capacity change from 0 to 52536 Jan 29 12:03:16.076162 kernel: loop6: detected capacity change from 0 to 189592 Jan 29 12:03:16.082762 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 12:03:16.093514 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 29 12:03:16.109157 kernel: loop7: detected capacity change from 0 to 114328 Jan 29 12:03:16.126186 (sd-merge)[1639]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 29 12:03:16.127407 (sd-merge)[1639]: Merged extensions into '/usr'. Jan 29 12:03:16.136799 systemd[1]: Reloading requested from client PID 1597 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 12:03:16.137039 systemd[1]: Reloading... Jan 29 12:03:16.163858 systemd-udevd[1641]: Using default interface naming scheme 'v255'. Jan 29 12:03:16.280168 zram_generator::config[1670]: No configuration found. Jan 29 12:03:16.651941 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:03:16.740430 (udev-worker)[1736]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:03:16.862604 systemd[1]: Reloading finished in 724 ms. Jan 29 12:03:16.913016 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:03:16.916680 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 12:03:16.935586 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 12:03:16.952541 systemd[1]: Starting ensure-sysext.service... Jan 29 12:03:16.960480 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:03:16.966910 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:03:16.982550 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:03:16.991916 systemd[1]: Reloading requested from client PID 1747 ('systemctl') (unit ensure-sysext.service)... Jan 29 12:03:16.991938 systemd[1]: Reloading... Jan 29 12:03:17.064848 systemd-tmpfiles[1749]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 12:03:17.066407 systemd-tmpfiles[1749]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 12:03:17.071155 systemd-tmpfiles[1749]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 12:03:17.071908 systemd-tmpfiles[1749]: ACLs are not supported, ignoring. Jan 29 12:03:17.072077 systemd-tmpfiles[1749]: ACLs are not supported, ignoring. Jan 29 12:03:17.083814 systemd-tmpfiles[1749]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:03:17.083861 systemd-tmpfiles[1749]: Skipping /boot Jan 29 12:03:17.129702 systemd-tmpfiles[1749]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:03:17.132331 systemd-tmpfiles[1749]: Skipping /boot Jan 29 12:03:17.197172 zram_generator::config[1779]: No configuration found. Jan 29 12:03:17.468007 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:03:17.602179 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1734) Jan 29 12:03:17.621198 systemd[1]: Reloading finished in 628 ms. Jan 29 12:03:17.693709 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 29 12:03:17.697303 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:03:17.743469 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:03:17.758255 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 12:03:17.770586 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 12:03:17.786756 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:03:17.797935 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 12:03:17.809498 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 12:03:17.827707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:03:17.868932 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:03:17.880407 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:03:17.888377 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:03:17.892505 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:03:17.898647 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:03:17.899091 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:03:17.954160 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 12:03:17.962058 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:03:17.963278 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:03:17.986568 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:03:17.987026 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:03:18.017447 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 12:03:18.023944 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:03:18.036447 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:03:18.052789 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:03:18.062243 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:03:18.073747 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:03:18.075991 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:03:18.080555 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 12:03:18.083513 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 12:03:18.106192 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 12:03:18.140192 systemd[1]: Finished ensure-sysext.service. Jan 29 12:03:18.179631 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:03:18.180051 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:03:18.187022 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 29 12:03:18.187472 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:03:18.191932 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:03:18.198398 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:03:18.203294 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:03:18.206476 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:03:18.206949 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:03:18.209267 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 12:03:18.227294 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 12:03:18.227967 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:03:18.245376 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 12:03:18.249205 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 12:03:18.261405 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 12:03:18.293304 augenrules[1969]: No rules Jan 29 12:03:18.296051 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:03:18.323360 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 12:03:18.328329 lvm[1961]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:03:18.394315 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 12:03:18.397937 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:03:18.411033 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 12:03:18.424396 systemd-networkd[1748]: lo: Link UP Jan 29 12:03:18.424412 systemd-networkd[1748]: lo: Gained carrier Jan 29 12:03:18.427526 systemd-networkd[1748]: Enumeration completed Jan 29 12:03:18.427682 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:03:18.433353 systemd-networkd[1748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:03:18.433371 systemd-networkd[1748]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:03:18.435914 systemd-networkd[1748]: eth0: Link UP Jan 29 12:03:18.436443 systemd-networkd[1748]: eth0: Gained carrier Jan 29 12:03:18.436484 systemd-networkd[1748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:03:18.437429 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 12:03:18.446322 systemd-networkd[1748]: eth0: DHCPv4 address 172.31.16.93/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 12:03:18.453878 lvm[1979]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:03:18.485765 systemd-resolved[1909]: Positive Trust Anchors: Jan 29 12:03:18.485802 systemd-resolved[1909]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:03:18.485867 systemd-resolved[1909]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:03:18.503953 systemd-resolved[1909]: Defaulting to hostname 'linux'. Jan 29 12:03:18.508070 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:03:18.510357 systemd[1]: Reached target network.target - Network. Jan 29 12:03:18.512038 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:03:18.518946 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 12:03:19.624259 systemd-networkd[1748]: eth0: Gained IPv6LL Jan 29 12:03:19.628903 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 12:03:19.632681 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 12:03:20.720290 ldconfig[1586]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 12:03:20.728699 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 12:03:20.745433 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 12:03:20.766586 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 12:03:20.769792 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:03:20.772346 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 12:03:20.775306 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 12:03:20.779098 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 12:03:20.781771 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 12:03:20.784201 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 12:03:20.787005 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 12:03:20.787069 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:03:20.788835 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:03:20.793046 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 12:03:20.798784 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 12:03:20.807356 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 12:03:20.810736 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 12:03:20.813140 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:03:20.815065 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:03:20.817012 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jan 29 12:03:20.817087 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:03:20.826320 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 12:03:20.835891 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 12:03:20.841531 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 12:03:20.847461 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 12:03:20.860502 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 12:03:20.862606 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 12:03:20.867381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:03:20.884672 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 12:03:20.901523 systemd[1]: Started ntpd.service - Network Time Service. Jan 29 12:03:20.909271 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 12:03:20.916502 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 12:03:20.928384 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 29 12:03:20.934488 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 12:03:20.950487 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 12:03:20.977522 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 12:03:20.981340 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 12:03:20.983785 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 12:03:20.987478 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 12:03:20.995375 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 12:03:21.020780 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 12:03:21.022603 jq[1992]: false Jan 29 12:03:21.023322 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 12:03:21.029630 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 12:03:21.031395 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 12:03:21.059477 ntpd[1998]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:57 UTC 2025 (1): Starting Jan 29 12:03:21.064719 ntpd[1998]: 29 Jan 12:03:21 ntpd[1998]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:57 UTC 2025 (1): Starting Jan 29 12:03:21.064719 ntpd[1998]: 29 Jan 12:03:21 ntpd[1998]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 12:03:21.064719 ntpd[1998]: 29 Jan 12:03:21 ntpd[1998]: ---------------------------------------------------- Jan 29 12:03:21.064719 ntpd[1998]: 29 Jan 12:03:21 ntpd[1998]: ntp-4 is maintained by Network Time Foundation, Jan 29 12:03:21.064719 ntpd[1998]: 29 Jan 12:03:21 ntpd[1998]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 12:03:21.064719 ntpd[1998]: 29 Jan 12:03:21 ntpd[1998]: corporation. 
Support and training for ntp-4 are Jan 29 12:03:21.064719 ntpd[1998]: 29 Jan 12:03:21 ntpd[1998]: available at https://www.nwtime.org/support Jan 29 12:03:21.064719 ntpd[1998]: 29 Jan 12:03:21 ntpd[1998]: ---------------------------------------------------- Jan 29 12:03:21.059543 ntpd[1998]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 12:03:21.059563 ntpd[1998]: ---------------------------------------------------- Jan 29 12:03:21.059583 ntpd[1998]: ntp-4 is maintained by Network Time Foundation, Jan 29 12:03:21.059602 ntpd[1998]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 12:03:21.059622 ntpd[1998]: corporation. Support and training for ntp-4 are Jan 29 12:03:21.059641 ntpd[1998]: available at https://www.nwtime.org/support Jan 29 12:03:21.059660 ntpd[1998]: ---------------------------------------------------- Jan 29 12:03:21.086985 ntpd[1998]: proto: precision = 0.108 usec (-23) Jan 29 12:03:21.092455 ntpd[1998]: 29 Jan 12:03:21 ntpd[1998]: proto: precision = 0.108 usec (-23) Jan 29 12:03:21.092455 ntpd[1998]: 29 Jan 12:03:21 ntpd[1998]: basedate set to 2025-01-17 Jan 29 12:03:21.092455 ntpd[1998]: 29 Jan 12:03:21 ntpd[1998]: gps base set to 2025-01-19 (week 2350) Jan 29 12:03:21.087545 ntpd[1998]: basedate set to 2025-01-17 Jan 29 12:03:21.087579 ntpd[1998]: gps base set to 2025-01-19 (week 2350) Jan 29 12:03:21.104791 ntpd[1998]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 12:03:21.108959 ntpd[1998]: 29 Jan 12:03:21 ntpd[1998]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 12:03:21.114226 ntpd[1998]: 29 Jan 12:03:21 ntpd[1998]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 12:03:21.114226 ntpd[1998]: 29 Jan 12:03:21 ntpd[1998]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 12:03:21.114226 ntpd[1998]: 29 Jan 12:03:21 ntpd[1998]: Listen normally on 3 eth0 172.31.16.93:123 Jan 29 12:03:21.114226 ntpd[1998]: 29 Jan 12:03:21 ntpd[1998]: Listen normally on 4 lo [::1]:123 Jan 29 12:03:21.114226 ntpd[1998]: 29 Jan 12:03:21 ntpd[1998]: Listen normally on 5 eth0 [fe80::428:f3ff:fe22:e939%2]:123 Jan 29 12:03:21.114226 ntpd[1998]: 29 Jan 12:03:21 ntpd[1998]: Listening on routing socket on fd #22 for interface updates Jan 29 12:03:21.111298 ntpd[1998]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 12:03:21.111706 ntpd[1998]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 12:03:21.111831 ntpd[1998]: Listen normally on 3 eth0 172.31.16.93:123 Jan 29 12:03:21.111933 ntpd[1998]: Listen normally on 4 lo [::1]:123 Jan 29 12:03:21.112023 ntpd[1998]: Listen normally on 5 eth0 [fe80::428:f3ff:fe22:e939%2]:123 Jan 29 12:03:21.112090 ntpd[1998]: Listening on routing socket on fd #22 for interface updates Jan 29 12:03:21.135659 dbus-daemon[1991]: [system] SELinux support is enabled Jan 29 12:03:21.144853 jq[2007]: true Jan 29 12:03:21.136051 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 12:03:21.150236 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 12:03:21.165102 dbus-daemon[1991]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1748 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 29 12:03:21.150338 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 29 12:03:21.154030 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 12:03:21.154098 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 12:03:21.180219 (ntainerd)[2030]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 12:03:21.193273 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 29 12:03:21.202840 ntpd[1998]: 29 Jan 12:03:21 ntpd[1998]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 12:03:21.202840 ntpd[1998]: 29 Jan 12:03:21 ntpd[1998]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 12:03:21.201337 ntpd[1998]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 12:03:21.201395 ntpd[1998]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 12:03:21.243693 tar[2021]: linux-arm64/helm Jan 29 12:03:21.273641 extend-filesystems[1993]: Found loop4 Jan 29 12:03:21.273641 extend-filesystems[1993]: Found loop5 Jan 29 12:03:21.273641 extend-filesystems[1993]: Found loop6 Jan 29 12:03:21.273641 extend-filesystems[1993]: Found loop7 Jan 29 12:03:21.273641 extend-filesystems[1993]: Found nvme0n1 Jan 29 12:03:21.273641 extend-filesystems[1993]: Found nvme0n1p1 Jan 29 12:03:21.273641 extend-filesystems[1993]: Found nvme0n1p2 Jan 29 12:03:21.273641 extend-filesystems[1993]: Found nvme0n1p3 Jan 29 12:03:21.273641 extend-filesystems[1993]: Found usr Jan 29 12:03:21.273641 extend-filesystems[1993]: Found nvme0n1p4 Jan 29 12:03:21.273641 extend-filesystems[1993]: Found nvme0n1p6 Jan 29 12:03:21.273641 extend-filesystems[1993]: Found nvme0n1p7 Jan 29 12:03:21.273641 extend-filesystems[1993]: Found nvme0n1p9 Jan 29 12:03:21.356960 extend-filesystems[1993]: Checking size of /dev/nvme0n1p9 Jan 29 12:03:21.316490 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 12:03:21.319440 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 12:03:21.355537 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 12:03:21.370204 jq[2029]: true Jan 29 12:03:21.406206 update_engine[2006]: I20250129 12:03:21.400812 2006 main.cc:92] Flatcar Update Engine starting Jan 29 12:03:21.417086 coreos-metadata[1990]: Jan 29 12:03:21.413 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 12:03:21.421893 systemd[1]: Started update-engine.service - Update Engine. Jan 29 12:03:21.436838 update_engine[2006]: I20250129 12:03:21.427731 2006 update_check_scheduler.cc:74] Next update check in 10m33s Jan 29 12:03:21.436906 coreos-metadata[1990]: Jan 29 12:03:21.422 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 29 12:03:21.429598 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 29 12:03:21.449498 extend-filesystems[1993]: Resized partition /dev/nvme0n1p9 Jan 29 12:03:21.453950 coreos-metadata[1990]: Jan 29 12:03:21.451 INFO Fetch successful Jan 29 12:03:21.453950 coreos-metadata[1990]: Jan 29 12:03:21.451 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 29 12:03:21.458444 coreos-metadata[1990]: Jan 29 12:03:21.457 INFO Fetch successful Jan 29 12:03:21.458444 coreos-metadata[1990]: Jan 29 12:03:21.457 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 29 12:03:21.459137 coreos-metadata[1990]: Jan 29 12:03:21.458 INFO Fetch successful Jan 29 12:03:21.459137 coreos-metadata[1990]: Jan 29 12:03:21.458 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 29 12:03:21.475195 extend-filesystems[2054]: resize2fs 1.47.1 (20-May-2024) Jan 29 12:03:21.481961 coreos-metadata[1990]: Jan 29 12:03:21.477 INFO Fetch successful Jan 29 12:03:21.481961 coreos-metadata[1990]: Jan 29 12:03:21.477 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 29 12:03:21.481961 coreos-metadata[1990]: Jan 29 12:03:21.481 INFO Fetch failed with 404: resource not found Jan 29 12:03:21.481961 coreos-metadata[1990]: Jan 29 12:03:21.481 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 29 12:03:21.498754 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 29 12:03:21.498965 coreos-metadata[1990]: Jan 29 12:03:21.491 INFO Fetch successful Jan 29 12:03:21.498965 coreos-metadata[1990]: Jan 29 12:03:21.491 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 29 12:03:21.498965 coreos-metadata[1990]: Jan 29 12:03:21.492 INFO Fetch successful Jan 29 12:03:21.498965 coreos-metadata[1990]: Jan 29 12:03:21.492 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 29 12:03:21.500820 coreos-metadata[1990]: Jan 29 12:03:21.500 INFO Fetch successful Jan 29 12:03:21.500820 coreos-metadata[1990]: Jan 29 12:03:21.500 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 29 12:03:21.512318 coreos-metadata[1990]: Jan 29 12:03:21.506 INFO Fetch successful Jan 29 12:03:21.512318 coreos-metadata[1990]: Jan 29 12:03:21.506 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 29 12:03:21.517265 coreos-metadata[1990]: Jan 29 12:03:21.516 INFO Fetch successful Jan 29 12:03:21.558964 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 29 12:03:21.571431 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 29 12:03:21.621181 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 12:03:21.624024 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 12:03:21.672159 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 29 12:03:21.688610 extend-filesystems[2054]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 29 12:03:21.688610 extend-filesystems[2054]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 12:03:21.688610 extend-filesystems[2054]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 29 12:03:21.700024 extend-filesystems[1993]: Resized filesystem in /dev/nvme0n1p9 Jan 29 12:03:21.702789 systemd[1]: extend-filesystems.service: Deactivated successfully. 
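The resize2fs figures above are filesystem blocks of 4 KiB, so the online resize grew the root partition from roughly 2.1 GiB to about 5.7 GiB. A quick worked conversion of those numbers (copied from the log; nothing here beyond arithmetic):

```python
# Convert the resize2fs block counts logged above (4 KiB blocks) into GiB.
BLOCK_SIZE = 4096  # "(4k) blocks" per the resize2fs output

for label, blocks in [("before", 553472), ("after", 1489915)]:
    size_gib = blocks * BLOCK_SIZE / 2**30
    print(f"{label}: {blocks} blocks -> {size_gib:.2f} GiB")
# before: 553472 blocks -> 2.11 GiB
# after:  1489915 blocks -> 5.68 GiB
```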
Jan 29 12:03:21.703982 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 12:03:21.721234 systemd-logind[2005]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 12:03:21.721287 systemd-logind[2005]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 29 12:03:21.727781 systemd-logind[2005]: New seat seat0. Jan 29 12:03:21.740340 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 12:03:21.757169 bash[2082]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:03:21.765919 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 12:03:21.777763 systemd[1]: Starting sshkeys.service... Jan 29 12:03:21.832681 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 12:03:21.850098 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 12:03:21.878672 amazon-ssm-agent[2066]: Initializing new seelog logger Jan 29 12:03:21.878672 amazon-ssm-agent[2066]: New Seelog Logger Creation Complete Jan 29 12:03:21.878672 amazon-ssm-agent[2066]: 2025/01/29 12:03:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:03:21.878672 amazon-ssm-agent[2066]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:03:21.885261 amazon-ssm-agent[2066]: 2025/01/29 12:03:21 processing appconfig overrides Jan 29 12:03:21.885261 amazon-ssm-agent[2066]: 2025/01/29 12:03:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:03:21.885261 amazon-ssm-agent[2066]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:03:21.885261 amazon-ssm-agent[2066]: 2025/01/29 12:03:21 processing appconfig overrides Jan 29 12:03:21.885261 amazon-ssm-agent[2066]: 2025/01/29 12:03:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:03:21.885261 amazon-ssm-agent[2066]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:03:21.885261 amazon-ssm-agent[2066]: 2025/01/29 12:03:21 processing appconfig overrides Jan 29 12:03:21.902267 amazon-ssm-agent[2066]: 2025-01-29 12:03:21 INFO Proxy environment variables: Jan 29 12:03:21.914691 amazon-ssm-agent[2066]: 2025/01/29 12:03:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:03:21.914691 amazon-ssm-agent[2066]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 12:03:21.914691 amazon-ssm-agent[2066]: 2025/01/29 12:03:21 processing appconfig overrides Jan 29 12:03:22.012826 amazon-ssm-agent[2066]: 2025-01-29 12:03:21 INFO no_proxy: Jan 29 12:03:22.064158 locksmithd[2053]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 12:03:22.114425 amazon-ssm-agent[2066]: 2025-01-29 12:03:21 INFO https_proxy: Jan 29 12:03:22.163898 dbus-daemon[1991]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 12:03:22.164290 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 29 12:03:22.172485 dbus-daemon[1991]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2035 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 12:03:22.206926 systemd[1]: Starting polkit.service - Authorization Manager... 
Jan 29 12:03:22.213235 amazon-ssm-agent[2066]: 2025-01-29 12:03:21 INFO http_proxy: Jan 29 12:03:22.245684 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2107) Jan 29 12:03:22.297760 polkitd[2118]: Started polkitd version 121 Jan 29 12:03:22.312458 amazon-ssm-agent[2066]: 2025-01-29 12:03:21 INFO Checking if agent identity type OnPrem can be assumed Jan 29 12:03:22.348933 polkitd[2118]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 12:03:22.351415 polkitd[2118]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 12:03:22.359724 polkitd[2118]: Finished loading, compiling and executing 2 rules Jan 29 12:03:22.372392 dbus-daemon[1991]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 12:03:22.372732 systemd[1]: Started polkit.service - Authorization Manager. Jan 29 12:03:22.378034 polkitd[2118]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 12:03:22.382284 containerd[2030]: time="2025-01-29T12:03:22.379729126Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 12:03:22.413277 amazon-ssm-agent[2066]: 2025-01-29 12:03:21 INFO Checking if agent identity type EC2 can be assumed Jan 29 12:03:22.457939 coreos-metadata[2093]: Jan 29 12:03:22.456 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 12:03:22.468651 coreos-metadata[2093]: Jan 29 12:03:22.468 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 29 12:03:22.465811 systemd-hostnamed[2035]: Hostname set to (transient) Jan 29 12:03:22.466000 systemd-resolved[1909]: System hostname changed to 'ip-172-31-16-93'. Jan 29 12:03:22.474875 coreos-metadata[2093]: Jan 29 12:03:22.472 INFO Fetch successful Jan 29 12:03:22.474875 coreos-metadata[2093]: Jan 29 12:03:22.472 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 12:03:22.474875 coreos-metadata[2093]: Jan 29 12:03:22.473 INFO Fetch successful Jan 29 12:03:22.482071 unknown[2093]: wrote ssh authorized keys file for user: core Jan 29 12:03:22.512521 amazon-ssm-agent[2066]: 2025-01-29 12:03:22 INFO Agent will take identity from EC2 Jan 29 12:03:22.549223 containerd[2030]: time="2025-01-29T12:03:22.547161167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:03:22.555095 containerd[2030]: time="2025-01-29T12:03:22.555012359Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:03:22.555095 containerd[2030]: time="2025-01-29T12:03:22.555084995Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 12:03:22.555269 containerd[2030]: time="2025-01-29T12:03:22.555191411Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 12:03:22.555705 containerd[2030]: time="2025-01-29T12:03:22.555506699Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 12:03:22.555705 containerd[2030]: time="2025-01-29T12:03:22.555561599Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 29 12:03:22.555881 containerd[2030]: time="2025-01-29T12:03:22.555700007Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:03:22.555881 containerd[2030]: time="2025-01-29T12:03:22.555730043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:03:22.559573 containerd[2030]: time="2025-01-29T12:03:22.558927575Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:03:22.559688 containerd[2030]: time="2025-01-29T12:03:22.559563227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 12:03:22.559688 containerd[2030]: time="2025-01-29T12:03:22.559629923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:03:22.559837 containerd[2030]: time="2025-01-29T12:03:22.559686371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 12:03:22.561743 containerd[2030]: time="2025-01-29T12:03:22.561271067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:03:22.563910 containerd[2030]: time="2025-01-29T12:03:22.563593403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:03:22.569871 containerd[2030]: time="2025-01-29T12:03:22.565281767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:03:22.569871 containerd[2030]: time="2025-01-29T12:03:22.565341803Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 12:03:22.569871 containerd[2030]: time="2025-01-29T12:03:22.566883899Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 12:03:22.569871 containerd[2030]: time="2025-01-29T12:03:22.567052211Z" level=info msg="metadata content store policy set" policy=shared Jan 29 12:03:22.570494 update-ssh-keys[2148]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:03:22.573886 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 12:03:22.580275 systemd[1]: Finished sshkeys.service. Jan 29 12:03:22.585666 containerd[2030]: time="2025-01-29T12:03:22.585600119Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 12:03:22.587469 containerd[2030]: time="2025-01-29T12:03:22.586188047Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 12:03:22.587469 containerd[2030]: time="2025-01-29T12:03:22.586257239Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jan 29 12:03:22.587469 containerd[2030]: time="2025-01-29T12:03:22.586295627Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 12:03:22.587469 containerd[2030]: time="2025-01-29T12:03:22.586331027Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 12:03:22.587469 containerd[2030]: time="2025-01-29T12:03:22.586593455Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 12:03:22.593147 containerd[2030]: time="2025-01-29T12:03:22.591303983Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 12:03:22.593147 containerd[2030]: time="2025-01-29T12:03:22.591576659Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 12:03:22.593147 containerd[2030]: time="2025-01-29T12:03:22.591614219Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 12:03:22.593147 containerd[2030]: time="2025-01-29T12:03:22.591648731Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 12:03:22.593147 containerd[2030]: time="2025-01-29T12:03:22.591685991Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 12:03:22.593147 containerd[2030]: time="2025-01-29T12:03:22.591718031Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 12:03:22.593147 containerd[2030]: time="2025-01-29T12:03:22.591748127Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 12:03:22.593147 containerd[2030]: time="2025-01-29T12:03:22.591780071Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 12:03:22.593147 containerd[2030]: time="2025-01-29T12:03:22.591862979Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 12:03:22.593147 containerd[2030]: time="2025-01-29T12:03:22.591894707Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 12:03:22.593147 containerd[2030]: time="2025-01-29T12:03:22.591927263Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 12:03:22.593147 containerd[2030]: time="2025-01-29T12:03:22.591956519Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 12:03:22.593147 containerd[2030]: time="2025-01-29T12:03:22.591997463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 12:03:22.593147 containerd[2030]: time="2025-01-29T12:03:22.592028531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 12:03:22.593831 containerd[2030]: time="2025-01-29T12:03:22.592057571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 12:03:22.599067 containerd[2030]: time="2025-01-29T12:03:22.596028815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 29 12:03:22.599067 containerd[2030]: time="2025-01-29T12:03:22.596161727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 12:03:22.599067 containerd[2030]: time="2025-01-29T12:03:22.596213231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 12:03:22.599067 containerd[2030]: time="2025-01-29T12:03:22.596246951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 12:03:22.599067 containerd[2030]: time="2025-01-29T12:03:22.596299763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 12:03:22.599067 containerd[2030]: time="2025-01-29T12:03:22.596345783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 12:03:22.599067 containerd[2030]: time="2025-01-29T12:03:22.596419427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 12:03:22.599067 containerd[2030]: time="2025-01-29T12:03:22.596470103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 12:03:22.599067 containerd[2030]: time="2025-01-29T12:03:22.596513591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 12:03:22.599067 containerd[2030]: time="2025-01-29T12:03:22.596554391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 12:03:22.599067 containerd[2030]: time="2025-01-29T12:03:22.596602835Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 12:03:22.599067 containerd[2030]: time="2025-01-29T12:03:22.596666255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 12:03:22.599067 containerd[2030]: time="2025-01-29T12:03:22.596707295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 12:03:22.599067 containerd[2030]: time="2025-01-29T12:03:22.596738279Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 12:03:22.599758 containerd[2030]: time="2025-01-29T12:03:22.596874107Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 12:03:22.599758 containerd[2030]: time="2025-01-29T12:03:22.596925419Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 12:03:22.599758 containerd[2030]: time="2025-01-29T12:03:22.596966687Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 12:03:22.599758 containerd[2030]: time="2025-01-29T12:03:22.597014579Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 12:03:22.599758 containerd[2030]: time="2025-01-29T12:03:22.597041999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 12:03:22.599758 containerd[2030]: time="2025-01-29T12:03:22.597085739Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 29 12:03:22.599758 containerd[2030]: time="2025-01-29T12:03:22.598756715Z" level=info msg="NRI interface is disabled by configuration." Jan 29 12:03:22.599758 containerd[2030]: time="2025-01-29T12:03:22.598843451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 12:03:22.611291 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 12:03:22.612639 containerd[2030]: time="2025-01-29T12:03:22.607764491Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:03:22.612639 containerd[2030]: time="2025-01-29T12:03:22.607931663Z" level=info msg="Connect containerd service" Jan 29 12:03:22.612639 containerd[2030]: time="2025-01-29T12:03:22.608015723Z" level=info msg="using legacy CRI server" Jan 29 12:03:22.612639 containerd[2030]: time="2025-01-29T12:03:22.608035307Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:03:22.612639 containerd[2030]: time="2025-01-29T12:03:22.608258423Z" level=info msg="Get image filesystem path 
\"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:03:22.612639 containerd[2030]: time="2025-01-29T12:03:22.609485711Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:03:22.612639 containerd[2030]: time="2025-01-29T12:03:22.610032947Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 12:03:22.612639 containerd[2030]: time="2025-01-29T12:03:22.610153979Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:03:22.612639 containerd[2030]: time="2025-01-29T12:03:22.610240583Z" level=info msg="Start subscribing containerd event" Jan 29 12:03:22.612639 containerd[2030]: time="2025-01-29T12:03:22.610296587Z" level=info msg="Start recovering state" Jan 29 12:03:22.612639 containerd[2030]: time="2025-01-29T12:03:22.610418219Z" level=info msg="Start event monitor" Jan 29 12:03:22.612639 containerd[2030]: time="2025-01-29T12:03:22.610445687Z" level=info msg="Start snapshots syncer" Jan 29 12:03:22.612639 containerd[2030]: time="2025-01-29T12:03:22.610468871Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:03:22.612639 containerd[2030]: time="2025-01-29T12:03:22.610506755Z" level=info msg="Start streaming server" Jan 29 12:03:22.612639 containerd[2030]: time="2025-01-29T12:03:22.610656251Z" level=info msg="containerd successfully booted in 0.234871s" Jan 29 12:03:22.621815 amazon-ssm-agent[2066]: 2025-01-29 12:03:22 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 12:03:22.721143 amazon-ssm-agent[2066]: 2025-01-29 12:03:22 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 12:03:22.825273 amazon-ssm-agent[2066]: 2025-01-29 12:03:22 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 12:03:22.923734 amazon-ssm-agent[2066]: 2025-01-29 12:03:22 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 29 12:03:23.028163 amazon-ssm-agent[2066]: 2025-01-29 12:03:22 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 29 12:03:23.131737 amazon-ssm-agent[2066]: 2025-01-29 12:03:22 INFO [amazon-ssm-agent] Starting Core Agent Jan 29 12:03:23.237547 amazon-ssm-agent[2066]: 2025-01-29 12:03:22 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 29 12:03:23.336239 amazon-ssm-agent[2066]: 2025-01-29 12:03:22 INFO [Registrar] Starting registrar module Jan 29 12:03:23.441256 amazon-ssm-agent[2066]: 2025-01-29 12:03:22 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 29 12:03:23.796800 tar[2021]: linux-arm64/LICENSE Jan 29 12:03:23.801504 tar[2021]: linux-arm64/README.md Jan 29 12:03:23.844275 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 12:03:23.862482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:03:23.881634 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:03:24.293351 amazon-ssm-agent[2066]: 2025-01-29 12:03:24 INFO [EC2Identity] EC2 registration was successful. 
Jan 29 12:03:24.297826 sshd_keygen[2027]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 12:03:24.343322 amazon-ssm-agent[2066]: 2025-01-29 12:03:24 INFO [CredentialRefresher] credentialRefresher has started Jan 29 12:03:24.343322 amazon-ssm-agent[2066]: 2025-01-29 12:03:24 INFO [CredentialRefresher] Starting credentials refresher loop Jan 29 12:03:24.343322 amazon-ssm-agent[2066]: 2025-01-29 12:03:24 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 29 12:03:24.347101 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 12:03:24.364984 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 12:03:24.389817 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 12:03:24.390368 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 12:03:24.395295 amazon-ssm-agent[2066]: 2025-01-29 12:03:24 INFO [CredentialRefresher] Next credential rotation will be in 32.491657806666666 minutes Jan 29 12:03:24.404475 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 12:03:24.438061 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 12:03:24.449236 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 12:03:24.458731 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 12:03:24.461748 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 12:03:24.463992 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 12:03:24.467181 systemd[1]: Startup finished in 1.279s (kernel) + 15.032s (initrd) + 14.673s (userspace) = 30.985s. Jan 29 12:03:24.883441 kubelet[2235]: E0129 12:03:24.883356 2235 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:03:24.887355 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:03:24.888180 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:03:24.890081 systemd[1]: kubelet.service: Consumed 1.336s CPU time. Jan 29 12:03:25.372888 amazon-ssm-agent[2066]: 2025-01-29 12:03:25 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 29 12:03:25.473332 amazon-ssm-agent[2066]: 2025-01-29 12:03:25 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2264) started Jan 29 12:03:25.574275 amazon-ssm-agent[2066]: 2025-01-29 12:03:25 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 29 12:03:28.919093 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:03:28.929746 systemd[1]: Started sshd@0-172.31.16.93:22-139.178.89.65:59398.service - OpenSSH per-connection server daemon (139.178.89.65:59398). Jan 29 12:03:29.112506 sshd[2275]: Accepted publickey for core from 139.178.89.65 port 59398 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk Jan 29 12:03:29.117156 sshd[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:29.134720 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
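The kubelet exit above ("open /var/lib/kubelet/config.yaml: no such file or directory") is expected at this stage: on a kubeadm-provisioned node that file is only written during kubeadm init/join, and systemd keeps restarting the unit until it appears, which is why the same error recurs several times below. Purely as an illustration of the file the kubelet is waiting for, a sketch that drops a minimal KubeletConfiguration into place; the field values are assumptions, not what was actually generated on this host:

    import textwrap
    from pathlib import Path

    # Minimal KubeletConfiguration of the kind kubeadm writes to /var/lib/kubelet/config.yaml.
    KUBELET_CONFIG = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd
        containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
        staticPodPath: /etc/kubernetes/manifests
        authentication:
          anonymous:
            enabled: false
        """)

    path = Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(KUBELET_CONFIG)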
Jan 29 12:03:29.143643 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:03:29.151228 systemd-logind[2005]: New session 1 of user core. Jan 29 12:03:29.169866 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 12:03:29.176774 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:03:29.196160 (systemd)[2279]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:03:29.437642 systemd[2279]: Queued start job for default target default.target. Jan 29 12:03:29.451818 systemd[2279]: Created slice app.slice - User Application Slice. Jan 29 12:03:29.452049 systemd[2279]: Reached target paths.target - Paths. Jan 29 12:03:29.452087 systemd[2279]: Reached target timers.target - Timers. Jan 29 12:03:29.455706 systemd[2279]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 12:03:29.488369 systemd[2279]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:03:29.489092 systemd[2279]: Reached target sockets.target - Sockets. Jan 29 12:03:29.489438 systemd[2279]: Reached target basic.target - Basic System. Jan 29 12:03:29.489684 systemd[2279]: Reached target default.target - Main User Target. Jan 29 12:03:29.489760 systemd[2279]: Startup finished in 280ms. Jan 29 12:03:29.489947 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:03:29.501457 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:03:29.661723 systemd[1]: Started sshd@1-172.31.16.93:22-139.178.89.65:59414.service - OpenSSH per-connection server daemon (139.178.89.65:59414). Jan 29 12:03:29.843885 sshd[2290]: Accepted publickey for core from 139.178.89.65 port 59414 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk Jan 29 12:03:29.846579 sshd[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:29.856569 systemd-logind[2005]: New session 2 of user core. Jan 29 12:03:29.866439 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 12:03:29.995260 sshd[2290]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:30.000813 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 12:03:30.002524 systemd[1]: sshd@1-172.31.16.93:22-139.178.89.65:59414.service: Deactivated successfully. Jan 29 12:03:30.010815 systemd-logind[2005]: Session 2 logged out. Waiting for processes to exit. Jan 29 12:03:30.014566 systemd-logind[2005]: Removed session 2. Jan 29 12:03:30.045607 systemd[1]: Started sshd@2-172.31.16.93:22-139.178.89.65:59424.service - OpenSSH per-connection server daemon (139.178.89.65:59424). Jan 29 12:03:30.213887 sshd[2297]: Accepted publickey for core from 139.178.89.65 port 59424 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk Jan 29 12:03:30.216885 sshd[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:30.225987 systemd-logind[2005]: New session 3 of user core. Jan 29 12:03:30.235428 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:03:30.357066 sshd[2297]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:30.364592 systemd[1]: sshd@2-172.31.16.93:22-139.178.89.65:59424.service: Deactivated successfully. Jan 29 12:03:30.369009 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 12:03:30.373531 systemd-logind[2005]: Session 3 logged out. Waiting for processes to exit. 
Jan 29 12:03:30.392387 systemd-logind[2005]: Removed session 3. Jan 29 12:03:30.402043 systemd[1]: Started sshd@3-172.31.16.93:22-139.178.89.65:59440.service - OpenSSH per-connection server daemon (139.178.89.65:59440). Jan 29 12:03:30.579630 sshd[2304]: Accepted publickey for core from 139.178.89.65 port 59440 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk Jan 29 12:03:30.582639 sshd[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:30.592912 systemd-logind[2005]: New session 4 of user core. Jan 29 12:03:30.599507 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:03:30.730788 sshd[2304]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:30.737613 systemd[1]: sshd@3-172.31.16.93:22-139.178.89.65:59440.service: Deactivated successfully. Jan 29 12:03:30.742389 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 12:03:30.743746 systemd-logind[2005]: Session 4 logged out. Waiting for processes to exit. Jan 29 12:03:30.747870 systemd-logind[2005]: Removed session 4. Jan 29 12:03:30.775730 systemd[1]: Started sshd@4-172.31.16.93:22-139.178.89.65:59442.service - OpenSSH per-connection server daemon (139.178.89.65:59442). Jan 29 12:03:30.946913 sshd[2311]: Accepted publickey for core from 139.178.89.65 port 59442 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk Jan 29 12:03:30.950427 sshd[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:30.961549 systemd-logind[2005]: New session 5 of user core. Jan 29 12:03:30.969429 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:03:31.092638 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 12:03:31.094060 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:03:31.112715 sudo[2314]: pam_unix(sudo:session): session closed for user root Jan 29 12:03:31.136699 sshd[2311]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:31.142944 systemd-logind[2005]: Session 5 logged out. Waiting for processes to exit. Jan 29 12:03:31.143514 systemd[1]: sshd@4-172.31.16.93:22-139.178.89.65:59442.service: Deactivated successfully. Jan 29 12:03:31.147741 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 12:03:31.151943 systemd-logind[2005]: Removed session 5. Jan 29 12:03:31.174460 systemd[1]: Started sshd@5-172.31.16.93:22-139.178.89.65:39984.service - OpenSSH per-connection server daemon (139.178.89.65:39984). Jan 29 12:03:31.361445 sshd[2319]: Accepted publickey for core from 139.178.89.65 port 39984 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk Jan 29 12:03:31.364642 sshd[2319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:31.374179 systemd-logind[2005]: New session 6 of user core. Jan 29 12:03:31.383009 systemd[1]: Started session-6.scope - Session 6 of User core. 
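The first privileged command in these SSH sessions is /usr/sbin/setenforce 1, which flips SELinux from permissive into enforcing mode. The current mode can be read back from the selinuxfs mount; a small sketch, equivalent to running getenforce:

    from pathlib import Path

    def selinux_mode(selinuxfs: str = "/sys/fs/selinux") -> str:
        # /sys/fs/selinux/enforce contains "1" when enforcing, "0" when permissive.
        enforce = Path(selinuxfs) / "enforce"
        if not enforce.exists():
            return "disabled"
        return "enforcing" if enforce.read_text().strip() == "1" else "permissive"

    print(selinux_mode())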
Jan 29 12:03:31.491397 sudo[2323]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 12:03:31.492252 sudo[2323]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:03:31.500845 sudo[2323]: pam_unix(sudo:session): session closed for user root Jan 29 12:03:31.514631 sudo[2322]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 12:03:31.516102 sudo[2322]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:03:31.545733 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 12:03:31.551391 auditctl[2326]: No rules Jan 29 12:03:31.552209 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 12:03:31.552729 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 12:03:31.565242 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:03:31.613957 augenrules[2344]: No rules Jan 29 12:03:31.618239 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:03:31.620996 sudo[2322]: pam_unix(sudo:session): session closed for user root Jan 29 12:03:31.645225 sshd[2319]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:31.650491 systemd[1]: sshd@5-172.31.16.93:22-139.178.89.65:39984.service: Deactivated successfully. Jan 29 12:03:31.653782 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 12:03:31.656914 systemd-logind[2005]: Session 6 logged out. Waiting for processes to exit. Jan 29 12:03:31.660043 systemd-logind[2005]: Removed session 6. Jan 29 12:03:31.692838 systemd[1]: Started sshd@6-172.31.16.93:22-139.178.89.65:39986.service - OpenSSH per-connection server daemon (139.178.89.65:39986). Jan 29 12:03:31.855819 sshd[2352]: Accepted publickey for core from 139.178.89.65 port 39986 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk Jan 29 12:03:31.858413 sshd[2352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:31.867452 systemd-logind[2005]: New session 7 of user core. Jan 29 12:03:31.876398 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 12:03:31.982495 sudo[2355]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:03:31.983378 sudo[2355]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:03:32.468684 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 12:03:32.468961 (dockerd)[2371]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 12:03:32.842755 dockerd[2371]: time="2025-01-29T12:03:32.842252939Z" level=info msg="Starting up" Jan 29 12:03:33.001898 dockerd[2371]: time="2025-01-29T12:03:33.001497445Z" level=info msg="Loading containers: start." Jan 29 12:03:33.178176 kernel: Initializing XFRM netlink socket Jan 29 12:03:33.213401 (udev-worker)[2395]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:03:33.309227 systemd-networkd[1748]: docker0: Link UP Jan 29 12:03:33.339098 dockerd[2371]: time="2025-01-29T12:03:33.338982627Z" level=info msg="Loading containers: done." Jan 29 12:03:33.367622 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3202609363-merged.mount: Deactivated successfully. 
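Earlier in this session the remaining fragments under /etc/audit/rules.d were removed and audit-rules was restarted, which is why both auditctl and augenrules report "No rules". augenrules assembles the audit ruleset by concatenating the *.rules fragments in that directory before loading them; a toy sketch of that assembly step (an illustration only, not the real tool):

    from pathlib import Path

    rules_dir = Path("/etc/audit/rules.d")
    fragments = sorted(rules_dir.glob("*.rules"))
    merged = "\n".join(p.read_text().rstrip() for p in fragments)
    print(f"{len(fragments)} fragment(s) found; rules that would be loaded:")
    print(merged or "<no rules>")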
Jan 29 12:03:33.373168 dockerd[2371]: time="2025-01-29T12:03:33.372326427Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 12:03:33.373168 dockerd[2371]: time="2025-01-29T12:03:33.372525431Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 12:03:33.373168 dockerd[2371]: time="2025-01-29T12:03:33.372784826Z" level=info msg="Daemon has completed initialization" Jan 29 12:03:33.442210 dockerd[2371]: time="2025-01-29T12:03:33.441950374Z" level=info msg="API listen on /run/docker.sock" Jan 29 12:03:33.443093 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 12:03:34.518002 containerd[2030]: time="2025-01-29T12:03:34.517943487Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 12:03:34.907008 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 12:03:34.915739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:03:35.295808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1148361882.mount: Deactivated successfully. Jan 29 12:03:35.310573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:03:35.326220 (kubelet)[2525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:03:35.475170 kubelet[2525]: E0129 12:03:35.473452 2525 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:03:35.481209 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:03:35.481574 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
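dockerd's "API listen on /run/docker.sock" line above means the daemon can now be queried over that UNIX socket with plain HTTP. A quick reachability check sketched with the standard library (the docker CLI or an SDK would do the same job):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that connects to a local UNIX socket instead of TCP."""
        def __init__(self, path: str):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    resp = conn.getresponse()
    print(resp.status, resp.read().decode()[:200])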
Jan 29 12:03:37.309012 containerd[2030]: time="2025-01-29T12:03:37.308931619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:37.311377 containerd[2030]: time="2025-01-29T12:03:37.311309658Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618070" Jan 29 12:03:37.312175 containerd[2030]: time="2025-01-29T12:03:37.311818887Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:37.320036 containerd[2030]: time="2025-01-29T12:03:37.319922273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:37.323422 containerd[2030]: time="2025-01-29T12:03:37.322589223Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 2.804576986s" Jan 29 12:03:37.323422 containerd[2030]: time="2025-01-29T12:03:37.322667352Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\"" Jan 29 12:03:37.324703 containerd[2030]: time="2025-01-29T12:03:37.324624089Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 12:03:39.724673 containerd[2030]: time="2025-01-29T12:03:39.724585295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:39.727253 containerd[2030]: time="2025-01-29T12:03:39.726587010Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469467" Jan 29 12:03:39.729409 containerd[2030]: time="2025-01-29T12:03:39.729283177Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:39.739697 containerd[2030]: time="2025-01-29T12:03:39.739594707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:39.743827 containerd[2030]: time="2025-01-29T12:03:39.742243006Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 2.417449538s" Jan 29 12:03:39.743827 containerd[2030]: time="2025-01-29T12:03:39.742314922Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\"" Jan 29 12:03:39.743827 
containerd[2030]: time="2025-01-29T12:03:39.743448031Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 12:03:41.634625 containerd[2030]: time="2025-01-29T12:03:41.634559600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:41.636888 containerd[2030]: time="2025-01-29T12:03:41.636829621Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024217" Jan 29 12:03:41.638451 containerd[2030]: time="2025-01-29T12:03:41.638378094Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:41.645812 containerd[2030]: time="2025-01-29T12:03:41.645736065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:41.650499 containerd[2030]: time="2025-01-29T12:03:41.650413653Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 1.906900926s" Jan 29 12:03:41.650499 containerd[2030]: time="2025-01-29T12:03:41.650489623Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\"" Jan 29 12:03:41.651467 containerd[2030]: time="2025-01-29T12:03:41.651405101Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 12:03:43.000984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3499683064.mount: Deactivated successfully. 
Jan 29 12:03:43.557798 containerd[2030]: time="2025-01-29T12:03:43.557714292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:43.559228 containerd[2030]: time="2025-01-29T12:03:43.559153092Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772117" Jan 29 12:03:43.560308 containerd[2030]: time="2025-01-29T12:03:43.560232588Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:43.564478 containerd[2030]: time="2025-01-29T12:03:43.564331130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:43.565808 containerd[2030]: time="2025-01-29T12:03:43.565749120Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.914281362s" Jan 29 12:03:43.566104 containerd[2030]: time="2025-01-29T12:03:43.565937102Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\"" Jan 29 12:03:43.566943 containerd[2030]: time="2025-01-29T12:03:43.566664059Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 12:03:44.137652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1045287162.mount: Deactivated successfully. 
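The pull records above carry both a byte count ("bytes read") and a wall-clock duration, so rough transfer rates fall straight out of them; kube-proxy, for instance, moved 26,772,117 bytes in about 1.91 s, roughly 14 MB/s. A sketch of that arithmetic over the control-plane images pulled so far (figures copied from the messages above; the durations include unpacking, so these are lower bounds on network throughput):

    # (image, bytes read, seconds) taken from the "stop pulling" / "Pulled image ... in" lines.
    pulls = [
        ("kube-apiserver:v1.31.5",          25_618_070, 2.804576986),
        ("kube-controller-manager:v1.31.5", 22_469_467, 2.417449538),
        ("kube-scheduler:v1.31.5",          17_024_217, 1.906900926),
        ("kube-proxy:v1.31.5",              26_772_117, 1.914281362),
    ]

    for name, nbytes, secs in pulls:
        print(f"{name:34s} {nbytes / secs / 1e6:5.1f} MB/s")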
Jan 29 12:03:45.292262 containerd[2030]: time="2025-01-29T12:03:45.292165257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:45.295025 containerd[2030]: time="2025-01-29T12:03:45.294954102Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 29 12:03:45.295830 containerd[2030]: time="2025-01-29T12:03:45.295702204Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:45.308157 containerd[2030]: time="2025-01-29T12:03:45.305733338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:45.314788 containerd[2030]: time="2025-01-29T12:03:45.314671471Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.747941805s" Jan 29 12:03:45.315147 containerd[2030]: time="2025-01-29T12:03:45.315071699Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 29 12:03:45.316247 containerd[2030]: time="2025-01-29T12:03:45.316187992Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 12:03:45.657384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 12:03:45.664511 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:03:45.909745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2686694706.mount: Deactivated successfully. 
Jan 29 12:03:45.924621 containerd[2030]: time="2025-01-29T12:03:45.924534361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:45.927840 containerd[2030]: time="2025-01-29T12:03:45.927784267Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 29 12:03:45.929401 containerd[2030]: time="2025-01-29T12:03:45.929289766Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:45.939148 containerd[2030]: time="2025-01-29T12:03:45.936758358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:45.940204 containerd[2030]: time="2025-01-29T12:03:45.940141661Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 623.831042ms" Jan 29 12:03:45.940370 containerd[2030]: time="2025-01-29T12:03:45.940206381Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 29 12:03:45.941843 containerd[2030]: time="2025-01-29T12:03:45.941631844Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 12:03:45.990561 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:03:45.993506 (kubelet)[2653]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:03:46.083294 kubelet[2653]: E0129 12:03:46.083153 2653 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:03:46.087852 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:03:46.088276 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:03:46.539664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1164431190.mount: Deactivated successfully. 
Jan 29 12:03:50.083842 containerd[2030]: time="2025-01-29T12:03:50.083724026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:50.086029 containerd[2030]: time="2025-01-29T12:03:50.085955426Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Jan 29 12:03:50.087077 containerd[2030]: time="2025-01-29T12:03:50.086990088Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:50.093632 containerd[2030]: time="2025-01-29T12:03:50.093524827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:50.096431 containerd[2030]: time="2025-01-29T12:03:50.096340167Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.154579435s" Jan 29 12:03:50.096431 containerd[2030]: time="2025-01-29T12:03:50.096417576Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 29 12:03:52.500008 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 29 12:03:56.156990 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 12:03:56.169336 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:03:56.523475 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:03:56.537464 (kubelet)[2745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:03:56.632844 kubelet[2745]: E0129 12:03:56.632755 2745 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:03:56.640974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:03:56.642709 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:03:57.692510 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:03:57.703726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:03:57.765542 systemd[1]: Reloading requested from client PID 2760 ('systemctl') (unit session-7.scope)... Jan 29 12:03:57.765787 systemd[1]: Reloading... Jan 29 12:03:58.034576 zram_generator::config[2800]: No configuration found. Jan 29 12:03:58.293667 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:03:58.471601 systemd[1]: Reloading finished in 704 ms. 
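kubelet.service has now been through three restart cycles ("Scheduled restart job, restart counter is at 3") while waiting for its config file, and the systemctl call from session-7 (the install script) triggers the daemon reload seen here. The per-unit restart counter can be inspected directly; a small sketch, assuming systemctl is on PATH:

    import subprocess

    def unit_restart_count(unit: str) -> int:
        # `systemctl show -p NRestarts <unit>` prints a line such as "NRestarts=3".
        out = subprocess.run(
            ["systemctl", "show", "-p", "NRestarts", unit],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return int(out.split("=", 1)[1])

    print(unit_restart_count("kubelet.service"))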
Jan 29 12:03:58.579653 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 12:03:58.579876 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 12:03:58.582213 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:03:58.593676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:03:58.880510 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:03:58.890691 (kubelet)[2864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:03:58.971620 kubelet[2864]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:03:58.971620 kubelet[2864]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:03:58.971620 kubelet[2864]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:03:58.972255 kubelet[2864]: I0129 12:03:58.971788 2864 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:04:00.880170 kubelet[2864]: I0129 12:04:00.879201 2864 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 12:04:00.880170 kubelet[2864]: I0129 12:04:00.879260 2864 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:04:00.881077 kubelet[2864]: I0129 12:04:00.881036 2864 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 12:04:00.926033 kubelet[2864]: E0129 12:04:00.925945 2864 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.16.93:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.93:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:00.926765 kubelet[2864]: I0129 12:04:00.926721 2864 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:04:00.938768 kubelet[2864]: E0129 12:04:00.938694 2864 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 12:04:00.938768 kubelet[2864]: I0129 12:04:00.938754 2864 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 12:04:00.946548 kubelet[2864]: I0129 12:04:00.946443 2864 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:04:00.948205 kubelet[2864]: I0129 12:04:00.948100 2864 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 12:04:00.948701 kubelet[2864]: I0129 12:04:00.948608 2864 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:04:00.949016 kubelet[2864]: I0129 12:04:00.948698 2864 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-93","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 12:04:00.949273 kubelet[2864]: I0129 12:04:00.949032 2864 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:04:00.949273 kubelet[2864]: I0129 12:04:00.949055 2864 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 12:04:00.949424 kubelet[2864]: I0129 12:04:00.949382 2864 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:04:00.953684 kubelet[2864]: I0129 12:04:00.953614 2864 kubelet.go:408] "Attempting to sync node with API server" Jan 29 12:04:00.953684 kubelet[2864]: I0129 12:04:00.953690 2864 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:04:00.953951 kubelet[2864]: I0129 12:04:00.953774 2864 kubelet.go:314] "Adding apiserver pod source" Jan 29 12:04:00.953951 kubelet[2864]: I0129 12:04:00.953809 2864 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:04:00.961220 kubelet[2864]: W0129 12:04:00.961135 2864 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-93&limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Jan 29 12:04:00.961879 kubelet[2864]: E0129 12:04:00.961483 2864 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.16.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-93&limit=500&resourceVersion=0\": dial tcp 172.31.16.93:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:00.961879 kubelet[2864]: I0129 12:04:00.961660 2864 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:04:00.965169 kubelet[2864]: I0129 12:04:00.965003 2864 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:04:00.967177 kubelet[2864]: W0129 12:04:00.966821 2864 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 12:04:00.969062 kubelet[2864]: I0129 12:04:00.969013 2864 server.go:1269] "Started kubelet" Jan 29 12:04:00.973997 kubelet[2864]: W0129 12:04:00.973894 2864 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Jan 29 12:04:00.974333 kubelet[2864]: E0129 12:04:00.974000 2864 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.93:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:00.974333 kubelet[2864]: I0129 12:04:00.974156 2864 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:04:00.976152 kubelet[2864]: I0129 12:04:00.975721 2864 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:04:00.976786 kubelet[2864]: I0129 12:04:00.976732 2864 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:04:00.979731 kubelet[2864]: E0129 12:04:00.977303 2864 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.93:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.93:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-93.181f284108812746 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-93,UID:ip-172-31-16-93,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-93,},FirstTimestamp:2025-01-29 12:04:00.968968006 +0000 UTC m=+2.071718061,LastTimestamp:2025-01-29 12:04:00.968968006 +0000 UTC m=+2.071718061,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-93,}" Jan 29 12:04:00.986181 kubelet[2864]: I0129 12:04:00.985280 2864 server.go:460] "Adding debug handlers to kubelet server" Jan 29 12:04:00.987994 kubelet[2864]: I0129 12:04:00.987937 2864 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:04:00.993213 kubelet[2864]: E0129 12:04:00.993106 2864 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:04:00.994019 kubelet[2864]: I0129 12:04:00.993980 2864 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 12:04:01.002158 kubelet[2864]: E0129 12:04:01.001697 2864 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-93\" not found" Jan 29 12:04:01.006005 kubelet[2864]: I0129 12:04:01.005946 2864 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:04:01.006257 kubelet[2864]: I0129 12:04:01.006168 2864 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:04:01.006869 kubelet[2864]: E0129 12:04:01.006694 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-93?timeout=10s\": dial tcp 172.31.16.93:6443: connect: connection refused" interval="200ms" Jan 29 12:04:01.008245 kubelet[2864]: I0129 12:04:01.007724 2864 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 12:04:01.008245 kubelet[2864]: I0129 12:04:01.007939 2864 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 12:04:01.008245 kubelet[2864]: I0129 12:04:01.008046 2864 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:04:01.009620 kubelet[2864]: W0129 12:04:01.009024 2864 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Jan 29 12:04:01.009620 kubelet[2864]: E0129 12:04:01.009147 2864 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.93:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:01.010868 kubelet[2864]: I0129 12:04:01.010635 2864 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:04:01.041220 kubelet[2864]: I0129 12:04:01.041167 2864 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:04:01.041220 kubelet[2864]: I0129 12:04:01.041202 2864 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:04:01.041526 kubelet[2864]: I0129 12:04:01.041234 2864 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:04:01.043636 kubelet[2864]: I0129 12:04:01.043560 2864 policy_none.go:49] "None policy: Start" Jan 29 12:04:01.043934 kubelet[2864]: I0129 12:04:01.043557 2864 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:04:01.046954 kubelet[2864]: I0129 12:04:01.046897 2864 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:04:01.046954 kubelet[2864]: I0129 12:04:01.046951 2864 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:04:01.050473 kubelet[2864]: I0129 12:04:01.050291 2864 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 12:04:01.050473 kubelet[2864]: I0129 12:04:01.050344 2864 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:04:01.050473 kubelet[2864]: I0129 12:04:01.050376 2864 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 12:04:01.050735 kubelet[2864]: E0129 12:04:01.050492 2864 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:04:01.053299 kubelet[2864]: W0129 12:04:01.053172 2864 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Jan 29 12:04:01.053299 kubelet[2864]: E0129 12:04:01.053256 2864 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.93:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:01.066542 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 12:04:01.089292 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 12:04:01.098236 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 12:04:01.102492 kubelet[2864]: E0129 12:04:01.102423 2864 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-93\" not found" Jan 29 12:04:01.108213 kubelet[2864]: I0129 12:04:01.108015 2864 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:04:01.108533 kubelet[2864]: I0129 12:04:01.108488 2864 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 12:04:01.108627 kubelet[2864]: I0129 12:04:01.108527 2864 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:04:01.110965 kubelet[2864]: I0129 12:04:01.109105 2864 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:04:01.114198 kubelet[2864]: E0129 12:04:01.113849 2864 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-93\" not found" Jan 29 12:04:01.173070 systemd[1]: Created slice kubepods-burstable-pod53d7ddb495c690a6d0bdf7f261449fba.slice - libcontainer container kubepods-burstable-pod53d7ddb495c690a6d0bdf7f261449fba.slice. Jan 29 12:04:01.190530 systemd[1]: Created slice kubepods-burstable-pod4551cc0a2436219d2ed42e95885cd6f7.slice - libcontainer container kubepods-burstable-pod4551cc0a2436219d2ed42e95885cd6f7.slice. 
Jan 29 12:04:01.207891 kubelet[2864]: E0129 12:04:01.207817 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-93?timeout=10s\": dial tcp 172.31.16.93:6443: connect: connection refused" interval="400ms" Jan 29 12:04:01.210965 kubelet[2864]: I0129 12:04:01.209030 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53d7ddb495c690a6d0bdf7f261449fba-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-93\" (UID: \"53d7ddb495c690a6d0bdf7f261449fba\") " pod="kube-system/kube-apiserver-ip-172-31-16-93" Jan 29 12:04:01.210965 kubelet[2864]: I0129 12:04:01.209087 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/694fe02bca392ce82cf7b63f03e514b4-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-93\" (UID: \"694fe02bca392ce82cf7b63f03e514b4\") " pod="kube-system/kube-scheduler-ip-172-31-16-93" Jan 29 12:04:01.210965 kubelet[2864]: I0129 12:04:01.209779 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53d7ddb495c690a6d0bdf7f261449fba-ca-certs\") pod \"kube-apiserver-ip-172-31-16-93\" (UID: \"53d7ddb495c690a6d0bdf7f261449fba\") " pod="kube-system/kube-apiserver-ip-172-31-16-93" Jan 29 12:04:01.210965 kubelet[2864]: I0129 12:04:01.209831 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53d7ddb495c690a6d0bdf7f261449fba-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-93\" (UID: \"53d7ddb495c690a6d0bdf7f261449fba\") " pod="kube-system/kube-apiserver-ip-172-31-16-93" Jan 29 12:04:01.210965 kubelet[2864]: I0129 12:04:01.209873 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4551cc0a2436219d2ed42e95885cd6f7-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-93\" (UID: \"4551cc0a2436219d2ed42e95885cd6f7\") " pod="kube-system/kube-controller-manager-ip-172-31-16-93" Jan 29 12:04:01.209432 systemd[1]: Created slice kubepods-burstable-pod694fe02bca392ce82cf7b63f03e514b4.slice - libcontainer container kubepods-burstable-pod694fe02bca392ce82cf7b63f03e514b4.slice. 
Jan 29 12:04:01.211501 kubelet[2864]: I0129 12:04:01.209910 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4551cc0a2436219d2ed42e95885cd6f7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-93\" (UID: \"4551cc0a2436219d2ed42e95885cd6f7\") " pod="kube-system/kube-controller-manager-ip-172-31-16-93" Jan 29 12:04:01.211501 kubelet[2864]: I0129 12:04:01.209951 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4551cc0a2436219d2ed42e95885cd6f7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-93\" (UID: \"4551cc0a2436219d2ed42e95885cd6f7\") " pod="kube-system/kube-controller-manager-ip-172-31-16-93" Jan 29 12:04:01.211501 kubelet[2864]: I0129 12:04:01.209994 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4551cc0a2436219d2ed42e95885cd6f7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-93\" (UID: \"4551cc0a2436219d2ed42e95885cd6f7\") " pod="kube-system/kube-controller-manager-ip-172-31-16-93" Jan 29 12:04:01.211501 kubelet[2864]: I0129 12:04:01.210048 2864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4551cc0a2436219d2ed42e95885cd6f7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-93\" (UID: \"4551cc0a2436219d2ed42e95885cd6f7\") " pod="kube-system/kube-controller-manager-ip-172-31-16-93" Jan 29 12:04:01.213157 kubelet[2864]: I0129 12:04:01.213073 2864 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-93" Jan 29 12:04:01.213878 kubelet[2864]: E0129 12:04:01.213830 2864 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.93:6443/api/v1/nodes\": dial tcp 172.31.16.93:6443: connect: connection refused" node="ip-172-31-16-93" Jan 29 12:04:01.417207 kubelet[2864]: I0129 12:04:01.417091 2864 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-93" Jan 29 12:04:01.417709 kubelet[2864]: E0129 12:04:01.417621 2864 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.93:6443/api/v1/nodes\": dial tcp 172.31.16.93:6443: connect: connection refused" node="ip-172-31-16-93" Jan 29 12:04:01.487022 containerd[2030]: time="2025-01-29T12:04:01.486373143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-93,Uid:53d7ddb495c690a6d0bdf7f261449fba,Namespace:kube-system,Attempt:0,}" Jan 29 12:04:01.504698 containerd[2030]: time="2025-01-29T12:04:01.504505109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-93,Uid:4551cc0a2436219d2ed42e95885cd6f7,Namespace:kube-system,Attempt:0,}" Jan 29 12:04:01.524492 containerd[2030]: time="2025-01-29T12:04:01.524235455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-93,Uid:694fe02bca392ce82cf7b63f03e514b4,Namespace:kube-system,Attempt:0,}" Jan 29 12:04:01.608910 kubelet[2864]: E0129 12:04:01.608840 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-93?timeout=10s\": dial tcp 172.31.16.93:6443: connect: connection 
refused" interval="800ms" Jan 29 12:04:01.821533 kubelet[2864]: I0129 12:04:01.821351 2864 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-93" Jan 29 12:04:01.823093 kubelet[2864]: E0129 12:04:01.822937 2864 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.93:6443/api/v1/nodes\": dial tcp 172.31.16.93:6443: connect: connection refused" node="ip-172-31-16-93" Jan 29 12:04:01.949611 kubelet[2864]: W0129 12:04:01.949456 2864 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-93&limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Jan 29 12:04:01.949611 kubelet[2864]: E0129 12:04:01.949556 2864 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-93&limit=500&resourceVersion=0\": dial tcp 172.31.16.93:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:01.982983 kubelet[2864]: W0129 12:04:01.982655 2864 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Jan 29 12:04:01.982983 kubelet[2864]: E0129 12:04:01.982794 2864 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.93:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:02.019062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3958560529.mount: Deactivated successfully. 
Jan 29 12:04:02.038204 containerd[2030]: time="2025-01-29T12:04:02.037378944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:04:02.039445 containerd[2030]: time="2025-01-29T12:04:02.039376964Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 29 12:04:02.041679 containerd[2030]: time="2025-01-29T12:04:02.041614193Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:04:02.044761 containerd[2030]: time="2025-01-29T12:04:02.043708598Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:04:02.046147 containerd[2030]: time="2025-01-29T12:04:02.046055368Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:04:02.048691 containerd[2030]: time="2025-01-29T12:04:02.048619181Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:04:02.050583 containerd[2030]: time="2025-01-29T12:04:02.050325160Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:04:02.056720 containerd[2030]: time="2025-01-29T12:04:02.056352481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:04:02.060907 containerd[2030]: time="2025-01-29T12:04:02.060388234Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 573.894994ms" Jan 29 12:04:02.064501 containerd[2030]: time="2025-01-29T12:04:02.064006979Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 539.656227ms" Jan 29 12:04:02.080491 containerd[2030]: time="2025-01-29T12:04:02.080285284Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 575.66267ms" Jan 29 12:04:02.283569 containerd[2030]: time="2025-01-29T12:04:02.282046038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:02.283569 containerd[2030]: time="2025-01-29T12:04:02.283148718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:02.283569 containerd[2030]: time="2025-01-29T12:04:02.283228802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:02.283569 containerd[2030]: time="2025-01-29T12:04:02.283455909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:02.293406 containerd[2030]: time="2025-01-29T12:04:02.292767714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:02.293406 containerd[2030]: time="2025-01-29T12:04:02.292851013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:02.293406 containerd[2030]: time="2025-01-29T12:04:02.292876212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:02.293406 containerd[2030]: time="2025-01-29T12:04:02.293029424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:02.293406 containerd[2030]: time="2025-01-29T12:04:02.291073010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:02.293406 containerd[2030]: time="2025-01-29T12:04:02.291283613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:02.293406 containerd[2030]: time="2025-01-29T12:04:02.291349028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:02.293406 containerd[2030]: time="2025-01-29T12:04:02.291625322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:02.342452 systemd[1]: Started cri-containerd-8f5132d2281300d9968caf7db135ae0ad13e0e14851768b7b2b08082f4fc45ba.scope - libcontainer container 8f5132d2281300d9968caf7db135ae0ad13e0e14851768b7b2b08082f4fc45ba. Jan 29 12:04:02.374441 systemd[1]: Started cri-containerd-6f5ab9cf976ad60abcc99375279c2201fcd57eaee2ed9a2b7d7077d84a3e3bb5.scope - libcontainer container 6f5ab9cf976ad60abcc99375279c2201fcd57eaee2ed9a2b7d7077d84a3e3bb5. 
Jan 29 12:04:02.377613 kubelet[2864]: W0129 12:04:02.377466 2864 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Jan 29 12:04:02.377613 kubelet[2864]: E0129 12:04:02.377552 2864 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.93:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:02.381518 systemd[1]: Started cri-containerd-8f98de847ba34e7a4f1e275c47e6b98374c98964807f782274be1b9d4d7d2768.scope - libcontainer container 8f98de847ba34e7a4f1e275c47e6b98374c98964807f782274be1b9d4d7d2768. Jan 29 12:04:02.411235 kubelet[2864]: E0129 12:04:02.410843 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-93?timeout=10s\": dial tcp 172.31.16.93:6443: connect: connection refused" interval="1.6s" Jan 29 12:04:02.414814 kubelet[2864]: W0129 12:04:02.414730 2864 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Jan 29 12:04:02.415025 kubelet[2864]: E0129 12:04:02.414845 2864 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.93:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:02.485249 containerd[2030]: time="2025-01-29T12:04:02.484550947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-93,Uid:53d7ddb495c690a6d0bdf7f261449fba,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f5132d2281300d9968caf7db135ae0ad13e0e14851768b7b2b08082f4fc45ba\"" Jan 29 12:04:02.503150 containerd[2030]: time="2025-01-29T12:04:02.501787453Z" level=info msg="CreateContainer within sandbox \"8f5132d2281300d9968caf7db135ae0ad13e0e14851768b7b2b08082f4fc45ba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 12:04:02.519019 containerd[2030]: time="2025-01-29T12:04:02.518947856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-93,Uid:694fe02bca392ce82cf7b63f03e514b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f5ab9cf976ad60abcc99375279c2201fcd57eaee2ed9a2b7d7077d84a3e3bb5\"" Jan 29 12:04:02.528618 containerd[2030]: time="2025-01-29T12:04:02.528543908Z" level=info msg="CreateContainer within sandbox \"6f5ab9cf976ad60abcc99375279c2201fcd57eaee2ed9a2b7d7077d84a3e3bb5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 12:04:02.530965 containerd[2030]: time="2025-01-29T12:04:02.530893209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-93,Uid:4551cc0a2436219d2ed42e95885cd6f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f98de847ba34e7a4f1e275c47e6b98374c98964807f782274be1b9d4d7d2768\"" Jan 29 12:04:02.538058 containerd[2030]: time="2025-01-29T12:04:02.537971972Z" 
level=info msg="CreateContainer within sandbox \"8f98de847ba34e7a4f1e275c47e6b98374c98964807f782274be1b9d4d7d2768\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 12:04:02.560901 containerd[2030]: time="2025-01-29T12:04:02.560766461Z" level=info msg="CreateContainer within sandbox \"8f5132d2281300d9968caf7db135ae0ad13e0e14851768b7b2b08082f4fc45ba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"857dd59990c53c1b5a16b2826d97da405acb4dcb00bbb4d9d5beaa74f7d36cf1\"" Jan 29 12:04:02.562641 containerd[2030]: time="2025-01-29T12:04:02.562132434Z" level=info msg="StartContainer for \"857dd59990c53c1b5a16b2826d97da405acb4dcb00bbb4d9d5beaa74f7d36cf1\"" Jan 29 12:04:02.596286 containerd[2030]: time="2025-01-29T12:04:02.595745535Z" level=info msg="CreateContainer within sandbox \"6f5ab9cf976ad60abcc99375279c2201fcd57eaee2ed9a2b7d7077d84a3e3bb5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"86782770ebfba847fcb97f541781bf768fa7dc34004ed2a228e1484439885bd3\"" Jan 29 12:04:02.599181 containerd[2030]: time="2025-01-29T12:04:02.597705642Z" level=info msg="StartContainer for \"86782770ebfba847fcb97f541781bf768fa7dc34004ed2a228e1484439885bd3\"" Jan 29 12:04:02.605094 containerd[2030]: time="2025-01-29T12:04:02.604998282Z" level=info msg="CreateContainer within sandbox \"8f98de847ba34e7a4f1e275c47e6b98374c98964807f782274be1b9d4d7d2768\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"34c5cde1bbab53475291425dbd10f66684d18d62c9adf796ddf4c48468bff515\"" Jan 29 12:04:02.607459 containerd[2030]: time="2025-01-29T12:04:02.607395487Z" level=info msg="StartContainer for \"34c5cde1bbab53475291425dbd10f66684d18d62c9adf796ddf4c48468bff515\"" Jan 29 12:04:02.622496 systemd[1]: Started cri-containerd-857dd59990c53c1b5a16b2826d97da405acb4dcb00bbb4d9d5beaa74f7d36cf1.scope - libcontainer container 857dd59990c53c1b5a16b2826d97da405acb4dcb00bbb4d9d5beaa74f7d36cf1. Jan 29 12:04:02.631944 kubelet[2864]: I0129 12:04:02.631882 2864 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-93" Jan 29 12:04:02.633699 kubelet[2864]: E0129 12:04:02.633604 2864 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.93:6443/api/v1/nodes\": dial tcp 172.31.16.93:6443: connect: connection refused" node="ip-172-31-16-93" Jan 29 12:04:02.692821 systemd[1]: Started cri-containerd-34c5cde1bbab53475291425dbd10f66684d18d62c9adf796ddf4c48468bff515.scope - libcontainer container 34c5cde1bbab53475291425dbd10f66684d18d62c9adf796ddf4c48468bff515. Jan 29 12:04:02.722231 systemd[1]: Started cri-containerd-86782770ebfba847fcb97f541781bf768fa7dc34004ed2a228e1484439885bd3.scope - libcontainer container 86782770ebfba847fcb97f541781bf768fa7dc34004ed2a228e1484439885bd3. 
Jan 29 12:04:02.777439 containerd[2030]: time="2025-01-29T12:04:02.777330161Z" level=info msg="StartContainer for \"857dd59990c53c1b5a16b2826d97da405acb4dcb00bbb4d9d5beaa74f7d36cf1\" returns successfully" Jan 29 12:04:02.837003 containerd[2030]: time="2025-01-29T12:04:02.836901457Z" level=info msg="StartContainer for \"34c5cde1bbab53475291425dbd10f66684d18d62c9adf796ddf4c48468bff515\" returns successfully" Jan 29 12:04:02.910908 containerd[2030]: time="2025-01-29T12:04:02.910617877Z" level=info msg="StartContainer for \"86782770ebfba847fcb97f541781bf768fa7dc34004ed2a228e1484439885bd3\" returns successfully" Jan 29 12:04:04.238458 kubelet[2864]: I0129 12:04:04.238391 2864 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-93" Jan 29 12:04:06.488428 update_engine[2006]: I20250129 12:04:06.488216 2006 update_attempter.cc:509] Updating boot flags... Jan 29 12:04:06.669245 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3152) Jan 29 12:04:06.976832 kubelet[2864]: I0129 12:04:06.976771 2864 apiserver.go:52] "Watching apiserver" Jan 29 12:04:07.112240 kubelet[2864]: I0129 12:04:07.109806 2864 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 12:04:07.121259 kubelet[2864]: I0129 12:04:07.118083 2864 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-16-93" Jan 29 12:04:07.121259 kubelet[2864]: E0129 12:04:07.118173 2864 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-16-93\": node \"ip-172-31-16-93\" not found" Jan 29 12:04:07.248253 kubelet[2864]: E0129 12:04:07.247578 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 29 12:04:07.299172 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3154) Jan 29 12:04:09.376995 systemd[1]: Reloading requested from client PID 3322 ('systemctl') (unit session-7.scope)... Jan 29 12:04:09.377583 systemd[1]: Reloading... Jan 29 12:04:09.670152 zram_generator::config[3365]: No configuration found. Jan 29 12:04:10.054558 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:04:10.309966 systemd[1]: Reloading finished in 931 ms. Jan 29 12:04:10.420586 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:10.435340 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:04:10.436777 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:10.437053 systemd[1]: kubelet.service: Consumed 2.872s CPU time, 116.1M memory peak, 0B memory swap peak. Jan 29 12:04:10.452924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:10.783635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:10.802077 (kubelet)[3422]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:04:10.923367 kubelet[3422]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 12:04:10.923367 kubelet[3422]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:04:10.923367 kubelet[3422]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:04:10.923367 kubelet[3422]: I0129 12:04:10.922219 3422 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:04:10.959331 kubelet[3422]: I0129 12:04:10.958593 3422 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 12:04:10.959331 kubelet[3422]: I0129 12:04:10.958714 3422 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:04:10.959331 kubelet[3422]: I0129 12:04:10.959225 3422 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 12:04:10.964811 kubelet[3422]: I0129 12:04:10.964770 3422 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 12:04:10.970731 kubelet[3422]: I0129 12:04:10.970683 3422 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:04:10.979537 kubelet[3422]: E0129 12:04:10.979478 3422 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 12:04:10.979537 kubelet[3422]: I0129 12:04:10.979531 3422 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 12:04:10.984138 kubelet[3422]: I0129 12:04:10.984081 3422 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:04:10.984602 kubelet[3422]: I0129 12:04:10.984323 3422 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 12:04:10.984602 kubelet[3422]: I0129 12:04:10.984547 3422 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:04:10.984882 kubelet[3422]: I0129 12:04:10.984590 3422 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-93","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 12:04:10.985035 kubelet[3422]: I0129 12:04:10.984892 3422 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:04:10.985035 kubelet[3422]: I0129 12:04:10.984914 3422 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 12:04:10.985035 kubelet[3422]: I0129 12:04:10.984969 3422 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:04:10.985276 kubelet[3422]: I0129 12:04:10.985187 3422 kubelet.go:408] "Attempting to sync node with API server" Jan 29 12:04:10.988178 kubelet[3422]: I0129 12:04:10.985214 3422 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:04:10.988922 kubelet[3422]: I0129 12:04:10.988217 3422 kubelet.go:314] "Adding apiserver pod source" Jan 29 12:04:10.988922 kubelet[3422]: I0129 12:04:10.988254 3422 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:04:10.991868 kubelet[3422]: I0129 12:04:10.991669 3422 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:04:10.993805 kubelet[3422]: I0129 12:04:10.992477 3422 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:04:11.009160 kubelet[3422]: I0129 12:04:11.008400 3422 server.go:1269] "Started kubelet" Jan 29 12:04:11.016219 kubelet[3422]: I0129 12:04:11.008662 3422 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 
29 12:04:11.016219 kubelet[3422]: I0129 12:04:11.013638 3422 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:04:11.016219 kubelet[3422]: I0129 12:04:11.012245 3422 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:04:11.017043 kubelet[3422]: I0129 12:04:11.017002 3422 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:04:11.035282 kubelet[3422]: I0129 12:04:11.034382 3422 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 12:04:11.036720 kubelet[3422]: I0129 12:04:11.035733 3422 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 12:04:11.040822 kubelet[3422]: E0129 12:04:11.040311 3422 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-93\" not found" Jan 29 12:04:11.046571 kubelet[3422]: I0129 12:04:11.044660 3422 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 12:04:11.046571 kubelet[3422]: I0129 12:04:11.044919 3422 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:04:11.046571 kubelet[3422]: I0129 12:04:11.045569 3422 server.go:460] "Adding debug handlers to kubelet server" Jan 29 12:04:11.067178 kubelet[3422]: I0129 12:04:11.067004 3422 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:04:11.067465 kubelet[3422]: I0129 12:04:11.067187 3422 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:04:11.079419 kubelet[3422]: I0129 12:04:11.078241 3422 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:04:11.100402 kubelet[3422]: E0129 12:04:11.096534 3422 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:04:11.112797 kubelet[3422]: I0129 12:04:11.112593 3422 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:04:11.121263 kubelet[3422]: I0129 12:04:11.120417 3422 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 12:04:11.121263 kubelet[3422]: I0129 12:04:11.120488 3422 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:04:11.121263 kubelet[3422]: I0129 12:04:11.120536 3422 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 12:04:11.121263 kubelet[3422]: E0129 12:04:11.120668 3422 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:04:11.225744 kubelet[3422]: E0129 12:04:11.225530 3422 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 12:04:11.241935 kubelet[3422]: I0129 12:04:11.241847 3422 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:04:11.241935 kubelet[3422]: I0129 12:04:11.241884 3422 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:04:11.241935 kubelet[3422]: I0129 12:04:11.241922 3422 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:04:11.242257 kubelet[3422]: I0129 12:04:11.242219 3422 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 12:04:11.242337 kubelet[3422]: I0129 12:04:11.242251 3422 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 12:04:11.242337 kubelet[3422]: I0129 12:04:11.242305 3422 policy_none.go:49] "None policy: Start" Jan 29 12:04:11.246932 kubelet[3422]: I0129 12:04:11.244029 3422 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:04:11.246932 kubelet[3422]: I0129 12:04:11.244077 3422 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:04:11.246932 kubelet[3422]: I0129 12:04:11.244498 3422 state_mem.go:75] "Updated machine memory state" Jan 29 12:04:11.256431 kubelet[3422]: I0129 12:04:11.256380 3422 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:04:11.257391 kubelet[3422]: I0129 12:04:11.256690 3422 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 12:04:11.257391 kubelet[3422]: I0129 12:04:11.256732 3422 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:04:11.263462 kubelet[3422]: I0129 12:04:11.262327 3422 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:04:11.380970 kubelet[3422]: I0129 12:04:11.379448 3422 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-93" Jan 29 12:04:11.390748 kubelet[3422]: I0129 12:04:11.390678 3422 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-16-93" Jan 29 12:04:11.390920 kubelet[3422]: I0129 12:04:11.390814 3422 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-16-93" Jan 29 12:04:11.448822 kubelet[3422]: I0129 12:04:11.448383 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4551cc0a2436219d2ed42e95885cd6f7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-93\" (UID: \"4551cc0a2436219d2ed42e95885cd6f7\") " pod="kube-system/kube-controller-manager-ip-172-31-16-93" Jan 29 12:04:11.448822 kubelet[3422]: I0129 12:04:11.448466 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4551cc0a2436219d2ed42e95885cd6f7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-93\" (UID: 
\"4551cc0a2436219d2ed42e95885cd6f7\") " pod="kube-system/kube-controller-manager-ip-172-31-16-93" Jan 29 12:04:11.448822 kubelet[3422]: I0129 12:04:11.448510 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4551cc0a2436219d2ed42e95885cd6f7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-93\" (UID: \"4551cc0a2436219d2ed42e95885cd6f7\") " pod="kube-system/kube-controller-manager-ip-172-31-16-93" Jan 29 12:04:11.448822 kubelet[3422]: I0129 12:04:11.448551 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53d7ddb495c690a6d0bdf7f261449fba-ca-certs\") pod \"kube-apiserver-ip-172-31-16-93\" (UID: \"53d7ddb495c690a6d0bdf7f261449fba\") " pod="kube-system/kube-apiserver-ip-172-31-16-93" Jan 29 12:04:11.448822 kubelet[3422]: I0129 12:04:11.448592 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53d7ddb495c690a6d0bdf7f261449fba-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-93\" (UID: \"53d7ddb495c690a6d0bdf7f261449fba\") " pod="kube-system/kube-apiserver-ip-172-31-16-93" Jan 29 12:04:11.449600 kubelet[3422]: I0129 12:04:11.448643 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53d7ddb495c690a6d0bdf7f261449fba-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-93\" (UID: \"53d7ddb495c690a6d0bdf7f261449fba\") " pod="kube-system/kube-apiserver-ip-172-31-16-93" Jan 29 12:04:11.449600 kubelet[3422]: I0129 12:04:11.448692 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4551cc0a2436219d2ed42e95885cd6f7-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-93\" (UID: \"4551cc0a2436219d2ed42e95885cd6f7\") " pod="kube-system/kube-controller-manager-ip-172-31-16-93" Jan 29 12:04:11.449600 kubelet[3422]: I0129 12:04:11.448743 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4551cc0a2436219d2ed42e95885cd6f7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-93\" (UID: \"4551cc0a2436219d2ed42e95885cd6f7\") " pod="kube-system/kube-controller-manager-ip-172-31-16-93" Jan 29 12:04:11.452704 kubelet[3422]: I0129 12:04:11.452618 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/694fe02bca392ce82cf7b63f03e514b4-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-93\" (UID: \"694fe02bca392ce82cf7b63f03e514b4\") " pod="kube-system/kube-scheduler-ip-172-31-16-93" Jan 29 12:04:11.991132 kubelet[3422]: I0129 12:04:11.991032 3422 apiserver.go:52] "Watching apiserver" Jan 29 12:04:12.046168 kubelet[3422]: I0129 12:04:12.045397 3422 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 12:04:12.218275 kubelet[3422]: E0129 12:04:12.218187 3422 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-16-93\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-93" Jan 29 12:04:12.238794 kubelet[3422]: I0129 12:04:12.238378 3422 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-93" podStartSLOduration=1.238352988 podStartE2EDuration="1.238352988s" podCreationTimestamp="2025-01-29 12:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:12.213911866 +0000 UTC m=+1.393291651" watchObservedRunningTime="2025-01-29 12:04:12.238352988 +0000 UTC m=+1.417732749" Jan 29 12:04:12.254923 kubelet[3422]: I0129 12:04:12.254696 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-93" podStartSLOduration=1.25467621 podStartE2EDuration="1.25467621s" podCreationTimestamp="2025-01-29 12:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:12.23997646 +0000 UTC m=+1.419356221" watchObservedRunningTime="2025-01-29 12:04:12.25467621 +0000 UTC m=+1.434055983" Jan 29 12:04:12.286909 kubelet[3422]: I0129 12:04:12.286816 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-93" podStartSLOduration=1.286777504 podStartE2EDuration="1.286777504s" podCreationTimestamp="2025-01-29 12:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:12.256393415 +0000 UTC m=+1.435773176" watchObservedRunningTime="2025-01-29 12:04:12.286777504 +0000 UTC m=+1.466157253" Jan 29 12:04:14.607731 kubelet[3422]: I0129 12:04:14.606865 3422 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 12:04:14.616227 containerd[2030]: time="2025-01-29T12:04:14.611362932Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 12:04:14.616967 kubelet[3422]: I0129 12:04:14.612278 3422 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 12:04:15.578641 systemd[1]: Created slice kubepods-besteffort-podd1f59e5a_5fbb_4f01_ae86_8c94f9650af5.slice - libcontainer container kubepods-besteffort-podd1f59e5a_5fbb_4f01_ae86_8c94f9650af5.slice. 
Jan 29 12:04:15.581425 kubelet[3422]: I0129 12:04:15.579598 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d1f59e5a-5fbb-4f01-ae86-8c94f9650af5-kube-proxy\") pod \"kube-proxy-g6s2g\" (UID: \"d1f59e5a-5fbb-4f01-ae86-8c94f9650af5\") " pod="kube-system/kube-proxy-g6s2g" Jan 29 12:04:15.581425 kubelet[3422]: I0129 12:04:15.579776 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1f59e5a-5fbb-4f01-ae86-8c94f9650af5-lib-modules\") pod \"kube-proxy-g6s2g\" (UID: \"d1f59e5a-5fbb-4f01-ae86-8c94f9650af5\") " pod="kube-system/kube-proxy-g6s2g" Jan 29 12:04:15.581425 kubelet[3422]: I0129 12:04:15.579824 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdj5r\" (UniqueName: \"kubernetes.io/projected/d1f59e5a-5fbb-4f01-ae86-8c94f9650af5-kube-api-access-mdj5r\") pod \"kube-proxy-g6s2g\" (UID: \"d1f59e5a-5fbb-4f01-ae86-8c94f9650af5\") " pod="kube-system/kube-proxy-g6s2g" Jan 29 12:04:15.581425 kubelet[3422]: I0129 12:04:15.579869 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1f59e5a-5fbb-4f01-ae86-8c94f9650af5-xtables-lock\") pod \"kube-proxy-g6s2g\" (UID: \"d1f59e5a-5fbb-4f01-ae86-8c94f9650af5\") " pod="kube-system/kube-proxy-g6s2g" Jan 29 12:04:15.837069 systemd[1]: Created slice kubepods-besteffort-pod7dd47a36_36d6_48de_98fa_4b5c4bfd6175.slice - libcontainer container kubepods-besteffort-pod7dd47a36_36d6_48de_98fa_4b5c4bfd6175.slice. Jan 29 12:04:15.881930 kubelet[3422]: I0129 12:04:15.881861 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7dd47a36-36d6-48de-98fa-4b5c4bfd6175-var-lib-calico\") pod \"tigera-operator-76c4976dd7-w9bpt\" (UID: \"7dd47a36-36d6-48de-98fa-4b5c4bfd6175\") " pod="tigera-operator/tigera-operator-76c4976dd7-w9bpt" Jan 29 12:04:15.881930 kubelet[3422]: I0129 12:04:15.881937 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mmrm\" (UniqueName: \"kubernetes.io/projected/7dd47a36-36d6-48de-98fa-4b5c4bfd6175-kube-api-access-8mmrm\") pod \"tigera-operator-76c4976dd7-w9bpt\" (UID: \"7dd47a36-36d6-48de-98fa-4b5c4bfd6175\") " pod="tigera-operator/tigera-operator-76c4976dd7-w9bpt" Jan 29 12:04:15.893907 containerd[2030]: time="2025-01-29T12:04:15.892836151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g6s2g,Uid:d1f59e5a-5fbb-4f01-ae86-8c94f9650af5,Namespace:kube-system,Attempt:0,}" Jan 29 12:04:15.972197 containerd[2030]: time="2025-01-29T12:04:15.969770633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:15.972197 containerd[2030]: time="2025-01-29T12:04:15.969886267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:15.972197 containerd[2030]: time="2025-01-29T12:04:15.969920570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:15.972197 containerd[2030]: time="2025-01-29T12:04:15.970090885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:16.043861 systemd[1]: Started cri-containerd-4c7d634a8ce72b023e302a0425b1304fa9da9cd2880f02ed3e6b7bcab9d1a393.scope - libcontainer container 4c7d634a8ce72b023e302a0425b1304fa9da9cd2880f02ed3e6b7bcab9d1a393. Jan 29 12:04:16.386767 containerd[2030]: time="2025-01-29T12:04:16.386401786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g6s2g,Uid:d1f59e5a-5fbb-4f01-ae86-8c94f9650af5,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c7d634a8ce72b023e302a0425b1304fa9da9cd2880f02ed3e6b7bcab9d1a393\"" Jan 29 12:04:16.398428 containerd[2030]: time="2025-01-29T12:04:16.398256152Z" level=info msg="CreateContainer within sandbox \"4c7d634a8ce72b023e302a0425b1304fa9da9cd2880f02ed3e6b7bcab9d1a393\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:04:16.432230 containerd[2030]: time="2025-01-29T12:04:16.432154315Z" level=info msg="CreateContainer within sandbox \"4c7d634a8ce72b023e302a0425b1304fa9da9cd2880f02ed3e6b7bcab9d1a393\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"024127648b47fc05e1c31807487331a2aac68c3e7460bae2df20d855eb65128e\"" Jan 29 12:04:16.436968 containerd[2030]: time="2025-01-29T12:04:16.436769618Z" level=info msg="StartContainer for \"024127648b47fc05e1c31807487331a2aac68c3e7460bae2df20d855eb65128e\"" Jan 29 12:04:16.447172 containerd[2030]: time="2025-01-29T12:04:16.446943133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-w9bpt,Uid:7dd47a36-36d6-48de-98fa-4b5c4bfd6175,Namespace:tigera-operator,Attempt:0,}" Jan 29 12:04:16.565878 containerd[2030]: time="2025-01-29T12:04:16.564670924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:16.565878 containerd[2030]: time="2025-01-29T12:04:16.564812980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:16.565878 containerd[2030]: time="2025-01-29T12:04:16.564852645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:16.565878 containerd[2030]: time="2025-01-29T12:04:16.565027961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:16.591650 systemd[1]: Started cri-containerd-024127648b47fc05e1c31807487331a2aac68c3e7460bae2df20d855eb65128e.scope - libcontainer container 024127648b47fc05e1c31807487331a2aac68c3e7460bae2df20d855eb65128e. Jan 29 12:04:16.634453 systemd[1]: Started cri-containerd-2bdec5b1a614edb7f957873fe59d4a4527363df620c0d5766e76a3c1b7075d42.scope - libcontainer container 2bdec5b1a614edb7f957873fe59d4a4527363df620c0d5766e76a3c1b7075d42. 
Jan 29 12:04:16.715195 containerd[2030]: time="2025-01-29T12:04:16.713851181Z" level=info msg="StartContainer for \"024127648b47fc05e1c31807487331a2aac68c3e7460bae2df20d855eb65128e\" returns successfully" Jan 29 12:04:16.839544 containerd[2030]: time="2025-01-29T12:04:16.839451509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-w9bpt,Uid:7dd47a36-36d6-48de-98fa-4b5c4bfd6175,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2bdec5b1a614edb7f957873fe59d4a4527363df620c0d5766e76a3c1b7075d42\"" Jan 29 12:04:16.849054 containerd[2030]: time="2025-01-29T12:04:16.849002128Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 12:04:17.624694 sudo[2355]: pam_unix(sudo:session): session closed for user root Jan 29 12:04:17.648518 sshd[2352]: pam_unix(sshd:session): session closed for user core Jan 29 12:04:17.658693 systemd-logind[2005]: Session 7 logged out. Waiting for processes to exit. Jan 29 12:04:17.660916 systemd[1]: sshd@6-172.31.16.93:22-139.178.89.65:39986.service: Deactivated successfully. Jan 29 12:04:17.668742 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:04:17.669061 systemd[1]: session-7.scope: Consumed 11.001s CPU time, 151.6M memory peak, 0B memory swap peak. Jan 29 12:04:17.670851 systemd-logind[2005]: Removed session 7. Jan 29 12:04:18.515517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount307928205.mount: Deactivated successfully. Jan 29 12:04:19.158244 containerd[2030]: time="2025-01-29T12:04:19.158181347Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:19.160096 containerd[2030]: time="2025-01-29T12:04:19.160034876Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Jan 29 12:04:19.161053 containerd[2030]: time="2025-01-29T12:04:19.160897293Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:19.165392 containerd[2030]: time="2025-01-29T12:04:19.165231708Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:19.169190 containerd[2030]: time="2025-01-29T12:04:19.167542353Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.318424891s" Jan 29 12:04:19.169190 containerd[2030]: time="2025-01-29T12:04:19.167647072Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 29 12:04:19.177685 containerd[2030]: time="2025-01-29T12:04:19.177568617Z" level=info msg="CreateContainer within sandbox \"2bdec5b1a614edb7f957873fe59d4a4527363df620c0d5766e76a3c1b7075d42\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 12:04:19.206185 containerd[2030]: time="2025-01-29T12:04:19.206044076Z" level=info msg="CreateContainer within sandbox \"2bdec5b1a614edb7f957873fe59d4a4527363df620c0d5766e76a3c1b7075d42\" for 
&ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1df604139df47961c1323a514c628d082d9b5b72ae1925119736dbcd856763e6\"" Jan 29 12:04:19.207570 containerd[2030]: time="2025-01-29T12:04:19.207441893Z" level=info msg="StartContainer for \"1df604139df47961c1323a514c628d082d9b5b72ae1925119736dbcd856763e6\"" Jan 29 12:04:19.275621 systemd[1]: Started cri-containerd-1df604139df47961c1323a514c628d082d9b5b72ae1925119736dbcd856763e6.scope - libcontainer container 1df604139df47961c1323a514c628d082d9b5b72ae1925119736dbcd856763e6. Jan 29 12:04:19.333828 containerd[2030]: time="2025-01-29T12:04:19.333612896Z" level=info msg="StartContainer for \"1df604139df47961c1323a514c628d082d9b5b72ae1925119736dbcd856763e6\" returns successfully" Jan 29 12:04:20.256915 kubelet[3422]: I0129 12:04:20.256322 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g6s2g" podStartSLOduration=5.25629695 podStartE2EDuration="5.25629695s" podCreationTimestamp="2025-01-29 12:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:17.28402277 +0000 UTC m=+6.463402555" watchObservedRunningTime="2025-01-29 12:04:20.25629695 +0000 UTC m=+9.435676699" Jan 29 12:04:20.256915 kubelet[3422]: I0129 12:04:20.256540 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-w9bpt" podStartSLOduration=2.930032061 podStartE2EDuration="5.256526623s" podCreationTimestamp="2025-01-29 12:04:15 +0000 UTC" firstStartedPulling="2025-01-29 12:04:16.844785661 +0000 UTC m=+6.024165422" lastFinishedPulling="2025-01-29 12:04:19.171280235 +0000 UTC m=+8.350659984" observedRunningTime="2025-01-29 12:04:20.25619301 +0000 UTC m=+9.435572771" watchObservedRunningTime="2025-01-29 12:04:20.256526623 +0000 UTC m=+9.435906396" Jan 29 12:04:24.145838 systemd[1]: Created slice kubepods-besteffort-podc55e3514_a60d_4066_8252_c11d3ccfb2ff.slice - libcontainer container kubepods-besteffort-podc55e3514_a60d_4066_8252_c11d3ccfb2ff.slice. 
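The two pod_startup_latency_tracker entries above carry a small piece of arithmetic: podStartE2EDuration is the gap between podCreationTimestamp and the observed running time, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling), which is why kube-proxy (no pull) reports identical values while tigera-operator does not. A minimal sketch reproducing the tigera-operator numbers from the timestamps printed in the log; the values are copied verbatim, and the last digits differ slightly from the kubelet's own figure because only the printed (rounded) timestamps are available here.

    # All values are seconds within 12:04 UTC, taken from the entries above.
    created            = 15.000000000   # podCreationTimestamp   2025-01-29 12:04:15
    first_started_pull = 16.844785661   # firstStartedPulling
    last_finished_pull = 19.171280235   # lastFinishedPulling
    observed_running   = 20.256526623   # watchObservedRunningTime

    e2e  = observed_running - created               # ~5.2565s -> podStartE2EDuration
    pull = last_finished_pull - first_started_pull  # ~2.3265s spent pulling the operator image
    slo  = e2e - pull                               # ~2.9300s -> podStartSLOduration

    print(f"E2E={e2e:.9f}s  pull={pull:.9f}s  SLO={slo:.9f}s")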
Jan 29 12:04:24.152132 kubelet[3422]: I0129 12:04:24.151663 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c55e3514-a60d-4066-8252-c11d3ccfb2ff-tigera-ca-bundle\") pod \"calico-typha-7c846d4555-4fttl\" (UID: \"c55e3514-a60d-4066-8252-c11d3ccfb2ff\") " pod="calico-system/calico-typha-7c846d4555-4fttl" Jan 29 12:04:24.152132 kubelet[3422]: I0129 12:04:24.151737 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9b5c\" (UniqueName: \"kubernetes.io/projected/c55e3514-a60d-4066-8252-c11d3ccfb2ff-kube-api-access-b9b5c\") pod \"calico-typha-7c846d4555-4fttl\" (UID: \"c55e3514-a60d-4066-8252-c11d3ccfb2ff\") " pod="calico-system/calico-typha-7c846d4555-4fttl" Jan 29 12:04:24.152132 kubelet[3422]: I0129 12:04:24.151849 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c55e3514-a60d-4066-8252-c11d3ccfb2ff-typha-certs\") pod \"calico-typha-7c846d4555-4fttl\" (UID: \"c55e3514-a60d-4066-8252-c11d3ccfb2ff\") " pod="calico-system/calico-typha-7c846d4555-4fttl" Jan 29 12:04:24.428828 systemd[1]: Created slice kubepods-besteffort-pod2607ee26_daa8_4eca_99ad_e3d71a57d82f.slice - libcontainer container kubepods-besteffort-pod2607ee26_daa8_4eca_99ad_e3d71a57d82f.slice. Jan 29 12:04:24.454207 kubelet[3422]: I0129 12:04:24.454133 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2607ee26-daa8-4eca-99ad-e3d71a57d82f-cni-log-dir\") pod \"calico-node-tdbgk\" (UID: \"2607ee26-daa8-4eca-99ad-e3d71a57d82f\") " pod="calico-system/calico-node-tdbgk" Jan 29 12:04:24.454393 kubelet[3422]: I0129 12:04:24.454217 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2607ee26-daa8-4eca-99ad-e3d71a57d82f-policysync\") pod \"calico-node-tdbgk\" (UID: \"2607ee26-daa8-4eca-99ad-e3d71a57d82f\") " pod="calico-system/calico-node-tdbgk" Jan 29 12:04:24.454393 kubelet[3422]: I0129 12:04:24.454256 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7khl6\" (UniqueName: \"kubernetes.io/projected/2607ee26-daa8-4eca-99ad-e3d71a57d82f-kube-api-access-7khl6\") pod \"calico-node-tdbgk\" (UID: \"2607ee26-daa8-4eca-99ad-e3d71a57d82f\") " pod="calico-system/calico-node-tdbgk" Jan 29 12:04:24.454393 kubelet[3422]: I0129 12:04:24.454299 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2607ee26-daa8-4eca-99ad-e3d71a57d82f-lib-modules\") pod \"calico-node-tdbgk\" (UID: \"2607ee26-daa8-4eca-99ad-e3d71a57d82f\") " pod="calico-system/calico-node-tdbgk" Jan 29 12:04:24.454393 kubelet[3422]: I0129 12:04:24.454336 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2607ee26-daa8-4eca-99ad-e3d71a57d82f-cni-bin-dir\") pod \"calico-node-tdbgk\" (UID: \"2607ee26-daa8-4eca-99ad-e3d71a57d82f\") " pod="calico-system/calico-node-tdbgk" Jan 29 12:04:24.454393 kubelet[3422]: I0129 12:04:24.454374 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" 
(UniqueName: \"kubernetes.io/host-path/2607ee26-daa8-4eca-99ad-e3d71a57d82f-var-lib-calico\") pod \"calico-node-tdbgk\" (UID: \"2607ee26-daa8-4eca-99ad-e3d71a57d82f\") " pod="calico-system/calico-node-tdbgk" Jan 29 12:04:24.454654 kubelet[3422]: I0129 12:04:24.454413 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2607ee26-daa8-4eca-99ad-e3d71a57d82f-tigera-ca-bundle\") pod \"calico-node-tdbgk\" (UID: \"2607ee26-daa8-4eca-99ad-e3d71a57d82f\") " pod="calico-system/calico-node-tdbgk" Jan 29 12:04:24.454654 kubelet[3422]: I0129 12:04:24.454451 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2607ee26-daa8-4eca-99ad-e3d71a57d82f-node-certs\") pod \"calico-node-tdbgk\" (UID: \"2607ee26-daa8-4eca-99ad-e3d71a57d82f\") " pod="calico-system/calico-node-tdbgk" Jan 29 12:04:24.454654 kubelet[3422]: I0129 12:04:24.454486 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2607ee26-daa8-4eca-99ad-e3d71a57d82f-flexvol-driver-host\") pod \"calico-node-tdbgk\" (UID: \"2607ee26-daa8-4eca-99ad-e3d71a57d82f\") " pod="calico-system/calico-node-tdbgk" Jan 29 12:04:24.454654 kubelet[3422]: I0129 12:04:24.454524 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2607ee26-daa8-4eca-99ad-e3d71a57d82f-xtables-lock\") pod \"calico-node-tdbgk\" (UID: \"2607ee26-daa8-4eca-99ad-e3d71a57d82f\") " pod="calico-system/calico-node-tdbgk" Jan 29 12:04:24.454654 kubelet[3422]: I0129 12:04:24.454561 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2607ee26-daa8-4eca-99ad-e3d71a57d82f-var-run-calico\") pod \"calico-node-tdbgk\" (UID: \"2607ee26-daa8-4eca-99ad-e3d71a57d82f\") " pod="calico-system/calico-node-tdbgk" Jan 29 12:04:24.454917 kubelet[3422]: I0129 12:04:24.454597 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2607ee26-daa8-4eca-99ad-e3d71a57d82f-cni-net-dir\") pod \"calico-node-tdbgk\" (UID: \"2607ee26-daa8-4eca-99ad-e3d71a57d82f\") " pod="calico-system/calico-node-tdbgk" Jan 29 12:04:24.456676 containerd[2030]: time="2025-01-29T12:04:24.456625736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c846d4555-4fttl,Uid:c55e3514-a60d-4066-8252-c11d3ccfb2ff,Namespace:calico-system,Attempt:0,}" Jan 29 12:04:24.533060 containerd[2030]: time="2025-01-29T12:04:24.532055630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:24.533060 containerd[2030]: time="2025-01-29T12:04:24.532341483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:24.533060 containerd[2030]: time="2025-01-29T12:04:24.532502814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:24.535183 containerd[2030]: time="2025-01-29T12:04:24.534397518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:24.584277 kubelet[3422]: E0129 12:04:24.583282 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.584277 kubelet[3422]: W0129 12:04:24.583639 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.584277 kubelet[3422]: E0129 12:04:24.583846 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.595039 kubelet[3422]: E0129 12:04:24.594076 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.595039 kubelet[3422]: W0129 12:04:24.594138 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.596336 kubelet[3422]: E0129 12:04:24.596278 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.600326 kubelet[3422]: E0129 12:04:24.599883 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.600326 kubelet[3422]: W0129 12:04:24.599919 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.600326 kubelet[3422]: E0129 12:04:24.599987 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.604426 kubelet[3422]: E0129 12:04:24.603428 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.604426 kubelet[3422]: W0129 12:04:24.604315 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.604968 kubelet[3422]: E0129 12:04:24.604897 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.608715 kubelet[3422]: E0129 12:04:24.608649 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.608715 kubelet[3422]: W0129 12:04:24.608710 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.609401 kubelet[3422]: E0129 12:04:24.608791 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:24.616710 kubelet[3422]: E0129 12:04:24.612257 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.616710 kubelet[3422]: W0129 12:04:24.612332 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.619556 kubelet[3422]: E0129 12:04:24.618010 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.619556 kubelet[3422]: E0129 12:04:24.619476 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.619556 kubelet[3422]: W0129 12:04:24.619513 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.621840 kubelet[3422]: E0129 12:04:24.621776 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.621840 kubelet[3422]: W0129 12:04:24.621819 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.624385 systemd[1]: Started cri-containerd-fc90a8709205f6ff52c5bbd804dbd1291954338998735e818ed45c30db336ff2.scope - libcontainer container fc90a8709205f6ff52c5bbd804dbd1291954338998735e818ed45c30db336ff2. Jan 29 12:04:24.626670 kubelet[3422]: E0129 12:04:24.625800 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.626670 kubelet[3422]: E0129 12:04:24.625880 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.626670 kubelet[3422]: E0129 12:04:24.625534 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.630948 kubelet[3422]: W0129 12:04:24.627097 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.630948 kubelet[3422]: E0129 12:04:24.630018 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:24.632775 kubelet[3422]: E0129 12:04:24.632698 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.632916 kubelet[3422]: W0129 12:04:24.632887 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.633207 kubelet[3422]: E0129 12:04:24.633096 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.636825 kubelet[3422]: E0129 12:04:24.635971 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.637525 kubelet[3422]: W0129 12:04:24.636970 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.637525 kubelet[3422]: E0129 12:04:24.637091 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.639217 kubelet[3422]: E0129 12:04:24.639014 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.639217 kubelet[3422]: W0129 12:04:24.639206 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.640239 kubelet[3422]: E0129 12:04:24.639555 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.641586 kubelet[3422]: E0129 12:04:24.641367 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.641586 kubelet[3422]: W0129 12:04:24.641570 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.642943 kubelet[3422]: E0129 12:04:24.642861 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.643418 kubelet[3422]: E0129 12:04:24.643222 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.643541 kubelet[3422]: W0129 12:04:24.643407 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.643991 kubelet[3422]: E0129 12:04:24.643920 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:24.647072 kubelet[3422]: E0129 12:04:24.646796 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.647072 kubelet[3422]: W0129 12:04:24.646829 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.647320 kubelet[3422]: E0129 12:04:24.647093 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.647841 kubelet[3422]: E0129 12:04:24.647738 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.647841 kubelet[3422]: W0129 12:04:24.647828 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.648022 kubelet[3422]: E0129 12:04:24.648003 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.648803 kubelet[3422]: E0129 12:04:24.648735 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.648938 kubelet[3422]: W0129 12:04:24.648780 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.648938 kubelet[3422]: E0129 12:04:24.648913 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.650967 kubelet[3422]: E0129 12:04:24.650906 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.651165 kubelet[3422]: W0129 12:04:24.650980 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.651165 kubelet[3422]: E0129 12:04:24.651079 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.652366 kubelet[3422]: E0129 12:04:24.651794 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.652586 kubelet[3422]: W0129 12:04:24.651874 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.652696 kubelet[3422]: E0129 12:04:24.652613 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:24.655076 kubelet[3422]: E0129 12:04:24.654653 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.655076 kubelet[3422]: W0129 12:04:24.654770 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.655076 kubelet[3422]: E0129 12:04:24.654911 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.656786 kubelet[3422]: E0129 12:04:24.656725 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.656786 kubelet[3422]: W0129 12:04:24.656769 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.656786 kubelet[3422]: E0129 12:04:24.656933 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.658317 kubelet[3422]: E0129 12:04:24.657283 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.658317 kubelet[3422]: W0129 12:04:24.657306 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.658317 kubelet[3422]: E0129 12:04:24.657709 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.658317 kubelet[3422]: W0129 12:04:24.657745 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.658317 kubelet[3422]: E0129 12:04:24.658045 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.658317 kubelet[3422]: E0129 12:04:24.658168 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.659556 kubelet[3422]: E0129 12:04:24.659517 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.659556 kubelet[3422]: W0129 12:04:24.659546 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.659927 kubelet[3422]: E0129 12:04:24.659863 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:24.661165 kubelet[3422]: E0129 12:04:24.660418 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.661165 kubelet[3422]: W0129 12:04:24.660472 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.661165 kubelet[3422]: E0129 12:04:24.660968 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.661165 kubelet[3422]: W0129 12:04:24.660994 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.662087 kubelet[3422]: E0129 12:04:24.661571 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.662087 kubelet[3422]: E0129 12:04:24.661767 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.662664 kubelet[3422]: E0129 12:04:24.662541 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.662775 kubelet[3422]: W0129 12:04:24.662655 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.663844 kubelet[3422]: E0129 12:04:24.662944 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.664032 kubelet[3422]: E0129 12:04:24.663845 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.664032 kubelet[3422]: W0129 12:04:24.663887 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.665278 kubelet[3422]: E0129 12:04:24.665209 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.665699 kubelet[3422]: E0129 12:04:24.665616 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.665699 kubelet[3422]: W0129 12:04:24.665673 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.665699 kubelet[3422]: E0129 12:04:24.665785 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:24.666940 kubelet[3422]: E0129 12:04:24.666843 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.666940 kubelet[3422]: W0129 12:04:24.666871 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.668635 kubelet[3422]: E0129 12:04:24.667347 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.668635 kubelet[3422]: E0129 12:04:24.667356 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.668635 kubelet[3422]: W0129 12:04:24.667436 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.668635 kubelet[3422]: E0129 12:04:24.667499 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.669138 kubelet[3422]: E0129 12:04:24.669086 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.670270 kubelet[3422]: W0129 12:04:24.670197 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.670566 kubelet[3422]: E0129 12:04:24.670517 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.671244 kubelet[3422]: E0129 12:04:24.671207 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.671971 kubelet[3422]: W0129 12:04:24.671889 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.673783 kubelet[3422]: E0129 12:04:24.673478 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.674194 kubelet[3422]: E0129 12:04:24.674150 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.674791 kubelet[3422]: W0129 12:04:24.674337 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.674791 kubelet[3422]: E0129 12:04:24.674426 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:24.676464 kubelet[3422]: E0129 12:04:24.676422 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.676728 kubelet[3422]: W0129 12:04:24.676691 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.677343 kubelet[3422]: E0129 12:04:24.676937 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.678324 kubelet[3422]: E0129 12:04:24.678279 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.678679 kubelet[3422]: W0129 12:04:24.678538 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.678679 kubelet[3422]: E0129 12:04:24.678587 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.718761 kubelet[3422]: E0129 12:04:24.718502 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.718761 kubelet[3422]: W0129 12:04:24.718543 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.718761 kubelet[3422]: E0129 12:04:24.718584 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.735681 containerd[2030]: time="2025-01-29T12:04:24.735608635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tdbgk,Uid:2607ee26-daa8-4eca-99ad-e3d71a57d82f,Namespace:calico-system,Attempt:0,}" Jan 29 12:04:24.764632 kubelet[3422]: E0129 12:04:24.763147 3422 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gvnp" podUID="f476b9f4-bc43-439e-8ffa-3de6f582cec3" Jan 29 12:04:24.821859 containerd[2030]: time="2025-01-29T12:04:24.821517328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:24.821859 containerd[2030]: time="2025-01-29T12:04:24.821624662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:24.821859 containerd[2030]: time="2025-01-29T12:04:24.821734023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:24.824145 containerd[2030]: time="2025-01-29T12:04:24.822061315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:24.839499 kubelet[3422]: E0129 12:04:24.839449 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.840205 kubelet[3422]: W0129 12:04:24.840159 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.841264 kubelet[3422]: E0129 12:04:24.840342 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.842623 kubelet[3422]: E0129 12:04:24.841884 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.842623 kubelet[3422]: W0129 12:04:24.842336 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.842623 kubelet[3422]: E0129 12:04:24.842390 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.845161 kubelet[3422]: E0129 12:04:24.843806 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.845591 kubelet[3422]: W0129 12:04:24.845535 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.845771 kubelet[3422]: E0129 12:04:24.845741 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.846556 kubelet[3422]: E0129 12:04:24.846380 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.848086 kubelet[3422]: W0129 12:04:24.847022 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.848086 kubelet[3422]: E0129 12:04:24.847083 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.849610 kubelet[3422]: E0129 12:04:24.848794 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.849610 kubelet[3422]: W0129 12:04:24.848834 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.849610 kubelet[3422]: E0129 12:04:24.848874 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:24.851647 kubelet[3422]: E0129 12:04:24.851460 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.853011 kubelet[3422]: W0129 12:04:24.852741 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.853011 kubelet[3422]: E0129 12:04:24.852807 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.855349 kubelet[3422]: E0129 12:04:24.854760 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.855349 kubelet[3422]: W0129 12:04:24.854941 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.855349 kubelet[3422]: E0129 12:04:24.854984 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.858774 kubelet[3422]: E0129 12:04:24.857753 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.858774 kubelet[3422]: W0129 12:04:24.857797 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.858774 kubelet[3422]: E0129 12:04:24.857833 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.860781 kubelet[3422]: E0129 12:04:24.860735 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.860979 kubelet[3422]: W0129 12:04:24.860943 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.861880 kubelet[3422]: E0129 12:04:24.861141 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.863508 kubelet[3422]: E0129 12:04:24.862202 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.864286 kubelet[3422]: W0129 12:04:24.863972 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.864286 kubelet[3422]: E0129 12:04:24.864030 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:24.866762 kubelet[3422]: E0129 12:04:24.866362 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.866762 kubelet[3422]: W0129 12:04:24.866415 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.866762 kubelet[3422]: E0129 12:04:24.866451 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.869812 kubelet[3422]: E0129 12:04:24.869584 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.870815 kubelet[3422]: W0129 12:04:24.870758 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.873203 kubelet[3422]: E0129 12:04:24.871467 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.875559 kubelet[3422]: E0129 12:04:24.874474 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.875559 kubelet[3422]: W0129 12:04:24.874509 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.875559 kubelet[3422]: E0129 12:04:24.874544 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.878353 kubelet[3422]: E0129 12:04:24.878288 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.879161 kubelet[3422]: W0129 12:04:24.878835 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.879161 kubelet[3422]: E0129 12:04:24.878890 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.881578 kubelet[3422]: E0129 12:04:24.879785 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.881578 kubelet[3422]: W0129 12:04:24.879828 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.881578 kubelet[3422]: E0129 12:04:24.879864 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:24.881578 kubelet[3422]: E0129 12:04:24.880436 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.881578 kubelet[3422]: W0129 12:04:24.880465 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.881578 kubelet[3422]: E0129 12:04:24.880499 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.881578 kubelet[3422]: E0129 12:04:24.880969 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.881578 kubelet[3422]: W0129 12:04:24.880993 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.881578 kubelet[3422]: E0129 12:04:24.881022 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.881578 kubelet[3422]: E0129 12:04:24.881430 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.882280 kubelet[3422]: W0129 12:04:24.881457 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.882280 kubelet[3422]: E0129 12:04:24.881493 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.883735 kubelet[3422]: E0129 12:04:24.883264 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.883735 kubelet[3422]: W0129 12:04:24.883313 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.883735 kubelet[3422]: E0129 12:04:24.883360 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.887086 kubelet[3422]: E0129 12:04:24.883871 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.887086 kubelet[3422]: W0129 12:04:24.883899 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.887086 kubelet[3422]: E0129 12:04:24.883932 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:24.887086 kubelet[3422]: E0129 12:04:24.884613 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.887086 kubelet[3422]: W0129 12:04:24.884643 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.887086 kubelet[3422]: E0129 12:04:24.884674 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.887086 kubelet[3422]: I0129 12:04:24.884730 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f476b9f4-bc43-439e-8ffa-3de6f582cec3-registration-dir\") pod \"csi-node-driver-7gvnp\" (UID: \"f476b9f4-bc43-439e-8ffa-3de6f582cec3\") " pod="calico-system/csi-node-driver-7gvnp" Jan 29 12:04:24.887086 kubelet[3422]: E0129 12:04:24.885284 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.887086 kubelet[3422]: W0129 12:04:24.885313 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.888537 kubelet[3422]: E0129 12:04:24.885378 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.888537 kubelet[3422]: I0129 12:04:24.885602 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f476b9f4-bc43-439e-8ffa-3de6f582cec3-socket-dir\") pod \"csi-node-driver-7gvnp\" (UID: \"f476b9f4-bc43-439e-8ffa-3de6f582cec3\") " pod="calico-system/csi-node-driver-7gvnp" Jan 29 12:04:24.888537 kubelet[3422]: E0129 12:04:24.885923 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.888537 kubelet[3422]: W0129 12:04:24.885947 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.888537 kubelet[3422]: E0129 12:04:24.885989 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.888537 kubelet[3422]: E0129 12:04:24.886468 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.888537 kubelet[3422]: W0129 12:04:24.886496 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.888537 kubelet[3422]: E0129 12:04:24.886627 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:24.888940 kubelet[3422]: E0129 12:04:24.888646 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.888940 kubelet[3422]: W0129 12:04:24.888681 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.888940 kubelet[3422]: E0129 12:04:24.888719 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.888940 kubelet[3422]: I0129 12:04:24.888765 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f476b9f4-bc43-439e-8ffa-3de6f582cec3-kubelet-dir\") pod \"csi-node-driver-7gvnp\" (UID: \"f476b9f4-bc43-439e-8ffa-3de6f582cec3\") " pod="calico-system/csi-node-driver-7gvnp" Jan 29 12:04:24.892916 kubelet[3422]: E0129 12:04:24.889414 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.892916 kubelet[3422]: W0129 12:04:24.889446 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.892916 kubelet[3422]: E0129 12:04:24.889509 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.892916 kubelet[3422]: I0129 12:04:24.889581 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgqpw\" (UniqueName: \"kubernetes.io/projected/f476b9f4-bc43-439e-8ffa-3de6f582cec3-kube-api-access-wgqpw\") pod \"csi-node-driver-7gvnp\" (UID: \"f476b9f4-bc43-439e-8ffa-3de6f582cec3\") " pod="calico-system/csi-node-driver-7gvnp" Jan 29 12:04:24.892916 kubelet[3422]: E0129 12:04:24.891796 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.892916 kubelet[3422]: W0129 12:04:24.891863 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.892916 kubelet[3422]: E0129 12:04:24.892021 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:24.892916 kubelet[3422]: I0129 12:04:24.892099 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f476b9f4-bc43-439e-8ffa-3de6f582cec3-varrun\") pod \"csi-node-driver-7gvnp\" (UID: \"f476b9f4-bc43-439e-8ffa-3de6f582cec3\") " pod="calico-system/csi-node-driver-7gvnp" Jan 29 12:04:24.892916 kubelet[3422]: E0129 12:04:24.892522 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.893579 kubelet[3422]: W0129 12:04:24.892551 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.893579 kubelet[3422]: E0129 12:04:24.892668 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.893579 kubelet[3422]: E0129 12:04:24.893143 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.893579 kubelet[3422]: W0129 12:04:24.893172 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.893802 kubelet[3422]: E0129 12:04:24.893676 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.893802 kubelet[3422]: W0129 12:04:24.893703 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.899253 kubelet[3422]: E0129 12:04:24.894319 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.899253 kubelet[3422]: W0129 12:04:24.894363 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.899253 kubelet[3422]: E0129 12:04:24.894426 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.899253 kubelet[3422]: E0129 12:04:24.894455 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.899253 kubelet[3422]: E0129 12:04:24.894504 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.895682 systemd[1]: Started cri-containerd-6819398b3d7275e866cc8d6ee172aa24cbbc9596a31b19d1d794eddc5a5f01f7.scope - libcontainer container 6819398b3d7275e866cc8d6ee172aa24cbbc9596a31b19d1d794eddc5a5f01f7. 
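The repeated driver-call.go / plugins.go triplet above is the kubelet probing the FlexVolume plugin directory before Calico's flexvol-driver init container has installed the uds binary. Below is a minimal Go sketch of that failure mode (helper names are illustrative, not kubelet's actual code): exec'ing a missing driver returns empty output plus "executable file not found in $PATH", and unmarshalling "" then fails with "unexpected end of JSON input", which is exactly the E/W pairing seen in the log.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the minimal JSON a FlexVolume-style driver is expected
// to print on stdout; the field names here are illustrative only.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func callDriver(executable string, args ...string) (*DriverStatus, error) {
	out, execErr := exec.Command(executable, args...).CombinedOutput()
	// If the executable is missing, out is empty ("") and execErr reports
	// "executable file not found in $PATH" -- the W-level line in the log.
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// Unmarshalling "" fails with "unexpected end of JSON input",
		// which is the E-level line that precedes each warning above.
		return nil, fmt.Errorf("failed to unmarshal output for command: %v, output: %q, error: %v (exec error: %v)",
			args, string(out), err, execErr)
	}
	return &st, nil
}

func main() {
	// The path below is the plugin directory named in the log; the binary
	// does not exist yet on this node, so both errors are reproduced.
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println(err)
}
```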
Jan 29 12:04:24.901767 kubelet[3422]: E0129 12:04:24.899950 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.901767 kubelet[3422]: W0129 12:04:24.899992 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.901767 kubelet[3422]: E0129 12:04:24.900027 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.904603 kubelet[3422]: E0129 12:04:24.904286 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.904603 kubelet[3422]: W0129 12:04:24.904325 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.904603 kubelet[3422]: E0129 12:04:24.904360 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.908995 kubelet[3422]: E0129 12:04:24.906330 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.908995 kubelet[3422]: W0129 12:04:24.906371 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.908995 kubelet[3422]: E0129 12:04:24.906405 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.910000 kubelet[3422]: E0129 12:04:24.909957 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.912457 kubelet[3422]: W0129 12:04:24.912383 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.913594 kubelet[3422]: E0129 12:04:24.913473 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:24.993444 kubelet[3422]: E0129 12:04:24.993298 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.993710 kubelet[3422]: W0129 12:04:24.993674 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.993843 kubelet[3422]: E0129 12:04:24.993818 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:24.995193 kubelet[3422]: E0129 12:04:24.995075 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:24.996389 kubelet[3422]: W0129 12:04:24.996219 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:24.999289 kubelet[3422]: E0129 12:04:24.999240 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:25.002161 kubelet[3422]: E0129 12:04:25.000328 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.002161 kubelet[3422]: W0129 12:04:25.000373 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.002161 kubelet[3422]: E0129 12:04:25.000427 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:25.003751 kubelet[3422]: E0129 12:04:25.002529 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.003751 kubelet[3422]: W0129 12:04:25.002572 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.004216 kubelet[3422]: E0129 12:04:25.004167 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:25.004393 kubelet[3422]: E0129 12:04:25.004284 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.004526 kubelet[3422]: W0129 12:04:25.004492 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.004732 kubelet[3422]: E0129 12:04:25.004682 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:25.006855 kubelet[3422]: E0129 12:04:25.006687 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.006855 kubelet[3422]: W0129 12:04:25.006725 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.006855 kubelet[3422]: E0129 12:04:25.006811 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:25.008010 kubelet[3422]: E0129 12:04:25.007693 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.008010 kubelet[3422]: W0129 12:04:25.007736 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.008010 kubelet[3422]: E0129 12:04:25.007833 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:25.009600 kubelet[3422]: E0129 12:04:25.009013 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.009600 kubelet[3422]: W0129 12:04:25.009051 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.009600 kubelet[3422]: E0129 12:04:25.009138 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:25.010210 kubelet[3422]: E0129 12:04:25.010028 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.010210 kubelet[3422]: W0129 12:04:25.010063 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.010210 kubelet[3422]: E0129 12:04:25.010160 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:25.011664 kubelet[3422]: E0129 12:04:25.011303 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.011664 kubelet[3422]: W0129 12:04:25.011363 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.011664 kubelet[3422]: E0129 12:04:25.011523 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:25.012786 kubelet[3422]: E0129 12:04:25.012486 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.012786 kubelet[3422]: W0129 12:04:25.012522 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.012786 kubelet[3422]: E0129 12:04:25.012598 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:25.013912 kubelet[3422]: E0129 12:04:25.013681 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.013912 kubelet[3422]: W0129 12:04:25.013851 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.015434 kubelet[3422]: E0129 12:04:25.015025 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:25.016526 kubelet[3422]: E0129 12:04:25.016258 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.017682 kubelet[3422]: W0129 12:04:25.017059 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.017682 kubelet[3422]: E0129 12:04:25.017211 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:25.020332 kubelet[3422]: E0129 12:04:25.020284 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.020882 kubelet[3422]: W0129 12:04:25.020615 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.020882 kubelet[3422]: E0129 12:04:25.020793 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:25.022308 kubelet[3422]: E0129 12:04:25.021896 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.022308 kubelet[3422]: W0129 12:04:25.021939 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.022308 kubelet[3422]: E0129 12:04:25.022028 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:25.023743 kubelet[3422]: E0129 12:04:25.023418 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.023743 kubelet[3422]: W0129 12:04:25.023469 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.023743 kubelet[3422]: E0129 12:04:25.023650 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:25.025648 kubelet[3422]: E0129 12:04:25.025238 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.025648 kubelet[3422]: W0129 12:04:25.025311 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.025648 kubelet[3422]: E0129 12:04:25.025517 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:25.027920 kubelet[3422]: E0129 12:04:25.027527 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.027920 kubelet[3422]: W0129 12:04:25.027565 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.030077 kubelet[3422]: E0129 12:04:25.029396 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:25.030077 kubelet[3422]: E0129 12:04:25.029743 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.030077 kubelet[3422]: W0129 12:04:25.029762 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.030333 kubelet[3422]: E0129 12:04:25.030222 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:25.032551 kubelet[3422]: E0129 12:04:25.031318 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.032551 kubelet[3422]: W0129 12:04:25.031393 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.032551 kubelet[3422]: E0129 12:04:25.031518 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:25.035328 kubelet[3422]: E0129 12:04:25.035213 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.035831 kubelet[3422]: W0129 12:04:25.035755 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.036780 kubelet[3422]: E0129 12:04:25.036743 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.037398 kubelet[3422]: W0129 12:04:25.037031 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.038524 kubelet[3422]: E0129 12:04:25.037947 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.038524 kubelet[3422]: W0129 12:04:25.037990 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.038524 kubelet[3422]: E0129 12:04:25.038034 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:25.039021 kubelet[3422]: E0129 12:04:25.036798 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:25.039593 kubelet[3422]: E0129 12:04:25.039506 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:25.040508 kubelet[3422]: E0129 12:04:25.040437 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.040938 kubelet[3422]: W0129 12:04:25.040897 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.041585 kubelet[3422]: E0129 12:04:25.041102 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:25.042653 kubelet[3422]: E0129 12:04:25.042469 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.042653 kubelet[3422]: W0129 12:04:25.042530 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.042653 kubelet[3422]: E0129 12:04:25.042568 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:04:25.071980 kubelet[3422]: E0129 12:04:25.071675 3422 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:04:25.071980 kubelet[3422]: W0129 12:04:25.071785 3422 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:04:25.071980 kubelet[3422]: E0129 12:04:25.071859 3422 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:04:25.118100 containerd[2030]: time="2025-01-29T12:04:25.118037566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tdbgk,Uid:2607ee26-daa8-4eca-99ad-e3d71a57d82f,Namespace:calico-system,Attempt:0,} returns sandbox id \"6819398b3d7275e866cc8d6ee172aa24cbbc9596a31b19d1d794eddc5a5f01f7\"" Jan 29 12:04:25.125521 containerd[2030]: time="2025-01-29T12:04:25.125406632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 12:04:25.280902 containerd[2030]: time="2025-01-29T12:04:25.280764746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c846d4555-4fttl,Uid:c55e3514-a60d-4066-8252-c11d3ccfb2ff,Namespace:calico-system,Attempt:0,} returns sandbox id \"fc90a8709205f6ff52c5bbd804dbd1291954338998735e818ed45c30db336ff2\"" Jan 29 12:04:26.121181 kubelet[3422]: E0129 12:04:26.120897 3422 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gvnp" podUID="f476b9f4-bc43-439e-8ffa-3de6f582cec3" Jan 29 12:04:26.396511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1460187284.mount: Deactivated successfully. 
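The "network is not ready ... NetworkPluginNotReady" pod-sync errors for csi-node-driver-7gvnp reflect the CRI runtime's NetworkReady condition, which stays false until a CNI config is installed by calico-node. A hedged sketch of querying that condition over the CRI API, assuming containerd's default socket path on this image:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path; containerd serves the CRI API on the same socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := client.Status(ctx, &runtimeapi.StatusRequest{})
	if err != nil {
		panic(err)
	}
	for _, cond := range resp.GetStatus().GetConditions() {
		// Until a CNI config appears, NetworkReady is false with reason
		// NetworkPluginNotReady, matching the kubelet entries above.
		fmt.Printf("%s=%v reason=%s\n", cond.GetType(), cond.GetStatus(), cond.GetReason())
	}
}
```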
Jan 29 12:04:26.658608 containerd[2030]: time="2025-01-29T12:04:26.658528076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:26.661340 containerd[2030]: time="2025-01-29T12:04:26.661278696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Jan 29 12:04:26.664343 containerd[2030]: time="2025-01-29T12:04:26.663975283Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:26.672176 containerd[2030]: time="2025-01-29T12:04:26.670501183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:26.673181 containerd[2030]: time="2025-01-29T12:04:26.673019839Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.547308955s" Jan 29 12:04:26.673456 containerd[2030]: time="2025-01-29T12:04:26.673388055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 29 12:04:26.676815 containerd[2030]: time="2025-01-29T12:04:26.676715574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 12:04:26.687309 containerd[2030]: time="2025-01-29T12:04:26.687232921Z" level=info msg="CreateContainer within sandbox \"6819398b3d7275e866cc8d6ee172aa24cbbc9596a31b19d1d794eddc5a5f01f7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 12:04:26.737090 containerd[2030]: time="2025-01-29T12:04:26.735654738Z" level=info msg="CreateContainer within sandbox \"6819398b3d7275e866cc8d6ee172aa24cbbc9596a31b19d1d794eddc5a5f01f7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6f6862c268c2eb4a344df2dffc6dad297218a5123ed331685ee7b489108c545c\"" Jan 29 12:04:26.740285 containerd[2030]: time="2025-01-29T12:04:26.738382293Z" level=info msg="StartContainer for \"6f6862c268c2eb4a344df2dffc6dad297218a5123ed331685ee7b489108c545c\"" Jan 29 12:04:26.835612 systemd[1]: Started cri-containerd-6f6862c268c2eb4a344df2dffc6dad297218a5123ed331685ee7b489108c545c.scope - libcontainer container 6f6862c268c2eb4a344df2dffc6dad297218a5123ed331685ee7b489108c545c. Jan 29 12:04:26.946936 containerd[2030]: time="2025-01-29T12:04:26.946264257Z" level=info msg="StartContainer for \"6f6862c268c2eb4a344df2dffc6dad297218a5123ed331685ee7b489108c545c\" returns successfully" Jan 29 12:04:27.028680 systemd[1]: cri-containerd-6f6862c268c2eb4a344df2dffc6dad297218a5123ed331685ee7b489108c545c.scope: Deactivated successfully. 
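The ImageCreate/Pulled entries above record the pod2daemon-flexvol pull resolving to an image ID and repo digest. A minimal sketch of the same pull through the containerd Go client, assuming the default socket and the k8s.io namespace that the CRI plugin uses for pod images:

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Pod images live in the "k8s.io" namespace on a kubelet node.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1",
		containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	// The resolved digest corresponds to the repo digest printed in the
	// "Pulled image ..." entry above.
	fmt.Println(img.Name(), img.Target().Digest)
}
```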
Jan 29 12:04:27.241515 containerd[2030]: time="2025-01-29T12:04:27.241038925Z" level=info msg="shim disconnected" id=6f6862c268c2eb4a344df2dffc6dad297218a5123ed331685ee7b489108c545c namespace=k8s.io Jan 29 12:04:27.241515 containerd[2030]: time="2025-01-29T12:04:27.241160208Z" level=warning msg="cleaning up after shim disconnected" id=6f6862c268c2eb4a344df2dffc6dad297218a5123ed331685ee7b489108c545c namespace=k8s.io Jan 29 12:04:27.241515 containerd[2030]: time="2025-01-29T12:04:27.241183141Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:04:27.326684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f6862c268c2eb4a344df2dffc6dad297218a5123ed331685ee7b489108c545c-rootfs.mount: Deactivated successfully. Jan 29 12:04:28.122596 kubelet[3422]: E0129 12:04:28.122090 3422 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gvnp" podUID="f476b9f4-bc43-439e-8ffa-3de6f582cec3" Jan 29 12:04:28.919950 containerd[2030]: time="2025-01-29T12:04:28.919891344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:28.921399 containerd[2030]: time="2025-01-29T12:04:28.921294186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=27861516" Jan 29 12:04:28.922520 containerd[2030]: time="2025-01-29T12:04:28.922424812Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:28.926965 containerd[2030]: time="2025-01-29T12:04:28.926885609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:28.929774 containerd[2030]: time="2025-01-29T12:04:28.929719167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.252915126s" Jan 29 12:04:28.929911 containerd[2030]: time="2025-01-29T12:04:28.929777782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 29 12:04:28.937817 containerd[2030]: time="2025-01-29T12:04:28.937549750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 12:04:28.988932 containerd[2030]: time="2025-01-29T12:04:28.988010535Z" level=info msg="CreateContainer within sandbox \"fc90a8709205f6ff52c5bbd804dbd1291954338998735e818ed45c30db336ff2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 12:04:29.018928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2655829339.mount: Deactivated successfully. 
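The scope deactivation, "shim disconnected", and rootfs.mount cleanup above are the normal teardown of a container that runs once and exits, as the flexvol-driver init container does. A rough containerd-client sketch of that lifecycle, assuming the image from the previous sketch is already present; the container and snapshot IDs are illustrative, not the CRI-generated ones:

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Assumes the image was already pulled and unpacked, e.g. as sketched above.
	img, err := client.GetImage(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1")
	if err != nil {
		panic(err)
	}

	c, err := client.NewContainer(ctx, "flexvol-demo",
		containerd.WithImage(img),
		containerd.WithNewSnapshot("flexvol-demo-snap", img),
		containerd.WithNewSpec(oci.WithImageConfig(img)))
	if err != nil {
		panic(err)
	}
	defer c.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := c.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		panic(err)
	}
	defer task.Delete(ctx)

	exitCh, err := task.Wait(ctx) // set up the wait before Start
	if err != nil {
		panic(err)
	}
	if err := task.Start(ctx); err != nil {
		panic(err)
	}

	// When the process exits, its shim shuts down (the "shim disconnected"
	// event) and the task and snapshot can be cleaned up.
	status := <-exitCh
	code, _, werr := status.Result()
	fmt.Println("exit code:", code, "wait error:", werr)
}
```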
Jan 29 12:04:29.023928 containerd[2030]: time="2025-01-29T12:04:29.020687037Z" level=info msg="CreateContainer within sandbox \"fc90a8709205f6ff52c5bbd804dbd1291954338998735e818ed45c30db336ff2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ad113def23fe80e5a4413f9bfe194aa59b2b6ff818f597ac2efb88126181577b\"" Jan 29 12:04:29.026396 containerd[2030]: time="2025-01-29T12:04:29.026140457Z" level=info msg="StartContainer for \"ad113def23fe80e5a4413f9bfe194aa59b2b6ff818f597ac2efb88126181577b\"" Jan 29 12:04:29.097473 systemd[1]: Started cri-containerd-ad113def23fe80e5a4413f9bfe194aa59b2b6ff818f597ac2efb88126181577b.scope - libcontainer container ad113def23fe80e5a4413f9bfe194aa59b2b6ff818f597ac2efb88126181577b. Jan 29 12:04:29.193083 containerd[2030]: time="2025-01-29T12:04:29.191886983Z" level=info msg="StartContainer for \"ad113def23fe80e5a4413f9bfe194aa59b2b6ff818f597ac2efb88126181577b\" returns successfully" Jan 29 12:04:30.121491 kubelet[3422]: E0129 12:04:30.121383 3422 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gvnp" podUID="f476b9f4-bc43-439e-8ffa-3de6f582cec3" Jan 29 12:04:30.309830 kubelet[3422]: I0129 12:04:30.309778 3422 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:04:32.122489 kubelet[3422]: E0129 12:04:32.121553 3422 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gvnp" podUID="f476b9f4-bc43-439e-8ffa-3de6f582cec3" Jan 29 12:04:33.290461 containerd[2030]: time="2025-01-29T12:04:33.290395319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:33.296029 containerd[2030]: time="2025-01-29T12:04:33.295944404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 29 12:04:33.298611 containerd[2030]: time="2025-01-29T12:04:33.298481231Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:33.303379 containerd[2030]: time="2025-01-29T12:04:33.303273817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:33.305488 containerd[2030]: time="2025-01-29T12:04:33.305375562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.36769672s" Jan 29 12:04:33.305689 containerd[2030]: time="2025-01-29T12:04:33.305452419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 29 12:04:33.312092 containerd[2030]: time="2025-01-29T12:04:33.311747686Z" 
level=info msg="CreateContainer within sandbox \"6819398b3d7275e866cc8d6ee172aa24cbbc9596a31b19d1d794eddc5a5f01f7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 12:04:33.333198 containerd[2030]: time="2025-01-29T12:04:33.333032502Z" level=info msg="CreateContainer within sandbox \"6819398b3d7275e866cc8d6ee172aa24cbbc9596a31b19d1d794eddc5a5f01f7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"39fa8059bd112de6c4583fc4ac5f432f0f3a6d316cb4122dd5f8bebb941ffa10\"" Jan 29 12:04:33.335637 containerd[2030]: time="2025-01-29T12:04:33.334593737Z" level=info msg="StartContainer for \"39fa8059bd112de6c4583fc4ac5f432f0f3a6d316cb4122dd5f8bebb941ffa10\"" Jan 29 12:04:33.400450 systemd[1]: Started cri-containerd-39fa8059bd112de6c4583fc4ac5f432f0f3a6d316cb4122dd5f8bebb941ffa10.scope - libcontainer container 39fa8059bd112de6c4583fc4ac5f432f0f3a6d316cb4122dd5f8bebb941ffa10. Jan 29 12:04:33.459738 containerd[2030]: time="2025-01-29T12:04:33.459640651Z" level=info msg="StartContainer for \"39fa8059bd112de6c4583fc4ac5f432f0f3a6d316cb4122dd5f8bebb941ffa10\" returns successfully" Jan 29 12:04:34.121345 kubelet[3422]: E0129 12:04:34.121251 3422 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gvnp" podUID="f476b9f4-bc43-439e-8ffa-3de6f582cec3" Jan 29 12:04:34.376738 kubelet[3422]: I0129 12:04:34.374608 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c846d4555-4fttl" podStartSLOduration=6.732301666 podStartE2EDuration="10.374576173s" podCreationTimestamp="2025-01-29 12:04:24 +0000 UTC" firstStartedPulling="2025-01-29 12:04:25.292457061 +0000 UTC m=+14.471836798" lastFinishedPulling="2025-01-29 12:04:28.934731568 +0000 UTC m=+18.114111305" observedRunningTime="2025-01-29 12:04:29.350343807 +0000 UTC m=+18.529723664" watchObservedRunningTime="2025-01-29 12:04:34.374576173 +0000 UTC m=+23.553955922" Jan 29 12:04:34.384868 containerd[2030]: time="2025-01-29T12:04:34.384781292Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:04:34.390043 systemd[1]: cri-containerd-39fa8059bd112de6c4583fc4ac5f432f0f3a6d316cb4122dd5f8bebb941ffa10.scope: Deactivated successfully. Jan 29 12:04:34.438705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39fa8059bd112de6c4583fc4ac5f432f0f3a6d316cb4122dd5f8bebb941ffa10-rootfs.mount: Deactivated successfully. Jan 29 12:04:34.465172 kubelet[3422]: I0129 12:04:34.464211 3422 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 12:04:34.555836 systemd[1]: Created slice kubepods-burstable-podf1fb5145_b832_443d_a73c_9930366e855b.slice - libcontainer container kubepods-burstable-podf1fb5145_b832_443d_a73c_9930366e855b.slice. Jan 29 12:04:34.592328 systemd[1]: Created slice kubepods-burstable-pod3a4dca7a_f99e_482f_ab15_aab50f3f08b4.slice - libcontainer container kubepods-burstable-pod3a4dca7a_f99e_482f_ab15_aab50f3f08b4.slice. 
Jan 29 12:04:34.609085 kubelet[3422]: I0129 12:04:34.608899 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1fb5145-b832-443d-a73c-9930366e855b-config-volume\") pod \"coredns-6f6b679f8f-bd2qq\" (UID: \"f1fb5145-b832-443d-a73c-9930366e855b\") " pod="kube-system/coredns-6f6b679f8f-bd2qq" Jan 29 12:04:34.609085 kubelet[3422]: I0129 12:04:34.609036 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz5st\" (UniqueName: \"kubernetes.io/projected/f1fb5145-b832-443d-a73c-9930366e855b-kube-api-access-rz5st\") pod \"coredns-6f6b679f8f-bd2qq\" (UID: \"f1fb5145-b832-443d-a73c-9930366e855b\") " pod="kube-system/coredns-6f6b679f8f-bd2qq" Jan 29 12:04:34.609085 kubelet[3422]: I0129 12:04:34.609104 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a4dca7a-f99e-482f-ab15-aab50f3f08b4-config-volume\") pod \"coredns-6f6b679f8f-xdkk9\" (UID: \"3a4dca7a-f99e-482f-ab15-aab50f3f08b4\") " pod="kube-system/coredns-6f6b679f8f-xdkk9" Jan 29 12:04:34.609583 kubelet[3422]: I0129 12:04:34.609286 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc4ab877-770e-430d-93c1-736e084d31b2-tigera-ca-bundle\") pod \"calico-kube-controllers-77bfd45fb5-9sscj\" (UID: \"cc4ab877-770e-430d-93c1-736e084d31b2\") " pod="calico-system/calico-kube-controllers-77bfd45fb5-9sscj" Jan 29 12:04:34.609583 kubelet[3422]: I0129 12:04:34.609345 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qwzz\" (UniqueName: \"kubernetes.io/projected/cc4ab877-770e-430d-93c1-736e084d31b2-kube-api-access-6qwzz\") pod \"calico-kube-controllers-77bfd45fb5-9sscj\" (UID: \"cc4ab877-770e-430d-93c1-736e084d31b2\") " pod="calico-system/calico-kube-controllers-77bfd45fb5-9sscj" Jan 29 12:04:34.609583 kubelet[3422]: I0129 12:04:34.609402 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46qnk\" (UniqueName: \"kubernetes.io/projected/3a4dca7a-f99e-482f-ab15-aab50f3f08b4-kube-api-access-46qnk\") pod \"coredns-6f6b679f8f-xdkk9\" (UID: \"3a4dca7a-f99e-482f-ab15-aab50f3f08b4\") " pod="kube-system/coredns-6f6b679f8f-xdkk9" Jan 29 12:04:34.635468 systemd[1]: Created slice kubepods-besteffort-podcc4ab877_770e_430d_93c1_736e084d31b2.slice - libcontainer container kubepods-besteffort-podcc4ab877_770e_430d_93c1_736e084d31b2.slice. Jan 29 12:04:34.665996 systemd[1]: Created slice kubepods-besteffort-pod3dde2c8b_21bf_4437_af52_497d435eac68.slice - libcontainer container kubepods-besteffort-pod3dde2c8b_21bf_4437_af52_497d435eac68.slice. Jan 29 12:04:34.683050 systemd[1]: Created slice kubepods-besteffort-podd3f787d2_5d81_447d_8cb4_e23656b7ddc2.slice - libcontainer container kubepods-besteffort-podd3f787d2_5d81_447d_8cb4_e23656b7ddc2.slice. 
Jan 29 12:04:34.712135 kubelet[3422]: I0129 12:04:34.710436 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d3f787d2-5d81-447d-8cb4-e23656b7ddc2-calico-apiserver-certs\") pod \"calico-apiserver-5486c59b9d-tt29w\" (UID: \"d3f787d2-5d81-447d-8cb4-e23656b7ddc2\") " pod="calico-apiserver/calico-apiserver-5486c59b9d-tt29w" Jan 29 12:04:34.712460 kubelet[3422]: I0129 12:04:34.712426 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxzjs\" (UniqueName: \"kubernetes.io/projected/d3f787d2-5d81-447d-8cb4-e23656b7ddc2-kube-api-access-dxzjs\") pod \"calico-apiserver-5486c59b9d-tt29w\" (UID: \"d3f787d2-5d81-447d-8cb4-e23656b7ddc2\") " pod="calico-apiserver/calico-apiserver-5486c59b9d-tt29w" Jan 29 12:04:34.712628 kubelet[3422]: I0129 12:04:34.712599 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl8ds\" (UniqueName: \"kubernetes.io/projected/3dde2c8b-21bf-4437-af52-497d435eac68-kube-api-access-nl8ds\") pod \"calico-apiserver-5486c59b9d-zlzn4\" (UID: \"3dde2c8b-21bf-4437-af52-497d435eac68\") " pod="calico-apiserver/calico-apiserver-5486c59b9d-zlzn4" Jan 29 12:04:34.713899 kubelet[3422]: I0129 12:04:34.712855 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3dde2c8b-21bf-4437-af52-497d435eac68-calico-apiserver-certs\") pod \"calico-apiserver-5486c59b9d-zlzn4\" (UID: \"3dde2c8b-21bf-4437-af52-497d435eac68\") " pod="calico-apiserver/calico-apiserver-5486c59b9d-zlzn4" Jan 29 12:04:34.878844 containerd[2030]: time="2025-01-29T12:04:34.878687977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bd2qq,Uid:f1fb5145-b832-443d-a73c-9930366e855b,Namespace:kube-system,Attempt:0,}" Jan 29 12:04:34.922227 containerd[2030]: time="2025-01-29T12:04:34.922083216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xdkk9,Uid:3a4dca7a-f99e-482f-ab15-aab50f3f08b4,Namespace:kube-system,Attempt:0,}" Jan 29 12:04:34.948528 containerd[2030]: time="2025-01-29T12:04:34.948430532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77bfd45fb5-9sscj,Uid:cc4ab877-770e-430d-93c1-736e084d31b2,Namespace:calico-system,Attempt:0,}" Jan 29 12:04:34.979101 containerd[2030]: time="2025-01-29T12:04:34.978892462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5486c59b9d-zlzn4,Uid:3dde2c8b-21bf-4437-af52-497d435eac68,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:04:34.990836 containerd[2030]: time="2025-01-29T12:04:34.990544789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5486c59b9d-tt29w,Uid:d3f787d2-5d81-447d-8cb4-e23656b7ddc2,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:04:35.734330 containerd[2030]: time="2025-01-29T12:04:35.734058020Z" level=info msg="shim disconnected" id=39fa8059bd112de6c4583fc4ac5f432f0f3a6d316cb4122dd5f8bebb941ffa10 namespace=k8s.io Jan 29 12:04:35.734330 containerd[2030]: time="2025-01-29T12:04:35.734209480Z" level=warning msg="cleaning up after shim disconnected" id=39fa8059bd112de6c4583fc4ac5f432f0f3a6d316cb4122dd5f8bebb941ffa10 namespace=k8s.io Jan 29 12:04:35.734330 containerd[2030]: time="2025-01-29T12:04:35.734243675Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 
12:04:35.830039 kubelet[3422]: I0129 12:04:35.829378 3422 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:04:36.106569 containerd[2030]: time="2025-01-29T12:04:36.105791026Z" level=error msg="Failed to destroy network for sandbox \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.108174 containerd[2030]: time="2025-01-29T12:04:36.107893154Z" level=error msg="encountered an error cleaning up failed sandbox \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.109135 containerd[2030]: time="2025-01-29T12:04:36.108912500Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bd2qq,Uid:f1fb5145-b832-443d-a73c-9930366e855b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.111209 kubelet[3422]: E0129 12:04:36.109346 3422 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.111209 kubelet[3422]: E0129 12:04:36.109448 3422 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bd2qq" Jan 29 12:04:36.111209 kubelet[3422]: E0129 12:04:36.109482 3422 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bd2qq" Jan 29 12:04:36.111537 kubelet[3422]: E0129 12:04:36.109564 3422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-bd2qq_kube-system(f1fb5145-b832-443d-a73c-9930366e855b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-bd2qq_kube-system(f1fb5145-b832-443d-a73c-9930366e855b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bd2qq" podUID="f1fb5145-b832-443d-a73c-9930366e855b" Jan 29 12:04:36.154663 systemd[1]: Created slice kubepods-besteffort-podf476b9f4_bc43_439e_8ffa_3de6f582cec3.slice - libcontainer container kubepods-besteffort-podf476b9f4_bc43_439e_8ffa_3de6f582cec3.slice. Jan 29 12:04:36.163458 containerd[2030]: time="2025-01-29T12:04:36.162852265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gvnp,Uid:f476b9f4-bc43-439e-8ffa-3de6f582cec3,Namespace:calico-system,Attempt:0,}" Jan 29 12:04:36.298393 containerd[2030]: time="2025-01-29T12:04:36.298299080Z" level=error msg="Failed to destroy network for sandbox \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.299148 containerd[2030]: time="2025-01-29T12:04:36.299049797Z" level=error msg="encountered an error cleaning up failed sandbox \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.300733 containerd[2030]: time="2025-01-29T12:04:36.300441221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5486c59b9d-tt29w,Uid:d3f787d2-5d81-447d-8cb4-e23656b7ddc2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.302281 kubelet[3422]: E0129 12:04:36.302209 3422 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.302454 kubelet[3422]: E0129 12:04:36.302326 3422 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5486c59b9d-tt29w" Jan 29 12:04:36.302454 kubelet[3422]: E0129 12:04:36.302385 3422 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5486c59b9d-tt29w" Jan 29 12:04:36.302565 kubelet[3422]: E0129 12:04:36.302518 3422 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5486c59b9d-tt29w_calico-apiserver(d3f787d2-5d81-447d-8cb4-e23656b7ddc2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5486c59b9d-tt29w_calico-apiserver(d3f787d2-5d81-447d-8cb4-e23656b7ddc2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5486c59b9d-tt29w" podUID="d3f787d2-5d81-447d-8cb4-e23656b7ddc2" Jan 29 12:04:36.311711 containerd[2030]: time="2025-01-29T12:04:36.311531341Z" level=error msg="Failed to destroy network for sandbox \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.312801 containerd[2030]: time="2025-01-29T12:04:36.312740564Z" level=error msg="encountered an error cleaning up failed sandbox \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.313729 containerd[2030]: time="2025-01-29T12:04:36.313668528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77bfd45fb5-9sscj,Uid:cc4ab877-770e-430d-93c1-736e084d31b2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.314377 kubelet[3422]: E0129 12:04:36.314315 3422 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.314503 kubelet[3422]: E0129 12:04:36.314398 3422 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77bfd45fb5-9sscj" Jan 29 12:04:36.314503 kubelet[3422]: E0129 12:04:36.314431 3422 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-77bfd45fb5-9sscj" Jan 29 12:04:36.314882 kubelet[3422]: E0129 12:04:36.314680 3422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77bfd45fb5-9sscj_calico-system(cc4ab877-770e-430d-93c1-736e084d31b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77bfd45fb5-9sscj_calico-system(cc4ab877-770e-430d-93c1-736e084d31b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77bfd45fb5-9sscj" podUID="cc4ab877-770e-430d-93c1-736e084d31b2" Jan 29 12:04:36.318340 containerd[2030]: time="2025-01-29T12:04:36.318187795Z" level=error msg="Failed to destroy network for sandbox \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.320630 containerd[2030]: time="2025-01-29T12:04:36.320371650Z" level=error msg="encountered an error cleaning up failed sandbox \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.320630 containerd[2030]: time="2025-01-29T12:04:36.320473336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5486c59b9d-zlzn4,Uid:3dde2c8b-21bf-4437-af52-497d435eac68,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.321480 kubelet[3422]: E0129 12:04:36.321048 3422 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.321480 kubelet[3422]: E0129 12:04:36.321148 3422 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5486c59b9d-zlzn4" Jan 29 12:04:36.321480 kubelet[3422]: E0129 12:04:36.321185 3422 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5486c59b9d-zlzn4" Jan 29 12:04:36.321721 kubelet[3422]: E0129 12:04:36.321264 3422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5486c59b9d-zlzn4_calico-apiserver(3dde2c8b-21bf-4437-af52-497d435eac68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5486c59b9d-zlzn4_calico-apiserver(3dde2c8b-21bf-4437-af52-497d435eac68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5486c59b9d-zlzn4" podUID="3dde2c8b-21bf-4437-af52-497d435eac68" Jan 29 12:04:36.338189 containerd[2030]: time="2025-01-29T12:04:36.337734417Z" level=error msg="Failed to destroy network for sandbox \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.339245 containerd[2030]: time="2025-01-29T12:04:36.339186518Z" level=error msg="encountered an error cleaning up failed sandbox \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.339853 containerd[2030]: time="2025-01-29T12:04:36.339787946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xdkk9,Uid:3a4dca7a-f99e-482f-ab15-aab50f3f08b4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.340104 kubelet[3422]: E0129 12:04:36.340042 3422 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.340244 kubelet[3422]: E0129 12:04:36.340143 3422 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xdkk9" Jan 29 12:04:36.340244 kubelet[3422]: E0129 12:04:36.340178 3422 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xdkk9" Jan 29 12:04:36.340377 kubelet[3422]: E0129 12:04:36.340244 3422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xdkk9_kube-system(3a4dca7a-f99e-482f-ab15-aab50f3f08b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xdkk9_kube-system(3a4dca7a-f99e-482f-ab15-aab50f3f08b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-xdkk9" podUID="3a4dca7a-f99e-482f-ab15-aab50f3f08b4" Jan 29 12:04:36.342144 kubelet[3422]: I0129 12:04:36.341712 3422 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Jan 29 12:04:36.344036 containerd[2030]: time="2025-01-29T12:04:36.343979393Z" level=info msg="StopPodSandbox for \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\"" Jan 29 12:04:36.346474 containerd[2030]: time="2025-01-29T12:04:36.346378361Z" level=info msg="Ensure that sandbox a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0 in task-service has been cleanup successfully" Jan 29 12:04:36.362163 kubelet[3422]: I0129 12:04:36.359725 3422 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Jan 29 12:04:36.364504 containerd[2030]: time="2025-01-29T12:04:36.363422111Z" level=info msg="StopPodSandbox for \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\"" Jan 29 12:04:36.364647 containerd[2030]: time="2025-01-29T12:04:36.364590302Z" level=info msg="Ensure that sandbox 779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1 in task-service has been cleanup successfully" Jan 29 12:04:36.381650 containerd[2030]: time="2025-01-29T12:04:36.381580151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 12:04:36.396256 kubelet[3422]: I0129 12:04:36.396169 3422 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Jan 29 12:04:36.401044 containerd[2030]: time="2025-01-29T12:04:36.399408333Z" level=info msg="StopPodSandbox for \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\"" Jan 29 12:04:36.401044 containerd[2030]: time="2025-01-29T12:04:36.399702401Z" level=info msg="Ensure that sandbox a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b in task-service has been cleanup successfully" Jan 29 12:04:36.410348 kubelet[3422]: I0129 12:04:36.410297 3422 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Jan 29 12:04:36.415754 containerd[2030]: time="2025-01-29T12:04:36.415681984Z" level=info msg="StopPodSandbox for \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\"" Jan 29 
12:04:36.417770 containerd[2030]: time="2025-01-29T12:04:36.415977576Z" level=info msg="Ensure that sandbox b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef in task-service has been cleanup successfully" Jan 29 12:04:36.447898 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0-shm.mount: Deactivated successfully. Jan 29 12:04:36.448839 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215-shm.mount: Deactivated successfully. Jan 29 12:04:36.448994 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef-shm.mount: Deactivated successfully. Jan 29 12:04:36.516839 containerd[2030]: time="2025-01-29T12:04:36.515615716Z" level=error msg="Failed to destroy network for sandbox \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.523569 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e-shm.mount: Deactivated successfully. Jan 29 12:04:36.527264 containerd[2030]: time="2025-01-29T12:04:36.525045327Z" level=error msg="encountered an error cleaning up failed sandbox \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.527450 containerd[2030]: time="2025-01-29T12:04:36.527336733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gvnp,Uid:f476b9f4-bc43-439e-8ffa-3de6f582cec3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.529563 kubelet[3422]: E0129 12:04:36.529427 3422 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.530582 kubelet[3422]: E0129 12:04:36.529599 3422 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7gvnp" Jan 29 12:04:36.530582 kubelet[3422]: E0129 12:04:36.529644 3422 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7gvnp" Jan 29 12:04:36.530582 kubelet[3422]: E0129 12:04:36.529727 3422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7gvnp_calico-system(f476b9f4-bc43-439e-8ffa-3de6f582cec3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7gvnp_calico-system(f476b9f4-bc43-439e-8ffa-3de6f582cec3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7gvnp" podUID="f476b9f4-bc43-439e-8ffa-3de6f582cec3" Jan 29 12:04:36.589488 containerd[2030]: time="2025-01-29T12:04:36.589403668Z" level=error msg="StopPodSandbox for \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\" failed" error="failed to destroy network for sandbox \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.591285 kubelet[3422]: E0129 12:04:36.590399 3422 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Jan 29 12:04:36.591285 kubelet[3422]: E0129 12:04:36.590559 3422 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0"} Jan 29 12:04:36.591285 kubelet[3422]: E0129 12:04:36.590688 3422 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3f787d2-5d81-447d-8cb4-e23656b7ddc2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:04:36.591285 kubelet[3422]: E0129 12:04:36.590739 3422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3f787d2-5d81-447d-8cb4-e23656b7ddc2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5486c59b9d-tt29w" podUID="d3f787d2-5d81-447d-8cb4-e23656b7ddc2" Jan 29 12:04:36.603955 containerd[2030]: time="2025-01-29T12:04:36.603665781Z" 
level=error msg="StopPodSandbox for \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\" failed" error="failed to destroy network for sandbox \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.605159 kubelet[3422]: E0129 12:04:36.604505 3422 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Jan 29 12:04:36.605159 kubelet[3422]: E0129 12:04:36.604596 3422 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1"} Jan 29 12:04:36.605159 kubelet[3422]: E0129 12:04:36.604660 3422 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3dde2c8b-21bf-4437-af52-497d435eac68\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:04:36.605159 kubelet[3422]: E0129 12:04:36.604702 3422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3dde2c8b-21bf-4437-af52-497d435eac68\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5486c59b9d-zlzn4" podUID="3dde2c8b-21bf-4437-af52-497d435eac68" Jan 29 12:04:36.619185 containerd[2030]: time="2025-01-29T12:04:36.617075889Z" level=error msg="StopPodSandbox for \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\" failed" error="failed to destroy network for sandbox \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.619336 kubelet[3422]: E0129 12:04:36.617657 3422 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Jan 29 12:04:36.619336 kubelet[3422]: E0129 12:04:36.617754 3422 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b"} Jan 29 12:04:36.619336 kubelet[3422]: E0129 12:04:36.617862 3422 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc4ab877-770e-430d-93c1-736e084d31b2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:04:36.619336 kubelet[3422]: E0129 12:04:36.617923 3422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc4ab877-770e-430d-93c1-736e084d31b2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77bfd45fb5-9sscj" podUID="cc4ab877-770e-430d-93c1-736e084d31b2" Jan 29 12:04:36.622386 containerd[2030]: time="2025-01-29T12:04:36.622231234Z" level=error msg="StopPodSandbox for \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\" failed" error="failed to destroy network for sandbox \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:36.622928 kubelet[3422]: E0129 12:04:36.622843 3422 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Jan 29 12:04:36.623035 kubelet[3422]: E0129 12:04:36.622934 3422 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef"} Jan 29 12:04:36.623035 kubelet[3422]: E0129 12:04:36.623002 3422 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f1fb5145-b832-443d-a73c-9930366e855b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:04:36.623244 kubelet[3422]: E0129 12:04:36.623042 3422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f1fb5145-b832-443d-a73c-9930366e855b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bd2qq" podUID="f1fb5145-b832-443d-a73c-9930366e855b" Jan 29 12:04:37.416576 kubelet[3422]: I0129 12:04:37.415413 3422 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Jan 29 12:04:37.418223 containerd[2030]: time="2025-01-29T12:04:37.417485478Z" level=info msg="StopPodSandbox for \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\"" Jan 29 12:04:37.418223 containerd[2030]: time="2025-01-29T12:04:37.417827044Z" level=info msg="Ensure that sandbox 39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e in task-service has been cleanup successfully" Jan 29 12:04:37.424455 kubelet[3422]: I0129 12:04:37.423473 3422 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Jan 29 12:04:37.426980 containerd[2030]: time="2025-01-29T12:04:37.426765500Z" level=info msg="StopPodSandbox for \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\"" Jan 29 12:04:37.427276 containerd[2030]: time="2025-01-29T12:04:37.427065230Z" level=info msg="Ensure that sandbox 91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215 in task-service has been cleanup successfully" Jan 29 12:04:37.504845 containerd[2030]: time="2025-01-29T12:04:37.504259418Z" level=error msg="StopPodSandbox for \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\" failed" error="failed to destroy network for sandbox \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:37.506323 kubelet[3422]: E0129 12:04:37.504865 3422 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Jan 29 12:04:37.506323 kubelet[3422]: E0129 12:04:37.504936 3422 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e"} Jan 29 12:04:37.506323 kubelet[3422]: E0129 12:04:37.504992 3422 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f476b9f4-bc43-439e-8ffa-3de6f582cec3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:04:37.506323 kubelet[3422]: E0129 12:04:37.505048 3422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f476b9f4-bc43-439e-8ffa-3de6f582cec3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7gvnp" podUID="f476b9f4-bc43-439e-8ffa-3de6f582cec3" Jan 29 12:04:37.522465 containerd[2030]: time="2025-01-29T12:04:37.522332961Z" level=error msg="StopPodSandbox for \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\" failed" error="failed to destroy network for sandbox \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:04:37.523346 kubelet[3422]: E0129 12:04:37.522898 3422 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Jan 29 12:04:37.523346 kubelet[3422]: E0129 12:04:37.523017 3422 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215"} Jan 29 12:04:37.523346 kubelet[3422]: E0129 12:04:37.523102 3422 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a4dca7a-f99e-482f-ab15-aab50f3f08b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:04:37.523346 kubelet[3422]: E0129 12:04:37.523185 3422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a4dca7a-f99e-482f-ab15-aab50f3f08b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-xdkk9" podUID="3a4dca7a-f99e-482f-ab15-aab50f3f08b4" Jan 29 12:04:43.267032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2741181660.mount: Deactivated successfully. 
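
Every CreatePodSandbox and KillPodSandbox attempt above fails with the same underlying error: the Calico CNI plugin cannot stat /var/lib/calico/nodename, because the calico/node container (whose image is still being pulled at this point) has not yet started and written that file. A minimal stand-alone sketch of that preflight check in Go — an illustration of the failure mode seen in the log, not Calico's actual source:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // nodenameFile is the path the errors above complain about; calico/node
    // writes the node's Calico name here once it is running and has
    // /var/lib/calico mounted from the host.
    const nodenameFile = "/var/lib/calico/nodename"

    func loadNodename() (string, error) {
    	if _, err := os.Stat(nodenameFile); err != nil {
    		// Mirrors: "stat /var/lib/calico/nodename: no such file or directory:
    		// check that the calico/node container is running ..."
    		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
    	}
    	data, err := os.ReadFile(nodenameFile)
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(data)), nil
    }

    func main() {
    	name, err := loadNodename()
    	if err != nil {
    		fmt.Println("every CNI add/delete fails here:", err)
    		return
    	}
    	fmt.Println("Calico node name:", name)
    }

Until that file exists, every pod that needs a Calico-managed network (the apiservers, kube-controllers, coredns, csi-node-driver above) stays in the retry loop recorded here.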
Jan 29 12:04:43.350550 containerd[2030]: time="2025-01-29T12:04:43.350444373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:43.352403 containerd[2030]: time="2025-01-29T12:04:43.351965188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 29 12:04:43.353756 containerd[2030]: time="2025-01-29T12:04:43.353666201Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:43.357838 containerd[2030]: time="2025-01-29T12:04:43.357783741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:43.359168 containerd[2030]: time="2025-01-29T12:04:43.358719141Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.97706795s" Jan 29 12:04:43.359168 containerd[2030]: time="2025-01-29T12:04:43.358788658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 29 12:04:43.391499 containerd[2030]: time="2025-01-29T12:04:43.390732447Z" level=info msg="CreateContainer within sandbox \"6819398b3d7275e866cc8d6ee172aa24cbbc9596a31b19d1d794eddc5a5f01f7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 12:04:43.417385 containerd[2030]: time="2025-01-29T12:04:43.417084812Z" level=info msg="CreateContainer within sandbox \"6819398b3d7275e866cc8d6ee172aa24cbbc9596a31b19d1d794eddc5a5f01f7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f687af8bbb0b4709bea9d328d3732865a66f52a5a37d30c67fa7defbbbb47b0a\"" Jan 29 12:04:43.419448 containerd[2030]: time="2025-01-29T12:04:43.419304254Z" level=info msg="StartContainer for \"f687af8bbb0b4709bea9d328d3732865a66f52a5a37d30c67fa7defbbbb47b0a\"" Jan 29 12:04:43.477426 systemd[1]: Started cri-containerd-f687af8bbb0b4709bea9d328d3732865a66f52a5a37d30c67fa7defbbbb47b0a.scope - libcontainer container f687af8bbb0b4709bea9d328d3732865a66f52a5a37d30c67fa7defbbbb47b0a. Jan 29 12:04:43.537016 containerd[2030]: time="2025-01-29T12:04:43.536854761Z" level=info msg="StartContainer for \"f687af8bbb0b4709bea9d328d3732865a66f52a5a37d30c67fa7defbbbb47b0a\" returns successfully" Jan 29 12:04:43.655876 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 12:04:43.656060 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
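
The PullImage request issued at 12:04:36 completes here after 6.977 s and 137,671,762 bytes read (roughly 20 MB/s), after which the calico-node container is created and started in sandbox 6819398b…. The same CRI call that containerd is serving can be driven directly over its socket; the sketch below assumes the default Flatcar socket path /run/containerd/containerd.sock and the k8s.io/cri-api v1 client, and is a diagnostic illustration rather than anything the node itself runs:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumed socket path; containerd[2030] above serves the CRI here.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	img := runtimeapi.NewImageServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
    	defer cancel()

    	start := time.Now()
    	resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
    		Image: &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/node:v3.29.1"},
    	})
    	if err != nil {
    		panic(err)
    	}
    	// The log reports the same completion as: Pulled image ... in 6.97706795s
    	fmt.Printf("pulled %s in %s\n", resp.ImageRef, time.Since(start))
    }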
Jan 29 12:04:44.505723 kubelet[3422]: I0129 12:04:44.505545 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tdbgk" podStartSLOduration=2.26780497 podStartE2EDuration="20.505521964s" podCreationTimestamp="2025-01-29 12:04:24 +0000 UTC" firstStartedPulling="2025-01-29 12:04:25.123307214 +0000 UTC m=+14.302686963" lastFinishedPulling="2025-01-29 12:04:43.361024208 +0000 UTC m=+32.540403957" observedRunningTime="2025-01-29 12:04:44.504995824 +0000 UTC m=+33.684375609" watchObservedRunningTime="2025-01-29 12:04:44.505521964 +0000 UTC m=+33.684901713" Jan 29 12:04:45.985173 kernel: bpftool[4709]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 12:04:46.497126 systemd-networkd[1748]: vxlan.calico: Link UP Jan 29 12:04:46.497147 systemd-networkd[1748]: vxlan.calico: Gained carrier Jan 29 12:04:46.508396 (udev-worker)[4538]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:04:46.550021 (udev-worker)[4537]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:04:46.566141 (udev-worker)[4757]: Network interface NamePolicy= disabled on kernel command line. Jan 29 12:04:47.123468 containerd[2030]: time="2025-01-29T12:04:47.122892208Z" level=info msg="StopPodSandbox for \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\"" Jan 29 12:04:47.376210 containerd[2030]: 2025-01-29 12:04:47.284 [INFO][4809] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Jan 29 12:04:47.376210 containerd[2030]: 2025-01-29 12:04:47.285 [INFO][4809] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" iface="eth0" netns="/var/run/netns/cni-a8a7ef0f-ea6c-37eb-3dbe-3a584ad6e729" Jan 29 12:04:47.376210 containerd[2030]: 2025-01-29 12:04:47.288 [INFO][4809] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" iface="eth0" netns="/var/run/netns/cni-a8a7ef0f-ea6c-37eb-3dbe-3a584ad6e729" Jan 29 12:04:47.376210 containerd[2030]: 2025-01-29 12:04:47.291 [INFO][4809] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" iface="eth0" netns="/var/run/netns/cni-a8a7ef0f-ea6c-37eb-3dbe-3a584ad6e729" Jan 29 12:04:47.376210 containerd[2030]: 2025-01-29 12:04:47.291 [INFO][4809] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Jan 29 12:04:47.376210 containerd[2030]: 2025-01-29 12:04:47.291 [INFO][4809] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Jan 29 12:04:47.376210 containerd[2030]: 2025-01-29 12:04:47.348 [INFO][4817] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" HandleID="k8s-pod-network.779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0" Jan 29 12:04:47.376210 containerd[2030]: 2025-01-29 12:04:47.349 [INFO][4817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
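
The pod_startup_latency_tracker entry above is internally consistent: the image-pull window (lastFinishedPulling minus firstStartedPulling) is 18.237716994 s, and subtracting it from podStartE2EDuration (20.505521964 s) gives exactly podStartSLOduration (2.26780497 s), which is consistent with the SLO figure excluding image-pull time. A short, assumption-free check of the arithmetic using the timestamps printed in the log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func mustParse(s string) time.Time {
    	// Layout matching the "2025-01-29 12:04:25.123307214 +0000 UTC" timestamps above.
    	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	firstPull := mustParse("2025-01-29 12:04:25.123307214 +0000 UTC")
    	lastPull := mustParse("2025-01-29 12:04:43.361024208 +0000 UTC")

    	pullWindow := lastPull.Sub(firstPull).Seconds() // 18.237716994 s spent pulling images
    	e2e := 20.505521964                             // podStartE2EDuration from the log, in seconds

    	// podStartSLOduration excludes the pull window from the end-to-end figure.
    	fmt.Printf("SLO duration = %.8f s\n", e2e-pullWindow) // prints 2.26780497
    }

With calico-node running, wireguard loaded, and the vxlan.calico link up, the stuck sandboxes can now be torn down and retried, which is what the StopPodSandbox/Ensure messages that follow are doing.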
Jan 29 12:04:47.376210 containerd[2030]: 2025-01-29 12:04:47.350 [INFO][4817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:47.376210 containerd[2030]: 2025-01-29 12:04:47.362 [WARNING][4817] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" HandleID="k8s-pod-network.779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0" Jan 29 12:04:47.376210 containerd[2030]: 2025-01-29 12:04:47.363 [INFO][4817] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" HandleID="k8s-pod-network.779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0" Jan 29 12:04:47.376210 containerd[2030]: 2025-01-29 12:04:47.365 [INFO][4817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:47.376210 containerd[2030]: 2025-01-29 12:04:47.370 [INFO][4809] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Jan 29 12:04:47.379012 containerd[2030]: time="2025-01-29T12:04:47.376269441Z" level=info msg="TearDown network for sandbox \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\" successfully" Jan 29 12:04:47.379012 containerd[2030]: time="2025-01-29T12:04:47.376340062Z" level=info msg="StopPodSandbox for \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\" returns successfully" Jan 29 12:04:47.382089 containerd[2030]: time="2025-01-29T12:04:47.381407875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5486c59b9d-zlzn4,Uid:3dde2c8b-21bf-4437-af52-497d435eac68,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:04:47.380313 systemd[1]: run-netns-cni\x2da8a7ef0f\x2dea6c\x2d37eb\x2d3dbe\x2d3a584ad6e729.mount: Deactivated successfully. 
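
The WARNING "Asked to release address but it doesn't exist. Ignoring" during this teardown is benign: the earlier CNI ADD for sandbox 779bc300… failed at the nodename check before any IP was assigned, so the matching DEL finds nothing under the handle, releases nothing, and the sandbox is retried with Attempt:1. In a simplified stand-alone form (an illustration of the idempotent-release pattern, not Calico's IPAM code, with handle IDs abbreviated):

    package main

    import "fmt"

    // releaseByHandle frees whatever addresses were recorded under a CNI handle.
    // Releasing a handle that was never assigned is not an error, mirroring the
    // "Asked to release address but it doesn't exist. Ignoring" warning above.
    func releaseByHandle(allocations map[string][]string, handle string) []string {
    	ips, ok := allocations[handle]
    	if !ok {
    		fmt.Printf("WARNING: handle %q has no addresses, ignoring\n", handle)
    		return nil
    	}
    	delete(allocations, handle)
    	return ips
    }

    func main() {
    	allocations := map[string][]string{}
    	// DEL for a sandbox whose ADD failed before any IP was assigned:
    	releaseByHandle(allocations, "k8s-pod-network.779bc3003f45...")
    	// A handle that does hold an address is released normally:
    	allocations["k8s-pod-network.b4719085d77a..."] = []string{"192.168.111.65"}
    	fmt.Println(releaseByHandle(allocations, "k8s-pod-network.b4719085d77a..."))
    }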
Jan 29 12:04:47.667700 systemd-networkd[1748]: cali90c0d15d209: Link UP Jan 29 12:04:47.672770 systemd-networkd[1748]: cali90c0d15d209: Gained carrier Jan 29 12:04:47.724135 containerd[2030]: 2025-01-29 12:04:47.497 [INFO][4824] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0 calico-apiserver-5486c59b9d- calico-apiserver 3dde2c8b-21bf-4437-af52-497d435eac68 736 0 2025-01-29 12:04:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5486c59b9d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-93 calico-apiserver-5486c59b9d-zlzn4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali90c0d15d209 [] []}} ContainerID="b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770" Namespace="calico-apiserver" Pod="calico-apiserver-5486c59b9d-zlzn4" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-" Jan 29 12:04:47.724135 containerd[2030]: 2025-01-29 12:04:47.497 [INFO][4824] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770" Namespace="calico-apiserver" Pod="calico-apiserver-5486c59b9d-zlzn4" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0" Jan 29 12:04:47.724135 containerd[2030]: 2025-01-29 12:04:47.572 [INFO][4834] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770" HandleID="k8s-pod-network.b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0" Jan 29 12:04:47.724135 containerd[2030]: 2025-01-29 12:04:47.598 [INFO][4834] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770" HandleID="k8s-pod-network.b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003009e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-93", "pod":"calico-apiserver-5486c59b9d-zlzn4", "timestamp":"2025-01-29 12:04:47.572748193 +0000 UTC"}, Hostname:"ip-172-31-16-93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:04:47.724135 containerd[2030]: 2025-01-29 12:04:47.598 [INFO][4834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:47.724135 containerd[2030]: 2025-01-29 12:04:47.598 [INFO][4834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
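
The WorkloadEndpoint that the plugin populates for calico-apiserver-5486c59b9d-zlzn4 carries two profiles, kns.calico-apiserver and ksa.calico-apiserver.calico-apiserver; as the names in the log suggest, Calico derives one profile from the pod's namespace ("kns." + namespace) and one from its service account ("ksa." + namespace + "." + serviceaccount). A tiny helper reproducing that naming, purely for illustration:

    package main

    import "fmt"

    // profilesFor returns the Calico profile names attached to a workload
    // endpoint: one per namespace, one per (namespace, serviceaccount) pair.
    func profilesFor(namespace, serviceAccount string) []string {
    	return []string{
    		"kns." + namespace,
    		"ksa." + namespace + "." + serviceAccount,
    	}
    }

    func main() {
    	// Matches Profiles:["kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"]
    	// in the WorkloadEndpoint logged above.
    	fmt.Println(profilesFor("calico-apiserver", "calico-apiserver"))
    }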
Jan 29 12:04:47.724135 containerd[2030]: 2025-01-29 12:04:47.598 [INFO][4834] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-93' Jan 29 12:04:47.724135 containerd[2030]: 2025-01-29 12:04:47.602 [INFO][4834] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770" host="ip-172-31-16-93" Jan 29 12:04:47.724135 containerd[2030]: 2025-01-29 12:04:47.611 [INFO][4834] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-93" Jan 29 12:04:47.724135 containerd[2030]: 2025-01-29 12:04:47.620 [INFO][4834] ipam/ipam.go 489: Trying affinity for 192.168.111.64/26 host="ip-172-31-16-93" Jan 29 12:04:47.724135 containerd[2030]: 2025-01-29 12:04:47.624 [INFO][4834] ipam/ipam.go 155: Attempting to load block cidr=192.168.111.64/26 host="ip-172-31-16-93" Jan 29 12:04:47.724135 containerd[2030]: 2025-01-29 12:04:47.630 [INFO][4834] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ip-172-31-16-93" Jan 29 12:04:47.724135 containerd[2030]: 2025-01-29 12:04:47.630 [INFO][4834] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770" host="ip-172-31-16-93" Jan 29 12:04:47.724135 containerd[2030]: 2025-01-29 12:04:47.633 [INFO][4834] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770 Jan 29 12:04:47.724135 containerd[2030]: 2025-01-29 12:04:47.643 [INFO][4834] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770" host="ip-172-31-16-93" Jan 29 12:04:47.724135 containerd[2030]: 2025-01-29 12:04:47.656 [INFO][4834] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.111.65/26] block=192.168.111.64/26 handle="k8s-pod-network.b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770" host="ip-172-31-16-93" Jan 29 12:04:47.724135 containerd[2030]: 2025-01-29 12:04:47.656 [INFO][4834] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.111.65/26] handle="k8s-pod-network.b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770" host="ip-172-31-16-93" Jan 29 12:04:47.724135 containerd[2030]: 2025-01-29 12:04:47.656 [INFO][4834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
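
The IPAM steps above resolve this node's block affinity to 192.168.111.64/26 and claim the first free address, 192.168.111.65. A /26 holds 2^(32-26) = 64 addresses, 192.168.111.64 through 192.168.111.127, all reserved for ip-172-31-16-93 while the affinity holds; the next claim later in this log (for the second apiserver pod) takes .66. A quick stdlib check of the block size and range:

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	_, block, err := net.ParseCIDR("192.168.111.64/26")
    	if err != nil {
    		panic(err)
    	}
    	ones, bits := block.Mask.Size()
    	size := 1 << (bits - ones) // 64 addresses in a /26

    	first := block.IP
    	last := make(net.IP, len(first))
    	copy(last, first)
    	last[len(last)-1] += byte(size - 1) // .64 + 63 = .127

    	fmt.Printf("block %s: %d addresses, %s - %s\n", block, size, first, last)
    }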
Jan 29 12:04:47.724135 containerd[2030]: 2025-01-29 12:04:47.656 [INFO][4834] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.65/26] IPv6=[] ContainerID="b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770" HandleID="k8s-pod-network.b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0" Jan 29 12:04:47.725554 containerd[2030]: 2025-01-29 12:04:47.661 [INFO][4824] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770" Namespace="calico-apiserver" Pod="calico-apiserver-5486c59b9d-zlzn4" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0", GenerateName:"calico-apiserver-5486c59b9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3dde2c8b-21bf-4437-af52-497d435eac68", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5486c59b9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"", Pod:"calico-apiserver-5486c59b9d-zlzn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90c0d15d209", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:47.725554 containerd[2030]: 2025-01-29 12:04:47.661 [INFO][4824] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.111.65/32] ContainerID="b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770" Namespace="calico-apiserver" Pod="calico-apiserver-5486c59b9d-zlzn4" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0" Jan 29 12:04:47.725554 containerd[2030]: 2025-01-29 12:04:47.661 [INFO][4824] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90c0d15d209 ContainerID="b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770" Namespace="calico-apiserver" Pod="calico-apiserver-5486c59b9d-zlzn4" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0" Jan 29 12:04:47.725554 containerd[2030]: 2025-01-29 12:04:47.668 [INFO][4824] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770" Namespace="calico-apiserver" Pod="calico-apiserver-5486c59b9d-zlzn4" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0" Jan 29 12:04:47.725554 containerd[2030]: 2025-01-29 12:04:47.671 [INFO][4824] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770" Namespace="calico-apiserver" Pod="calico-apiserver-5486c59b9d-zlzn4" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0", GenerateName:"calico-apiserver-5486c59b9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3dde2c8b-21bf-4437-af52-497d435eac68", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5486c59b9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770", Pod:"calico-apiserver-5486c59b9d-zlzn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90c0d15d209", MAC:"1e:f8:f2:10:05:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:47.725554 containerd[2030]: 2025-01-29 12:04:47.719 [INFO][4824] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770" Namespace="calico-apiserver" Pod="calico-apiserver-5486c59b9d-zlzn4" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0" Jan 29 12:04:47.802881 containerd[2030]: time="2025-01-29T12:04:47.802674229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:47.802881 containerd[2030]: time="2025-01-29T12:04:47.802803093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:47.805480 containerd[2030]: time="2025-01-29T12:04:47.802845780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:47.805480 containerd[2030]: time="2025-01-29T12:04:47.805371656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:47.884675 systemd[1]: run-containerd-runc-k8s.io-b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770-runc.Jp7Ijy.mount: Deactivated successfully. Jan 29 12:04:47.902030 systemd[1]: Started cri-containerd-b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770.scope - libcontainer container b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770. 
Jan 29 12:04:48.010170 containerd[2030]: time="2025-01-29T12:04:48.010072391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5486c59b9d-zlzn4,Uid:3dde2c8b-21bf-4437-af52-497d435eac68,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770\"" Jan 29 12:04:48.014542 containerd[2030]: time="2025-01-29T12:04:48.014444264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:04:48.123887 containerd[2030]: time="2025-01-29T12:04:48.123231547Z" level=info msg="StopPodSandbox for \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\"" Jan 29 12:04:48.125432 containerd[2030]: time="2025-01-29T12:04:48.123258966Z" level=info msg="StopPodSandbox for \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\"" Jan 29 12:04:48.201273 systemd-networkd[1748]: vxlan.calico: Gained IPv6LL Jan 29 12:04:48.391245 containerd[2030]: 2025-01-29 12:04:48.251 [INFO][4918] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Jan 29 12:04:48.391245 containerd[2030]: 2025-01-29 12:04:48.253 [INFO][4918] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" iface="eth0" netns="/var/run/netns/cni-a4785509-e216-91fa-7ff4-dbdd9b436563" Jan 29 12:04:48.391245 containerd[2030]: 2025-01-29 12:04:48.258 [INFO][4918] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" iface="eth0" netns="/var/run/netns/cni-a4785509-e216-91fa-7ff4-dbdd9b436563" Jan 29 12:04:48.391245 containerd[2030]: 2025-01-29 12:04:48.258 [INFO][4918] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" iface="eth0" netns="/var/run/netns/cni-a4785509-e216-91fa-7ff4-dbdd9b436563" Jan 29 12:04:48.391245 containerd[2030]: 2025-01-29 12:04:48.258 [INFO][4918] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Jan 29 12:04:48.391245 containerd[2030]: 2025-01-29 12:04:48.258 [INFO][4918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Jan 29 12:04:48.391245 containerd[2030]: 2025-01-29 12:04:48.349 [INFO][4931] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" HandleID="k8s-pod-network.a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0" Jan 29 12:04:48.391245 containerd[2030]: 2025-01-29 12:04:48.351 [INFO][4931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:48.391245 containerd[2030]: 2025-01-29 12:04:48.351 [INFO][4931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:48.391245 containerd[2030]: 2025-01-29 12:04:48.369 [WARNING][4931] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" HandleID="k8s-pod-network.a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0" Jan 29 12:04:48.391245 containerd[2030]: 2025-01-29 12:04:48.369 [INFO][4931] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" HandleID="k8s-pod-network.a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0" Jan 29 12:04:48.391245 containerd[2030]: 2025-01-29 12:04:48.374 [INFO][4931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:48.391245 containerd[2030]: 2025-01-29 12:04:48.384 [INFO][4918] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Jan 29 12:04:48.391245 containerd[2030]: time="2025-01-29T12:04:48.389084351Z" level=info msg="TearDown network for sandbox \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\" successfully" Jan 29 12:04:48.391245 containerd[2030]: time="2025-01-29T12:04:48.389325167Z" level=info msg="StopPodSandbox for \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\" returns successfully" Jan 29 12:04:48.398991 systemd[1]: run-netns-cni\x2da4785509\x2de216\x2d91fa\x2d7ff4\x2ddbdd9b436563.mount: Deactivated successfully. Jan 29 12:04:48.401719 containerd[2030]: time="2025-01-29T12:04:48.399412925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5486c59b9d-tt29w,Uid:d3f787d2-5d81-447d-8cb4-e23656b7ddc2,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:04:48.418047 containerd[2030]: 2025-01-29 12:04:48.281 [INFO][4919] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Jan 29 12:04:48.418047 containerd[2030]: 2025-01-29 12:04:48.281 [INFO][4919] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" iface="eth0" netns="/var/run/netns/cni-aa2a0cbd-aa71-8d2e-41cf-bf94c9af13f8" Jan 29 12:04:48.418047 containerd[2030]: 2025-01-29 12:04:48.288 [INFO][4919] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" iface="eth0" netns="/var/run/netns/cni-aa2a0cbd-aa71-8d2e-41cf-bf94c9af13f8" Jan 29 12:04:48.418047 containerd[2030]: 2025-01-29 12:04:48.289 [INFO][4919] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" iface="eth0" netns="/var/run/netns/cni-aa2a0cbd-aa71-8d2e-41cf-bf94c9af13f8" Jan 29 12:04:48.418047 containerd[2030]: 2025-01-29 12:04:48.290 [INFO][4919] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Jan 29 12:04:48.418047 containerd[2030]: 2025-01-29 12:04:48.290 [INFO][4919] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Jan 29 12:04:48.418047 containerd[2030]: 2025-01-29 12:04:48.354 [INFO][4935] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" HandleID="k8s-pod-network.a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0" Jan 29 12:04:48.418047 containerd[2030]: 2025-01-29 12:04:48.355 [INFO][4935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:48.418047 containerd[2030]: 2025-01-29 12:04:48.374 [INFO][4935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:48.418047 containerd[2030]: 2025-01-29 12:04:48.407 [WARNING][4935] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" HandleID="k8s-pod-network.a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0" Jan 29 12:04:48.418047 containerd[2030]: 2025-01-29 12:04:48.407 [INFO][4935] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" HandleID="k8s-pod-network.a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0" Jan 29 12:04:48.418047 containerd[2030]: 2025-01-29 12:04:48.411 [INFO][4935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:48.418047 containerd[2030]: 2025-01-29 12:04:48.413 [INFO][4919] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Jan 29 12:04:48.418047 containerd[2030]: time="2025-01-29T12:04:48.417774323Z" level=info msg="TearDown network for sandbox \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\" successfully" Jan 29 12:04:48.418047 containerd[2030]: time="2025-01-29T12:04:48.417816434Z" level=info msg="StopPodSandbox for \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\" returns successfully" Jan 29 12:04:48.422320 containerd[2030]: time="2025-01-29T12:04:48.421872661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77bfd45fb5-9sscj,Uid:cc4ab877-770e-430d-93c1-736e084d31b2,Namespace:calico-system,Attempt:1,}" Jan 29 12:04:48.423449 systemd[1]: run-netns-cni\x2daa2a0cbd\x2daa71\x2d8d2e\x2d41cf\x2dbf94c9af13f8.mount: Deactivated successfully. 
Jan 29 12:04:48.775292 systemd-networkd[1748]: caliac7906697cf: Link UP Jan 29 12:04:48.776699 systemd-networkd[1748]: caliac7906697cf: Gained carrier Jan 29 12:04:48.819722 containerd[2030]: 2025-01-29 12:04:48.553 [INFO][4943] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0 calico-apiserver-5486c59b9d- calico-apiserver d3f787d2-5d81-447d-8cb4-e23656b7ddc2 745 0 2025-01-29 12:04:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5486c59b9d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-93 calico-apiserver-5486c59b9d-tt29w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliac7906697cf [] []}} ContainerID="c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960" Namespace="calico-apiserver" Pod="calico-apiserver-5486c59b9d-tt29w" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-" Jan 29 12:04:48.819722 containerd[2030]: 2025-01-29 12:04:48.553 [INFO][4943] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960" Namespace="calico-apiserver" Pod="calico-apiserver-5486c59b9d-tt29w" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0" Jan 29 12:04:48.819722 containerd[2030]: 2025-01-29 12:04:48.672 [INFO][4966] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960" HandleID="k8s-pod-network.c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0" Jan 29 12:04:48.819722 containerd[2030]: 2025-01-29 12:04:48.698 [INFO][4966] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960" HandleID="k8s-pod-network.c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003c5480), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-93", "pod":"calico-apiserver-5486c59b9d-tt29w", "timestamp":"2025-01-29 12:04:48.672495568 +0000 UTC"}, Hostname:"ip-172-31-16-93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:04:48.819722 containerd[2030]: 2025-01-29 12:04:48.698 [INFO][4966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:48.819722 containerd[2030]: 2025-01-29 12:04:48.699 [INFO][4966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:04:48.819722 containerd[2030]: 2025-01-29 12:04:48.699 [INFO][4966] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-93' Jan 29 12:04:48.819722 containerd[2030]: 2025-01-29 12:04:48.704 [INFO][4966] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960" host="ip-172-31-16-93" Jan 29 12:04:48.819722 containerd[2030]: 2025-01-29 12:04:48.711 [INFO][4966] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-93" Jan 29 12:04:48.819722 containerd[2030]: 2025-01-29 12:04:48.724 [INFO][4966] ipam/ipam.go 489: Trying affinity for 192.168.111.64/26 host="ip-172-31-16-93" Jan 29 12:04:48.819722 containerd[2030]: 2025-01-29 12:04:48.730 [INFO][4966] ipam/ipam.go 155: Attempting to load block cidr=192.168.111.64/26 host="ip-172-31-16-93" Jan 29 12:04:48.819722 containerd[2030]: 2025-01-29 12:04:48.734 [INFO][4966] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ip-172-31-16-93" Jan 29 12:04:48.819722 containerd[2030]: 2025-01-29 12:04:48.735 [INFO][4966] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960" host="ip-172-31-16-93" Jan 29 12:04:48.819722 containerd[2030]: 2025-01-29 12:04:48.738 [INFO][4966] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960 Jan 29 12:04:48.819722 containerd[2030]: 2025-01-29 12:04:48.749 [INFO][4966] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960" host="ip-172-31-16-93" Jan 29 12:04:48.819722 containerd[2030]: 2025-01-29 12:04:48.764 [INFO][4966] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.111.66/26] block=192.168.111.64/26 handle="k8s-pod-network.c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960" host="ip-172-31-16-93" Jan 29 12:04:48.819722 containerd[2030]: 2025-01-29 12:04:48.764 [INFO][4966] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.111.66/26] handle="k8s-pod-network.c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960" host="ip-172-31-16-93" Jan 29 12:04:48.819722 containerd[2030]: 2025-01-29 12:04:48.764 [INFO][4966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:04:48.819722 containerd[2030]: 2025-01-29 12:04:48.764 [INFO][4966] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.66/26] IPv6=[] ContainerID="c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960" HandleID="k8s-pod-network.c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0" Jan 29 12:04:48.821084 containerd[2030]: 2025-01-29 12:04:48.768 [INFO][4943] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960" Namespace="calico-apiserver" Pod="calico-apiserver-5486c59b9d-tt29w" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0", GenerateName:"calico-apiserver-5486c59b9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3f787d2-5d81-447d-8cb4-e23656b7ddc2", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5486c59b9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"", Pod:"calico-apiserver-5486c59b9d-tt29w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac7906697cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:48.821084 containerd[2030]: 2025-01-29 12:04:48.768 [INFO][4943] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.111.66/32] ContainerID="c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960" Namespace="calico-apiserver" Pod="calico-apiserver-5486c59b9d-tt29w" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0" Jan 29 12:04:48.821084 containerd[2030]: 2025-01-29 12:04:48.768 [INFO][4943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac7906697cf ContainerID="c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960" Namespace="calico-apiserver" Pod="calico-apiserver-5486c59b9d-tt29w" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0" Jan 29 12:04:48.821084 containerd[2030]: 2025-01-29 12:04:48.774 [INFO][4943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960" Namespace="calico-apiserver" Pod="calico-apiserver-5486c59b9d-tt29w" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0" Jan 29 12:04:48.821084 containerd[2030]: 2025-01-29 12:04:48.777 [INFO][4943] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960" Namespace="calico-apiserver" Pod="calico-apiserver-5486c59b9d-tt29w" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0", GenerateName:"calico-apiserver-5486c59b9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3f787d2-5d81-447d-8cb4-e23656b7ddc2", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5486c59b9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960", Pod:"calico-apiserver-5486c59b9d-tt29w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac7906697cf", MAC:"3e:cf:e5:9d:8e:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:48.821084 containerd[2030]: 2025-01-29 12:04:48.807 [INFO][4943] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960" Namespace="calico-apiserver" Pod="calico-apiserver-5486c59b9d-tt29w" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0" Jan 29 12:04:48.906679 containerd[2030]: time="2025-01-29T12:04:48.905594269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:48.906679 containerd[2030]: time="2025-01-29T12:04:48.905713621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:48.906679 containerd[2030]: time="2025-01-29T12:04:48.905742239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:48.906679 containerd[2030]: time="2025-01-29T12:04:48.905943330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:48.942630 systemd-networkd[1748]: calia43fa24ab11: Link UP Jan 29 12:04:48.951598 systemd-networkd[1748]: calia43fa24ab11: Gained carrier Jan 29 12:04:48.978576 systemd[1]: Started cri-containerd-c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960.scope - libcontainer container c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960. 
Jan 29 12:04:49.003929 containerd[2030]: 2025-01-29 12:04:48.585 [INFO][4952] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0 calico-kube-controllers-77bfd45fb5- calico-system cc4ab877-770e-430d-93c1-736e084d31b2 746 0 2025-01-29 12:04:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77bfd45fb5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-16-93 calico-kube-controllers-77bfd45fb5-9sscj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia43fa24ab11 [] []}} ContainerID="8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88" Namespace="calico-system" Pod="calico-kube-controllers-77bfd45fb5-9sscj" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-" Jan 29 12:04:49.003929 containerd[2030]: 2025-01-29 12:04:48.586 [INFO][4952] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88" Namespace="calico-system" Pod="calico-kube-controllers-77bfd45fb5-9sscj" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0" Jan 29 12:04:49.003929 containerd[2030]: 2025-01-29 12:04:48.699 [INFO][4970] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88" HandleID="k8s-pod-network.8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0" Jan 29 12:04:49.003929 containerd[2030]: 2025-01-29 12:04:48.726 [INFO][4970] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88" HandleID="k8s-pod-network.8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003175f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-93", "pod":"calico-kube-controllers-77bfd45fb5-9sscj", "timestamp":"2025-01-29 12:04:48.699244191 +0000 UTC"}, Hostname:"ip-172-31-16-93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:04:49.003929 containerd[2030]: 2025-01-29 12:04:48.726 [INFO][4970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:49.003929 containerd[2030]: 2025-01-29 12:04:48.764 [INFO][4970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:04:49.003929 containerd[2030]: 2025-01-29 12:04:48.764 [INFO][4970] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-93' Jan 29 12:04:49.003929 containerd[2030]: 2025-01-29 12:04:48.816 [INFO][4970] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88" host="ip-172-31-16-93" Jan 29 12:04:49.003929 containerd[2030]: 2025-01-29 12:04:48.833 [INFO][4970] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-93" Jan 29 12:04:49.003929 containerd[2030]: 2025-01-29 12:04:48.848 [INFO][4970] ipam/ipam.go 489: Trying affinity for 192.168.111.64/26 host="ip-172-31-16-93" Jan 29 12:04:49.003929 containerd[2030]: 2025-01-29 12:04:48.854 [INFO][4970] ipam/ipam.go 155: Attempting to load block cidr=192.168.111.64/26 host="ip-172-31-16-93" Jan 29 12:04:49.003929 containerd[2030]: 2025-01-29 12:04:48.862 [INFO][4970] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ip-172-31-16-93" Jan 29 12:04:49.003929 containerd[2030]: 2025-01-29 12:04:48.862 [INFO][4970] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88" host="ip-172-31-16-93" Jan 29 12:04:49.003929 containerd[2030]: 2025-01-29 12:04:48.867 [INFO][4970] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88 Jan 29 12:04:49.003929 containerd[2030]: 2025-01-29 12:04:48.884 [INFO][4970] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88" host="ip-172-31-16-93" Jan 29 12:04:49.003929 containerd[2030]: 2025-01-29 12:04:48.908 [INFO][4970] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.111.67/26] block=192.168.111.64/26 handle="k8s-pod-network.8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88" host="ip-172-31-16-93" Jan 29 12:04:49.003929 containerd[2030]: 2025-01-29 12:04:48.910 [INFO][4970] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.111.67/26] handle="k8s-pod-network.8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88" host="ip-172-31-16-93" Jan 29 12:04:49.003929 containerd[2030]: 2025-01-29 12:04:48.912 [INFO][4970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:04:49.003929 containerd[2030]: 2025-01-29 12:04:48.912 [INFO][4970] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.67/26] IPv6=[] ContainerID="8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88" HandleID="k8s-pod-network.8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0" Jan 29 12:04:49.008088 containerd[2030]: 2025-01-29 12:04:48.923 [INFO][4952] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88" Namespace="calico-system" Pod="calico-kube-controllers-77bfd45fb5-9sscj" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0", GenerateName:"calico-kube-controllers-77bfd45fb5-", Namespace:"calico-system", SelfLink:"", UID:"cc4ab877-770e-430d-93c1-736e084d31b2", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77bfd45fb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"", Pod:"calico-kube-controllers-77bfd45fb5-9sscj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.111.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia43fa24ab11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:49.008088 containerd[2030]: 2025-01-29 12:04:48.924 [INFO][4952] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.111.67/32] ContainerID="8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88" Namespace="calico-system" Pod="calico-kube-controllers-77bfd45fb5-9sscj" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0" Jan 29 12:04:49.008088 containerd[2030]: 2025-01-29 12:04:48.924 [INFO][4952] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia43fa24ab11 ContainerID="8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88" Namespace="calico-system" Pod="calico-kube-controllers-77bfd45fb5-9sscj" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0" Jan 29 12:04:49.008088 containerd[2030]: 2025-01-29 12:04:48.956 [INFO][4952] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88" Namespace="calico-system" Pod="calico-kube-controllers-77bfd45fb5-9sscj" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0" Jan 29 12:04:49.008088 containerd[2030]: 2025-01-29 12:04:48.957 [INFO][4952] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88" Namespace="calico-system" Pod="calico-kube-controllers-77bfd45fb5-9sscj" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0", GenerateName:"calico-kube-controllers-77bfd45fb5-", Namespace:"calico-system", SelfLink:"", UID:"cc4ab877-770e-430d-93c1-736e084d31b2", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77bfd45fb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88", Pod:"calico-kube-controllers-77bfd45fb5-9sscj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.111.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia43fa24ab11", MAC:"76:9f:ba:ec:29:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:49.008088 containerd[2030]: 2025-01-29 12:04:48.994 [INFO][4952] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88" Namespace="calico-system" Pod="calico-kube-controllers-77bfd45fb5-9sscj" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0" Jan 29 12:04:49.092242 containerd[2030]: time="2025-01-29T12:04:49.091511572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:49.092242 containerd[2030]: time="2025-01-29T12:04:49.091630133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:49.092242 containerd[2030]: time="2025-01-29T12:04:49.091667158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:49.092242 containerd[2030]: time="2025-01-29T12:04:49.091822924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:49.142782 containerd[2030]: time="2025-01-29T12:04:49.142590000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5486c59b9d-tt29w,Uid:d3f787d2-5d81-447d-8cb4-e23656b7ddc2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960\"" Jan 29 12:04:49.168440 systemd[1]: Started cri-containerd-8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88.scope - libcontainer container 8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88. Jan 29 12:04:49.236130 containerd[2030]: time="2025-01-29T12:04:49.236057180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77bfd45fb5-9sscj,Uid:cc4ab877-770e-430d-93c1-736e084d31b2,Namespace:calico-system,Attempt:1,} returns sandbox id \"8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88\"" Jan 29 12:04:49.352614 systemd-networkd[1748]: cali90c0d15d209: Gained IPv6LL Jan 29 12:04:50.124807 containerd[2030]: time="2025-01-29T12:04:50.124268486Z" level=info msg="StopPodSandbox for \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\"" Jan 29 12:04:50.126985 containerd[2030]: time="2025-01-29T12:04:50.126912168Z" level=info msg="StopPodSandbox for \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\"" Jan 29 12:04:50.468133 containerd[2030]: 2025-01-29 12:04:50.329 [INFO][5113] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Jan 29 12:04:50.468133 containerd[2030]: 2025-01-29 12:04:50.333 [INFO][5113] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" iface="eth0" netns="/var/run/netns/cni-1eb19e7a-1b28-2f0e-5156-aa25670773e8" Jan 29 12:04:50.468133 containerd[2030]: 2025-01-29 12:04:50.335 [INFO][5113] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" iface="eth0" netns="/var/run/netns/cni-1eb19e7a-1b28-2f0e-5156-aa25670773e8" Jan 29 12:04:50.468133 containerd[2030]: 2025-01-29 12:04:50.336 [INFO][5113] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" iface="eth0" netns="/var/run/netns/cni-1eb19e7a-1b28-2f0e-5156-aa25670773e8" Jan 29 12:04:50.468133 containerd[2030]: 2025-01-29 12:04:50.336 [INFO][5113] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Jan 29 12:04:50.468133 containerd[2030]: 2025-01-29 12:04:50.337 [INFO][5113] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Jan 29 12:04:50.468133 containerd[2030]: 2025-01-29 12:04:50.418 [INFO][5127] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" HandleID="k8s-pod-network.39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Workload="ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0" Jan 29 12:04:50.468133 containerd[2030]: 2025-01-29 12:04:50.418 [INFO][5127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 12:04:50.468133 containerd[2030]: 2025-01-29 12:04:50.419 [INFO][5127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:50.468133 containerd[2030]: 2025-01-29 12:04:50.449 [WARNING][5127] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" HandleID="k8s-pod-network.39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Workload="ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0" Jan 29 12:04:50.468133 containerd[2030]: 2025-01-29 12:04:50.449 [INFO][5127] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" HandleID="k8s-pod-network.39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Workload="ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0" Jan 29 12:04:50.468133 containerd[2030]: 2025-01-29 12:04:50.454 [INFO][5127] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:50.468133 containerd[2030]: 2025-01-29 12:04:50.457 [INFO][5113] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Jan 29 12:04:50.474532 containerd[2030]: time="2025-01-29T12:04:50.473138121Z" level=info msg="TearDown network for sandbox \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\" successfully" Jan 29 12:04:50.474532 containerd[2030]: time="2025-01-29T12:04:50.473195153Z" level=info msg="StopPodSandbox for \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\" returns successfully" Jan 29 12:04:50.480633 systemd[1]: run-netns-cni\x2d1eb19e7a\x2d1b28\x2d2f0e\x2d5156\x2daa25670773e8.mount: Deactivated successfully. Jan 29 12:04:50.484930 containerd[2030]: time="2025-01-29T12:04:50.483436985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gvnp,Uid:f476b9f4-bc43-439e-8ffa-3de6f582cec3,Namespace:calico-system,Attempt:1,}" Jan 29 12:04:50.515800 containerd[2030]: 2025-01-29 12:04:50.320 [INFO][5114] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Jan 29 12:04:50.515800 containerd[2030]: 2025-01-29 12:04:50.321 [INFO][5114] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" iface="eth0" netns="/var/run/netns/cni-f715a7cc-2f79-a8c8-14af-8aba722db419" Jan 29 12:04:50.515800 containerd[2030]: 2025-01-29 12:04:50.323 [INFO][5114] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" iface="eth0" netns="/var/run/netns/cni-f715a7cc-2f79-a8c8-14af-8aba722db419" Jan 29 12:04:50.515800 containerd[2030]: 2025-01-29 12:04:50.324 [INFO][5114] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" iface="eth0" netns="/var/run/netns/cni-f715a7cc-2f79-a8c8-14af-8aba722db419" Jan 29 12:04:50.515800 containerd[2030]: 2025-01-29 12:04:50.325 [INFO][5114] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Jan 29 12:04:50.515800 containerd[2030]: 2025-01-29 12:04:50.325 [INFO][5114] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Jan 29 12:04:50.515800 containerd[2030]: 2025-01-29 12:04:50.444 [INFO][5126] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" HandleID="k8s-pod-network.91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0" Jan 29 12:04:50.515800 containerd[2030]: 2025-01-29 12:04:50.448 [INFO][5126] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:50.515800 containerd[2030]: 2025-01-29 12:04:50.455 [INFO][5126] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:50.515800 containerd[2030]: 2025-01-29 12:04:50.495 [WARNING][5126] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" HandleID="k8s-pod-network.91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0" Jan 29 12:04:50.515800 containerd[2030]: 2025-01-29 12:04:50.495 [INFO][5126] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" HandleID="k8s-pod-network.91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0" Jan 29 12:04:50.515800 containerd[2030]: 2025-01-29 12:04:50.500 [INFO][5126] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:50.515800 containerd[2030]: 2025-01-29 12:04:50.509 [INFO][5114] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Jan 29 12:04:50.521490 containerd[2030]: time="2025-01-29T12:04:50.519346026Z" level=info msg="TearDown network for sandbox \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\" successfully" Jan 29 12:04:50.521490 containerd[2030]: time="2025-01-29T12:04:50.519405804Z" level=info msg="StopPodSandbox for \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\" returns successfully" Jan 29 12:04:50.528431 containerd[2030]: time="2025-01-29T12:04:50.528214042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xdkk9,Uid:3a4dca7a-f99e-482f-ab15-aab50f3f08b4,Namespace:kube-system,Attempt:1,}" Jan 29 12:04:50.530751 systemd[1]: run-netns-cni\x2df715a7cc\x2d2f79\x2da8c8\x2d14af\x2d8aba722db419.mount: Deactivated successfully. 
Jan 29 12:04:50.696661 systemd-networkd[1748]: caliac7906697cf: Gained IPv6LL Jan 29 12:04:51.016756 systemd-networkd[1748]: calia43fa24ab11: Gained IPv6LL Jan 29 12:04:51.069630 systemd-networkd[1748]: calia78068cfcc3: Link UP Jan 29 12:04:51.074024 systemd-networkd[1748]: calia78068cfcc3: Gained carrier Jan 29 12:04:51.138260 containerd[2030]: 2025-01-29 12:04:50.719 [INFO][5139] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0 csi-node-driver- calico-system f476b9f4-bc43-439e-8ffa-3de6f582cec3 762 0 2025-01-29 12:04:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-16-93 csi-node-driver-7gvnp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia78068cfcc3 [] []}} ContainerID="acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3" Namespace="calico-system" Pod="csi-node-driver-7gvnp" WorkloadEndpoint="ip--172--31--16--93-k8s-csi--node--driver--7gvnp-" Jan 29 12:04:51.138260 containerd[2030]: 2025-01-29 12:04:50.719 [INFO][5139] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3" Namespace="calico-system" Pod="csi-node-driver-7gvnp" WorkloadEndpoint="ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0" Jan 29 12:04:51.138260 containerd[2030]: 2025-01-29 12:04:50.882 [INFO][5165] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3" HandleID="k8s-pod-network.acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3" Workload="ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0" Jan 29 12:04:51.138260 containerd[2030]: 2025-01-29 12:04:50.918 [INFO][5165] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3" HandleID="k8s-pod-network.acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3" Workload="ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000305a10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-93", "pod":"csi-node-driver-7gvnp", "timestamp":"2025-01-29 12:04:50.881223898 +0000 UTC"}, Hostname:"ip-172-31-16-93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:04:51.138260 containerd[2030]: 2025-01-29 12:04:50.919 [INFO][5165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:51.138260 containerd[2030]: 2025-01-29 12:04:50.920 [INFO][5165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:04:51.138260 containerd[2030]: 2025-01-29 12:04:50.921 [INFO][5165] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-93' Jan 29 12:04:51.138260 containerd[2030]: 2025-01-29 12:04:50.930 [INFO][5165] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3" host="ip-172-31-16-93" Jan 29 12:04:51.138260 containerd[2030]: 2025-01-29 12:04:50.950 [INFO][5165] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-93" Jan 29 12:04:51.138260 containerd[2030]: 2025-01-29 12:04:50.966 [INFO][5165] ipam/ipam.go 489: Trying affinity for 192.168.111.64/26 host="ip-172-31-16-93" Jan 29 12:04:51.138260 containerd[2030]: 2025-01-29 12:04:50.972 [INFO][5165] ipam/ipam.go 155: Attempting to load block cidr=192.168.111.64/26 host="ip-172-31-16-93" Jan 29 12:04:51.138260 containerd[2030]: 2025-01-29 12:04:50.978 [INFO][5165] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ip-172-31-16-93" Jan 29 12:04:51.138260 containerd[2030]: 2025-01-29 12:04:50.978 [INFO][5165] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3" host="ip-172-31-16-93" Jan 29 12:04:51.138260 containerd[2030]: 2025-01-29 12:04:50.983 [INFO][5165] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3 Jan 29 12:04:51.138260 containerd[2030]: 2025-01-29 12:04:50.995 [INFO][5165] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3" host="ip-172-31-16-93" Jan 29 12:04:51.138260 containerd[2030]: 2025-01-29 12:04:51.021 [INFO][5165] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.111.68/26] block=192.168.111.64/26 handle="k8s-pod-network.acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3" host="ip-172-31-16-93" Jan 29 12:04:51.138260 containerd[2030]: 2025-01-29 12:04:51.022 [INFO][5165] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.111.68/26] handle="k8s-pod-network.acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3" host="ip-172-31-16-93" Jan 29 12:04:51.138260 containerd[2030]: 2025-01-29 12:04:51.022 [INFO][5165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:04:51.138260 containerd[2030]: 2025-01-29 12:04:51.022 [INFO][5165] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.68/26] IPv6=[] ContainerID="acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3" HandleID="k8s-pod-network.acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3" Workload="ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0" Jan 29 12:04:51.142235 containerd[2030]: 2025-01-29 12:04:51.035 [INFO][5139] cni-plugin/k8s.go 386: Populated endpoint ContainerID="acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3" Namespace="calico-system" Pod="csi-node-driver-7gvnp" WorkloadEndpoint="ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f476b9f4-bc43-439e-8ffa-3de6f582cec3", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"", Pod:"csi-node-driver-7gvnp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia78068cfcc3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:51.142235 containerd[2030]: 2025-01-29 12:04:51.037 [INFO][5139] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.111.68/32] ContainerID="acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3" Namespace="calico-system" Pod="csi-node-driver-7gvnp" WorkloadEndpoint="ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0" Jan 29 12:04:51.142235 containerd[2030]: 2025-01-29 12:04:51.038 [INFO][5139] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia78068cfcc3 ContainerID="acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3" Namespace="calico-system" Pod="csi-node-driver-7gvnp" WorkloadEndpoint="ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0" Jan 29 12:04:51.142235 containerd[2030]: 2025-01-29 12:04:51.078 [INFO][5139] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3" Namespace="calico-system" Pod="csi-node-driver-7gvnp" WorkloadEndpoint="ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0" Jan 29 12:04:51.142235 containerd[2030]: 2025-01-29 12:04:51.080 [INFO][5139] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3" Namespace="calico-system" 
Pod="csi-node-driver-7gvnp" WorkloadEndpoint="ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f476b9f4-bc43-439e-8ffa-3de6f582cec3", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3", Pod:"csi-node-driver-7gvnp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia78068cfcc3", MAC:"72:73:66:9b:d9:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:51.142235 containerd[2030]: 2025-01-29 12:04:51.121 [INFO][5139] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3" Namespace="calico-system" Pod="csi-node-driver-7gvnp" WorkloadEndpoint="ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0" Jan 29 12:04:51.215389 systemd-networkd[1748]: cali491ab3ed194: Link UP Jan 29 12:04:51.222345 systemd-networkd[1748]: cali491ab3ed194: Gained carrier Jan 29 12:04:51.295198 containerd[2030]: 2025-01-29 12:04:50.783 [INFO][5149] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0 coredns-6f6b679f8f- kube-system 3a4dca7a-f99e-482f-ab15-aab50f3f08b4 761 0 2025-01-29 12:04:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-93 coredns-6f6b679f8f-xdkk9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali491ab3ed194 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c" Namespace="kube-system" Pod="coredns-6f6b679f8f-xdkk9" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-" Jan 29 12:04:51.295198 containerd[2030]: 2025-01-29 12:04:50.786 [INFO][5149] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c" Namespace="kube-system" Pod="coredns-6f6b679f8f-xdkk9" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0" Jan 29 12:04:51.295198 containerd[2030]: 2025-01-29 12:04:50.932 [INFO][5170] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c" HandleID="k8s-pod-network.9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0" Jan 29 12:04:51.295198 containerd[2030]: 2025-01-29 12:04:50.969 [INFO][5170] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c" HandleID="k8s-pod-network.9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400029b9d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-93", "pod":"coredns-6f6b679f8f-xdkk9", "timestamp":"2025-01-29 12:04:50.931898824 +0000 UTC"}, Hostname:"ip-172-31-16-93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:04:51.295198 containerd[2030]: 2025-01-29 12:04:50.969 [INFO][5170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:51.295198 containerd[2030]: 2025-01-29 12:04:51.022 [INFO][5170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:51.295198 containerd[2030]: 2025-01-29 12:04:51.023 [INFO][5170] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-93' Jan 29 12:04:51.295198 containerd[2030]: 2025-01-29 12:04:51.028 [INFO][5170] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c" host="ip-172-31-16-93" Jan 29 12:04:51.295198 containerd[2030]: 2025-01-29 12:04:51.049 [INFO][5170] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-93" Jan 29 12:04:51.295198 containerd[2030]: 2025-01-29 12:04:51.091 [INFO][5170] ipam/ipam.go 489: Trying affinity for 192.168.111.64/26 host="ip-172-31-16-93" Jan 29 12:04:51.295198 containerd[2030]: 2025-01-29 12:04:51.099 [INFO][5170] ipam/ipam.go 155: Attempting to load block cidr=192.168.111.64/26 host="ip-172-31-16-93" Jan 29 12:04:51.295198 containerd[2030]: 2025-01-29 12:04:51.109 [INFO][5170] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ip-172-31-16-93" Jan 29 12:04:51.295198 containerd[2030]: 2025-01-29 12:04:51.110 [INFO][5170] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c" host="ip-172-31-16-93" Jan 29 12:04:51.295198 containerd[2030]: 2025-01-29 12:04:51.120 [INFO][5170] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c Jan 29 12:04:51.295198 containerd[2030]: 2025-01-29 12:04:51.153 [INFO][5170] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c" host="ip-172-31-16-93" Jan 29 12:04:51.295198 containerd[2030]: 2025-01-29 12:04:51.181 [INFO][5170] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.111.69/26] block=192.168.111.64/26 handle="k8s-pod-network.9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c" host="ip-172-31-16-93" Jan 29 12:04:51.295198 containerd[2030]: 2025-01-29 12:04:51.183 [INFO][5170] ipam/ipam.go 847: 
Auto-assigned 1 out of 1 IPv4s: [192.168.111.69/26] handle="k8s-pod-network.9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c" host="ip-172-31-16-93" Jan 29 12:04:51.295198 containerd[2030]: 2025-01-29 12:04:51.183 [INFO][5170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:51.295198 containerd[2030]: 2025-01-29 12:04:51.183 [INFO][5170] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.69/26] IPv6=[] ContainerID="9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c" HandleID="k8s-pod-network.9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0" Jan 29 12:04:51.300362 containerd[2030]: 2025-01-29 12:04:51.189 [INFO][5149] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c" Namespace="kube-system" Pod="coredns-6f6b679f8f-xdkk9" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3a4dca7a-f99e-482f-ab15-aab50f3f08b4", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"", Pod:"coredns-6f6b679f8f-xdkk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali491ab3ed194", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:51.300362 containerd[2030]: 2025-01-29 12:04:51.190 [INFO][5149] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.111.69/32] ContainerID="9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c" Namespace="kube-system" Pod="coredns-6f6b679f8f-xdkk9" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0" Jan 29 12:04:51.300362 containerd[2030]: 2025-01-29 12:04:51.190 [INFO][5149] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali491ab3ed194 ContainerID="9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c" Namespace="kube-system" Pod="coredns-6f6b679f8f-xdkk9" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0" Jan 29 12:04:51.300362 containerd[2030]: 
2025-01-29 12:04:51.233 [INFO][5149] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c" Namespace="kube-system" Pod="coredns-6f6b679f8f-xdkk9" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0" Jan 29 12:04:51.300362 containerd[2030]: 2025-01-29 12:04:51.234 [INFO][5149] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c" Namespace="kube-system" Pod="coredns-6f6b679f8f-xdkk9" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3a4dca7a-f99e-482f-ab15-aab50f3f08b4", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c", Pod:"coredns-6f6b679f8f-xdkk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali491ab3ed194", MAC:"1a:eb:cd:c5:8d:b5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:51.300362 containerd[2030]: 2025-01-29 12:04:51.275 [INFO][5149] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c" Namespace="kube-system" Pod="coredns-6f6b679f8f-xdkk9" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0" Jan 29 12:04:51.311864 containerd[2030]: time="2025-01-29T12:04:51.310010382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:51.311864 containerd[2030]: time="2025-01-29T12:04:51.310098083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:51.311864 containerd[2030]: time="2025-01-29T12:04:51.310240739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:51.311864 containerd[2030]: time="2025-01-29T12:04:51.310446880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:51.379078 systemd[1]: Started cri-containerd-acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3.scope - libcontainer container acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3. Jan 29 12:04:51.430853 containerd[2030]: time="2025-01-29T12:04:51.430559198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:51.430853 containerd[2030]: time="2025-01-29T12:04:51.430656733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:51.430853 containerd[2030]: time="2025-01-29T12:04:51.430693471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:51.431752 containerd[2030]: time="2025-01-29T12:04:51.430855906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:51.489213 containerd[2030]: time="2025-01-29T12:04:51.488638392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gvnp,Uid:f476b9f4-bc43-439e-8ffa-3de6f582cec3,Namespace:calico-system,Attempt:1,} returns sandbox id \"acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3\"" Jan 29 12:04:51.535449 systemd[1]: Started cri-containerd-9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c.scope - libcontainer container 9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c. 
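Editor's note: the IPAM lines above hand the coredns-6f6b679f8f-xdkk9 endpoint 192.168.111.69 out of the node's 192.168.111.64/26 affinity block, and the WorkloadEndpoint dump prints its ports in hex (0x35, 0x23c1). A minimal sketch, using only values copied from the log, that confirms the address sits inside the block and decodes those port numbers (everything else in it is illustrative, not Calico code):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.111.64/26") // IPAM affinity block from the log
	podIP := netip.MustParseAddr("192.168.111.69")      // address assigned to coredns-6f6b679f8f-xdkk9

	fmt.Printf("block %s contains %s: %v\n", block, podIP, block.Contains(podIP))
	fmt.Printf("block holds %d addresses\n", 1<<(32-block.Bits())) // /26 -> 64 addresses

	// Port values in the endpoint dump are printed in hex by the Go formatter.
	for name, port := range map[string]uint16{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1} {
		fmt.Printf("%-8s -> %d\n", name, port) // 0x35 = 53, 0x23c1 = 9153 (standard CoreDNS ports)
	}
}
```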
Jan 29 12:04:51.663835 containerd[2030]: time="2025-01-29T12:04:51.663688196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xdkk9,Uid:3a4dca7a-f99e-482f-ab15-aab50f3f08b4,Namespace:kube-system,Attempt:1,} returns sandbox id \"9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c\"" Jan 29 12:04:51.669788 containerd[2030]: time="2025-01-29T12:04:51.668036057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:51.671516 containerd[2030]: time="2025-01-29T12:04:51.671367798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 29 12:04:51.675094 containerd[2030]: time="2025-01-29T12:04:51.675010364Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:51.689441 containerd[2030]: time="2025-01-29T12:04:51.689331752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:51.693088 containerd[2030]: time="2025-01-29T12:04:51.692972303Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 3.677775463s" Jan 29 12:04:51.693088 containerd[2030]: time="2025-01-29T12:04:51.693070426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 12:04:51.706905 containerd[2030]: time="2025-01-29T12:04:51.705683292Z" level=info msg="CreateContainer within sandbox \"9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:04:51.709389 containerd[2030]: time="2025-01-29T12:04:51.709278326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:04:51.714012 containerd[2030]: time="2025-01-29T12:04:51.713730535Z" level=info msg="CreateContainer within sandbox \"b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:04:51.757255 containerd[2030]: time="2025-01-29T12:04:51.755437499Z" level=info msg="CreateContainer within sandbox \"b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8ff3a22599d9a1bbcd364d496957277f9570a2846c41389e78ef801fbb04498a\"" Jan 29 12:04:51.763067 containerd[2030]: time="2025-01-29T12:04:51.762766984Z" level=info msg="StartContainer for \"8ff3a22599d9a1bbcd364d496957277f9570a2846c41389e78ef801fbb04498a\"" Jan 29 12:04:51.781341 containerd[2030]: time="2025-01-29T12:04:51.781252041Z" level=info msg="CreateContainer within sandbox \"9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"11c024b9dc94413b6896362deef60faa440900cfbbf350947251b778db5c8ced\"" Jan 29 12:04:51.783606 
containerd[2030]: time="2025-01-29T12:04:51.782921138Z" level=info msg="StartContainer for \"11c024b9dc94413b6896362deef60faa440900cfbbf350947251b778db5c8ced\"" Jan 29 12:04:51.858439 systemd[1]: Started cri-containerd-8ff3a22599d9a1bbcd364d496957277f9570a2846c41389e78ef801fbb04498a.scope - libcontainer container 8ff3a22599d9a1bbcd364d496957277f9570a2846c41389e78ef801fbb04498a. Jan 29 12:04:51.870464 systemd[1]: Started cri-containerd-11c024b9dc94413b6896362deef60faa440900cfbbf350947251b778db5c8ced.scope - libcontainer container 11c024b9dc94413b6896362deef60faa440900cfbbf350947251b778db5c8ced. Jan 29 12:04:51.935936 containerd[2030]: time="2025-01-29T12:04:51.935372639Z" level=info msg="StartContainer for \"11c024b9dc94413b6896362deef60faa440900cfbbf350947251b778db5c8ced\" returns successfully" Jan 29 12:04:52.041420 containerd[2030]: time="2025-01-29T12:04:52.039535396Z" level=info msg="StartContainer for \"8ff3a22599d9a1bbcd364d496957277f9570a2846c41389e78ef801fbb04498a\" returns successfully" Jan 29 12:04:52.071100 containerd[2030]: time="2025-01-29T12:04:52.071017295Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:52.075303 containerd[2030]: time="2025-01-29T12:04:52.075136166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 12:04:52.084165 containerd[2030]: time="2025-01-29T12:04:52.084043654Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 374.483673ms" Jan 29 12:04:52.086530 containerd[2030]: time="2025-01-29T12:04:52.086190245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 12:04:52.091381 containerd[2030]: time="2025-01-29T12:04:52.090076481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 12:04:52.095886 containerd[2030]: time="2025-01-29T12:04:52.095793061Z" level=info msg="CreateContainer within sandbox \"c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:04:52.124586 containerd[2030]: time="2025-01-29T12:04:52.124503243Z" level=info msg="StopPodSandbox for \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\"" Jan 29 12:04:52.145149 containerd[2030]: time="2025-01-29T12:04:52.145017636Z" level=info msg="CreateContainer within sandbox \"c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ecf504d1230c28766add5328da8005af98b35b604763dc50a49842666bf330cc\"" Jan 29 12:04:52.148721 containerd[2030]: time="2025-01-29T12:04:52.148504772Z" level=info msg="StartContainer for \"ecf504d1230c28766add5328da8005af98b35b604763dc50a49842666bf330cc\"" Jan 29 12:04:52.232534 systemd-networkd[1748]: calia78068cfcc3: Gained IPv6LL Jan 29 12:04:52.314072 systemd[1]: Started cri-containerd-ecf504d1230c28766add5328da8005af98b35b604763dc50a49842666bf330cc.scope - libcontainer container 
ecf504d1230c28766add5328da8005af98b35b604763dc50a49842666bf330cc. Jan 29 12:04:52.545195 containerd[2030]: 2025-01-29 12:04:52.407 [INFO][5380] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Jan 29 12:04:52.545195 containerd[2030]: 2025-01-29 12:04:52.407 [INFO][5380] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" iface="eth0" netns="/var/run/netns/cni-ac96744e-34fe-187f-740a-83785f2aab32" Jan 29 12:04:52.545195 containerd[2030]: 2025-01-29 12:04:52.408 [INFO][5380] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" iface="eth0" netns="/var/run/netns/cni-ac96744e-34fe-187f-740a-83785f2aab32" Jan 29 12:04:52.545195 containerd[2030]: 2025-01-29 12:04:52.408 [INFO][5380] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" iface="eth0" netns="/var/run/netns/cni-ac96744e-34fe-187f-740a-83785f2aab32" Jan 29 12:04:52.545195 containerd[2030]: 2025-01-29 12:04:52.408 [INFO][5380] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Jan 29 12:04:52.545195 containerd[2030]: 2025-01-29 12:04:52.409 [INFO][5380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Jan 29 12:04:52.545195 containerd[2030]: 2025-01-29 12:04:52.497 [INFO][5420] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" HandleID="k8s-pod-network.b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0" Jan 29 12:04:52.545195 containerd[2030]: 2025-01-29 12:04:52.497 [INFO][5420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:52.545195 containerd[2030]: 2025-01-29 12:04:52.497 [INFO][5420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:52.545195 containerd[2030]: 2025-01-29 12:04:52.526 [WARNING][5420] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" HandleID="k8s-pod-network.b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0" Jan 29 12:04:52.545195 containerd[2030]: 2025-01-29 12:04:52.526 [INFO][5420] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" HandleID="k8s-pod-network.b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0" Jan 29 12:04:52.545195 containerd[2030]: 2025-01-29 12:04:52.532 [INFO][5420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:52.545195 containerd[2030]: 2025-01-29 12:04:52.536 [INFO][5380] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Jan 29 12:04:52.545195 containerd[2030]: time="2025-01-29T12:04:52.542262252Z" level=info msg="TearDown network for sandbox \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\" successfully" Jan 29 12:04:52.545195 containerd[2030]: time="2025-01-29T12:04:52.542326816Z" level=info msg="StopPodSandbox for \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\" returns successfully" Jan 29 12:04:52.549952 systemd[1]: run-netns-cni\x2dac96744e\x2d34fe\x2d187f\x2d740a\x2d83785f2aab32.mount: Deactivated successfully. Jan 29 12:04:52.555688 systemd-networkd[1748]: cali491ab3ed194: Gained IPv6LL Jan 29 12:04:52.587554 containerd[2030]: time="2025-01-29T12:04:52.585752436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bd2qq,Uid:f1fb5145-b832-443d-a73c-9930366e855b,Namespace:kube-system,Attempt:1,}" Jan 29 12:04:52.613517 containerd[2030]: time="2025-01-29T12:04:52.613448189Z" level=info msg="StartContainer for \"ecf504d1230c28766add5328da8005af98b35b604763dc50a49842666bf330cc\" returns successfully" Jan 29 12:04:52.756481 kubelet[3422]: I0129 12:04:52.754939 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5486c59b9d-zlzn4" podStartSLOduration=26.063089798 podStartE2EDuration="29.754911111s" podCreationTimestamp="2025-01-29 12:04:23 +0000 UTC" firstStartedPulling="2025-01-29 12:04:48.013334447 +0000 UTC m=+37.192714196" lastFinishedPulling="2025-01-29 12:04:51.705155736 +0000 UTC m=+40.884535509" observedRunningTime="2025-01-29 12:04:52.751263364 +0000 UTC m=+41.930643185" watchObservedRunningTime="2025-01-29 12:04:52.754911111 +0000 UTC m=+41.934290860" Jan 29 12:04:52.809657 kubelet[3422]: I0129 12:04:52.808696 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xdkk9" podStartSLOduration=37.808665436 podStartE2EDuration="37.808665436s" podCreationTimestamp="2025-01-29 12:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:52.807179632 +0000 UTC m=+41.986559441" watchObservedRunningTime="2025-01-29 12:04:52.808665436 +0000 UTC m=+41.988045185" Jan 29 12:04:53.131765 systemd-networkd[1748]: calid25b9efd466: Link UP Jan 29 12:04:53.135641 systemd-networkd[1748]: calid25b9efd466: Gained carrier Jan 29 12:04:53.202706 containerd[2030]: 2025-01-29 12:04:52.879 [INFO][5438] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0 coredns-6f6b679f8f- kube-system f1fb5145-b832-443d-a73c-9930366e855b 783 0 2025-01-29 12:04:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-93 coredns-6f6b679f8f-bd2qq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid25b9efd466 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5" Namespace="kube-system" Pod="coredns-6f6b679f8f-bd2qq" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-" Jan 29 12:04:53.202706 containerd[2030]: 2025-01-29 12:04:52.880 [INFO][5438] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5" Namespace="kube-system" Pod="coredns-6f6b679f8f-bd2qq" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0" Jan 29 12:04:53.202706 containerd[2030]: 2025-01-29 12:04:53.002 [INFO][5452] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5" HandleID="k8s-pod-network.6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0" Jan 29 12:04:53.202706 containerd[2030]: 2025-01-29 12:04:53.034 [INFO][5452] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5" HandleID="k8s-pod-network.6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000283af0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-93", "pod":"coredns-6f6b679f8f-bd2qq", "timestamp":"2025-01-29 12:04:53.002765598 +0000 UTC"}, Hostname:"ip-172-31-16-93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:04:53.202706 containerd[2030]: 2025-01-29 12:04:53.034 [INFO][5452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:53.202706 containerd[2030]: 2025-01-29 12:04:53.034 [INFO][5452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:53.202706 containerd[2030]: 2025-01-29 12:04:53.035 [INFO][5452] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-93' Jan 29 12:04:53.202706 containerd[2030]: 2025-01-29 12:04:53.040 [INFO][5452] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5" host="ip-172-31-16-93" Jan 29 12:04:53.202706 containerd[2030]: 2025-01-29 12:04:53.052 [INFO][5452] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-93" Jan 29 12:04:53.202706 containerd[2030]: 2025-01-29 12:04:53.063 [INFO][5452] ipam/ipam.go 489: Trying affinity for 192.168.111.64/26 host="ip-172-31-16-93" Jan 29 12:04:53.202706 containerd[2030]: 2025-01-29 12:04:53.068 [INFO][5452] ipam/ipam.go 155: Attempting to load block cidr=192.168.111.64/26 host="ip-172-31-16-93" Jan 29 12:04:53.202706 containerd[2030]: 2025-01-29 12:04:53.074 [INFO][5452] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ip-172-31-16-93" Jan 29 12:04:53.202706 containerd[2030]: 2025-01-29 12:04:53.075 [INFO][5452] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5" host="ip-172-31-16-93" Jan 29 12:04:53.202706 containerd[2030]: 2025-01-29 12:04:53.080 [INFO][5452] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5 Jan 29 12:04:53.202706 containerd[2030]: 2025-01-29 12:04:53.092 [INFO][5452] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5" host="ip-172-31-16-93" Jan 29 12:04:53.202706 
containerd[2030]: 2025-01-29 12:04:53.109 [INFO][5452] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.111.70/26] block=192.168.111.64/26 handle="k8s-pod-network.6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5" host="ip-172-31-16-93" Jan 29 12:04:53.202706 containerd[2030]: 2025-01-29 12:04:53.109 [INFO][5452] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.111.70/26] handle="k8s-pod-network.6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5" host="ip-172-31-16-93" Jan 29 12:04:53.202706 containerd[2030]: 2025-01-29 12:04:53.110 [INFO][5452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:53.202706 containerd[2030]: 2025-01-29 12:04:53.110 [INFO][5452] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.70/26] IPv6=[] ContainerID="6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5" HandleID="k8s-pod-network.6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0" Jan 29 12:04:53.205321 containerd[2030]: 2025-01-29 12:04:53.116 [INFO][5438] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5" Namespace="kube-system" Pod="coredns-6f6b679f8f-bd2qq" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f1fb5145-b832-443d-a73c-9930366e855b", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"", Pod:"coredns-6f6b679f8f-bd2qq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid25b9efd466", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:53.205321 containerd[2030]: 2025-01-29 12:04:53.117 [INFO][5438] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.111.70/32] ContainerID="6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5" Namespace="kube-system" Pod="coredns-6f6b679f8f-bd2qq" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0" Jan 29 12:04:53.205321 containerd[2030]: 2025-01-29 12:04:53.119 
[INFO][5438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid25b9efd466 ContainerID="6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5" Namespace="kube-system" Pod="coredns-6f6b679f8f-bd2qq" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0" Jan 29 12:04:53.205321 containerd[2030]: 2025-01-29 12:04:53.141 [INFO][5438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5" Namespace="kube-system" Pod="coredns-6f6b679f8f-bd2qq" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0" Jan 29 12:04:53.205321 containerd[2030]: 2025-01-29 12:04:53.149 [INFO][5438] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5" Namespace="kube-system" Pod="coredns-6f6b679f8f-bd2qq" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f1fb5145-b832-443d-a73c-9930366e855b", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5", Pod:"coredns-6f6b679f8f-bd2qq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid25b9efd466", MAC:"a2:fd:80:12:44:f6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:53.205321 containerd[2030]: 2025-01-29 12:04:53.198 [INFO][5438] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5" Namespace="kube-system" Pod="coredns-6f6b679f8f-bd2qq" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0" Jan 29 12:04:53.286053 containerd[2030]: time="2025-01-29T12:04:53.285814788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:53.286053 containerd[2030]: time="2025-01-29T12:04:53.285923226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:53.286053 containerd[2030]: time="2025-01-29T12:04:53.285980533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:53.286991 containerd[2030]: time="2025-01-29T12:04:53.286170734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:53.364397 systemd[1]: Started cri-containerd-6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5.scope - libcontainer container 6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5. Jan 29 12:04:53.520922 containerd[2030]: time="2025-01-29T12:04:53.520695438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bd2qq,Uid:f1fb5145-b832-443d-a73c-9930366e855b,Namespace:kube-system,Attempt:1,} returns sandbox id \"6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5\"" Jan 29 12:04:53.541363 containerd[2030]: time="2025-01-29T12:04:53.541271696Z" level=info msg="CreateContainer within sandbox \"6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:04:53.600625 containerd[2030]: time="2025-01-29T12:04:53.600554704Z" level=info msg="CreateContainer within sandbox \"6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7793659e3c72f8764fbb07d9225355197d35d3ba4ef388944186f41378015ecf\"" Jan 29 12:04:53.618331 containerd[2030]: time="2025-01-29T12:04:53.618058100Z" level=info msg="StartContainer for \"7793659e3c72f8764fbb07d9225355197d35d3ba4ef388944186f41378015ecf\"" Jan 29 12:04:53.752727 kubelet[3422]: I0129 12:04:53.751478 3422 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:04:53.789038 systemd[1]: Started cri-containerd-7793659e3c72f8764fbb07d9225355197d35d3ba4ef388944186f41378015ecf.scope - libcontainer container 7793659e3c72f8764fbb07d9225355197d35d3ba4ef388944186f41378015ecf. 
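Editor's note: the pod_startup_latency_tracker lines above report two numbers per pod, e.g. podStartE2EDuration="29.754911111s" against podStartSLOduration=26.063089798 for calico-apiserver-5486c59b9d-zlzn4. A minimal sketch of how those figures appear to relate, assuming the E2E duration runs from podCreationTimestamp to the observed-running timestamp and the SLO duration excludes the image-pull window; the timestamps are copied from the log, and the variable names are local to this sketch, not kubelet API:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-01-29 12:04:23 +0000 UTC")            // podCreationTimestamp
	firstPull := mustParse("2025-01-29 12:04:48.013334447 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-01-29 12:04:51.705155736 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2025-01-29 12:04:52.754911111 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull) // subtract time spent pulling images

	fmt.Println("podStartE2EDuration:", e2e) // 29.754911111s, matching the log
	fmt.Println("podStartSLOduration:", slo) // ~26.06s; kubelet subtracts monotonic readings, so the last digits differ
}
```

For the coredns pods the pull timestamps are the zero value ("0001-01-01 00:00:00"), so no pull window is subtracted and the two durations coincide, which is exactly what the coredns-6f6b679f8f-xdkk9 entry shows.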
Jan 29 12:04:53.952997 containerd[2030]: time="2025-01-29T12:04:53.951489442Z" level=info msg="StartContainer for \"7793659e3c72f8764fbb07d9225355197d35d3ba4ef388944186f41378015ecf\" returns successfully" Jan 29 12:04:54.763238 kubelet[3422]: I0129 12:04:54.763169 3422 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:04:54.790171 kubelet[3422]: I0129 12:04:54.787502 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5486c59b9d-tt29w" podStartSLOduration=28.849743563 podStartE2EDuration="31.787480276s" podCreationTimestamp="2025-01-29 12:04:23 +0000 UTC" firstStartedPulling="2025-01-29 12:04:49.15175849 +0000 UTC m=+38.331138239" lastFinishedPulling="2025-01-29 12:04:52.089495119 +0000 UTC m=+41.268874952" observedRunningTime="2025-01-29 12:04:53.79576178 +0000 UTC m=+42.975141553" watchObservedRunningTime="2025-01-29 12:04:54.787480276 +0000 UTC m=+43.966860013" Jan 29 12:04:54.790171 kubelet[3422]: I0129 12:04:54.788923 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bd2qq" podStartSLOduration=39.788900965 podStartE2EDuration="39.788900965s" podCreationTimestamp="2025-01-29 12:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:54.787266459 +0000 UTC m=+43.966646244" watchObservedRunningTime="2025-01-29 12:04:54.788900965 +0000 UTC m=+43.968280702" Jan 29 12:04:55.112457 systemd-networkd[1748]: calid25b9efd466: Gained IPv6LL Jan 29 12:04:57.030234 containerd[2030]: time="2025-01-29T12:04:57.029271962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:57.032028 containerd[2030]: time="2025-01-29T12:04:57.031951590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 29 12:04:57.036021 containerd[2030]: time="2025-01-29T12:04:57.035954011Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:57.046879 containerd[2030]: time="2025-01-29T12:04:57.046807886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:57.050280 containerd[2030]: time="2025-01-29T12:04:57.049984869Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 4.9598195s" Jan 29 12:04:57.050522 containerd[2030]: time="2025-01-29T12:04:57.050481480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 29 12:04:57.053927 containerd[2030]: time="2025-01-29T12:04:57.053585815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 12:04:57.103663 containerd[2030]: time="2025-01-29T12:04:57.103607152Z" 
level=info msg="CreateContainer within sandbox \"8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 12:04:57.158025 containerd[2030]: time="2025-01-29T12:04:57.157957076Z" level=info msg="CreateContainer within sandbox \"8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b506e02d3819763b8037e96bee562de3d08ec20eacc7a1fa87eadaac7b3963bf\"" Jan 29 12:04:57.160690 containerd[2030]: time="2025-01-29T12:04:57.160615294Z" level=info msg="StartContainer for \"b506e02d3819763b8037e96bee562de3d08ec20eacc7a1fa87eadaac7b3963bf\"" Jan 29 12:04:57.255337 systemd[1]: Started cri-containerd-b506e02d3819763b8037e96bee562de3d08ec20eacc7a1fa87eadaac7b3963bf.scope - libcontainer container b506e02d3819763b8037e96bee562de3d08ec20eacc7a1fa87eadaac7b3963bf. Jan 29 12:04:57.531426 containerd[2030]: time="2025-01-29T12:04:57.531337745Z" level=info msg="StartContainer for \"b506e02d3819763b8037e96bee562de3d08ec20eacc7a1fa87eadaac7b3963bf\" returns successfully" Jan 29 12:04:58.060578 ntpd[1998]: Listen normally on 6 vxlan.calico 192.168.111.64:123 Jan 29 12:04:58.060732 ntpd[1998]: Listen normally on 7 vxlan.calico [fe80::6462:5cff:fed3:73ba%4]:123 Jan 29 12:04:58.061591 ntpd[1998]: 29 Jan 12:04:58 ntpd[1998]: Listen normally on 6 vxlan.calico 192.168.111.64:123 Jan 29 12:04:58.061591 ntpd[1998]: 29 Jan 12:04:58 ntpd[1998]: Listen normally on 7 vxlan.calico [fe80::6462:5cff:fed3:73ba%4]:123 Jan 29 12:04:58.061591 ntpd[1998]: 29 Jan 12:04:58 ntpd[1998]: Listen normally on 8 cali90c0d15d209 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 29 12:04:58.061591 ntpd[1998]: 29 Jan 12:04:58 ntpd[1998]: Listen normally on 9 caliac7906697cf [fe80::ecee:eeff:feee:eeee%8]:123 Jan 29 12:04:58.061591 ntpd[1998]: 29 Jan 12:04:58 ntpd[1998]: Listen normally on 10 calia43fa24ab11 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 29 12:04:58.061591 ntpd[1998]: 29 Jan 12:04:58 ntpd[1998]: Listen normally on 11 calia78068cfcc3 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 29 12:04:58.061591 ntpd[1998]: 29 Jan 12:04:58 ntpd[1998]: Listen normally on 12 cali491ab3ed194 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 29 12:04:58.061591 ntpd[1998]: 29 Jan 12:04:58 ntpd[1998]: Listen normally on 13 calid25b9efd466 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 29 12:04:58.060823 ntpd[1998]: Listen normally on 8 cali90c0d15d209 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 29 12:04:58.060908 ntpd[1998]: Listen normally on 9 caliac7906697cf [fe80::ecee:eeff:feee:eeee%8]:123 Jan 29 12:04:58.061004 ntpd[1998]: Listen normally on 10 calia43fa24ab11 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 29 12:04:58.061099 ntpd[1998]: Listen normally on 11 calia78068cfcc3 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 29 12:04:58.061346 ntpd[1998]: Listen normally on 12 cali491ab3ed194 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 29 12:04:58.061470 ntpd[1998]: Listen normally on 13 calid25b9efd466 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 29 12:04:58.535747 containerd[2030]: time="2025-01-29T12:04:58.535667428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:58.538804 containerd[2030]: time="2025-01-29T12:04:58.538657580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 29 12:04:58.542144 containerd[2030]: time="2025-01-29T12:04:58.540886461Z" level=info msg="ImageCreate 
event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:58.547648 containerd[2030]: time="2025-01-29T12:04:58.547577590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:58.550364 containerd[2030]: time="2025-01-29T12:04:58.549809374Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.496150503s" Jan 29 12:04:58.550364 containerd[2030]: time="2025-01-29T12:04:58.549896978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 29 12:04:58.562568 containerd[2030]: time="2025-01-29T12:04:58.562495595Z" level=info msg="CreateContainer within sandbox \"acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 12:04:58.597541 containerd[2030]: time="2025-01-29T12:04:58.597439755Z" level=info msg="CreateContainer within sandbox \"acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"23116ff9d2ddf35fcf19b445c716a5407e692a728c5915cb7813a19f1fc08de4\"" Jan 29 12:04:58.599314 containerd[2030]: time="2025-01-29T12:04:58.599233110Z" level=info msg="StartContainer for \"23116ff9d2ddf35fcf19b445c716a5407e692a728c5915cb7813a19f1fc08de4\"" Jan 29 12:04:58.708673 systemd[1]: Started cri-containerd-23116ff9d2ddf35fcf19b445c716a5407e692a728c5915cb7813a19f1fc08de4.scope - libcontainer container 23116ff9d2ddf35fcf19b445c716a5407e692a728c5915cb7813a19f1fc08de4. Jan 29 12:04:58.815188 containerd[2030]: time="2025-01-29T12:04:58.815004379Z" level=info msg="StartContainer for \"23116ff9d2ddf35fcf19b445c716a5407e692a728c5915cb7813a19f1fc08de4\" returns successfully" Jan 29 12:04:58.821498 kubelet[3422]: I0129 12:04:58.821427 3422 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:04:58.827844 containerd[2030]: time="2025-01-29T12:04:58.827063290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 12:05:00.314618 systemd[1]: Started sshd@7-172.31.16.93:22-139.178.89.65:45368.service - OpenSSH per-connection server daemon (139.178.89.65:45368). 
Jan 29 12:05:00.484046 containerd[2030]: time="2025-01-29T12:05:00.483926509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:00.493064 containerd[2030]: time="2025-01-29T12:05:00.492946777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 29 12:05:00.499170 containerd[2030]: time="2025-01-29T12:05:00.499031729Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:00.518447 containerd[2030]: time="2025-01-29T12:05:00.518351856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:00.531595 containerd[2030]: time="2025-01-29T12:05:00.531474311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.704301432s" Jan 29 12:05:00.531595 containerd[2030]: time="2025-01-29T12:05:00.531589706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 29 12:05:00.539144 containerd[2030]: time="2025-01-29T12:05:00.539056055Z" level=info msg="CreateContainer within sandbox \"acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 12:05:00.561764 sshd[5664]: Accepted publickey for core from 139.178.89.65 port 45368 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk Jan 29 12:05:00.579579 sshd[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:00.597767 containerd[2030]: time="2025-01-29T12:05:00.597692610Z" level=info msg="CreateContainer within sandbox \"acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0c602b8e3c7b488ced4ad9ddd8d6aa98f3d6e8a0cf6febea56ca69bf4173fca3\"" Jan 29 12:05:00.602446 containerd[2030]: time="2025-01-29T12:05:00.602373448Z" level=info msg="StartContainer for \"0c602b8e3c7b488ced4ad9ddd8d6aa98f3d6e8a0cf6febea56ca69bf4173fca3\"" Jan 29 12:05:00.613133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2863110523.mount: Deactivated successfully. Jan 29 12:05:00.646874 systemd-logind[2005]: New session 8 of user core. Jan 29 12:05:00.656419 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 12:05:00.752182 systemd[1]: run-containerd-runc-k8s.io-0c602b8e3c7b488ced4ad9ddd8d6aa98f3d6e8a0cf6febea56ca69bf4173fca3-runc.qyTm2d.mount: Deactivated successfully. Jan 29 12:05:00.772517 systemd[1]: Started cri-containerd-0c602b8e3c7b488ced4ad9ddd8d6aa98f3d6e8a0cf6febea56ca69bf4173fca3.scope - libcontainer container 0c602b8e3c7b488ced4ad9ddd8d6aa98f3d6e8a0cf6febea56ca69bf4173fca3. 
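Editor's note: a back-of-the-envelope throughput check on the node-driver-registrar pull above, assuming containerd's "bytes read=9883368" is the data transferred for the pull that completed "in 1.704301432s". (The earlier 374ms apiserver "pull" is not comparable: the adjacent ImageUpdate event suggests the content was already present, so only metadata work remained; that reading is an inference, not stated in the log.)

```go
package main

import "fmt"

func main() {
	const bytesRead = 9883368   // from "stop pulling image ... bytes read=9883368"
	const seconds = 1.704301432 // from "... in 1.704301432s"

	rate := bytesRead / seconds                 // bytes per second
	fmt.Printf("%.1f MiB/s effective\n", rate/(1<<20)) // roughly 5.5 MiB/s
}
```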
Jan 29 12:05:00.875623 containerd[2030]: time="2025-01-29T12:05:00.873928620Z" level=info msg="StartContainer for \"0c602b8e3c7b488ced4ad9ddd8d6aa98f3d6e8a0cf6febea56ca69bf4173fca3\" returns successfully" Jan 29 12:05:01.012255 sshd[5664]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:01.018239 systemd[1]: sshd@7-172.31.16.93:22-139.178.89.65:45368.service: Deactivated successfully. Jan 29 12:05:01.023090 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 12:05:01.028610 systemd-logind[2005]: Session 8 logged out. Waiting for processes to exit. Jan 29 12:05:01.031680 systemd-logind[2005]: Removed session 8. Jan 29 12:05:01.302975 kubelet[3422]: I0129 12:05:01.302190 3422 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 12:05:01.302975 kubelet[3422]: I0129 12:05:01.302249 3422 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 12:05:01.894405 kubelet[3422]: I0129 12:05:01.894285 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-77bfd45fb5-9sscj" podStartSLOduration=29.081612313 podStartE2EDuration="36.894261369s" podCreationTimestamp="2025-01-29 12:04:25 +0000 UTC" firstStartedPulling="2025-01-29 12:04:49.239103776 +0000 UTC m=+38.418483525" lastFinishedPulling="2025-01-29 12:04:57.051752844 +0000 UTC m=+46.231132581" observedRunningTime="2025-01-29 12:04:57.826912629 +0000 UTC m=+47.006292402" watchObservedRunningTime="2025-01-29 12:05:01.894261369 +0000 UTC m=+51.073641118" Jan 29 12:05:01.895155 kubelet[3422]: I0129 12:05:01.895006 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7gvnp" podStartSLOduration=28.857559170000002 podStartE2EDuration="37.894978238s" podCreationTimestamp="2025-01-29 12:04:24 +0000 UTC" firstStartedPulling="2025-01-29 12:04:51.495604208 +0000 UTC m=+40.674983969" lastFinishedPulling="2025-01-29 12:05:00.533023289 +0000 UTC m=+49.712403037" observedRunningTime="2025-01-29 12:05:01.894182113 +0000 UTC m=+51.073561886" watchObservedRunningTime="2025-01-29 12:05:01.894978238 +0000 UTC m=+51.074357975" Jan 29 12:05:04.369377 kubelet[3422]: I0129 12:05:04.368576 3422 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:05:06.053978 systemd[1]: Started sshd@8-172.31.16.93:22-139.178.89.65:59068.service - OpenSSH per-connection server daemon (139.178.89.65:59068). Jan 29 12:05:06.247095 sshd[5785]: Accepted publickey for core from 139.178.89.65 port 59068 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk Jan 29 12:05:06.249938 sshd[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:06.261393 systemd-logind[2005]: New session 9 of user core. Jan 29 12:05:06.266374 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 12:05:06.528838 sshd[5785]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:06.534644 systemd-logind[2005]: Session 9 logged out. Waiting for processes to exit. Jan 29 12:05:06.537321 systemd[1]: sshd@8-172.31.16.93:22-139.178.89.65:59068.service: Deactivated successfully. Jan 29 12:05:06.542359 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 12:05:06.545627 systemd-logind[2005]: Removed session 9. 
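Editor's note: the sshd lines above and below print the accepted key as "RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk". A minimal sketch of how that fingerprint format is produced, unpadded base64 of the SHA-256 digest of the wire-format public key; the key bytes below are a placeholder, since the actual key material is of course not in the log:

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// fingerprintSHA256 formats a public key blob the way sshd logs it.
// keyBlob should be the base64-decoded key material from authorized_keys.
func fingerprintSHA256(keyBlob []byte) string {
	sum := sha256.Sum256(keyBlob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:])
}

func main() {
	keyBlob := []byte("placeholder for the decoded public key blob") // hypothetical input
	fmt.Println(fingerprintSHA256(keyBlob))
}
```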
Jan 29 12:05:11.101466 containerd[2030]: time="2025-01-29T12:05:11.100867910Z" level=info msg="StopPodSandbox for \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\"" Jan 29 12:05:11.250771 containerd[2030]: 2025-01-29 12:05:11.184 [WARNING][5815] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0", GenerateName:"calico-apiserver-5486c59b9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3dde2c8b-21bf-4437-af52-497d435eac68", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5486c59b9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770", Pod:"calico-apiserver-5486c59b9d-zlzn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90c0d15d209", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:11.250771 containerd[2030]: 2025-01-29 12:05:11.185 [INFO][5815] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Jan 29 12:05:11.250771 containerd[2030]: 2025-01-29 12:05:11.185 [INFO][5815] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" iface="eth0" netns="" Jan 29 12:05:11.250771 containerd[2030]: 2025-01-29 12:05:11.185 [INFO][5815] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Jan 29 12:05:11.250771 containerd[2030]: 2025-01-29 12:05:11.185 [INFO][5815] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Jan 29 12:05:11.250771 containerd[2030]: 2025-01-29 12:05:11.221 [INFO][5824] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" HandleID="k8s-pod-network.779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0" Jan 29 12:05:11.250771 containerd[2030]: 2025-01-29 12:05:11.221 [INFO][5824] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:11.250771 containerd[2030]: 2025-01-29 12:05:11.221 [INFO][5824] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
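Editor's note: the StopPodSandbox / "Cleaning up netns" / "Releasing IP address(es)" sequence above is a CNI DEL against the calico plugin. A minimal sketch of the shape of that call, assuming the standard CNI execution model (plugin binary invoked with the spec-defined CNI_* environment and the network config on stdin); the plugin path and config JSON are placeholders, and containerd really drives this through libcni rather than exec'ing the binary by hand:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Trimmed placeholder config; the network name matches the "k8s-pod-network" handle prefix in the log.
	conf := `{"cniVersion":"0.3.1","name":"k8s-pod-network","type":"calico"}`

	cmd := exec.Command("/opt/cni/bin/calico") // hypothetical install path
	cmd.Stdin = strings.NewReader(conf)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	cmd.Env = append(os.Environ(),
		"CNI_COMMAND=DEL",
		"CNI_CONTAINERID=779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1", // sandbox being stopped above
		"CNI_NETNS=",       // already gone here, matching "CleanUpNamespace called with no netns name, ignoring"
		"CNI_IFNAME=eth0",
		"CNI_PATH=/opt/cni/bin",
	)
	if err := cmd.Run(); err != nil {
		fmt.Println("calico DEL:", err) // DEL is specified to be idempotent, so a missing endpoint is not fatal
	}
}
```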
Jan 29 12:05:11.250771 containerd[2030]: 2025-01-29 12:05:11.236 [WARNING][5824] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" HandleID="k8s-pod-network.779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0" Jan 29 12:05:11.250771 containerd[2030]: 2025-01-29 12:05:11.236 [INFO][5824] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" HandleID="k8s-pod-network.779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0" Jan 29 12:05:11.250771 containerd[2030]: 2025-01-29 12:05:11.244 [INFO][5824] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:11.250771 containerd[2030]: 2025-01-29 12:05:11.247 [INFO][5815] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Jan 29 12:05:11.251959 containerd[2030]: time="2025-01-29T12:05:11.251411717Z" level=info msg="TearDown network for sandbox \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\" successfully" Jan 29 12:05:11.251959 containerd[2030]: time="2025-01-29T12:05:11.251450902Z" level=info msg="StopPodSandbox for \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\" returns successfully" Jan 29 12:05:11.252960 containerd[2030]: time="2025-01-29T12:05:11.252872431Z" level=info msg="RemovePodSandbox for \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\"" Jan 29 12:05:11.253226 containerd[2030]: time="2025-01-29T12:05:11.252971213Z" level=info msg="Forcibly stopping sandbox \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\"" Jan 29 12:05:11.401254 containerd[2030]: 2025-01-29 12:05:11.334 [WARNING][5843] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0", GenerateName:"calico-apiserver-5486c59b9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3dde2c8b-21bf-4437-af52-497d435eac68", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5486c59b9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"b4719085d77abcd578762236b5e4567243294d02737df2aae898b4d307818770", Pod:"calico-apiserver-5486c59b9d-zlzn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90c0d15d209", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:11.401254 containerd[2030]: 2025-01-29 12:05:11.335 [INFO][5843] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Jan 29 12:05:11.401254 containerd[2030]: 2025-01-29 12:05:11.335 [INFO][5843] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" iface="eth0" netns="" Jan 29 12:05:11.401254 containerd[2030]: 2025-01-29 12:05:11.335 [INFO][5843] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Jan 29 12:05:11.401254 containerd[2030]: 2025-01-29 12:05:11.335 [INFO][5843] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Jan 29 12:05:11.401254 containerd[2030]: 2025-01-29 12:05:11.378 [INFO][5849] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" HandleID="k8s-pod-network.779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0" Jan 29 12:05:11.401254 containerd[2030]: 2025-01-29 12:05:11.379 [INFO][5849] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:11.401254 containerd[2030]: 2025-01-29 12:05:11.379 [INFO][5849] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:11.401254 containerd[2030]: 2025-01-29 12:05:11.393 [WARNING][5849] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" HandleID="k8s-pod-network.779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0" Jan 29 12:05:11.401254 containerd[2030]: 2025-01-29 12:05:11.393 [INFO][5849] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" HandleID="k8s-pod-network.779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--zlzn4-eth0" Jan 29 12:05:11.401254 containerd[2030]: 2025-01-29 12:05:11.396 [INFO][5849] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:11.401254 containerd[2030]: 2025-01-29 12:05:11.398 [INFO][5843] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1" Jan 29 12:05:11.401254 containerd[2030]: time="2025-01-29T12:05:11.401186841Z" level=info msg="TearDown network for sandbox \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\" successfully" Jan 29 12:05:11.409484 containerd[2030]: time="2025-01-29T12:05:11.409394503Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:05:11.409657 containerd[2030]: time="2025-01-29T12:05:11.409523342Z" level=info msg="RemovePodSandbox \"779bc3003f453095c91054826548445c999c4d029f51f56ce5416367466d93b1\" returns successfully" Jan 29 12:05:11.411160 containerd[2030]: time="2025-01-29T12:05:11.410517573Z" level=info msg="StopPodSandbox for \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\"" Jan 29 12:05:11.551859 containerd[2030]: 2025-01-29 12:05:11.491 [WARNING][5867] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3a4dca7a-f99e-482f-ab15-aab50f3f08b4", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c", Pod:"coredns-6f6b679f8f-xdkk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali491ab3ed194", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:11.551859 containerd[2030]: 2025-01-29 12:05:11.491 [INFO][5867] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Jan 29 12:05:11.551859 containerd[2030]: 2025-01-29 12:05:11.492 [INFO][5867] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" iface="eth0" netns="" Jan 29 12:05:11.551859 containerd[2030]: 2025-01-29 12:05:11.492 [INFO][5867] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Jan 29 12:05:11.551859 containerd[2030]: 2025-01-29 12:05:11.492 [INFO][5867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Jan 29 12:05:11.551859 containerd[2030]: 2025-01-29 12:05:11.530 [INFO][5873] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" HandleID="k8s-pod-network.91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0" Jan 29 12:05:11.551859 containerd[2030]: 2025-01-29 12:05:11.530 [INFO][5873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:11.551859 containerd[2030]: 2025-01-29 12:05:11.530 [INFO][5873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:05:11.551859 containerd[2030]: 2025-01-29 12:05:11.542 [WARNING][5873] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" HandleID="k8s-pod-network.91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0" Jan 29 12:05:11.551859 containerd[2030]: 2025-01-29 12:05:11.542 [INFO][5873] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" HandleID="k8s-pod-network.91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0" Jan 29 12:05:11.551859 containerd[2030]: 2025-01-29 12:05:11.545 [INFO][5873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:11.551859 containerd[2030]: 2025-01-29 12:05:11.548 [INFO][5867] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Jan 29 12:05:11.555297 containerd[2030]: time="2025-01-29T12:05:11.551941274Z" level=info msg="TearDown network for sandbox \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\" successfully" Jan 29 12:05:11.555297 containerd[2030]: time="2025-01-29T12:05:11.551990246Z" level=info msg="StopPodSandbox for \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\" returns successfully" Jan 29 12:05:11.555297 containerd[2030]: time="2025-01-29T12:05:11.552681040Z" level=info msg="RemovePodSandbox for \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\"" Jan 29 12:05:11.555297 containerd[2030]: time="2025-01-29T12:05:11.552735745Z" level=info msg="Forcibly stopping sandbox \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\"" Jan 29 12:05:11.569552 systemd[1]: Started sshd@9-172.31.16.93:22-139.178.89.65:48468.service - OpenSSH per-connection server daemon (139.178.89.65:48468). Jan 29 12:05:11.724378 containerd[2030]: 2025-01-29 12:05:11.640 [WARNING][5894] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3a4dca7a-f99e-482f-ab15-aab50f3f08b4", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"9ca7f6ef2cae8d921001f24eaf8dfe3e73494d1c1d50fb0f07c7a37e5b2c0e0c", Pod:"coredns-6f6b679f8f-xdkk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali491ab3ed194", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:11.724378 containerd[2030]: 2025-01-29 12:05:11.641 [INFO][5894] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Jan 29 12:05:11.724378 containerd[2030]: 2025-01-29 12:05:11.641 [INFO][5894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" iface="eth0" netns="" Jan 29 12:05:11.724378 containerd[2030]: 2025-01-29 12:05:11.641 [INFO][5894] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Jan 29 12:05:11.724378 containerd[2030]: 2025-01-29 12:05:11.641 [INFO][5894] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Jan 29 12:05:11.724378 containerd[2030]: 2025-01-29 12:05:11.693 [INFO][5901] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" HandleID="k8s-pod-network.91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0" Jan 29 12:05:11.724378 containerd[2030]: 2025-01-29 12:05:11.693 [INFO][5901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:11.724378 containerd[2030]: 2025-01-29 12:05:11.694 [INFO][5901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:05:11.724378 containerd[2030]: 2025-01-29 12:05:11.712 [WARNING][5901] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" HandleID="k8s-pod-network.91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0" Jan 29 12:05:11.724378 containerd[2030]: 2025-01-29 12:05:11.713 [INFO][5901] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" HandleID="k8s-pod-network.91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--xdkk9-eth0" Jan 29 12:05:11.724378 containerd[2030]: 2025-01-29 12:05:11.716 [INFO][5901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:11.724378 containerd[2030]: 2025-01-29 12:05:11.720 [INFO][5894] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215" Jan 29 12:05:11.725496 containerd[2030]: time="2025-01-29T12:05:11.724384578Z" level=info msg="TearDown network for sandbox \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\" successfully" Jan 29 12:05:11.733341 containerd[2030]: time="2025-01-29T12:05:11.733243286Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:05:11.733494 containerd[2030]: time="2025-01-29T12:05:11.733356066Z" level=info msg="RemovePodSandbox \"91daf6485328aaf481e3674fba454a40cf1bac1054cdbbc0daacde5dcf41e215\" returns successfully" Jan 29 12:05:11.734415 containerd[2030]: time="2025-01-29T12:05:11.734351940Z" level=info msg="StopPodSandbox for \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\"" Jan 29 12:05:11.755036 sshd[5886]: Accepted publickey for core from 139.178.89.65 port 48468 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk Jan 29 12:05:11.758011 sshd[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:11.770900 systemd-logind[2005]: New session 10 of user core. Jan 29 12:05:11.781564 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 12:05:11.981434 containerd[2030]: 2025-01-29 12:05:11.860 [WARNING][5920] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f1fb5145-b832-443d-a73c-9930366e855b", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5", Pod:"coredns-6f6b679f8f-bd2qq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid25b9efd466", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:11.981434 containerd[2030]: 2025-01-29 12:05:11.862 [INFO][5920] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Jan 29 12:05:11.981434 containerd[2030]: 2025-01-29 12:05:11.862 [INFO][5920] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" iface="eth0" netns="" Jan 29 12:05:11.981434 containerd[2030]: 2025-01-29 12:05:11.862 [INFO][5920] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Jan 29 12:05:11.981434 containerd[2030]: 2025-01-29 12:05:11.863 [INFO][5920] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Jan 29 12:05:11.981434 containerd[2030]: 2025-01-29 12:05:11.942 [INFO][5927] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" HandleID="k8s-pod-network.b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0" Jan 29 12:05:11.981434 containerd[2030]: 2025-01-29 12:05:11.947 [INFO][5927] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:11.981434 containerd[2030]: 2025-01-29 12:05:11.947 [INFO][5927] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:05:11.981434 containerd[2030]: 2025-01-29 12:05:11.970 [WARNING][5927] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" HandleID="k8s-pod-network.b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0" Jan 29 12:05:11.981434 containerd[2030]: 2025-01-29 12:05:11.970 [INFO][5927] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" HandleID="k8s-pod-network.b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0" Jan 29 12:05:11.981434 containerd[2030]: 2025-01-29 12:05:11.975 [INFO][5927] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:11.981434 containerd[2030]: 2025-01-29 12:05:11.978 [INFO][5920] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Jan 29 12:05:11.981434 containerd[2030]: time="2025-01-29T12:05:11.981371717Z" level=info msg="TearDown network for sandbox \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\" successfully" Jan 29 12:05:11.987826 containerd[2030]: time="2025-01-29T12:05:11.981436412Z" level=info msg="StopPodSandbox for \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\" returns successfully" Jan 29 12:05:11.987826 containerd[2030]: time="2025-01-29T12:05:11.985462690Z" level=info msg="RemovePodSandbox for \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\"" Jan 29 12:05:11.987826 containerd[2030]: time="2025-01-29T12:05:11.985517143Z" level=info msg="Forcibly stopping sandbox \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\"" Jan 29 12:05:12.203212 sshd[5886]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:12.212730 systemd[1]: sshd@9-172.31.16.93:22-139.178.89.65:48468.service: Deactivated successfully. Jan 29 12:05:12.223614 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 12:05:12.252015 systemd-logind[2005]: Session 10 logged out. Waiting for processes to exit. Jan 29 12:05:12.257064 containerd[2030]: 2025-01-29 12:05:12.118 [WARNING][5954] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f1fb5145-b832-443d-a73c-9930366e855b", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"6ef89730d9d3fdaaf8a93c24dc6c980ade97fcded6e10fd9d53e8d1c39e59fc5", Pod:"coredns-6f6b679f8f-bd2qq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid25b9efd466", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:12.257064 containerd[2030]: 2025-01-29 12:05:12.119 [INFO][5954] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Jan 29 12:05:12.257064 containerd[2030]: 2025-01-29 12:05:12.119 [INFO][5954] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" iface="eth0" netns="" Jan 29 12:05:12.257064 containerd[2030]: 2025-01-29 12:05:12.119 [INFO][5954] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Jan 29 12:05:12.257064 containerd[2030]: 2025-01-29 12:05:12.119 [INFO][5954] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Jan 29 12:05:12.257064 containerd[2030]: 2025-01-29 12:05:12.206 [INFO][5961] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" HandleID="k8s-pod-network.b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0" Jan 29 12:05:12.257064 containerd[2030]: 2025-01-29 12:05:12.207 [INFO][5961] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:12.257064 containerd[2030]: 2025-01-29 12:05:12.209 [INFO][5961] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:05:12.257064 containerd[2030]: 2025-01-29 12:05:12.233 [WARNING][5961] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" HandleID="k8s-pod-network.b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0" Jan 29 12:05:12.257064 containerd[2030]: 2025-01-29 12:05:12.233 [INFO][5961] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" HandleID="k8s-pod-network.b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Workload="ip--172--31--16--93-k8s-coredns--6f6b679f8f--bd2qq-eth0" Jan 29 12:05:12.257064 containerd[2030]: 2025-01-29 12:05:12.237 [INFO][5961] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:12.257064 containerd[2030]: 2025-01-29 12:05:12.243 [INFO][5954] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef" Jan 29 12:05:12.257064 containerd[2030]: time="2025-01-29T12:05:12.252634870Z" level=info msg="TearDown network for sandbox \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\" successfully" Jan 29 12:05:12.264679 containerd[2030]: time="2025-01-29T12:05:12.264234568Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:05:12.264679 containerd[2030]: time="2025-01-29T12:05:12.264445926Z" level=info msg="RemovePodSandbox \"b2d312810759166af4b6e9de3935c587226c2076470333087defcd1e3126e0ef\" returns successfully" Jan 29 12:05:12.266756 systemd[1]: Started sshd@10-172.31.16.93:22-139.178.89.65:48470.service - OpenSSH per-connection server daemon (139.178.89.65:48470). Jan 29 12:05:12.269725 containerd[2030]: time="2025-01-29T12:05:12.268549973Z" level=info msg="StopPodSandbox for \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\"" Jan 29 12:05:12.271836 systemd-logind[2005]: Removed session 10. Jan 29 12:05:12.408706 containerd[2030]: 2025-01-29 12:05:12.347 [WARNING][5983] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0", GenerateName:"calico-apiserver-5486c59b9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3f787d2-5d81-447d-8cb4-e23656b7ddc2", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5486c59b9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960", Pod:"calico-apiserver-5486c59b9d-tt29w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac7906697cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:12.408706 containerd[2030]: 2025-01-29 12:05:12.348 [INFO][5983] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Jan 29 12:05:12.408706 containerd[2030]: 2025-01-29 12:05:12.348 [INFO][5983] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" iface="eth0" netns="" Jan 29 12:05:12.408706 containerd[2030]: 2025-01-29 12:05:12.348 [INFO][5983] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Jan 29 12:05:12.408706 containerd[2030]: 2025-01-29 12:05:12.348 [INFO][5983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Jan 29 12:05:12.408706 containerd[2030]: 2025-01-29 12:05:12.389 [INFO][5991] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" HandleID="k8s-pod-network.a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0" Jan 29 12:05:12.408706 containerd[2030]: 2025-01-29 12:05:12.389 [INFO][5991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:12.408706 containerd[2030]: 2025-01-29 12:05:12.389 [INFO][5991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:12.408706 containerd[2030]: 2025-01-29 12:05:12.401 [WARNING][5991] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" HandleID="k8s-pod-network.a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0" Jan 29 12:05:12.408706 containerd[2030]: 2025-01-29 12:05:12.401 [INFO][5991] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" HandleID="k8s-pod-network.a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0" Jan 29 12:05:12.408706 containerd[2030]: 2025-01-29 12:05:12.403 [INFO][5991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:12.408706 containerd[2030]: 2025-01-29 12:05:12.406 [INFO][5983] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Jan 29 12:05:12.408706 containerd[2030]: time="2025-01-29T12:05:12.408633440Z" level=info msg="TearDown network for sandbox \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\" successfully" Jan 29 12:05:12.411199 containerd[2030]: time="2025-01-29T12:05:12.408671629Z" level=info msg="StopPodSandbox for \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\" returns successfully" Jan 29 12:05:12.411199 containerd[2030]: time="2025-01-29T12:05:12.409906759Z" level=info msg="RemovePodSandbox for \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\"" Jan 29 12:05:12.411199 containerd[2030]: time="2025-01-29T12:05:12.410207701Z" level=info msg="Forcibly stopping sandbox \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\"" Jan 29 12:05:12.475298 sshd[5970]: Accepted publickey for core from 139.178.89.65 port 48470 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk Jan 29 12:05:12.479423 sshd[5970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:12.504302 systemd-logind[2005]: New session 11 of user core. Jan 29 12:05:12.509486 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 12:05:12.583756 containerd[2030]: 2025-01-29 12:05:12.507 [WARNING][6009] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0", GenerateName:"calico-apiserver-5486c59b9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3f787d2-5d81-447d-8cb4-e23656b7ddc2", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5486c59b9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"c684bbd9bdd68ea037e8d3fe0e536084fddb99ad1fc430551699b3d326a4c960", Pod:"calico-apiserver-5486c59b9d-tt29w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac7906697cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:12.583756 containerd[2030]: 2025-01-29 12:05:12.508 [INFO][6009] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Jan 29 12:05:12.583756 containerd[2030]: 2025-01-29 12:05:12.508 [INFO][6009] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" iface="eth0" netns="" Jan 29 12:05:12.583756 containerd[2030]: 2025-01-29 12:05:12.508 [INFO][6009] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Jan 29 12:05:12.583756 containerd[2030]: 2025-01-29 12:05:12.508 [INFO][6009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Jan 29 12:05:12.583756 containerd[2030]: 2025-01-29 12:05:12.562 [INFO][6015] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" HandleID="k8s-pod-network.a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0" Jan 29 12:05:12.583756 containerd[2030]: 2025-01-29 12:05:12.562 [INFO][6015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:12.583756 containerd[2030]: 2025-01-29 12:05:12.563 [INFO][6015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:12.583756 containerd[2030]: 2025-01-29 12:05:12.577 [WARNING][6015] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" HandleID="k8s-pod-network.a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0" Jan 29 12:05:12.583756 containerd[2030]: 2025-01-29 12:05:12.577 [INFO][6015] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" HandleID="k8s-pod-network.a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Workload="ip--172--31--16--93-k8s-calico--apiserver--5486c59b9d--tt29w-eth0" Jan 29 12:05:12.583756 containerd[2030]: 2025-01-29 12:05:12.579 [INFO][6015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:12.583756 containerd[2030]: 2025-01-29 12:05:12.581 [INFO][6009] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0" Jan 29 12:05:12.584770 containerd[2030]: time="2025-01-29T12:05:12.583832042Z" level=info msg="TearDown network for sandbox \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\" successfully" Jan 29 12:05:12.590173 containerd[2030]: time="2025-01-29T12:05:12.590069186Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:05:12.590339 containerd[2030]: time="2025-01-29T12:05:12.590209491Z" level=info msg="RemovePodSandbox \"a5d6ab6b3c20ad2cc7200c28a561ed7faddb5e4c7a689eb12c152cc4171144e0\" returns successfully" Jan 29 12:05:12.590993 containerd[2030]: time="2025-01-29T12:05:12.590949305Z" level=info msg="StopPodSandbox for \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\"" Jan 29 12:05:12.769191 containerd[2030]: 2025-01-29 12:05:12.685 [WARNING][6035] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f476b9f4-bc43-439e-8ffa-3de6f582cec3", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3", Pod:"csi-node-driver-7gvnp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia78068cfcc3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:12.769191 containerd[2030]: 2025-01-29 12:05:12.685 [INFO][6035] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Jan 29 12:05:12.769191 containerd[2030]: 2025-01-29 12:05:12.685 [INFO][6035] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" iface="eth0" netns="" Jan 29 12:05:12.769191 containerd[2030]: 2025-01-29 12:05:12.685 [INFO][6035] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Jan 29 12:05:12.769191 containerd[2030]: 2025-01-29 12:05:12.685 [INFO][6035] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Jan 29 12:05:12.769191 containerd[2030]: 2025-01-29 12:05:12.733 [INFO][6047] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" HandleID="k8s-pod-network.39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Workload="ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0" Jan 29 12:05:12.769191 containerd[2030]: 2025-01-29 12:05:12.734 [INFO][6047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:12.769191 containerd[2030]: 2025-01-29 12:05:12.734 [INFO][6047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:12.769191 containerd[2030]: 2025-01-29 12:05:12.754 [WARNING][6047] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" HandleID="k8s-pod-network.39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Workload="ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0" Jan 29 12:05:12.769191 containerd[2030]: 2025-01-29 12:05:12.755 [INFO][6047] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" HandleID="k8s-pod-network.39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Workload="ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0" Jan 29 12:05:12.769191 containerd[2030]: 2025-01-29 12:05:12.760 [INFO][6047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:12.769191 containerd[2030]: 2025-01-29 12:05:12.764 [INFO][6035] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Jan 29 12:05:12.769191 containerd[2030]: time="2025-01-29T12:05:12.768271864Z" level=info msg="TearDown network for sandbox \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\" successfully" Jan 29 12:05:12.769191 containerd[2030]: time="2025-01-29T12:05:12.768337136Z" level=info msg="StopPodSandbox for \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\" returns successfully" Jan 29 12:05:12.775760 containerd[2030]: time="2025-01-29T12:05:12.772511144Z" level=info msg="RemovePodSandbox for \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\"" Jan 29 12:05:12.775760 containerd[2030]: time="2025-01-29T12:05:12.772635701Z" level=info msg="Forcibly stopping sandbox \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\"" Jan 29 12:05:12.970081 sshd[5970]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:12.988657 systemd[1]: sshd@10-172.31.16.93:22-139.178.89.65:48470.service: Deactivated successfully. Jan 29 12:05:13.002440 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 12:05:13.005952 systemd-logind[2005]: Session 11 logged out. Waiting for processes to exit. Jan 29 12:05:13.031984 systemd[1]: Started sshd@11-172.31.16.93:22-139.178.89.65:48482.service - OpenSSH per-connection server daemon (139.178.89.65:48482). Jan 29 12:05:13.039764 systemd-logind[2005]: Removed session 11. Jan 29 12:05:13.102422 containerd[2030]: 2025-01-29 12:05:12.939 [WARNING][6066] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f476b9f4-bc43-439e-8ffa-3de6f582cec3", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"acb5f5f147f89e6738eb16289aeb210dcbe7fd43d15a6f5b08cc7fd32aa443d3", Pod:"csi-node-driver-7gvnp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia78068cfcc3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:13.102422 containerd[2030]: 2025-01-29 12:05:12.940 [INFO][6066] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Jan 29 12:05:13.102422 containerd[2030]: 2025-01-29 12:05:12.940 [INFO][6066] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" iface="eth0" netns="" Jan 29 12:05:13.102422 containerd[2030]: 2025-01-29 12:05:12.940 [INFO][6066] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Jan 29 12:05:13.102422 containerd[2030]: 2025-01-29 12:05:12.940 [INFO][6066] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Jan 29 12:05:13.102422 containerd[2030]: 2025-01-29 12:05:13.070 [INFO][6072] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" HandleID="k8s-pod-network.39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Workload="ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0" Jan 29 12:05:13.102422 containerd[2030]: 2025-01-29 12:05:13.071 [INFO][6072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:13.102422 containerd[2030]: 2025-01-29 12:05:13.071 [INFO][6072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:13.102422 containerd[2030]: 2025-01-29 12:05:13.094 [WARNING][6072] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" HandleID="k8s-pod-network.39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Workload="ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0" Jan 29 12:05:13.102422 containerd[2030]: 2025-01-29 12:05:13.094 [INFO][6072] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" HandleID="k8s-pod-network.39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Workload="ip--172--31--16--93-k8s-csi--node--driver--7gvnp-eth0" Jan 29 12:05:13.102422 containerd[2030]: 2025-01-29 12:05:13.097 [INFO][6072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:13.102422 containerd[2030]: 2025-01-29 12:05:13.100 [INFO][6066] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e" Jan 29 12:05:13.103780 containerd[2030]: time="2025-01-29T12:05:13.102443680Z" level=info msg="TearDown network for sandbox \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\" successfully" Jan 29 12:05:13.109362 containerd[2030]: time="2025-01-29T12:05:13.109268170Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:05:13.109605 containerd[2030]: time="2025-01-29T12:05:13.109376440Z" level=info msg="RemovePodSandbox \"39f84246eced7592f24180d911f4fc18820867d3d39163b7e8253ff2a2dc4e4e\" returns successfully" Jan 29 12:05:13.110336 containerd[2030]: time="2025-01-29T12:05:13.110268913Z" level=info msg="StopPodSandbox for \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\"" Jan 29 12:05:13.257426 sshd[6079]: Accepted publickey for core from 139.178.89.65 port 48482 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk Jan 29 12:05:13.262751 sshd[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:13.277693 systemd-logind[2005]: New session 12 of user core. Jan 29 12:05:13.284608 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 12:05:13.312769 containerd[2030]: 2025-01-29 12:05:13.224 [WARNING][6095] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0", GenerateName:"calico-kube-controllers-77bfd45fb5-", Namespace:"calico-system", SelfLink:"", UID:"cc4ab877-770e-430d-93c1-736e084d31b2", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77bfd45fb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88", Pod:"calico-kube-controllers-77bfd45fb5-9sscj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.111.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia43fa24ab11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:13.312769 containerd[2030]: 2025-01-29 12:05:13.224 [INFO][6095] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Jan 29 12:05:13.312769 containerd[2030]: 2025-01-29 12:05:13.225 [INFO][6095] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" iface="eth0" netns="" Jan 29 12:05:13.312769 containerd[2030]: 2025-01-29 12:05:13.225 [INFO][6095] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Jan 29 12:05:13.312769 containerd[2030]: 2025-01-29 12:05:13.225 [INFO][6095] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Jan 29 12:05:13.312769 containerd[2030]: 2025-01-29 12:05:13.267 [INFO][6101] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" HandleID="k8s-pod-network.a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0" Jan 29 12:05:13.312769 containerd[2030]: 2025-01-29 12:05:13.268 [INFO][6101] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:13.312769 containerd[2030]: 2025-01-29 12:05:13.268 [INFO][6101] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:13.312769 containerd[2030]: 2025-01-29 12:05:13.295 [WARNING][6101] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" HandleID="k8s-pod-network.a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0" Jan 29 12:05:13.312769 containerd[2030]: 2025-01-29 12:05:13.295 [INFO][6101] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" HandleID="k8s-pod-network.a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0" Jan 29 12:05:13.312769 containerd[2030]: 2025-01-29 12:05:13.306 [INFO][6101] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:13.312769 containerd[2030]: 2025-01-29 12:05:13.309 [INFO][6095] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Jan 29 12:05:13.312769 containerd[2030]: time="2025-01-29T12:05:13.312636563Z" level=info msg="TearDown network for sandbox \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\" successfully" Jan 29 12:05:13.312769 containerd[2030]: time="2025-01-29T12:05:13.312679910Z" level=info msg="StopPodSandbox for \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\" returns successfully" Jan 29 12:05:13.315162 containerd[2030]: time="2025-01-29T12:05:13.314300731Z" level=info msg="RemovePodSandbox for \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\"" Jan 29 12:05:13.315162 containerd[2030]: time="2025-01-29T12:05:13.314649996Z" level=info msg="Forcibly stopping sandbox \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\"" Jan 29 12:05:13.531957 containerd[2030]: 2025-01-29 12:05:13.404 [WARNING][6120] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0", GenerateName:"calico-kube-controllers-77bfd45fb5-", Namespace:"calico-system", SelfLink:"", UID:"cc4ab877-770e-430d-93c1-736e084d31b2", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 4, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77bfd45fb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"8fd3bc84bcf27218cc24923be408328076d0a7e2fefb96363c5853b755427b88", Pod:"calico-kube-controllers-77bfd45fb5-9sscj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.111.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia43fa24ab11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:05:13.531957 containerd[2030]: 2025-01-29 12:05:13.405 [INFO][6120] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Jan 29 12:05:13.531957 containerd[2030]: 2025-01-29 12:05:13.405 [INFO][6120] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" iface="eth0" netns="" Jan 29 12:05:13.531957 containerd[2030]: 2025-01-29 12:05:13.405 [INFO][6120] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Jan 29 12:05:13.531957 containerd[2030]: 2025-01-29 12:05:13.406 [INFO][6120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Jan 29 12:05:13.531957 containerd[2030]: 2025-01-29 12:05:13.487 [INFO][6132] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" HandleID="k8s-pod-network.a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0" Jan 29 12:05:13.531957 containerd[2030]: 2025-01-29 12:05:13.489 [INFO][6132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:05:13.531957 containerd[2030]: 2025-01-29 12:05:13.489 [INFO][6132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:05:13.531957 containerd[2030]: 2025-01-29 12:05:13.519 [WARNING][6132] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" HandleID="k8s-pod-network.a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0" Jan 29 12:05:13.531957 containerd[2030]: 2025-01-29 12:05:13.519 [INFO][6132] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" HandleID="k8s-pod-network.a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--77bfd45fb5--9sscj-eth0" Jan 29 12:05:13.531957 containerd[2030]: 2025-01-29 12:05:13.522 [INFO][6132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:05:13.531957 containerd[2030]: 2025-01-29 12:05:13.527 [INFO][6120] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b" Jan 29 12:05:13.532858 containerd[2030]: time="2025-01-29T12:05:13.532040359Z" level=info msg="TearDown network for sandbox \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\" successfully" Jan 29 12:05:13.542604 containerd[2030]: time="2025-01-29T12:05:13.542229622Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:05:13.542604 containerd[2030]: time="2025-01-29T12:05:13.542461874Z" level=info msg="RemovePodSandbox \"a5bbc841f52800d2067cedd6cb57c9025e2a009725bd6b64fdfc2018340f579b\" returns successfully" Jan 29 12:05:13.608351 sshd[6079]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:13.616433 systemd[1]: sshd@11-172.31.16.93:22-139.178.89.65:48482.service: Deactivated successfully. Jan 29 12:05:13.622674 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 12:05:13.625709 systemd-logind[2005]: Session 12 logged out. Waiting for processes to exit. Jan 29 12:05:13.629892 systemd-logind[2005]: Removed session 12. Jan 29 12:05:18.648680 systemd[1]: Started sshd@12-172.31.16.93:22-139.178.89.65:48492.service - OpenSSH per-connection server daemon (139.178.89.65:48492). Jan 29 12:05:18.837534 sshd[6146]: Accepted publickey for core from 139.178.89.65 port 48492 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk Jan 29 12:05:18.840694 sshd[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:18.849989 systemd-logind[2005]: New session 13 of user core. Jan 29 12:05:18.858221 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 12:05:19.138946 sshd[6146]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:19.144687 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 12:05:19.150447 systemd[1]: sshd@12-172.31.16.93:22-139.178.89.65:48492.service: Deactivated successfully. Jan 29 12:05:19.156756 systemd-logind[2005]: Session 13 logged out. Waiting for processes to exit. Jan 29 12:05:19.158811 systemd-logind[2005]: Removed session 13. Jan 29 12:05:23.115987 kubelet[3422]: I0129 12:05:23.115690 3422 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:05:24.187890 systemd[1]: Started sshd@13-172.31.16.93:22-139.178.89.65:58998.service - OpenSSH per-connection server daemon (139.178.89.65:58998). 
Jan 29 12:05:24.384089 sshd[6165]: Accepted publickey for core from 139.178.89.65 port 58998 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk Jan 29 12:05:24.389717 sshd[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:24.404609 systemd-logind[2005]: New session 14 of user core. Jan 29 12:05:24.414603 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 12:05:24.766253 sshd[6165]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:24.775820 systemd[1]: sshd@13-172.31.16.93:22-139.178.89.65:58998.service: Deactivated successfully. Jan 29 12:05:24.781965 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 12:05:24.786856 systemd-logind[2005]: Session 14 logged out. Waiting for processes to exit. Jan 29 12:05:24.789351 systemd-logind[2005]: Removed session 14. Jan 29 12:05:29.807743 systemd[1]: Started sshd@14-172.31.16.93:22-139.178.89.65:59012.service - OpenSSH per-connection server daemon (139.178.89.65:59012). Jan 29 12:05:30.002678 sshd[6185]: Accepted publickey for core from 139.178.89.65 port 59012 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk Jan 29 12:05:30.006717 sshd[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:30.017931 systemd-logind[2005]: New session 15 of user core. Jan 29 12:05:30.023466 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 12:05:30.177723 kubelet[3422]: I0129 12:05:30.177143 3422 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:05:30.347718 sshd[6185]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:30.360096 systemd[1]: sshd@14-172.31.16.93:22-139.178.89.65:59012.service: Deactivated successfully. Jan 29 12:05:30.366336 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 12:05:30.370264 systemd-logind[2005]: Session 15 logged out. Waiting for processes to exit. Jan 29 12:05:30.373427 systemd-logind[2005]: Removed session 15. Jan 29 12:05:35.388283 systemd[1]: Started sshd@15-172.31.16.93:22-139.178.89.65:36534.service - OpenSSH per-connection server daemon (139.178.89.65:36534). Jan 29 12:05:35.585145 sshd[6245]: Accepted publickey for core from 139.178.89.65 port 36534 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk Jan 29 12:05:35.588961 sshd[6245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:35.602550 systemd-logind[2005]: New session 16 of user core. Jan 29 12:05:35.609470 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 12:05:35.864481 sshd[6245]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:35.871788 systemd[1]: sshd@15-172.31.16.93:22-139.178.89.65:36534.service: Deactivated successfully. Jan 29 12:05:35.876339 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 12:05:35.879601 systemd-logind[2005]: Session 16 logged out. Waiting for processes to exit. Jan 29 12:05:35.882323 systemd-logind[2005]: Removed session 16. Jan 29 12:05:35.908613 systemd[1]: Started sshd@16-172.31.16.93:22-139.178.89.65:36548.service - OpenSSH per-connection server daemon (139.178.89.65:36548). 
Jan 29 12:05:36.077359 sshd[6260]: Accepted publickey for core from 139.178.89.65 port 36548 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk
Jan 29 12:05:36.080801 sshd[6260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:05:36.090313 systemd-logind[2005]: New session 17 of user core.
Jan 29 12:05:36.097494 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 12:05:36.601832 sshd[6260]: pam_unix(sshd:session): session closed for user core
Jan 29 12:05:36.609013 systemd[1]: sshd@16-172.31.16.93:22-139.178.89.65:36548.service: Deactivated successfully.
Jan 29 12:05:36.614498 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 12:05:36.616502 systemd-logind[2005]: Session 17 logged out. Waiting for processes to exit.
Jan 29 12:05:36.620060 systemd-logind[2005]: Removed session 17.
Jan 29 12:05:36.642765 systemd[1]: Started sshd@17-172.31.16.93:22-139.178.89.65:36558.service - OpenSSH per-connection server daemon (139.178.89.65:36558).
Jan 29 12:05:36.837569 sshd[6271]: Accepted publickey for core from 139.178.89.65 port 36558 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk
Jan 29 12:05:36.840646 sshd[6271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:05:36.852408 systemd-logind[2005]: New session 18 of user core.
Jan 29 12:05:36.859540 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 12:05:40.457626 sshd[6271]: pam_unix(sshd:session): session closed for user core
Jan 29 12:05:40.469671 systemd[1]: sshd@17-172.31.16.93:22-139.178.89.65:36558.service: Deactivated successfully.
Jan 29 12:05:40.470294 systemd-logind[2005]: Session 18 logged out. Waiting for processes to exit.
Jan 29 12:05:40.479672 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 12:05:40.481542 systemd[1]: session-18.scope: Consumed 1.105s CPU time.
Jan 29 12:05:40.501677 systemd-logind[2005]: Removed session 18.
Jan 29 12:05:40.521746 systemd[1]: Started sshd@18-172.31.16.93:22-139.178.89.65:36574.service - OpenSSH per-connection server daemon (139.178.89.65:36574).
Jan 29 12:05:40.715701 sshd[6287]: Accepted publickey for core from 139.178.89.65 port 36574 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk
Jan 29 12:05:40.719258 sshd[6287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:05:40.731308 systemd-logind[2005]: New session 19 of user core.
Jan 29 12:05:40.738755 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 12:05:41.414873 sshd[6287]: pam_unix(sshd:session): session closed for user core
Jan 29 12:05:41.426540 systemd[1]: sshd@18-172.31.16.93:22-139.178.89.65:36574.service: Deactivated successfully.
Jan 29 12:05:41.434565 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 12:05:41.438354 systemd-logind[2005]: Session 19 logged out. Waiting for processes to exit.
Jan 29 12:05:41.470483 systemd[1]: Started sshd@19-172.31.16.93:22-139.178.89.65:48396.service - OpenSSH per-connection server daemon (139.178.89.65:48396).
Jan 29 12:05:41.475282 systemd-logind[2005]: Removed session 19.
Jan 29 12:05:41.662822 sshd[6300]: Accepted publickey for core from 139.178.89.65 port 48396 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk
Jan 29 12:05:41.667295 sshd[6300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:05:41.686519 systemd-logind[2005]: New session 20 of user core.
Jan 29 12:05:41.692546 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 12:05:42.042691 sshd[6300]: pam_unix(sshd:session): session closed for user core
Jan 29 12:05:42.053975 systemd[1]: sshd@19-172.31.16.93:22-139.178.89.65:48396.service: Deactivated successfully.
Jan 29 12:05:42.062822 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 12:05:42.067853 systemd-logind[2005]: Session 20 logged out. Waiting for processes to exit.
Jan 29 12:05:42.071244 systemd-logind[2005]: Removed session 20.
Jan 29 12:05:47.089522 systemd[1]: Started sshd@20-172.31.16.93:22-139.178.89.65:48402.service - OpenSSH per-connection server daemon (139.178.89.65:48402).
Jan 29 12:05:47.294178 sshd[6313]: Accepted publickey for core from 139.178.89.65 port 48402 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk
Jan 29 12:05:47.298052 sshd[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:05:47.316298 systemd-logind[2005]: New session 21 of user core.
Jan 29 12:05:47.322012 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 12:05:47.612613 sshd[6313]: pam_unix(sshd:session): session closed for user core
Jan 29 12:05:47.619578 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 12:05:47.622466 systemd[1]: sshd@20-172.31.16.93:22-139.178.89.65:48402.service: Deactivated successfully.
Jan 29 12:05:47.629043 systemd-logind[2005]: Session 21 logged out. Waiting for processes to exit.
Jan 29 12:05:47.632083 systemd-logind[2005]: Removed session 21.
Jan 29 12:05:52.647652 systemd[1]: Started sshd@21-172.31.16.93:22-139.178.89.65:50102.service - OpenSSH per-connection server daemon (139.178.89.65:50102).
Jan 29 12:05:52.831969 sshd[6331]: Accepted publickey for core from 139.178.89.65 port 50102 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk
Jan 29 12:05:52.834927 sshd[6331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:05:52.843046 systemd-logind[2005]: New session 22 of user core.
Jan 29 12:05:52.855443 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 12:05:53.134511 sshd[6331]: pam_unix(sshd:session): session closed for user core
Jan 29 12:05:53.145490 systemd[1]: sshd@21-172.31.16.93:22-139.178.89.65:50102.service: Deactivated successfully.
Jan 29 12:05:53.149631 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 12:05:53.152600 systemd-logind[2005]: Session 22 logged out. Waiting for processes to exit.
Jan 29 12:05:53.155654 systemd-logind[2005]: Removed session 22.
Jan 29 12:05:58.174737 systemd[1]: Started sshd@22-172.31.16.93:22-139.178.89.65:50118.service - OpenSSH per-connection server daemon (139.178.89.65:50118).
Jan 29 12:05:58.360967 sshd[6344]: Accepted publickey for core from 139.178.89.65 port 50118 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk
Jan 29 12:05:58.364069 sshd[6344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:05:58.374485 systemd-logind[2005]: New session 23 of user core.
Jan 29 12:05:58.383392 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 12:05:58.634092 sshd[6344]: pam_unix(sshd:session): session closed for user core
Jan 29 12:05:58.641508 systemd-logind[2005]: Session 23 logged out. Waiting for processes to exit.
Jan 29 12:05:58.642512 systemd[1]: sshd@22-172.31.16.93:22-139.178.89.65:50118.service: Deactivated successfully.
Jan 29 12:05:58.647026 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 12:05:58.653044 systemd-logind[2005]: Removed session 23.
Jan 29 12:06:03.676692 systemd[1]: Started sshd@23-172.31.16.93:22-139.178.89.65:42580.service - OpenSSH per-connection server daemon (139.178.89.65:42580).
Jan 29 12:06:03.862316 sshd[6375]: Accepted publickey for core from 139.178.89.65 port 42580 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk
Jan 29 12:06:03.865732 sshd[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:03.874806 systemd-logind[2005]: New session 24 of user core.
Jan 29 12:06:03.884470 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 12:06:04.132980 sshd[6375]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:04.143403 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 12:06:04.146454 systemd[1]: sshd@23-172.31.16.93:22-139.178.89.65:42580.service: Deactivated successfully.
Jan 29 12:06:04.156589 systemd-logind[2005]: Session 24 logged out. Waiting for processes to exit.
Jan 29 12:06:04.159605 systemd-logind[2005]: Removed session 24.
Jan 29 12:06:09.170735 systemd[1]: Started sshd@24-172.31.16.93:22-139.178.89.65:42590.service - OpenSSH per-connection server daemon (139.178.89.65:42590).
Jan 29 12:06:09.362987 sshd[6436]: Accepted publickey for core from 139.178.89.65 port 42590 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk
Jan 29 12:06:09.368074 sshd[6436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:09.379910 systemd-logind[2005]: New session 25 of user core.
Jan 29 12:06:09.390150 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 12:06:09.659776 sshd[6436]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:09.667966 systemd[1]: sshd@24-172.31.16.93:22-139.178.89.65:42590.service: Deactivated successfully.
Jan 29 12:06:09.673690 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 12:06:09.675335 systemd-logind[2005]: Session 25 logged out. Waiting for processes to exit.
Jan 29 12:06:09.677806 systemd-logind[2005]: Removed session 25.
Jan 29 12:06:14.700692 systemd[1]: Started sshd@25-172.31.16.93:22-139.178.89.65:40836.service - OpenSSH per-connection server daemon (139.178.89.65:40836).
Jan 29 12:06:14.881354 sshd[6452]: Accepted publickey for core from 139.178.89.65 port 40836 ssh2: RSA SHA256:zyj+7xvouCnFuYEY7+Pc9LLFsh1PIFyDmS9T/NYYhrk
Jan 29 12:06:14.884554 sshd[6452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:14.893725 systemd-logind[2005]: New session 26 of user core.
Jan 29 12:06:14.905556 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 12:06:15.180229 sshd[6452]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:15.186983 systemd[1]: sshd@25-172.31.16.93:22-139.178.89.65:40836.service: Deactivated successfully.
Jan 29 12:06:15.192669 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 12:06:15.194271 systemd-logind[2005]: Session 26 logged out. Waiting for processes to exit.
Jan 29 12:06:15.196735 systemd-logind[2005]: Removed session 26.
Jan 29 12:06:28.693244 systemd[1]: cri-containerd-1df604139df47961c1323a514c628d082d9b5b72ae1925119736dbcd856763e6.scope: Deactivated successfully.
Jan 29 12:06:28.694683 systemd[1]: cri-containerd-1df604139df47961c1323a514c628d082d9b5b72ae1925119736dbcd856763e6.scope: Consumed 6.807s CPU time.
Jan 29 12:06:28.739985 containerd[2030]: time="2025-01-29T12:06:28.739648887Z" level=info msg="shim disconnected" id=1df604139df47961c1323a514c628d082d9b5b72ae1925119736dbcd856763e6 namespace=k8s.io
Jan 29 12:06:28.739985 containerd[2030]: time="2025-01-29T12:06:28.739737691Z" level=warning msg="cleaning up after shim disconnected" id=1df604139df47961c1323a514c628d082d9b5b72ae1925119736dbcd856763e6 namespace=k8s.io
Jan 29 12:06:28.739985 containerd[2030]: time="2025-01-29T12:06:28.739757529Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:06:28.741464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1df604139df47961c1323a514c628d082d9b5b72ae1925119736dbcd856763e6-rootfs.mount: Deactivated successfully.
Jan 29 12:06:28.801430 containerd[2030]: time="2025-01-29T12:06:28.801359685Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:06:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 12:06:29.219901 kubelet[3422]: I0129 12:06:29.219853 3422 scope.go:117] "RemoveContainer" containerID="1df604139df47961c1323a514c628d082d9b5b72ae1925119736dbcd856763e6"
Jan 29 12:06:29.224331 containerd[2030]: time="2025-01-29T12:06:29.224255820Z" level=info msg="CreateContainer within sandbox \"2bdec5b1a614edb7f957873fe59d4a4527363df620c0d5766e76a3c1b7075d42\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 29 12:06:29.254007 containerd[2030]: time="2025-01-29T12:06:29.253813426Z" level=info msg="CreateContainer within sandbox \"2bdec5b1a614edb7f957873fe59d4a4527363df620c0d5766e76a3c1b7075d42\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"1cebf134f717698c72040ff7605f041d71ec4a2df5eda3dad7957576baf0353c\""
Jan 29 12:06:29.255482 containerd[2030]: time="2025-01-29T12:06:29.255078217Z" level=info msg="StartContainer for \"1cebf134f717698c72040ff7605f041d71ec4a2df5eda3dad7957576baf0353c\""
Jan 29 12:06:29.321450 systemd[1]: Started cri-containerd-1cebf134f717698c72040ff7605f041d71ec4a2df5eda3dad7957576baf0353c.scope - libcontainer container 1cebf134f717698c72040ff7605f041d71ec4a2df5eda3dad7957576baf0353c.
Jan 29 12:06:29.380551 containerd[2030]: time="2025-01-29T12:06:29.380303602Z" level=info msg="StartContainer for \"1cebf134f717698c72040ff7605f041d71ec4a2df5eda3dad7957576baf0353c\" returns successfully"
Jan 29 12:06:29.450376 systemd[1]: cri-containerd-34c5cde1bbab53475291425dbd10f66684d18d62c9adf796ddf4c48468bff515.scope: Deactivated successfully.
Jan 29 12:06:29.450891 systemd[1]: cri-containerd-34c5cde1bbab53475291425dbd10f66684d18d62c9adf796ddf4c48468bff515.scope: Consumed 5.907s CPU time, 18.2M memory peak, 0B memory swap peak.
Jan 29 12:06:29.517449 containerd[2030]: time="2025-01-29T12:06:29.516358956Z" level=info msg="shim disconnected" id=34c5cde1bbab53475291425dbd10f66684d18d62c9adf796ddf4c48468bff515 namespace=k8s.io
Jan 29 12:06:29.517449 containerd[2030]: time="2025-01-29T12:06:29.516449439Z" level=warning msg="cleaning up after shim disconnected" id=34c5cde1bbab53475291425dbd10f66684d18d62c9adf796ddf4c48468bff515 namespace=k8s.io
Jan 29 12:06:29.517449 containerd[2030]: time="2025-01-29T12:06:29.516471532Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:06:29.740785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34c5cde1bbab53475291425dbd10f66684d18d62c9adf796ddf4c48468bff515-rootfs.mount: Deactivated successfully.
Jan 29 12:06:30.231508 kubelet[3422]: I0129 12:06:30.231445 3422 scope.go:117] "RemoveContainer" containerID="34c5cde1bbab53475291425dbd10f66684d18d62c9adf796ddf4c48468bff515"
Jan 29 12:06:30.236067 containerd[2030]: time="2025-01-29T12:06:30.236003024Z" level=info msg="CreateContainer within sandbox \"8f98de847ba34e7a4f1e275c47e6b98374c98964807f782274be1b9d4d7d2768\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 29 12:06:30.267246 containerd[2030]: time="2025-01-29T12:06:30.266895443Z" level=info msg="CreateContainer within sandbox \"8f98de847ba34e7a4f1e275c47e6b98374c98964807f782274be1b9d4d7d2768\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d168e66d448cfc21c37372e2d974255e26bff7c179e1846a917e5221068e7214\""
Jan 29 12:06:30.268569 containerd[2030]: time="2025-01-29T12:06:30.268363520Z" level=info msg="StartContainer for \"d168e66d448cfc21c37372e2d974255e26bff7c179e1846a917e5221068e7214\""
Jan 29 12:06:30.331552 systemd[1]: Started cri-containerd-d168e66d448cfc21c37372e2d974255e26bff7c179e1846a917e5221068e7214.scope - libcontainer container d168e66d448cfc21c37372e2d974255e26bff7c179e1846a917e5221068e7214.
Jan 29 12:06:30.424689 containerd[2030]: time="2025-01-29T12:06:30.424605173Z" level=info msg="StartContainer for \"d168e66d448cfc21c37372e2d974255e26bff7c179e1846a917e5221068e7214\" returns successfully"
Jan 29 12:06:33.664518 kubelet[3422]: E0129 12:06:33.664177 3422 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-93?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 29 12:06:34.153552 systemd[1]: cri-containerd-86782770ebfba847fcb97f541781bf768fa7dc34004ed2a228e1484439885bd3.scope: Deactivated successfully.
Jan 29 12:06:34.154900 systemd[1]: cri-containerd-86782770ebfba847fcb97f541781bf768fa7dc34004ed2a228e1484439885bd3.scope: Consumed 4.342s CPU time, 15.1M memory peak, 0B memory swap peak.
Jan 29 12:06:34.203286 containerd[2030]: time="2025-01-29T12:06:34.203163904Z" level=info msg="shim disconnected" id=86782770ebfba847fcb97f541781bf768fa7dc34004ed2a228e1484439885bd3 namespace=k8s.io
Jan 29 12:06:34.204812 containerd[2030]: time="2025-01-29T12:06:34.203573031Z" level=warning msg="cleaning up after shim disconnected" id=86782770ebfba847fcb97f541781bf768fa7dc34004ed2a228e1484439885bd3 namespace=k8s.io
Jan 29 12:06:34.204812 containerd[2030]: time="2025-01-29T12:06:34.203614038Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:06:34.210299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86782770ebfba847fcb97f541781bf768fa7dc34004ed2a228e1484439885bd3-rootfs.mount: Deactivated successfully.
Jan 29 12:06:34.411222 systemd[1]: run-containerd-runc-k8s.io-f687af8bbb0b4709bea9d328d3732865a66f52a5a37d30c67fa7defbbbb47b0a-runc.Ooz9RL.mount: Deactivated successfully.
Jan 29 12:06:35.260684 kubelet[3422]: I0129 12:06:35.260618 3422 scope.go:117] "RemoveContainer" containerID="86782770ebfba847fcb97f541781bf768fa7dc34004ed2a228e1484439885bd3"
Jan 29 12:06:35.264443 containerd[2030]: time="2025-01-29T12:06:35.264327807Z" level=info msg="CreateContainer within sandbox \"6f5ab9cf976ad60abcc99375279c2201fcd57eaee2ed9a2b7d7077d84a3e3bb5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 29 12:06:35.288969 containerd[2030]: time="2025-01-29T12:06:35.288478602Z" level=info msg="CreateContainer within sandbox \"6f5ab9cf976ad60abcc99375279c2201fcd57eaee2ed9a2b7d7077d84a3e3bb5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"50ed8d8a16743801773b42695881005a2f7254f4a49819bcf33298f849522b2f\""
Jan 29 12:06:35.291208 containerd[2030]: time="2025-01-29T12:06:35.289556130Z" level=info msg="StartContainer for \"50ed8d8a16743801773b42695881005a2f7254f4a49819bcf33298f849522b2f\""
Jan 29 12:06:35.352444 systemd[1]: Started cri-containerd-50ed8d8a16743801773b42695881005a2f7254f4a49819bcf33298f849522b2f.scope - libcontainer container 50ed8d8a16743801773b42695881005a2f7254f4a49819bcf33298f849522b2f.
Jan 29 12:06:35.423142 containerd[2030]: time="2025-01-29T12:06:35.422930081Z" level=info msg="StartContainer for \"50ed8d8a16743801773b42695881005a2f7254f4a49819bcf33298f849522b2f\" returns successfully"
Jan 29 12:06:43.666058 kubelet[3422]: E0129 12:06:43.665344 3422 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-93?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"