Jan 30 14:00:29.171632 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 30 14:00:29.171676 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 30 14:00:29.171701 kernel: KASLR disabled due to lack of seed
Jan 30 14:00:29.171717 kernel: efi: EFI v2.7 by EDK II
Jan 30 14:00:29.171733 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Jan 30 14:00:29.171748 kernel: ACPI: Early table checksum verification disabled
Jan 30 14:00:29.171766 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 30 14:00:29.171781 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 30 14:00:29.171797 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 30 14:00:29.171812 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 30 14:00:29.171833 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 30 14:00:29.171849 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 30 14:00:29.171864 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 30 14:00:29.171880 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 30 14:00:29.171899 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 30 14:00:29.171919 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 30 14:00:29.171936 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 30 14:00:29.171952 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 30 14:00:29.171969 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 30 14:00:29.171985 kernel: printk: bootconsole [uart0] enabled
Jan 30 14:00:29.172001 kernel: NUMA: Failed to initialise from firmware
Jan 30 14:00:29.172017 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 30 14:00:29.172034 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 30 14:00:29.172050 kernel: Zone ranges:
Jan 30 14:00:29.172066 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 30 14:00:29.172082 kernel: DMA32 empty
Jan 30 14:00:29.172103 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 30 14:00:29.172119 kernel: Movable zone start for each node
Jan 30 14:00:29.172135 kernel: Early memory node ranges
Jan 30 14:00:29.172152 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 30 14:00:29.172168 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 30 14:00:29.172184 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 30 14:00:29.172200 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 30 14:00:29.172217 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 30 14:00:29.172233 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 30 14:00:29.172249 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 30 14:00:29.172265 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 30 14:00:29.172281 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 30 14:00:29.172329 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 30 14:00:29.172350 kernel: psci: probing for conduit method from ACPI.
Jan 30 14:00:29.172376 kernel: psci: PSCIv1.0 detected in firmware.
Jan 30 14:00:29.172394 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 14:00:29.172412 kernel: psci: Trusted OS migration not required
Jan 30 14:00:29.172433 kernel: psci: SMC Calling Convention v1.1
Jan 30 14:00:29.172451 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 14:00:29.172468 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 14:00:29.172486 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 30 14:00:29.172503 kernel: Detected PIPT I-cache on CPU0
Jan 30 14:00:29.172520 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 14:00:29.172537 kernel: CPU features: detected: Spectre-v2
Jan 30 14:00:29.172555 kernel: CPU features: detected: Spectre-v3a
Jan 30 14:00:29.172572 kernel: CPU features: detected: Spectre-BHB
Jan 30 14:00:29.172589 kernel: CPU features: detected: ARM erratum 1742098
Jan 30 14:00:29.172606 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 30 14:00:29.172628 kernel: alternatives: applying boot alternatives
Jan 30 14:00:29.172648 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 14:00:29.172666 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 14:00:29.172684 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 14:00:29.172701 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 14:00:29.172718 kernel: Fallback order for Node 0: 0
Jan 30 14:00:29.172736 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 30 14:00:29.172753 kernel: Policy zone: Normal
Jan 30 14:00:29.172770 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 14:00:29.172787 kernel: software IO TLB: area num 2.
Jan 30 14:00:29.172804 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 30 14:00:29.172827 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Jan 30 14:00:29.172844 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 14:00:29.172862 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 14:00:29.172879 kernel: rcu: RCU event tracing is enabled.
Jan 30 14:00:29.172897 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 14:00:29.172915 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 14:00:29.172933 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 14:00:29.172950 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 14:00:29.172968 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 14:00:29.172985 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 14:00:29.173002 kernel: GICv3: 96 SPIs implemented
Jan 30 14:00:29.173023 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 14:00:29.173041 kernel: Root IRQ handler: gic_handle_irq
Jan 30 14:00:29.173058 kernel: GICv3: GICv3 features: 16 PPIs
Jan 30 14:00:29.173075 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 30 14:00:29.173092 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 30 14:00:29.173109 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 30 14:00:29.173126 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 30 14:00:29.173144 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 30 14:00:29.173161 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 30 14:00:29.173179 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 30 14:00:29.173196 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 14:00:29.173213 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 30 14:00:29.173235 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 30 14:00:29.173252 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 30 14:00:29.173270 kernel: Console: colour dummy device 80x25
Jan 30 14:00:29.173287 kernel: printk: console [tty1] enabled
Jan 30 14:00:29.173339 kernel: ACPI: Core revision 20230628
Jan 30 14:00:29.173360 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 30 14:00:29.173379 kernel: pid_max: default: 32768 minimum: 301
Jan 30 14:00:29.173397 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 14:00:29.173414 kernel: landlock: Up and running.
Jan 30 14:00:29.173438 kernel: SELinux: Initializing.
Jan 30 14:00:29.173456 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:00:29.173474 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:00:29.173491 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:00:29.173509 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:00:29.173527 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 14:00:29.173545 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 14:00:29.173563 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 30 14:00:29.173580 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 30 14:00:29.173602 kernel: Remapping and enabling EFI services.
Jan 30 14:00:29.173620 kernel: smp: Bringing up secondary CPUs ...
Jan 30 14:00:29.173637 kernel: Detected PIPT I-cache on CPU1
Jan 30 14:00:29.173655 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 30 14:00:29.173673 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 30 14:00:29.173690 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 30 14:00:29.173708 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 14:00:29.173726 kernel: SMP: Total of 2 processors activated.
Jan 30 14:00:29.173743 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 14:00:29.173765 kernel: CPU features: detected: 32-bit EL1 Support
Jan 30 14:00:29.173783 kernel: CPU features: detected: CRC32 instructions
Jan 30 14:00:29.173801 kernel: CPU: All CPU(s) started at EL1
Jan 30 14:00:29.173830 kernel: alternatives: applying system-wide alternatives
Jan 30 14:00:29.173852 kernel: devtmpfs: initialized
Jan 30 14:00:29.173871 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 14:00:29.173889 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 14:00:29.173908 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 14:00:29.173926 kernel: SMBIOS 3.0.0 present.
Jan 30 14:00:29.173944 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 30 14:00:29.173967 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 14:00:29.173986 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 14:00:29.174005 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 14:00:29.174024 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 14:00:29.174042 kernel: audit: initializing netlink subsys (disabled)
Jan 30 14:00:29.174060 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1
Jan 30 14:00:29.174079 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 14:00:29.174102 kernel: cpuidle: using governor menu
Jan 30 14:00:29.174120 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 14:00:29.174139 kernel: ASID allocator initialised with 65536 entries
Jan 30 14:00:29.174157 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 14:00:29.174176 kernel: Serial: AMBA PL011 UART driver
Jan 30 14:00:29.174194 kernel: Modules: 17520 pages in range for non-PLT usage
Jan 30 14:00:29.174212 kernel: Modules: 509040 pages in range for PLT usage
Jan 30 14:00:29.174230 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 14:00:29.174250 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 14:00:29.174274 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 14:00:29.174293 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 14:00:29.174341 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 14:00:29.174361 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 14:00:29.174380 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 14:00:29.174399 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 14:00:29.174417 kernel: ACPI: Added _OSI(Module Device)
Jan 30 14:00:29.174436 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 14:00:29.174455 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 14:00:29.174479 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 14:00:29.174498 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 14:00:29.174516 kernel: ACPI: Interpreter enabled
Jan 30 14:00:29.174535 kernel: ACPI: Using GIC for interrupt routing
Jan 30 14:00:29.174568 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 14:00:29.174590 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 30 14:00:29.174894 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 14:00:29.175103 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 14:00:29.179374 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 14:00:29.179657 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 30 14:00:29.179865 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 30 14:00:29.179892 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 30 14:00:29.179912 kernel: acpiphp: Slot [1] registered
Jan 30 14:00:29.179931 kernel: acpiphp: Slot [2] registered
Jan 30 14:00:29.179950 kernel: acpiphp: Slot [3] registered
Jan 30 14:00:29.179968 kernel: acpiphp: Slot [4] registered
Jan 30 14:00:29.179996 kernel: acpiphp: Slot [5] registered
Jan 30 14:00:29.180015 kernel: acpiphp: Slot [6] registered
Jan 30 14:00:29.180033 kernel: acpiphp: Slot [7] registered
Jan 30 14:00:29.180052 kernel: acpiphp: Slot [8] registered
Jan 30 14:00:29.180070 kernel: acpiphp: Slot [9] registered
Jan 30 14:00:29.180088 kernel: acpiphp: Slot [10] registered
Jan 30 14:00:29.180106 kernel: acpiphp: Slot [11] registered
Jan 30 14:00:29.180125 kernel: acpiphp: Slot [12] registered
Jan 30 14:00:29.180143 kernel: acpiphp: Slot [13] registered
Jan 30 14:00:29.180161 kernel: acpiphp: Slot [14] registered
Jan 30 14:00:29.180184 kernel: acpiphp: Slot [15] registered
Jan 30 14:00:29.180202 kernel: acpiphp: Slot [16] registered
Jan 30 14:00:29.180220 kernel: acpiphp: Slot [17] registered
Jan 30 14:00:29.180239 kernel: acpiphp: Slot [18] registered
Jan 30 14:00:29.180257 kernel: acpiphp: Slot [19] registered
Jan 30 14:00:29.180275 kernel: acpiphp: Slot [20] registered
Jan 30 14:00:29.180293 kernel: acpiphp: Slot [21] registered
Jan 30 14:00:29.182390 kernel: acpiphp: Slot [22] registered
Jan 30 14:00:29.182412 kernel: acpiphp: Slot [23] registered
Jan 30 14:00:29.182441 kernel: acpiphp: Slot [24] registered
Jan 30 14:00:29.182460 kernel: acpiphp: Slot [25] registered
Jan 30 14:00:29.182479 kernel: acpiphp: Slot [26] registered
Jan 30 14:00:29.182497 kernel: acpiphp: Slot [27] registered
Jan 30 14:00:29.182515 kernel: acpiphp: Slot [28] registered
Jan 30 14:00:29.182533 kernel: acpiphp: Slot [29] registered
Jan 30 14:00:29.182567 kernel: acpiphp: Slot [30] registered
Jan 30 14:00:29.182591 kernel: acpiphp: Slot [31] registered
Jan 30 14:00:29.182610 kernel: PCI host bridge to bus 0000:00
Jan 30 14:00:29.182866 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 30 14:00:29.183120 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 14:00:29.183326 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 30 14:00:29.183527 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 30 14:00:29.183769 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 30 14:00:29.184005 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 30 14:00:29.184219 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 30 14:00:29.187633 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 30 14:00:29.187866 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 30 14:00:29.188070 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 30 14:00:29.188287 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 30 14:00:29.192697 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 30 14:00:29.192910 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 30 14:00:29.193123 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 30 14:00:29.193359 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 30 14:00:29.193569 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 30 14:00:29.193774 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 30 14:00:29.193982 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 30 14:00:29.194185 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 30 14:00:29.196516 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 30 14:00:29.196736 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 30 14:00:29.196919 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 14:00:29.197101 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 30 14:00:29.197127 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 14:00:29.197147 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 14:00:29.197167 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 14:00:29.197186 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 14:00:29.197204 kernel: iommu: Default domain type: Translated
Jan 30 14:00:29.197223 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 14:00:29.197247 kernel: efivars: Registered efivars operations
Jan 30 14:00:29.197265 kernel: vgaarb: loaded
Jan 30 14:00:29.197284 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 14:00:29.197322 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 14:00:29.197344 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 14:00:29.197363 kernel: pnp: PnP ACPI init
Jan 30 14:00:29.197598 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 30 14:00:29.197633 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 14:00:29.197661 kernel: NET: Registered PF_INET protocol family
Jan 30 14:00:29.197681 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 14:00:29.197700 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 14:00:29.197720 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 14:00:29.197739 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 14:00:29.197758 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 14:00:29.197777 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 14:00:29.197797 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:00:29.197816 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:00:29.197839 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 14:00:29.197859 kernel: PCI: CLS 0 bytes, default 64
Jan 30 14:00:29.197877 kernel: kvm [1]: HYP mode not available
Jan 30 14:00:29.197896 kernel: Initialise system trusted keyrings
Jan 30 14:00:29.197915 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 14:00:29.197934 kernel: Key type asymmetric registered
Jan 30 14:00:29.197953 kernel: Asymmetric key parser 'x509' registered
Jan 30 14:00:29.197972 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 14:00:29.197991 kernel: io scheduler mq-deadline registered
Jan 30 14:00:29.198015 kernel: io scheduler kyber registered
Jan 30 14:00:29.198034 kernel: io scheduler bfq registered
Jan 30 14:00:29.198255 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 30 14:00:29.198283 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 30 14:00:29.200364 kernel: ACPI: button: Power Button [PWRB]
Jan 30 14:00:29.200403 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 30 14:00:29.200423 kernel: ACPI: button: Sleep Button [SLPB]
Jan 30 14:00:29.200442 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 14:00:29.200474 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 30 14:00:29.200763 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 30 14:00:29.200792 kernel: printk: console [ttyS0] disabled
Jan 30 14:00:29.200812 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 30 14:00:29.200831 kernel: printk: console [ttyS0] enabled
Jan 30 14:00:29.200850 kernel: printk: bootconsole [uart0] disabled
Jan 30 14:00:29.200868 kernel: thunder_xcv, ver 1.0
Jan 30 14:00:29.200887 kernel: thunder_bgx, ver 1.0
Jan 30 14:00:29.200906 kernel: nicpf, ver 1.0
Jan 30 14:00:29.200932 kernel: nicvf, ver 1.0
Jan 30 14:00:29.201268 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 14:00:29.201534 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T14:00:28 UTC (1738245628)
Jan 30 14:00:29.201563 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 14:00:29.201583 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 30 14:00:29.201602 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 14:00:29.201622 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 14:00:29.201641 kernel: NET: Registered PF_INET6 protocol family
Jan 30 14:00:29.201669 kernel: Segment Routing with IPv6
Jan 30 14:00:29.201688 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 14:00:29.201706 kernel: NET: Registered PF_PACKET protocol family
Jan 30 14:00:29.201724 kernel: Key type dns_resolver registered
Jan 30 14:00:29.201743 kernel: registered taskstats version 1
Jan 30 14:00:29.201762 kernel: Loading compiled-in X.509 certificates
Jan 30 14:00:29.201781 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 30 14:00:29.201799 kernel: Key type .fscrypt registered
Jan 30 14:00:29.201817 kernel: Key type fscrypt-provisioning registered
Jan 30 14:00:29.201839 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 14:00:29.201858 kernel: ima: Allocated hash algorithm: sha1
Jan 30 14:00:29.201877 kernel: ima: No architecture policies found
Jan 30 14:00:29.201895 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 14:00:29.201914 kernel: clk: Disabling unused clocks
Jan 30 14:00:29.201932 kernel: Freeing unused kernel memory: 39360K
Jan 30 14:00:29.201950 kernel: Run /init as init process
Jan 30 14:00:29.201969 kernel: with arguments:
Jan 30 14:00:29.201987 kernel: /init
Jan 30 14:00:29.202005 kernel: with environment:
Jan 30 14:00:29.202027 kernel: HOME=/
Jan 30 14:00:29.202046 kernel: TERM=linux
Jan 30 14:00:29.202063 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 14:00:29.202086 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:00:29.202109 systemd[1]: Detected virtualization amazon.
Jan 30 14:00:29.202130 systemd[1]: Detected architecture arm64.
Jan 30 14:00:29.202150 systemd[1]: Running in initrd.
Jan 30 14:00:29.202174 systemd[1]: No hostname configured, using default hostname.
Jan 30 14:00:29.202194 systemd[1]: Hostname set to .
Jan 30 14:00:29.202214 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 14:00:29.202234 systemd[1]: Queued start job for default target initrd.target.
Jan 30 14:00:29.202254 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:00:29.202274 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:00:29.204192 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 14:00:29.204246 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:00:29.204278 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 14:00:29.204319 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 14:00:29.204349 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 14:00:29.204371 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 14:00:29.204392 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:00:29.204412 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:00:29.204432 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:00:29.204459 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:00:29.204479 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:00:29.204499 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:00:29.204519 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:00:29.204540 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:00:29.204561 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 14:00:29.204581 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 14:00:29.204601 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:00:29.204622 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:00:29.204647 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:00:29.204667 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 14:00:29.204687 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 14:00:29.204708 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:00:29.204728 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 14:00:29.204748 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 14:00:29.204768 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:00:29.204788 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:00:29.204813 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:00:29.204833 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 14:00:29.204854 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:00:29.204918 systemd-journald[250]: Collecting audit messages is disabled.
Jan 30 14:00:29.204967 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 14:00:29.204990 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 14:00:29.205010 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 14:00:29.205030 systemd-journald[250]: Journal started
Jan 30 14:00:29.205072 systemd-journald[250]: Runtime Journal (/run/log/journal/ec2f9760801b8cf39d69cbf6323cf721) is 8.0M, max 75.3M, 67.3M free.
Jan 30 14:00:29.181251 systemd-modules-load[251]: Inserted module 'overlay'
Jan 30 14:00:29.210514 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 14:00:29.210590 kernel: Bridge firewalling registered
Jan 30 14:00:29.216106 systemd-modules-load[251]: Inserted module 'br_netfilter'
Jan 30 14:00:29.220097 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:00:29.231151 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:00:29.235638 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:00:29.251614 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:00:29.260500 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 14:00:29.281636 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 14:00:29.294679 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 14:00:29.301721 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:00:29.312940 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 14:00:29.340763 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:00:29.347513 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:00:29.352180 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:00:29.374732 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 14:00:29.386954 dracut-cmdline[282]: dracut-dracut-053
Jan 30 14:00:29.398866 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 14:00:29.455058 systemd-resolved[287]: Positive Trust Anchors:
Jan 30 14:00:29.455096 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 14:00:29.455161 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 14:00:29.550352 kernel: SCSI subsystem initialized
Jan 30 14:00:29.557352 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 14:00:29.570431 kernel: iscsi: registered transport (tcp)
Jan 30 14:00:29.592382 kernel: iscsi: registered transport (qla4xxx)
Jan 30 14:00:29.592454 kernel: QLogic iSCSI HBA Driver
Jan 30 14:00:29.671342 kernel: random: crng init done
Jan 30 14:00:29.669642 systemd-resolved[287]: Defaulting to hostname 'linux'.
Jan 30 14:00:29.673225 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 14:00:29.674085 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:00:29.699920 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:00:29.713712 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 14:00:29.745411 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 14:00:29.745541 kernel: device-mapper: uevent: version 1.0.3
Jan 30 14:00:29.747126 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 14:00:29.812344 kernel: raid6: neonx8 gen() 6758 MB/s
Jan 30 14:00:29.829332 kernel: raid6: neonx4 gen() 6559 MB/s
Jan 30 14:00:29.846330 kernel: raid6: neonx2 gen() 5452 MB/s
Jan 30 14:00:29.863331 kernel: raid6: neonx1 gen() 3966 MB/s
Jan 30 14:00:29.880332 kernel: raid6: int64x8 gen() 3815 MB/s
Jan 30 14:00:29.897331 kernel: raid6: int64x4 gen() 3726 MB/s
Jan 30 14:00:29.914331 kernel: raid6: int64x2 gen() 3618 MB/s
Jan 30 14:00:29.932135 kernel: raid6: int64x1 gen() 2761 MB/s
Jan 30 14:00:29.932168 kernel: raid6: using algorithm neonx8 gen() 6758 MB/s
Jan 30 14:00:29.950186 kernel: raid6: .... xor() 4823 MB/s, rmw enabled
Jan 30 14:00:29.950224 kernel: raid6: using neon recovery algorithm
Jan 30 14:00:29.958711 kernel: xor: measuring software checksum speed
Jan 30 14:00:29.958761 kernel: 8regs : 10971 MB/sec
Jan 30 14:00:29.959886 kernel: 32regs : 11941 MB/sec
Jan 30 14:00:29.961140 kernel: arm64_neon : 9562 MB/sec
Jan 30 14:00:29.961172 kernel: xor: using function: 32regs (11941 MB/sec)
Jan 30 14:00:30.044343 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 14:00:30.063924 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:00:30.083610 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:00:30.126732 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Jan 30 14:00:30.134688 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:00:30.154894 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 14:00:30.182790 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Jan 30 14:00:30.237658 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:00:30.250650 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:00:30.385033 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:00:30.397822 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 14:00:30.441159 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:00:30.446145 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:00:30.462765 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:00:30.467676 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:00:30.487614 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 14:00:30.530173 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:00:30.596372 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 30 14:00:30.596434 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 30 14:00:30.630917 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 30 14:00:30.631185 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 30 14:00:30.631458 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:ea:ad:02:7e:19
Jan 30 14:00:30.631695 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 30 14:00:30.606752 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:00:30.635510 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 30 14:00:30.606980 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:00:30.609554 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:00:30.611593 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:00:30.611840 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:00:30.613995 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:00:30.635844 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:00:30.654342 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 30 14:00:30.665247 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 14:00:30.665336 kernel: GPT:9289727 != 16777215
Jan 30 14:00:30.665364 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 14:00:30.667804 kernel: GPT:9289727 != 16777215
Jan 30 14:00:30.667852 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 14:00:30.667878 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 14:00:30.672322 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:00:30.679023 (udev-worker)[536]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 14:00:30.684637 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:00:30.744375 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:00:30.761209 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (516)
Jan 30 14:00:30.792352 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (533)
Jan 30 14:00:30.856527 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 30 14:00:30.904525 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 30 14:00:30.933010 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 30 14:00:30.947326 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 30 14:00:30.949683 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 30 14:00:30.964565 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 14:00:30.980360 disk-uuid[662]: Primary Header is updated.
Jan 30 14:00:30.980360 disk-uuid[662]: Secondary Entries is updated.
Jan 30 14:00:30.980360 disk-uuid[662]: Secondary Header is updated.
Jan 30 14:00:30.992377 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 14:00:31.001343 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 14:00:31.010363 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 14:00:32.011363 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 14:00:32.011854 disk-uuid[663]: The operation has completed successfully.
Jan 30 14:00:32.213564 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 14:00:32.213790 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 14:00:32.261602 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 14:00:32.279856 sh[1004]: Success
Jan 30 14:00:32.305337 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 14:00:32.422775 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 14:00:32.427657 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 14:00:32.437381 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 14:00:32.475981 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08
Jan 30 14:00:32.476052 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:00:32.476080 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 14:00:32.476106 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 14:00:32.478317 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 14:00:32.616342 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 30 14:00:32.641451 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 14:00:32.645215 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 14:00:32.656585 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 14:00:32.667657 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 14:00:32.696893 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:00:32.696973 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:00:32.698204 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 14:00:32.705412 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 14:00:32.723285 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 14:00:32.726360 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:00:32.740392 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 14:00:32.755830 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 14:00:32.859057 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 14:00:32.877626 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 14:00:32.925466 systemd-networkd[1197]: lo: Link UP
Jan 30 14:00:32.925479 systemd-networkd[1197]: lo: Gained carrier
Jan 30 14:00:32.930902 systemd-networkd[1197]: Enumeration completed
Jan 30 14:00:32.931057 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 14:00:32.933983 systemd[1]: Reached target network.target - Network.
Jan 30 14:00:32.939874 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:00:32.939893 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 14:00:32.948406 systemd-networkd[1197]: eth0: Link UP
Jan 30 14:00:32.948420 systemd-networkd[1197]: eth0: Gained carrier
Jan 30 14:00:32.948438 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:00:32.962409 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.25.132/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 30 14:00:33.146260 ignition[1109]: Ignition 2.19.0
Jan 30 14:00:33.146288 ignition[1109]: Stage: fetch-offline
Jan 30 14:00:33.147881 ignition[1109]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:00:33.147909 ignition[1109]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:00:33.152823 ignition[1109]: Ignition finished successfully
Jan 30 14:00:33.156357 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:00:33.168705 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 14:00:33.191231 ignition[1205]: Ignition 2.19.0
Jan 30 14:00:33.191259 ignition[1205]: Stage: fetch
Jan 30 14:00:33.192510 ignition[1205]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:00:33.192536 ignition[1205]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:00:33.192688 ignition[1205]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:00:33.208404 ignition[1205]: PUT result: OK
Jan 30 14:00:33.211190 ignition[1205]: parsed url from cmdline: ""
Jan 30 14:00:33.211212 ignition[1205]: no config URL provided
Jan 30 14:00:33.211228 ignition[1205]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 14:00:33.211281 ignition[1205]: no config at "/usr/lib/ignition/user.ign"
Jan 30 14:00:33.211342 ignition[1205]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:00:33.213095 ignition[1205]: PUT result: OK
Jan 30 14:00:33.213176 ignition[1205]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 30 14:00:33.215387 ignition[1205]: GET result: OK
Jan 30 14:00:33.217064 ignition[1205]: parsing config with SHA512: abd39885cb11d6723da761dd281f74e7b6c9fe81b231511200b06c1ff5450770792683d3e605198d37369ad7ba03478b40620eaa0e41edecd68fb94578281ea6
Jan 30 14:00:33.230158 unknown[1205]: fetched base config from "system"
Jan 30 14:00:33.230232 unknown[1205]: fetched base config from "system"
Jan 30 14:00:33.231932 unknown[1205]: fetched user config from "aws"
Jan 30 14:00:33.234994 ignition[1205]: fetch: fetch complete
Jan 30 14:00:33.235008 ignition[1205]: fetch: fetch passed
Jan 30 14:00:33.235116 ignition[1205]: Ignition finished successfully
Jan 30 14:00:33.245357 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 14:00:33.258734 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 14:00:33.281349 ignition[1211]: Ignition 2.19.0
Jan 30 14:00:33.281377 ignition[1211]: Stage: kargs
Jan 30 14:00:33.282000 ignition[1211]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:00:33.282025 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:00:33.282180 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:00:33.285082 ignition[1211]: PUT result: OK
Jan 30 14:00:33.294978 ignition[1211]: kargs: kargs passed
Jan 30 14:00:33.295275 ignition[1211]: Ignition finished successfully
Jan 30 14:00:33.301350 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 14:00:33.318593 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 14:00:33.343994 ignition[1217]: Ignition 2.19.0
Jan 30 14:00:33.344020 ignition[1217]: Stage: disks
Jan 30 14:00:33.344669 ignition[1217]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:00:33.344695 ignition[1217]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:00:33.344853 ignition[1217]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:00:33.353518 ignition[1217]: PUT result: OK
Jan 30 14:00:33.357850 ignition[1217]: disks: disks passed
Jan 30 14:00:33.358003 ignition[1217]: Ignition finished successfully
Jan 30 14:00:33.361692 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 14:00:33.365721 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 14:00:33.367837 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 14:00:33.373933 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 14:00:33.375764 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 14:00:33.377605 systemd[1]: Reached target basic.target - Basic System.
Jan 30 14:00:33.392679 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 14:00:33.434625 systemd-fsck[1226]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 14:00:33.438349 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 14:00:33.450512 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 14:00:33.532335 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none.
Jan 30 14:00:33.533832 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 14:00:33.537725 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 14:00:33.546560 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:00:33.573511 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 14:00:33.581471 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 14:00:33.593144 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1245)
Jan 30 14:00:33.593184 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:00:33.593212 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:00:33.593239 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 14:00:33.581683 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 14:00:33.581735 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:00:33.602347 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 14:00:33.612743 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 14:00:33.621336 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 14:00:33.622236 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:00:33.986823 initrd-setup-root[1270]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 14:00:34.017698 initrd-setup-root[1277]: cut: /sysroot/etc/group: No such file or directory
Jan 30 14:00:34.026643 initrd-setup-root[1284]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 14:00:34.035248 initrd-setup-root[1291]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 14:00:34.169437 systemd-networkd[1197]: eth0: Gained IPv6LL
Jan 30 14:00:34.407703 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 14:00:34.419532 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 14:00:34.433850 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 14:00:34.449502 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:00:34.448268 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 14:00:34.492523 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 14:00:34.498386 ignition[1359]: INFO : Ignition 2.19.0
Jan 30 14:00:34.498386 ignition[1359]: INFO : Stage: mount
Jan 30 14:00:34.501495 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:00:34.501495 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:00:34.505499 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:00:34.508410 ignition[1359]: INFO : PUT result: OK
Jan 30 14:00:34.512728 ignition[1359]: INFO : mount: mount passed
Jan 30 14:00:34.514415 ignition[1359]: INFO : Ignition finished successfully
Jan 30 14:00:34.517996 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 14:00:34.527511 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 14:00:34.566712 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:00:34.589360 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1370)
Jan 30 14:00:34.593166 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:00:34.593207 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:00:34.593233 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 14:00:34.599337 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 14:00:34.603282 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:00:34.636125 ignition[1386]: INFO : Ignition 2.19.0
Jan 30 14:00:34.636125 ignition[1386]: INFO : Stage: files
Jan 30 14:00:34.639267 ignition[1386]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:00:34.639267 ignition[1386]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:00:34.639267 ignition[1386]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:00:34.646218 ignition[1386]: INFO : PUT result: OK
Jan 30 14:00:34.650936 ignition[1386]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 14:00:34.662124 ignition[1386]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 14:00:34.662124 ignition[1386]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 14:00:34.717424 ignition[1386]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 14:00:34.720169 ignition[1386]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 14:00:34.723024 unknown[1386]: wrote ssh authorized keys file for user: core
Jan 30 14:00:34.725476 ignition[1386]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 14:00:34.727919 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 30 14:00:34.727919 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 30 14:00:34.835419 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 14:00:35.005370 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 30 14:00:35.008805 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 14:00:35.012269 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 14:00:35.015433 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 14:00:35.018492 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 14:00:35.018492 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 14:00:35.024622 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 14:00:35.027669 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 14:00:35.031024 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 14:00:35.034359 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:00:35.037573 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:00:35.037573 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 30 14:00:35.037573 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 30 14:00:35.037573 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 30 14:00:35.037573 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Jan 30 14:00:35.593533 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 30 14:00:36.071228 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 30 14:00:36.071228 ignition[1386]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 30 14:00:36.077793 ignition[1386]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 14:00:36.077793 ignition[1386]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 14:00:36.077793 ignition[1386]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 30 14:00:36.077793 ignition[1386]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 14:00:36.077793 ignition[1386]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 14:00:36.077793 ignition[1386]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:00:36.077793 ignition[1386]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:00:36.077793 ignition[1386]: INFO : files: files passed
Jan 30 14:00:36.077793 ignition[1386]: INFO : Ignition finished successfully
Jan 30 14:00:36.101778 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 14:00:36.117735 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 14:00:36.125866 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 14:00:36.141623 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 14:00:36.143740 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 14:00:36.156071 initrd-setup-root-after-ignition[1416]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:00:36.156071 initrd-setup-root-after-ignition[1416]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:00:36.163953 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:00:36.169376 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:00:36.172725 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 14:00:36.191687 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 14:00:36.256661 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 14:00:36.257050 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 14:00:36.261188 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 14:00:36.263126 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 14:00:36.270659 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 14:00:36.287542 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 14:00:36.314359 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:00:36.329075 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 14:00:36.351664 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:00:36.354291 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:00:36.359638 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 14:00:36.362845 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 14:00:36.363075 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:00:36.369969 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 14:00:36.371990 systemd[1]: Stopped target basic.target - Basic System. Jan 30 14:00:36.375525 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 14:00:36.377896 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:00:36.381382 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 14:00:36.384125 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 14:00:36.393256 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:00:36.396560 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 14:00:36.401659 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 14:00:36.404282 systemd[1]: Stopped target swap.target - Swaps. Jan 30 14:00:36.408377 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 14:00:36.408772 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:00:36.415032 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:00:36.418861 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:00:36.423038 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 14:00:36.425425 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:00:36.427989 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 14:00:36.428284 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 14:00:36.435597 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 14:00:36.436014 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:00:36.442395 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 14:00:36.442785 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 14:00:36.453720 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 14:00:36.469703 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jan 30 14:00:36.473290 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 14:00:36.473607 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:00:36.478735 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 14:00:36.478970 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:00:36.498898 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 14:00:36.499652 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 14:00:36.513064 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 14:00:36.520915 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 14:00:36.523616 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 14:00:36.533222 ignition[1441]: INFO : Ignition 2.19.0 Jan 30 14:00:36.533222 ignition[1441]: INFO : Stage: umount Jan 30 14:00:36.537607 ignition[1441]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:00:36.537607 ignition[1441]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 14:00:36.537607 ignition[1441]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 14:00:36.554690 ignition[1441]: INFO : PUT result: OK Jan 30 14:00:36.559199 ignition[1441]: INFO : umount: umount passed Jan 30 14:00:36.561823 ignition[1441]: INFO : Ignition finished successfully Jan 30 14:00:36.563555 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 14:00:36.564249 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 14:00:36.567742 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 14:00:36.567836 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 14:00:36.570729 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 14:00:36.570809 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 14:00:36.572661 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 14:00:36.572735 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 14:00:36.575338 systemd[1]: Stopped target network.target - Network. Jan 30 14:00:36.576925 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 14:00:36.577007 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:00:36.579236 systemd[1]: Stopped target paths.target - Path Units. Jan 30 14:00:36.580815 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 14:00:36.582468 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:00:36.585579 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 14:00:36.587548 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 14:00:36.589790 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 14:00:36.589865 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:00:36.618337 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 14:00:36.618419 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:00:36.620476 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 14:00:36.620557 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 14:00:36.627176 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 14:00:36.627261 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jan 30 14:00:36.634206 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 14:00:36.634291 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 14:00:36.636518 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 14:00:36.638488 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 14:00:36.648355 systemd-networkd[1197]: eth0: DHCPv6 lease lost Jan 30 14:00:36.651659 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 14:00:36.651895 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 14:00:36.657481 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 14:00:36.657602 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:00:36.672607 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 14:00:36.672884 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 14:00:36.672988 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:00:36.673371 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:00:36.678442 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 14:00:36.678693 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 14:00:36.693290 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:00:36.693454 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:00:36.707384 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 14:00:36.707495 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 14:00:36.712701 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 14:00:36.712795 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:00:36.720928 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 14:00:36.723230 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:00:36.729056 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 14:00:36.730353 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 14:00:36.753054 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 14:00:36.753160 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 14:00:36.755502 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 14:00:36.755567 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:00:36.757510 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 14:00:36.757595 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:00:36.760142 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 14:00:36.760225 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 14:00:36.776014 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:00:36.776110 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:00:36.791738 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 14:00:36.795498 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jan 30 14:00:36.795615 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:00:36.798265 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 14:00:36.798374 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:00:36.801103 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 14:00:36.801177 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:00:36.804011 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:00:36.804085 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:00:36.847754 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 14:00:36.848158 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 14:00:36.854791 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 14:00:36.870701 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 14:00:36.887270 systemd[1]: Switching root. Jan 30 14:00:36.923421 systemd-journald[250]: Journal stopped Jan 30 14:00:38.907614 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). Jan 30 14:00:38.907742 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 14:00:38.907847 kernel: SELinux: policy capability open_perms=1 Jan 30 14:00:38.907883 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 14:00:38.907915 kernel: SELinux: policy capability always_check_network=0 Jan 30 14:00:38.907947 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 14:00:38.907977 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 14:00:38.908016 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 14:00:38.908051 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 14:00:38.908082 kernel: audit: type=1403 audit(1738245637.351:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 14:00:38.908125 systemd[1]: Successfully loaded SELinux policy in 49.387ms. Jan 30 14:00:38.908177 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.085ms. Jan 30 14:00:38.908212 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 14:00:38.908243 systemd[1]: Detected virtualization amazon. Jan 30 14:00:38.908275 systemd[1]: Detected architecture arm64. Jan 30 14:00:38.908337 systemd[1]: Detected first boot. Jan 30 14:00:38.908372 systemd[1]: Initializing machine ID from VM UUID. Jan 30 14:00:38.908408 zram_generator::config[1483]: No configuration found. Jan 30 14:00:38.908443 systemd[1]: Populated /etc with preset unit settings. Jan 30 14:00:38.908476 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 14:00:38.908509 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 14:00:38.908539 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 14:00:38.908571 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 14:00:38.908605 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
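The "SELinux: policy capability" lines above enumerate what the freshly loaded policy enables; the kernel exposes the same flags at runtime through selinuxfs. A small sketch that reads them back, assuming selinuxfs is mounted at its usual location, /sys/fs/selinux:

from pathlib import Path

CAPS = Path("/sys/fs/selinux/policy_capabilities")

def policy_capabilities():
    # Each file under policy_capabilities holds "0" or "1" for one
    # capability, e.g. network_peer_controls or open_perms.
    return {p.name: p.read_text().strip() == "1"
            for p in sorted(CAPS.iterdir())}

if __name__ == "__main__":
    for name, enabled in policy_capabilities().items():
        print(f"SELinux: policy capability {name}={int(enabled)}")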
Jan 30 14:00:38.908636 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 14:00:38.908672 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 14:00:38.908705 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 14:00:38.908745 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 14:00:38.908776 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 14:00:38.908808 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 14:00:38.908842 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:00:38.908874 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:00:38.908904 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 14:00:38.908940 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 14:00:38.908971 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 14:00:38.909004 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 14:00:38.909035 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 14:00:38.909065 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:00:38.909097 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 14:00:38.909130 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 14:00:38.909162 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 14:00:38.909196 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 14:00:38.909226 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:00:38.909258 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:00:38.909289 systemd[1]: Reached target slices.target - Slice Units. Jan 30 14:00:38.911362 systemd[1]: Reached target swap.target - Swaps. Jan 30 14:00:38.911398 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 14:00:38.911464 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 14:00:38.911499 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:00:38.911531 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 14:00:38.911570 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:00:38.911603 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 14:00:38.911635 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 14:00:38.911664 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 14:00:38.911696 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 14:00:38.911727 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 14:00:38.911760 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 14:00:38.911790 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 30 14:00:38.911823 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 14:00:38.911860 systemd[1]: Reached target machines.target - Containers. Jan 30 14:00:38.911892 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 14:00:38.911925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:00:38.911957 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 14:00:38.911990 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 14:00:38.912022 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:00:38.912054 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:00:38.912087 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:00:38.912122 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 14:00:38.912152 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:00:38.912185 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 14:00:38.912217 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 14:00:38.912249 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 14:00:38.912279 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 14:00:38.912340 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 14:00:38.912376 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 14:00:38.912407 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 14:00:38.912444 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 14:00:38.914383 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 14:00:38.914427 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:00:38.914461 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 14:00:38.914494 systemd[1]: Stopped verity-setup.service. Jan 30 14:00:38.914525 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 14:00:38.914572 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 14:00:38.914619 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 14:00:38.914654 kernel: fuse: init (API version 7.39) Jan 30 14:00:38.914692 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 14:00:38.914726 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 14:00:38.914758 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 14:00:38.914788 kernel: loop: module loaded Jan 30 14:00:38.914816 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:00:38.914852 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 14:00:38.914882 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 14:00:38.914912 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 30 14:00:38.914946 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:00:38.914976 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:00:38.915006 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:00:38.915036 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 14:00:38.915108 systemd-journald[1572]: Collecting audit messages is disabled. Jan 30 14:00:38.915171 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 14:00:38.915202 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:00:38.915233 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:00:38.915267 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:00:38.915295 systemd-journald[1572]: Journal started Jan 30 14:00:38.920645 systemd-journald[1572]: Runtime Journal (/run/log/journal/ec2f9760801b8cf39d69cbf6323cf721) is 8.0M, max 75.3M, 67.3M free. Jan 30 14:00:38.920728 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 14:00:38.366278 systemd[1]: Queued start job for default target multi-user.target. Jan 30 14:00:38.390516 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 30 14:00:38.391438 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 14:00:38.930441 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:00:38.931480 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 14:00:38.948509 kernel: ACPI: bus type drm_connector registered Jan 30 14:00:38.950822 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:00:38.951225 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:00:38.971054 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 14:00:38.978054 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 14:00:38.987656 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 14:00:39.005210 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 14:00:39.007469 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 14:00:39.007532 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:00:39.015406 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 14:00:39.035612 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 14:00:39.041945 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 14:00:39.046662 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:00:39.057535 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 14:00:39.068791 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 14:00:39.071482 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:00:39.074428 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jan 30 14:00:39.076528 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:00:39.081179 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:00:39.095818 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 14:00:39.103674 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 14:00:39.111616 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 14:00:39.138574 systemd-journald[1572]: Time spent on flushing to /var/log/journal/ec2f9760801b8cf39d69cbf6323cf721 is 204.987ms for 906 entries. Jan 30 14:00:39.138574 systemd-journald[1572]: System Journal (/var/log/journal/ec2f9760801b8cf39d69cbf6323cf721) is 8.0M, max 195.6M, 187.6M free. Jan 30 14:00:39.371053 systemd-journald[1572]: Received client request to flush runtime journal. Jan 30 14:00:39.371136 kernel: loop0: detected capacity change from 0 to 201592 Jan 30 14:00:39.371180 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 14:00:39.115705 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 14:00:39.118942 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 14:00:39.149396 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 14:00:39.153631 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 14:00:39.171112 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 14:00:39.306889 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:00:39.312849 systemd-tmpfiles[1613]: ACLs are not supported, ignoring. Jan 30 14:00:39.312874 systemd-tmpfiles[1613]: ACLs are not supported, ignoring. Jan 30 14:00:39.336843 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 14:00:39.340369 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:00:39.343248 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 14:00:39.357807 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 14:00:39.385192 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:00:39.388507 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 14:00:39.408894 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 14:00:39.418357 kernel: loop1: detected capacity change from 0 to 114432 Jan 30 14:00:39.463030 udevadm[1631]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 14:00:39.479372 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 14:00:39.489694 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:00:39.558619 systemd-tmpfiles[1634]: ACLs are not supported, ignoring. Jan 30 14:00:39.558657 systemd-tmpfiles[1634]: ACLs are not supported, ignoring. Jan 30 14:00:39.568344 kernel: loop2: detected capacity change from 0 to 114328 Jan 30 14:00:39.569082 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 30 14:00:39.609123 kernel: loop3: detected capacity change from 0 to 52536 Jan 30 14:00:39.671369 kernel: loop4: detected capacity change from 0 to 201592 Jan 30 14:00:39.712402 kernel: loop5: detected capacity change from 0 to 114432 Jan 30 14:00:39.734348 kernel: loop6: detected capacity change from 0 to 114328 Jan 30 14:00:39.764437 kernel: loop7: detected capacity change from 0 to 52536 Jan 30 14:00:39.790772 (sd-merge)[1641]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 30 14:00:39.792391 (sd-merge)[1641]: Merged extensions into '/usr'. Jan 30 14:00:39.808192 systemd[1]: Reloading requested from client PID 1612 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 14:00:39.808393 systemd[1]: Reloading... Jan 30 14:00:39.973393 zram_generator::config[1664]: No configuration found. Jan 30 14:00:40.104040 ldconfig[1607]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 14:00:40.292647 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:00:40.414668 systemd[1]: Reloading finished in 604 ms. Jan 30 14:00:40.458438 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 14:00:40.461162 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 14:00:40.464095 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 14:00:40.481657 systemd[1]: Starting ensure-sysext.service... Jan 30 14:00:40.492556 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:00:40.499636 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:00:40.521906 systemd[1]: Reloading requested from client PID 1720 ('systemctl') (unit ensure-sysext.service)... Jan 30 14:00:40.521924 systemd[1]: Reloading... Jan 30 14:00:40.545953 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 14:00:40.549254 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 14:00:40.555037 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 14:00:40.558370 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. Jan 30 14:00:40.558673 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. Jan 30 14:00:40.573205 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:00:40.575359 systemd-tmpfiles[1721]: Skipping /boot Jan 30 14:00:40.606159 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:00:40.606401 systemd-tmpfiles[1721]: Skipping /boot Jan 30 14:00:40.655027 systemd-udevd[1722]: Using default interface naming scheme 'v255'. Jan 30 14:00:40.706421 zram_generator::config[1751]: No configuration found. Jan 30 14:00:40.884686 (udev-worker)[1763]: Network interface NamePolicy= disabled on kernel command line. Jan 30 14:00:41.055532 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
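The (sd-merge) lines above are systemd-sysext overlaying the extension images staged earlier by Ignition (the kubernetes.raw symlink under /etc/extensions) onto /usr, which is why a daemon reload follows. A sketch that merely enumerates the images systemd-sysext would consider, assuming the standard search directories from its documentation; the actual overlayfs merge is performed by systemd-sysext itself:

from pathlib import Path

# Search directories from the systemd-sysext documentation.
SEARCH_DIRS = [Path("/etc/extensions"), Path("/run/extensions"),
               Path("/var/lib/extensions")]

def sysext_candidates():
    images = []
    for d in SEARCH_DIRS:
        if d.is_dir():
            # Raw disk images (*.raw) and plain directory trees both count.
            images.extend(p for p in sorted(d.iterdir())
                          if p.suffix == ".raw" or p.is_dir())
    return images

if __name__ == "__main__":
    for image in sysext_candidates():
        print(f"candidate extension image: {image}")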
Jan 30 14:00:41.143331 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1769) Jan 30 14:00:41.230671 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 14:00:41.231213 systemd[1]: Reloading finished in 708 ms. Jan 30 14:00:41.257157 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:00:41.269102 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:00:41.329873 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:00:41.339792 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 14:00:41.346248 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 14:00:41.353889 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:00:41.362437 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:00:41.367013 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 14:00:41.376801 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:00:41.405578 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:00:41.409937 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:00:41.415967 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:00:41.423923 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:00:41.427637 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:00:41.445692 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:00:41.446048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:00:41.453438 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 14:00:41.466490 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:00:41.470004 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:00:41.472177 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:00:41.472579 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 14:00:41.499816 systemd[1]: Finished ensure-sysext.service. Jan 30 14:00:41.539141 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 14:00:41.544529 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 14:00:41.575255 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:00:41.576046 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:00:41.600005 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:00:41.601177 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:00:41.603228 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 30 14:00:41.605732 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:00:41.624385 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 14:00:41.627166 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:00:41.628084 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:00:41.642489 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 30 14:00:41.654648 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 14:00:41.667598 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 14:00:41.668090 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:00:41.669094 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:00:41.676764 augenrules[1953]: No rules Jan 30 14:00:41.674456 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 14:00:41.680687 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:00:41.721855 lvm[1952]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:00:41.746138 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 14:00:41.751943 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 14:00:41.788748 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 14:00:41.803567 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 14:00:41.806797 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:00:41.812499 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 14:00:41.819065 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:00:41.823728 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:00:41.834891 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 14:00:41.868218 lvm[1975]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:00:41.904406 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 14:00:41.939771 systemd-networkd[1917]: lo: Link UP Jan 30 14:00:41.939791 systemd-networkd[1917]: lo: Gained carrier Jan 30 14:00:41.942384 systemd-networkd[1917]: Enumeration completed Jan 30 14:00:41.942592 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:00:41.945257 systemd-networkd[1917]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:00:41.945278 systemd-networkd[1917]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 14:00:41.947254 systemd-networkd[1917]: eth0: Link UP Jan 30 14:00:41.948031 systemd-networkd[1917]: eth0: Gained carrier Jan 30 14:00:41.948154 systemd-networkd[1917]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:00:41.954650 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 14:00:41.962476 systemd-networkd[1917]: eth0: DHCPv4 address 172.31.25.132/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 30 14:00:41.972436 systemd-resolved[1918]: Positive Trust Anchors: Jan 30 14:00:41.972476 systemd-resolved[1918]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:00:41.972541 systemd-resolved[1918]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:00:41.980754 systemd-resolved[1918]: Defaulting to hostname 'linux'. Jan 30 14:00:41.983873 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:00:41.986125 systemd[1]: Reached target network.target - Network. Jan 30 14:00:41.987888 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:00:41.990118 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:00:41.992323 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 14:00:41.994693 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 14:00:41.997358 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 14:00:41.999546 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 14:00:42.001866 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 14:00:42.004177 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 14:00:42.004224 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:00:42.005936 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:00:42.009028 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 14:00:42.013555 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 14:00:42.022848 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 14:00:42.026231 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 14:00:42.028796 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:00:42.031289 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:00:42.033479 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:00:42.033532 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 30 14:00:42.040586 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 14:00:42.052281 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 14:00:42.058788 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 14:00:42.073719 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 14:00:42.088537 jq[1984]: false Jan 30 14:00:42.094774 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 14:00:42.098492 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 14:00:42.108753 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 14:00:42.123891 systemd[1]: Started ntpd.service - Network Time Service. Jan 30 14:00:42.136491 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 14:00:42.142661 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 30 14:00:42.150671 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 14:00:42.159655 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 14:00:42.170874 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 14:00:42.173664 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 14:00:42.175990 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 14:00:42.179673 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 14:00:42.188579 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 14:00:42.194831 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 14:00:42.198421 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 14:00:42.253044 dbus-daemon[1983]: [system] SELinux support is enabled Jan 30 14:00:42.263217 dbus-daemon[1983]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1917 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 30 14:00:42.264110 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 14:00:42.287193 coreos-metadata[1982]: Jan 30 14:00:42.286 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 30 14:00:42.286855 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 14:00:42.286955 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 14:00:42.289514 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 14:00:42.289560 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 30 14:00:42.293727 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 14:00:42.305284 coreos-metadata[1982]: Jan 30 14:00:42.297 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 30 14:00:42.305284 coreos-metadata[1982]: Jan 30 14:00:42.304 INFO Fetch successful Jan 30 14:00:42.305284 coreos-metadata[1982]: Jan 30 14:00:42.304 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 30 14:00:42.306216 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 30 14:00:42.315363 jq[1996]: true Jan 30 14:00:42.317927 coreos-metadata[1982]: Jan 30 14:00:42.311 INFO Fetch successful Jan 30 14:00:42.317927 coreos-metadata[1982]: Jan 30 14:00:42.311 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 30 14:00:42.312632 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 14:00:42.315413 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 14:00:42.333005 coreos-metadata[1982]: Jan 30 14:00:42.325 INFO Fetch successful Jan 30 14:00:42.333005 coreos-metadata[1982]: Jan 30 14:00:42.325 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 30 14:00:42.333005 coreos-metadata[1982]: Jan 30 14:00:42.328 INFO Fetch successful Jan 30 14:00:42.333005 coreos-metadata[1982]: Jan 30 14:00:42.328 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 30 14:00:42.333005 coreos-metadata[1982]: Jan 30 14:00:42.332 INFO Fetch failed with 404: resource not found Jan 30 14:00:42.333005 coreos-metadata[1982]: Jan 30 14:00:42.332 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 30 14:00:42.345452 coreos-metadata[1982]: Jan 30 14:00:42.337 INFO Fetch successful Jan 30 14:00:42.345452 coreos-metadata[1982]: Jan 30 14:00:42.338 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 30 14:00:42.348472 coreos-metadata[1982]: Jan 30 14:00:42.348 INFO Fetch successful Jan 30 14:00:42.348472 coreos-metadata[1982]: Jan 30 14:00:42.348 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 30 14:00:42.349284 coreos-metadata[1982]: Jan 30 14:00:42.349 INFO Fetch successful Jan 30 14:00:42.351161 coreos-metadata[1982]: Jan 30 14:00:42.349 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 30 14:00:42.361424 coreos-metadata[1982]: Jan 30 14:00:42.355 INFO Fetch successful Jan 30 14:00:42.361424 coreos-metadata[1982]: Jan 30 14:00:42.356 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 30 14:00:42.355626 (ntainerd)[2008]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 14:00:42.373333 coreos-metadata[1982]: Jan 30 14:00:42.364 INFO Fetch successful Jan 30 14:00:42.373460 update_engine[1995]: I20250130 14:00:42.367462 1995 main.cc:92] Flatcar Update Engine starting Jan 30 14:00:42.378157 ntpd[1989]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:57 UTC 2025 (1): Starting Jan 30 14:00:42.379677 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:57 UTC 2025 (1): Starting Jan 30 14:00:42.379677 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 14:00:42.379677 ntpd[1989]: 30 Jan 14:00:42 
ntpd[1989]: ---------------------------------------------------- Jan 30 14:00:42.379677 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: ntp-4 is maintained by Network Time Foundation, Jan 30 14:00:42.379677 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 30 14:00:42.379677 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: corporation. Support and training for ntp-4 are Jan 30 14:00:42.379677 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: available at https://www.nwtime.org/support Jan 30 14:00:42.379677 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: ---------------------------------------------------- Jan 30 14:00:42.378216 ntpd[1989]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 14:00:42.393999 update_engine[1995]: I20250130 14:00:42.390385 1995 update_check_scheduler.cc:74] Next update check in 3m59s Jan 30 14:00:42.394058 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: proto: precision = 0.096 usec (-23) Jan 30 14:00:42.394058 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: basedate set to 2025-01-17 Jan 30 14:00:42.394058 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: gps base set to 2025-01-19 (week 2350) Jan 30 14:00:42.394239 extend-filesystems[1985]: Found loop4 Jan 30 14:00:42.394239 extend-filesystems[1985]: Found loop5 Jan 30 14:00:42.394239 extend-filesystems[1985]: Found loop6 Jan 30 14:00:42.394239 extend-filesystems[1985]: Found loop7 Jan 30 14:00:42.394239 extend-filesystems[1985]: Found nvme0n1 Jan 30 14:00:42.394239 extend-filesystems[1985]: Found nvme0n1p1 Jan 30 14:00:42.394239 extend-filesystems[1985]: Found nvme0n1p2 Jan 30 14:00:42.394239 extend-filesystems[1985]: Found nvme0n1p3 Jan 30 14:00:42.394239 extend-filesystems[1985]: Found usr Jan 30 14:00:42.394239 extend-filesystems[1985]: Found nvme0n1p4 Jan 30 14:00:42.394239 extend-filesystems[1985]: Found nvme0n1p6 Jan 30 14:00:42.394239 extend-filesystems[1985]: Found nvme0n1p7 Jan 30 14:00:42.394239 extend-filesystems[1985]: Found nvme0n1p9 Jan 30 14:00:42.394239 extend-filesystems[1985]: Checking size of /dev/nvme0n1p9 Jan 30 14:00:42.378238 ntpd[1989]: ---------------------------------------------------- Jan 30 14:00:42.563332 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 30 14:00:42.563382 extend-filesystems[1985]: Resized partition /dev/nvme0n1p9 Jan 30 14:00:42.396807 systemd[1]: Started update-engine.service - Update Engine. 
Jan 30 14:00:42.602605 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 14:00:42.602605 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 14:00:42.602605 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 14:00:42.602605 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: Listen normally on 3 eth0 172.31.25.132:123 Jan 30 14:00:42.602605 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: Listen normally on 4 lo [::1]:123 Jan 30 14:00:42.602605 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: bind(21) AF_INET6 fe80::4ea:adff:fe02:7e19%2#123 flags 0x11 failed: Cannot assign requested address Jan 30 14:00:42.602605 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: unable to create socket on eth0 (5) for fe80::4ea:adff:fe02:7e19%2#123 Jan 30 14:00:42.602605 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: failed to init interface for address fe80::4ea:adff:fe02:7e19%2 Jan 30 14:00:42.602605 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: Listening on routing socket on fd #21 for interface updates Jan 30 14:00:42.602605 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 14:00:42.602605 ntpd[1989]: 30 Jan 14:00:42 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 14:00:42.605405 tar[2007]: linux-arm64/LICENSE Jan 30 14:00:42.605405 tar[2007]: linux-arm64/helm Jan 30 14:00:42.378257 ntpd[1989]: ntp-4 is maintained by Network Time Foundation, Jan 30 14:00:42.626116 extend-filesystems[2037]: resize2fs 1.47.1 (20-May-2024) Jan 30 14:00:42.445693 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 14:00:42.636005 jq[2018]: true Jan 30 14:00:42.378276 ntpd[1989]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 30 14:00:42.476947 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 14:00:42.378295 ntpd[1989]: corporation. Support and training for ntp-4 are Jan 30 14:00:42.477330 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 14:00:42.378342 ntpd[1989]: available at https://www.nwtime.org/support Jan 30 14:00:42.605549 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 30 14:00:42.378362 ntpd[1989]: ---------------------------------------------------- Jan 30 14:00:42.611060 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 14:00:42.387217 ntpd[1989]: proto: precision = 0.096 usec (-23) Jan 30 14:00:42.620176 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 14:00:42.387675 ntpd[1989]: basedate set to 2025-01-17 Jan 30 14:00:42.625107 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 30 14:00:42.387701 ntpd[1989]: gps base set to 2025-01-19 (week 2350) Jan 30 14:00:42.421702 ntpd[1989]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 14:00:42.421782 ntpd[1989]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 14:00:42.422036 ntpd[1989]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 14:00:42.422105 ntpd[1989]: Listen normally on 3 eth0 172.31.25.132:123 Jan 30 14:00:42.422172 ntpd[1989]: Listen normally on 4 lo [::1]:123 Jan 30 14:00:42.422242 ntpd[1989]: bind(21) AF_INET6 fe80::4ea:adff:fe02:7e19%2#123 flags 0x11 failed: Cannot assign requested address Jan 30 14:00:42.422282 ntpd[1989]: unable to create socket on eth0 (5) for fe80::4ea:adff:fe02:7e19%2#123 Jan 30 14:00:42.422334 ntpd[1989]: failed to init interface for address fe80::4ea:adff:fe02:7e19%2 Jan 30 14:00:42.422389 ntpd[1989]: Listening on routing socket on fd #21 for interface updates Jan 30 14:00:42.491868 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 14:00:42.491918 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 14:00:42.686347 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 30 14:00:42.698073 extend-filesystems[2037]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 30 14:00:42.698073 extend-filesystems[2037]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 14:00:42.698073 extend-filesystems[2037]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 30 14:00:42.739227 extend-filesystems[1985]: Resized filesystem in /dev/nvme0n1p9 Jan 30 14:00:42.701170 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 14:00:42.702627 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 14:00:42.796393 systemd-logind[1994]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 14:00:42.796928 systemd-logind[1994]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 30 14:00:42.797644 systemd-logind[1994]: New seat seat0. Jan 30 14:00:42.803334 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 14:00:42.831991 bash[2070]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:00:42.838141 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 14:00:42.894360 systemd[1]: Starting sshkeys.service... Jan 30 14:00:42.940898 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 14:00:42.952004 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 14:00:43.001600 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1764) Jan 30 14:00:43.001797 systemd-networkd[1917]: eth0: Gained IPv6LL Jan 30 14:00:43.015418 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 14:00:43.021989 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 14:00:43.116761 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 30 14:00:43.127110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:00:43.139054 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 14:00:43.195480 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 30 14:00:43.195791 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
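The extend-filesystems/resize2fs lines above record an online ext4 grow of /dev/nvme0n1p9 from 553472 to 1489915 blocks. As a worked check of those numbers (4 KiB ext4 blocks, counts taken from the log):

BLOCK = 4096  # ext4 block size; the log says "(4k) blocks"
old_blocks, new_blocks = 553_472, 1_489_915

for label, blocks in (("before", old_blocks), ("after", new_blocks)):
    size = blocks * BLOCK
    print(f"{label}: {blocks} blocks = {size} bytes = {size / 2**30:.2f} GiB")
    # before: 553472 blocks  = 2267021312 bytes = 2.11 GiB
    # after:  1489915 blocks = 6102691840 bytes = 5.68 GiB

which matches a roughly 2.1 GiB to 5.7 GiB growth of the root filesystem.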
Jan 30 14:00:43.202597 dbus-daemon[1983]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2012 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 30 14:00:43.214882 systemd[1]: Starting polkit.service - Authorization Manager... Jan 30 14:00:43.324458 polkitd[2135]: Started polkitd version 121 Jan 30 14:00:43.330636 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 14:00:43.399293 amazon-ssm-agent[2088]: Initializing new seelog logger Jan 30 14:00:43.402627 polkitd[2135]: Loading rules from directory /etc/polkit-1/rules.d Jan 30 14:00:43.405722 amazon-ssm-agent[2088]: New Seelog Logger Creation Complete Jan 30 14:00:43.405722 amazon-ssm-agent[2088]: 2025/01/30 14:00:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:00:43.405722 amazon-ssm-agent[2088]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:00:43.405722 amazon-ssm-agent[2088]: 2025/01/30 14:00:43 processing appconfig overrides Jan 30 14:00:43.405523 polkitd[2135]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 30 14:00:43.408159 locksmithd[2029]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 14:00:43.415284 amazon-ssm-agent[2088]: 2025/01/30 14:00:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:00:43.415284 amazon-ssm-agent[2088]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:00:43.415284 amazon-ssm-agent[2088]: 2025/01/30 14:00:43 processing appconfig overrides Jan 30 14:00:43.415284 amazon-ssm-agent[2088]: 2025/01/30 14:00:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:00:43.415284 amazon-ssm-agent[2088]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:00:43.415284 amazon-ssm-agent[2088]: 2025/01/30 14:00:43 processing appconfig overrides Jan 30 14:00:43.414636 polkitd[2135]: Finished loading, compiling and executing 2 rules Jan 30 14:00:43.417729 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 30 14:00:43.419478 amazon-ssm-agent[2088]: 2025-01-30 14:00:43 INFO Proxy environment variables: Jan 30 14:00:43.418017 systemd[1]: Started polkit.service - Authorization Manager. Jan 30 14:00:43.422702 polkitd[2135]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 30 14:00:43.428816 amazon-ssm-agent[2088]: 2025/01/30 14:00:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:00:43.428816 amazon-ssm-agent[2088]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 30 14:00:43.432656 amazon-ssm-agent[2088]: 2025/01/30 14:00:43 processing appconfig overrides Jan 30 14:00:43.433215 coreos-metadata[2074]: Jan 30 14:00:43.433 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 30 14:00:43.436067 coreos-metadata[2074]: Jan 30 14:00:43.435 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 30 14:00:43.446724 coreos-metadata[2074]: Jan 30 14:00:43.445 INFO Fetch successful Jan 30 14:00:43.446724 coreos-metadata[2074]: Jan 30 14:00:43.445 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 30 14:00:43.449458 coreos-metadata[2074]: Jan 30 14:00:43.447 INFO Fetch successful Jan 30 14:00:43.456868 unknown[2074]: wrote ssh authorized keys file for user: core Jan 30 14:00:43.472509 containerd[2008]: time="2025-01-30T14:00:43.469018294Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 14:00:43.518151 systemd-hostnamed[2012]: Hostname set to (transient) Jan 30 14:00:43.518922 amazon-ssm-agent[2088]: 2025-01-30 14:00:43 INFO https_proxy: Jan 30 14:00:43.520432 systemd-resolved[1918]: System hostname changed to 'ip-172-31-25-132'. Jan 30 14:00:43.533081 update-ssh-keys[2180]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:00:43.537439 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 14:00:43.551422 systemd[1]: Finished sshkeys.service. Jan 30 14:00:43.629169 amazon-ssm-agent[2088]: 2025-01-30 14:00:43 INFO http_proxy: Jan 30 14:00:43.681994 containerd[2008]: time="2025-01-30T14:00:43.681794231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:00:43.694125 containerd[2008]: time="2025-01-30T14:00:43.694055987Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:00:43.694318 containerd[2008]: time="2025-01-30T14:00:43.694270247Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 14:00:43.694435 containerd[2008]: time="2025-01-30T14:00:43.694407179Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 14:00:43.699588 containerd[2008]: time="2025-01-30T14:00:43.696081851Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 14:00:43.699588 containerd[2008]: time="2025-01-30T14:00:43.698401475Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 14:00:43.699588 containerd[2008]: time="2025-01-30T14:00:43.698689319Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:00:43.699588 containerd[2008]: time="2025-01-30T14:00:43.698747663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:00:43.699588 containerd[2008]: time="2025-01-30T14:00:43.699190631Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:00:43.699588 containerd[2008]: time="2025-01-30T14:00:43.699249947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 14:00:43.699588 containerd[2008]: time="2025-01-30T14:00:43.699287015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:00:43.699588 containerd[2008]: time="2025-01-30T14:00:43.699461591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 14:00:43.700285 containerd[2008]: time="2025-01-30T14:00:43.700224695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:00:43.701417 containerd[2008]: time="2025-01-30T14:00:43.701246831Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:00:43.702868 containerd[2008]: time="2025-01-30T14:00:43.702407507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:00:43.702868 containerd[2008]: time="2025-01-30T14:00:43.702480347Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 14:00:43.702868 containerd[2008]: time="2025-01-30T14:00:43.702816491Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 14:00:43.703201 containerd[2008]: time="2025-01-30T14:00:43.703171811Z" level=info msg="metadata content store policy set" policy=shared Jan 30 14:00:43.720339 containerd[2008]: time="2025-01-30T14:00:43.716245211Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 14:00:43.720339 containerd[2008]: time="2025-01-30T14:00:43.718521911Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 14:00:43.720339 containerd[2008]: time="2025-01-30T14:00:43.718596983Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 14:00:43.720339 containerd[2008]: time="2025-01-30T14:00:43.718633427Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 14:00:43.720339 containerd[2008]: time="2025-01-30T14:00:43.718669427Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 14:00:43.720339 containerd[2008]: time="2025-01-30T14:00:43.718931555Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 14:00:43.720339 containerd[2008]: time="2025-01-30T14:00:43.719352503Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 14:00:43.720339 containerd[2008]: time="2025-01-30T14:00:43.719561171Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 30 14:00:43.720339 containerd[2008]: time="2025-01-30T14:00:43.719598707Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 14:00:43.720339 containerd[2008]: time="2025-01-30T14:00:43.719634311Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 14:00:43.720339 containerd[2008]: time="2025-01-30T14:00:43.719667587Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 14:00:43.720339 containerd[2008]: time="2025-01-30T14:00:43.719705915Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 14:00:43.720339 containerd[2008]: time="2025-01-30T14:00:43.719739023Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 14:00:43.720339 containerd[2008]: time="2025-01-30T14:00:43.719770871Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 14:00:43.720986 containerd[2008]: time="2025-01-30T14:00:43.719815751Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 14:00:43.720986 containerd[2008]: time="2025-01-30T14:00:43.719846807Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 14:00:43.720986 containerd[2008]: time="2025-01-30T14:00:43.719875703Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 14:00:43.720986 containerd[2008]: time="2025-01-30T14:00:43.719903543Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 14:00:43.720986 containerd[2008]: time="2025-01-30T14:00:43.719949203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.720986 containerd[2008]: time="2025-01-30T14:00:43.719980871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.720986 containerd[2008]: time="2025-01-30T14:00:43.720010523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.720986 containerd[2008]: time="2025-01-30T14:00:43.720041231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.720986 containerd[2008]: time="2025-01-30T14:00:43.720073547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.720986 containerd[2008]: time="2025-01-30T14:00:43.720104975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.720986 containerd[2008]: time="2025-01-30T14:00:43.720132983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.720986 containerd[2008]: time="2025-01-30T14:00:43.720163319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.720986 containerd[2008]: time="2025-01-30T14:00:43.720193403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 30 14:00:43.720986 containerd[2008]: time="2025-01-30T14:00:43.720226163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.721570 containerd[2008]: time="2025-01-30T14:00:43.720254123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.726765 containerd[2008]: time="2025-01-30T14:00:43.720282911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.726765 containerd[2008]: time="2025-01-30T14:00:43.725573075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.726765 containerd[2008]: time="2025-01-30T14:00:43.725705015Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 14:00:43.726765 containerd[2008]: time="2025-01-30T14:00:43.725762051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.726765 containerd[2008]: time="2025-01-30T14:00:43.725819435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.726765 containerd[2008]: time="2025-01-30T14:00:43.725848343Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 14:00:43.726765 containerd[2008]: time="2025-01-30T14:00:43.726068723Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 14:00:43.726765 containerd[2008]: time="2025-01-30T14:00:43.726260591Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 14:00:43.726765 containerd[2008]: time="2025-01-30T14:00:43.726292727Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 14:00:43.726765 containerd[2008]: time="2025-01-30T14:00:43.726547727Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 14:00:43.726765 containerd[2008]: time="2025-01-30T14:00:43.726684803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.729196 containerd[2008]: time="2025-01-30T14:00:43.726721307Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 14:00:43.729196 containerd[2008]: time="2025-01-30T14:00:43.727028855Z" level=info msg="NRI interface is disabled by configuration." Jan 30 14:00:43.729196 containerd[2008]: time="2025-01-30T14:00:43.727058423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 14:00:43.732650 amazon-ssm-agent[2088]: 2025-01-30 14:00:43 INFO no_proxy: Jan 30 14:00:43.732860 containerd[2008]: time="2025-01-30T14:00:43.731021807Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 14:00:43.734385 containerd[2008]: time="2025-01-30T14:00:43.733168775Z" level=info msg="Connect containerd service" Jan 30 14:00:43.738755 containerd[2008]: time="2025-01-30T14:00:43.734755295Z" level=info msg="using legacy CRI server" Jan 30 14:00:43.738755 containerd[2008]: time="2025-01-30T14:00:43.734810399Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 14:00:43.738755 containerd[2008]: time="2025-01-30T14:00:43.735003419Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 14:00:43.740768 containerd[2008]: time="2025-01-30T14:00:43.740704379Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in 
/etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:00:43.744538 containerd[2008]: time="2025-01-30T14:00:43.744392483Z" level=info msg="Start subscribing containerd event" Jan 30 14:00:43.744538 containerd[2008]: time="2025-01-30T14:00:43.744507647Z" level=info msg="Start recovering state" Jan 30 14:00:43.744744 containerd[2008]: time="2025-01-30T14:00:43.744647291Z" level=info msg="Start event monitor" Jan 30 14:00:43.744744 containerd[2008]: time="2025-01-30T14:00:43.744673343Z" level=info msg="Start snapshots syncer" Jan 30 14:00:43.744744 containerd[2008]: time="2025-01-30T14:00:43.744695387Z" level=info msg="Start cni network conf syncer for default" Jan 30 14:00:43.744744 containerd[2008]: time="2025-01-30T14:00:43.744715055Z" level=info msg="Start streaming server" Jan 30 14:00:43.747394 containerd[2008]: time="2025-01-30T14:00:43.746946311Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 14:00:43.747672 containerd[2008]: time="2025-01-30T14:00:43.747615407Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 14:00:43.771336 containerd[2008]: time="2025-01-30T14:00:43.765985235Z" level=info msg="containerd successfully booted in 0.301688s" Jan 30 14:00:43.768722 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 14:00:43.832450 amazon-ssm-agent[2088]: 2025-01-30 14:00:43 INFO Checking if agent identity type OnPrem can be assumed Jan 30 14:00:43.930936 amazon-ssm-agent[2088]: 2025-01-30 14:00:43 INFO Checking if agent identity type EC2 can be assumed Jan 30 14:00:44.030951 amazon-ssm-agent[2088]: 2025-01-30 14:00:43 INFO Agent will take identity from EC2 Jan 30 14:00:44.131376 amazon-ssm-agent[2088]: 2025-01-30 14:00:43 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 14:00:44.239570 amazon-ssm-agent[2088]: 2025-01-30 14:00:43 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 14:00:44.341403 amazon-ssm-agent[2088]: 2025-01-30 14:00:43 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 14:00:44.441330 amazon-ssm-agent[2088]: 2025-01-30 14:00:43 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 30 14:00:44.544203 amazon-ssm-agent[2088]: 2025-01-30 14:00:43 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 30 14:00:44.564104 amazon-ssm-agent[2088]: 2025-01-30 14:00:43 INFO [amazon-ssm-agent] Starting Core Agent Jan 30 14:00:44.564104 amazon-ssm-agent[2088]: 2025-01-30 14:00:43 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 30 14:00:44.564262 amazon-ssm-agent[2088]: 2025-01-30 14:00:43 INFO [Registrar] Starting registrar module Jan 30 14:00:44.564262 amazon-ssm-agent[2088]: 2025-01-30 14:00:43 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 30 14:00:44.564262 amazon-ssm-agent[2088]: 2025-01-30 14:00:44 INFO [EC2Identity] EC2 registration was successful. 
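The containerd "failed to load cni during init" error above is expected at this point: the CRI plugin watches /etc/cni/net.d (the NetworkPluginConfDir in the config dump) and its "cni network conf syncer" picks a config up once a network add-on installs one. For shape only, a hypothetical minimal bridge conflist of the kind it is looking for (the network name and subnet here are invented):

    import json
    import pathlib

    # Hypothetical CNI conflist, shown for shape only; a real cluster's
    # network add-on writes its own file into /etc/cni/net.d.
    conf = {
        "cniVersion": "0.4.0",
        "name": "mynet",                                     # invented
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},
            }
        ],
    }
    target = pathlib.Path("/etc/cni/net.d/10-mynet.conflist")
    print(f"would write {target}:")
    print(json.dumps(conf, indent=2))
    # target.write_text(json.dumps(conf, indent=2))          # root only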
Jan 30 14:00:44.564262 amazon-ssm-agent[2088]: 2025-01-30 14:00:44 INFO [CredentialRefresher] credentialRefresher has started Jan 30 14:00:44.564262 amazon-ssm-agent[2088]: 2025-01-30 14:00:44 INFO [CredentialRefresher] Starting credentials refresher loop Jan 30 14:00:44.564262 amazon-ssm-agent[2088]: 2025-01-30 14:00:44 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 30 14:00:44.613288 sshd_keygen[2023]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 14:00:44.646329 amazon-ssm-agent[2088]: 2025-01-30 14:00:44 INFO [CredentialRefresher] Next credential rotation will be in 30.5999912577 minutes Jan 30 14:00:44.667418 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 14:00:44.689745 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 14:00:44.697839 systemd[1]: Started sshd@0-172.31.25.132:22-139.178.89.65:41850.service - OpenSSH per-connection server daemon (139.178.89.65:41850). Jan 30 14:00:44.724884 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 14:00:44.725379 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 14:00:44.744716 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 14:00:44.783847 tar[2007]: linux-arm64/README.md Jan 30 14:00:44.803451 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 14:00:44.812539 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 14:00:44.830598 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 14:00:44.844013 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 14:00:44.847010 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 14:00:44.932410 sshd[2218]: Accepted publickey for core from 139.178.89.65 port 41850 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:00:44.936690 sshd[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:44.957631 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 14:00:44.968845 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 14:00:44.979895 systemd-logind[1994]: New session 1 of user core. Jan 30 14:00:45.006498 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 14:00:45.019880 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 14:00:45.039918 (systemd)[2232]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 14:00:45.287883 systemd[2232]: Queued start job for default target default.target. Jan 30 14:00:45.293835 systemd[2232]: Created slice app.slice - User Application Slice. Jan 30 14:00:45.293905 systemd[2232]: Reached target paths.target - Paths. Jan 30 14:00:45.293940 systemd[2232]: Reached target timers.target - Timers. Jan 30 14:00:45.296852 systemd[2232]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 14:00:45.334621 systemd[2232]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 14:00:45.334939 systemd[2232]: Reached target sockets.target - Sockets. Jan 30 14:00:45.334978 systemd[2232]: Reached target basic.target - Basic System. Jan 30 14:00:45.335064 systemd[2232]: Reached target default.target - Main User Target. Jan 30 14:00:45.335131 systemd[2232]: Startup finished in 282ms. Jan 30 14:00:45.335345 systemd[1]: Started user@500.service - User Manager for UID 500. 
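The SHA256:gRn6z0Kb… value in the Accepted-publickey line above is OpenSSH's key fingerprint: SHA-256 over the raw public-key blob, base64-encoded with the padding stripped. A small sketch, using a fabricated key blob since the log only shows the fingerprint:

    import base64
    import hashlib

    def openssh_fingerprint(pubkey_line: str) -> str:
        # authorized_keys format: "<type> <base64-blob> [comment]"
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # Fabricated ed25519-shaped blob, purely for demonstration.
    blob = b"\x00\x00\x00\x0bssh-ed25519" + b"\x00\x00\x00\x20" + b"\x01" * 32
    line = "ssh-ed25519 " + base64.b64encode(blob).decode() + " demo@example"
    print(openssh_fingerprint(line))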
Jan 30 14:00:45.350687 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 14:00:45.379002 ntpd[1989]: Listen normally on 6 eth0 [fe80::4ea:adff:fe02:7e19%2]:123 Jan 30 14:00:45.380461 ntpd[1989]: 30 Jan 14:00:45 ntpd[1989]: Listen normally on 6 eth0 [fe80::4ea:adff:fe02:7e19%2]:123 Jan 30 14:00:45.519205 systemd[1]: Started sshd@1-172.31.25.132:22-139.178.89.65:42128.service - OpenSSH per-connection server daemon (139.178.89.65:42128). Jan 30 14:00:45.599034 amazon-ssm-agent[2088]: 2025-01-30 14:00:45 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 30 14:00:45.699415 amazon-ssm-agent[2088]: 2025-01-30 14:00:45 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2246) started Jan 30 14:00:45.719371 sshd[2243]: Accepted publickey for core from 139.178.89.65 port 42128 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:00:45.726280 sshd[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:45.743417 systemd-logind[1994]: New session 2 of user core. Jan 30 14:00:45.748641 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 14:00:45.800598 amazon-ssm-agent[2088]: 2025-01-30 14:00:45 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 30 14:00:45.884113 sshd[2243]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:45.892720 systemd[1]: sshd@1-172.31.25.132:22-139.178.89.65:42128.service: Deactivated successfully. Jan 30 14:00:45.897581 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 14:00:45.901640 systemd-logind[1994]: Session 2 logged out. Waiting for processes to exit. Jan 30 14:00:45.928456 systemd-logind[1994]: Removed session 2. Jan 30 14:00:45.942551 systemd[1]: Started sshd@2-172.31.25.132:22-139.178.89.65:42130.service - OpenSSH per-connection server daemon (139.178.89.65:42130). Jan 30 14:00:45.950695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:00:45.959021 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 14:00:45.965485 systemd[1]: Startup finished in 1.138s (kernel) + 8.555s (initrd) + 8.661s (userspace) = 18.355s. Jan 30 14:00:45.983792 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:00:46.126391 sshd[2266]: Accepted publickey for core from 139.178.89.65 port 42130 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:00:46.129090 sshd[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:46.140340 systemd-logind[1994]: New session 3 of user core. Jan 30 14:00:46.151665 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 14:00:46.284786 sshd[2266]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:46.293487 systemd-logind[1994]: Session 3 logged out. Waiting for processes to exit. Jan 30 14:00:46.294423 systemd[1]: sshd@2-172.31.25.132:22-139.178.89.65:42130.service: Deactivated successfully. Jan 30 14:00:46.298043 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 14:00:46.300258 systemd-logind[1994]: Removed session 3. 
Jan 30 14:00:47.135177 kubelet[2267]: E0130 14:00:47.135075 2267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:00:47.139836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:00:47.140187 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:00:47.141532 systemd[1]: kubelet.service: Consumed 1.356s CPU time. Jan 30 14:00:56.332864 systemd[1]: Started sshd@3-172.31.25.132:22-139.178.89.65:34702.service - OpenSSH per-connection server daemon (139.178.89.65:34702). Jan 30 14:00:56.501490 sshd[2285]: Accepted publickey for core from 139.178.89.65 port 34702 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:00:56.504420 sshd[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:56.514126 systemd-logind[1994]: New session 4 of user core. Jan 30 14:00:56.524645 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 14:00:56.652287 sshd[2285]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:56.658231 systemd[1]: sshd@3-172.31.25.132:22-139.178.89.65:34702.service: Deactivated successfully. Jan 30 14:00:56.661752 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 14:00:56.665289 systemd-logind[1994]: Session 4 logged out. Waiting for processes to exit. Jan 30 14:00:56.667435 systemd-logind[1994]: Removed session 4. Jan 30 14:00:56.694875 systemd[1]: Started sshd@4-172.31.25.132:22-139.178.89.65:34706.service - OpenSSH per-connection server daemon (139.178.89.65:34706). Jan 30 14:00:56.876081 sshd[2292]: Accepted publickey for core from 139.178.89.65 port 34706 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:00:56.879019 sshd[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:56.887875 systemd-logind[1994]: New session 5 of user core. Jan 30 14:00:56.899641 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 14:00:57.020114 sshd[2292]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:57.025959 systemd-logind[1994]: Session 5 logged out. Waiting for processes to exit. Jan 30 14:00:57.026864 systemd[1]: sshd@4-172.31.25.132:22-139.178.89.65:34706.service: Deactivated successfully. Jan 30 14:00:57.030878 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 14:00:57.036075 systemd-logind[1994]: Removed session 5. Jan 30 14:00:57.058838 systemd[1]: Started sshd@5-172.31.25.132:22-139.178.89.65:34718.service - OpenSSH per-connection server daemon (139.178.89.65:34718). Jan 30 14:00:57.145907 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 14:00:57.157979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:00:57.236641 sshd[2299]: Accepted publickey for core from 139.178.89.65 port 34718 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:00:57.239800 sshd[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:57.249400 systemd-logind[1994]: New session 6 of user core. Jan 30 14:00:57.254603 systemd[1]: Started session-6.scope - Session 6 of User core. 
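The kubelet crash above (repeated on every scheduled restart below) is the normal state of a node that has not yet been joined to a cluster: /var/lib/kubelet/config.yaml is written by kubeadm during init/join rather than shipped in the OS image, so the unit keeps failing until that happens. For orientation only, a minimal sketch of such a file; kubeadm generates a much fuller one, and cgroupDriver: systemd matches the SystemdCgroup:true runc option in the containerd config dump earlier:

    # Illustrative minimal KubeletConfiguration; real clusters let kubeadm
    # write /var/lib/kubelet/config.yaml during "kubeadm init" or "join".
    config = "\n".join([
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "cgroupDriver: systemd",
    ]) + "\n"
    print(config)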
Jan 30 14:00:57.389645 sshd[2299]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:57.397167 systemd[1]: sshd@5-172.31.25.132:22-139.178.89.65:34718.service: Deactivated successfully. Jan 30 14:00:57.397262 systemd-logind[1994]: Session 6 logged out. Waiting for processes to exit. Jan 30 14:00:57.405191 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 14:00:57.430715 systemd-logind[1994]: Removed session 6. Jan 30 14:00:57.435842 systemd[1]: Started sshd@6-172.31.25.132:22-139.178.89.65:34728.service - OpenSSH per-connection server daemon (139.178.89.65:34728). Jan 30 14:00:57.499142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:00:57.508263 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:00:57.602283 kubelet[2316]: E0130 14:00:57.602183 2316 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:00:57.609763 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:00:57.610271 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:00:57.627162 sshd[2309]: Accepted publickey for core from 139.178.89.65 port 34728 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:00:57.630559 sshd[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:57.640508 systemd-logind[1994]: New session 7 of user core. Jan 30 14:00:57.648646 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 14:00:57.765615 sudo[2325]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 14:00:57.766346 sudo[2325]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:00:57.785966 sudo[2325]: pam_unix(sudo:session): session closed for user root Jan 30 14:00:57.809675 sshd[2309]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:57.815194 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 14:00:57.816682 systemd[1]: sshd@6-172.31.25.132:22-139.178.89.65:34728.service: Deactivated successfully. Jan 30 14:00:57.821024 systemd-logind[1994]: Session 7 logged out. Waiting for processes to exit. Jan 30 14:00:57.824541 systemd-logind[1994]: Removed session 7. Jan 30 14:00:57.844578 systemd[1]: Started sshd@7-172.31.25.132:22-139.178.89.65:34744.service - OpenSSH per-connection server daemon (139.178.89.65:34744). Jan 30 14:00:58.031423 sshd[2330]: Accepted publickey for core from 139.178.89.65 port 34744 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:00:58.034291 sshd[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:58.044656 systemd-logind[1994]: New session 8 of user core. Jan 30 14:00:58.051636 systemd[1]: Started session-8.scope - Session 8 of User core. 
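The sudo journal entries above follow a fixed "user : PWD=… ; USER=… ; COMMAND=…" shape, which makes them easy to extract from a dump like this one:

    import re

    # Parser for sudo log lines of the form seen above.
    SUDO_RE = re.compile(
        r"(?P<user>\S+) : PWD=(?P<pwd>\S+) ; USER=(?P<runas>\S+) ; COMMAND=(?P<cmd>.+)"
    )

    line = "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1"
    m = SUDO_RE.match(line)
    assert m is not None
    print(m.groupdict())
    # {'user': 'core', 'pwd': '/home/core', 'runas': 'root',
    #  'cmd': '/usr/sbin/setenforce 1'}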
Jan 30 14:00:58.158241 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 14:00:58.159059 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:00:58.165359 sudo[2334]: pam_unix(sudo:session): session closed for user root Jan 30 14:00:58.175773 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 14:00:58.176445 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:00:58.196812 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 14:00:58.212316 auditctl[2337]: No rules Jan 30 14:00:58.213107 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 14:00:58.215390 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 14:00:58.226041 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:00:58.273681 augenrules[2355]: No rules Jan 30 14:00:58.276051 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:00:58.278608 sudo[2333]: pam_unix(sudo:session): session closed for user root Jan 30 14:00:58.302676 sshd[2330]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:58.307391 systemd[1]: sshd@7-172.31.25.132:22-139.178.89.65:34744.service: Deactivated successfully. Jan 30 14:00:58.310445 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 14:00:58.313611 systemd-logind[1994]: Session 8 logged out. Waiting for processes to exit. Jan 30 14:00:58.315270 systemd-logind[1994]: Removed session 8. Jan 30 14:00:58.343003 systemd[1]: Started sshd@8-172.31.25.132:22-139.178.89.65:34752.service - OpenSSH per-connection server daemon (139.178.89.65:34752). Jan 30 14:00:58.505344 sshd[2363]: Accepted publickey for core from 139.178.89.65 port 34752 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:00:58.507959 sshd[2363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:58.515487 systemd-logind[1994]: New session 9 of user core. Jan 30 14:00:58.527564 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 14:00:58.631443 sudo[2366]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 14:00:58.632147 sudo[2366]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:00:59.077733 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 14:00:59.078030 (dockerd)[2382]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 14:00:59.451344 dockerd[2382]: time="2025-01-30T14:00:59.451238978Z" level=info msg="Starting up" Jan 30 14:00:59.593766 dockerd[2382]: time="2025-01-30T14:00:59.593339185Z" level=info msg="Loading containers: start." Jan 30 14:00:59.758517 kernel: Initializing XFRM netlink socket Jan 30 14:00:59.792166 (udev-worker)[2404]: Network interface NamePolicy= disabled on kernel command line. Jan 30 14:00:59.881262 systemd-networkd[1917]: docker0: Link UP Jan 30 14:00:59.909993 dockerd[2382]: time="2025-01-30T14:00:59.909910934Z" level=info msg="Loading containers: done." 
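dockerd stamps its own messages, so the lines above bound its startup time: "Starting up" at 14:00:59.451 and "Loading containers: done." at 14:00:59.909, roughly half a second. The same arithmetic in Python (fractional seconds truncated to the microseconds that datetime parses):

    from datetime import datetime

    t0 = datetime.fromisoformat("2025-01-30T14:00:59.451238")  # "Starting up"
    t1 = datetime.fromisoformat("2025-01-30T14:00:59.909910")  # "Loading containers: done."
    print(f"{(t1 - t0).total_seconds():.3f} s")                # 0.459 s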
Jan 30 14:00:59.941063 dockerd[2382]: time="2025-01-30T14:00:59.940949230Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 14:00:59.941407 dockerd[2382]: time="2025-01-30T14:00:59.941170849Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 14:00:59.942145 dockerd[2382]: time="2025-01-30T14:00:59.941551343Z" level=info msg="Daemon has completed initialization" Jan 30 14:01:00.004087 dockerd[2382]: time="2025-01-30T14:01:00.003954533Z" level=info msg="API listen on /run/docker.sock" Jan 30 14:01:00.004734 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 14:01:01.156032 containerd[2008]: time="2025-01-30T14:01:01.155944240Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 30 14:01:01.788796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1851559888.mount: Deactivated successfully. Jan 30 14:01:03.253351 containerd[2008]: time="2025-01-30T14:01:03.252437582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:03.255787 containerd[2008]: time="2025-01-30T14:01:03.255716804Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=26220948" Jan 30 14:01:03.257939 containerd[2008]: time="2025-01-30T14:01:03.257867595Z" level=info msg="ImageCreate event name:\"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:03.262913 containerd[2008]: time="2025-01-30T14:01:03.262803718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:03.265343 containerd[2008]: time="2025-01-30T14:01:03.265258969Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"26217748\" in 2.109220025s" Jan 30 14:01:03.265476 containerd[2008]: time="2025-01-30T14:01:03.265346997Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\"" Jan 30 14:01:03.266684 containerd[2008]: time="2025-01-30T14:01:03.266392143Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 30 14:01:04.723151 containerd[2008]: time="2025-01-30T14:01:04.723074623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:04.727353 containerd[2008]: time="2025-01-30T14:01:04.726838720Z" level=info msg="ImageCreate event name:\"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:04.727353 containerd[2008]: time="2025-01-30T14:01:04.726973427Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=22527107" Jan 30 14:01:04.735602 containerd[2008]: time="2025-01-30T14:01:04.735530344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:04.738202 containerd[2008]: time="2025-01-30T14:01:04.738128466Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"23968433\" in 1.471669101s" Jan 30 14:01:04.738202 containerd[2008]: time="2025-01-30T14:01:04.738195388Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\"" Jan 30 14:01:04.739282 containerd[2008]: time="2025-01-30T14:01:04.739222477Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 30 14:01:06.071378 containerd[2008]: time="2025-01-30T14:01:06.070264687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:06.072515 containerd[2008]: time="2025-01-30T14:01:06.072353190Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=17481113" Jan 30 14:01:06.073500 containerd[2008]: time="2025-01-30T14:01:06.073432110Z" level=info msg="ImageCreate event name:\"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:06.079435 containerd[2008]: time="2025-01-30T14:01:06.079337333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:06.081922 containerd[2008]: time="2025-01-30T14:01:06.081744200Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"18922457\" in 1.34245952s" Jan 30 14:01:06.081922 containerd[2008]: time="2025-01-30T14:01:06.081797638Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\"" Jan 30 14:01:06.082945 containerd[2008]: time="2025-01-30T14:01:06.082892790Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 14:01:07.295270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount843353724.mount: Deactivated successfully. Jan 30 14:01:07.646725 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 14:01:07.658156 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:01:08.026692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
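The control-plane image pulls above report both a size and a wall time, so effective throughput falls straight out (sizes as printed by containerd):

    # Pull throughput from the "Pulled image ... in ..." messages above.
    pulls = {
        "kube-apiserver:v1.32.1":          (26_217_748, 2.109220025),
        "kube-controller-manager:v1.32.1": (23_968_433, 1.471669101),
        "kube-scheduler:v1.32.1":          (18_922_457, 1.34245952),
    }
    for image, (nbytes, secs) in pulls.items():
        print(f"{image}: {nbytes / secs / 1e6:.1f} MB/s")
    # ~12.4, ~16.3 and ~14.1 MB/s respectively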
Jan 30 14:01:08.035385 containerd[2008]: time="2025-01-30T14:01:08.035108263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:08.037041 (kubelet)[2599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:01:08.040468 containerd[2008]: time="2025-01-30T14:01:08.039323390Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=27364397" Jan 30 14:01:08.040468 containerd[2008]: time="2025-01-30T14:01:08.039447964Z" level=info msg="ImageCreate event name:\"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:08.044764 containerd[2008]: time="2025-01-30T14:01:08.044677741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:08.047601 containerd[2008]: time="2025-01-30T14:01:08.047480422Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"27363416\" in 1.964378801s" Jan 30 14:01:08.047833 containerd[2008]: time="2025-01-30T14:01:08.047798785Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\"" Jan 30 14:01:08.048895 containerd[2008]: time="2025-01-30T14:01:08.048828396Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 30 14:01:08.121589 kubelet[2599]: E0130 14:01:08.121469 2599 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:01:08.125985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:01:08.126405 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:01:08.614565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1104175991.mount: Deactivated successfully. 
Jan 30 14:01:09.756693 containerd[2008]: time="2025-01-30T14:01:09.756612117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:09.758440 containerd[2008]: time="2025-01-30T14:01:09.758364957Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 30 14:01:09.759365 containerd[2008]: time="2025-01-30T14:01:09.759250808Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:09.765321 containerd[2008]: time="2025-01-30T14:01:09.765234874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:09.769917 containerd[2008]: time="2025-01-30T14:01:09.769867929Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.720726993s" Jan 30 14:01:09.770379 containerd[2008]: time="2025-01-30T14:01:09.770065560Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 30 14:01:09.771203 containerd[2008]: time="2025-01-30T14:01:09.771158418Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 14:01:10.234945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3403225352.mount: Deactivated successfully. 
Jan 30 14:01:10.241805 containerd[2008]: time="2025-01-30T14:01:10.241725356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:10.243410 containerd[2008]: time="2025-01-30T14:01:10.243342360Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 30 14:01:10.244581 containerd[2008]: time="2025-01-30T14:01:10.244512284Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:10.249743 containerd[2008]: time="2025-01-30T14:01:10.249650708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:10.251945 containerd[2008]: time="2025-01-30T14:01:10.251248551Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 480.035085ms" Jan 30 14:01:10.251945 containerd[2008]: time="2025-01-30T14:01:10.251329471Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 30 14:01:10.252858 containerd[2008]: time="2025-01-30T14:01:10.252562608Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 30 14:01:10.849886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1006146748.mount: Deactivated successfully. Jan 30 14:01:12.996372 containerd[2008]: time="2025-01-30T14:01:12.996040900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:12.998922 containerd[2008]: time="2025-01-30T14:01:12.998849668Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812429" Jan 30 14:01:13.001494 containerd[2008]: time="2025-01-30T14:01:13.001407438Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:13.008457 containerd[2008]: time="2025-01-30T14:01:13.008355455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:13.011333 containerd[2008]: time="2025-01-30T14:01:13.011077839Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.75846399s" Jan 30 14:01:13.011333 containerd[2008]: time="2025-01-30T14:01:13.011143020Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 30 14:01:13.554023 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 30 14:01:18.145970 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 14:01:18.156831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:01:18.506790 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 14:01:18.506967 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 14:01:18.507853 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:01:18.527875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:01:18.584248 systemd[1]: Reloading requested from client PID 2754 ('systemctl') (unit session-9.scope)... Jan 30 14:01:18.584280 systemd[1]: Reloading... Jan 30 14:01:18.804333 zram_generator::config[2797]: No configuration found. Jan 30 14:01:19.048701 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:01:19.215967 systemd[1]: Reloading finished in 631 ms. Jan 30 14:01:19.315123 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 14:01:19.315394 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 14:01:19.315991 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:01:19.322854 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:01:19.623956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:01:19.640831 (kubelet)[2858]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:01:19.716185 kubelet[2858]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:01:19.716185 kubelet[2858]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 14:01:19.716185 kubelet[2858]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
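The kubelet deprecation warnings above concern flags that have moved into the config file; for example, --container-runtime-endpoint becomes a KubeletConfiguration field. A sketch of the equivalent (field name per the v1beta1 KubeletConfiguration schema; the socket path is the one containerd serves earlier in this log):

    # Sketch of the config-file equivalent of --container-runtime-endpoint.
    snippet = "\n".join([
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "containerRuntimeEndpoint: unix:///run/containerd/containerd.sock",
    ])
    print(snippet)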
Jan 30 14:01:19.716755 kubelet[2858]: I0130 14:01:19.716287 2858 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 14:01:21.310288 kubelet[2858]: I0130 14:01:21.310235 2858 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Jan 30 14:01:21.310288 kubelet[2858]: I0130 14:01:21.310477 2858 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 14:01:21.310288 kubelet[2858]: I0130 14:01:21.310921 2858 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 30 14:01:21.350385 kubelet[2858]: E0130 14:01:21.350291 2858 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.25.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.132:6443: connect: connection refused" logger="UnhandledError"
Jan 30 14:01:21.353770 kubelet[2858]: I0130 14:01:21.353501 2858 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 14:01:21.363925 kubelet[2858]: E0130 14:01:21.363875 2858 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 30 14:01:21.363925 kubelet[2858]: I0130 14:01:21.363924 2858 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 30 14:01:21.373345 kubelet[2858]: I0130 14:01:21.372546 2858 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 14:01:21.373345 kubelet[2858]: I0130 14:01:21.373016 2858 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 14:01:21.373875 kubelet[2858]: I0130 14:01:21.373061 2858 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-132","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 14:01:21.374170 kubelet[2858]: I0130 14:01:21.374126 2858 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 14:01:21.374294 kubelet[2858]: I0130 14:01:21.374275 2858 container_manager_linux.go:304] "Creating device plugin manager"
Jan 30 14:01:21.374689 kubelet[2858]: I0130 14:01:21.374663 2858 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 14:01:21.382141 kubelet[2858]: I0130 14:01:21.382100 2858 kubelet.go:446] "Attempting to sync node with API server"
Jan 30 14:01:21.382390 kubelet[2858]: I0130 14:01:21.382368 2858 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 14:01:21.382522 kubelet[2858]: I0130 14:01:21.382504 2858 kubelet.go:352] "Adding apiserver pod source"
Jan 30 14:01:21.382633 kubelet[2858]: I0130 14:01:21.382614 2858 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 14:01:21.388537 kubelet[2858]: W0130 14:01:21.388414 2858 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.25.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-132&limit=500&resourceVersion=0": dial tcp 172.31.25.132:6443: connect: connection refused
Jan 30 14:01:21.388703 kubelet[2858]: E0130 14:01:21.388576 2858 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.25.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-132&limit=500&resourceVersion=0\": dial tcp 172.31.25.132:6443: connect: connection refused" logger="UnhandledError"
Jan 30 14:01:21.389168 kubelet[2858]: W0130 14:01:21.388819 2858 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.25.132:6443: connect: connection refused
Jan 30 14:01:21.389168 kubelet[2858]: E0130 14:01:21.388947 2858 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.25.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.132:6443: connect: connection refused" logger="UnhandledError"
Jan 30 14:01:21.391372 kubelet[2858]: I0130 14:01:21.389441 2858 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 14:01:21.391372 kubelet[2858]: I0130 14:01:21.390746 2858 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 14:01:21.391372 kubelet[2858]: W0130 14:01:21.390923 2858 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 14:01:21.392927 kubelet[2858]: I0130 14:01:21.392876 2858 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 30 14:01:21.393055 kubelet[2858]: I0130 14:01:21.392962 2858 server.go:1287] "Started kubelet"
Jan 30 14:01:21.397157 kubelet[2858]: I0130 14:01:21.397094 2858 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 14:01:21.398979 kubelet[2858]: I0130 14:01:21.398943 2858 server.go:490] "Adding debug handlers to kubelet server"
Jan 30 14:01:21.400160 kubelet[2858]: I0130 14:01:21.400056 2858 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 14:01:21.400567 kubelet[2858]: I0130 14:01:21.400525 2858 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 14:01:21.400998 kubelet[2858]: E0130 14:01:21.400778 2858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.25.132:6443/api/v1/namespaces/default/events\": dial tcp 172.31.25.132:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-25-132.181f7d3cd3c7f7a6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-132,UID:ip-172-31-25-132,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-132,},FirstTimestamp:2025-01-30 14:01:21.392908198 +0000 UTC m=+1.745927384,LastTimestamp:2025-01-30 14:01:21.392908198 +0000 UTC m=+1.745927384,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-132,}"
Jan 30 14:01:21.406502 kubelet[2858]: E0130 14:01:21.406443 2858 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 14:01:21.407558 kubelet[2858]: I0130 14:01:21.407505 2858 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 14:01:21.410874 kubelet[2858]: I0130 14:01:21.410818 2858 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 30 14:01:21.412115 kubelet[2858]: I0130 14:01:21.412061 2858 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 30 14:01:21.412256 kubelet[2858]: I0130 14:01:21.412244 2858 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 14:01:21.412363 kubelet[2858]: I0130 14:01:21.412349 2858 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 14:01:21.413661 kubelet[2858]: W0130 14:01:21.412903 2858 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.132:6443: connect: connection refused
Jan 30 14:01:21.413661 kubelet[2858]: E0130 14:01:21.412992 2858 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.25.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.132:6443: connect: connection refused" logger="UnhandledError"
Jan 30 14:01:21.413661 kubelet[2858]: E0130 14:01:21.413369 2858 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-25-132\" not found"
Jan 30 14:01:21.413661 kubelet[2858]: E0130 14:01:21.413563 2858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-132?timeout=10s\": dial tcp 172.31.25.132:6443: connect: connection refused" interval="200ms"
Jan 30 14:01:21.417388 kubelet[2858]: I0130 14:01:21.416988 2858 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 14:01:21.423188 kubelet[2858]: I0130 14:01:21.423145 2858 factory.go:221] Registration of the containerd container factory successfully
Jan 30 14:01:21.424342 kubelet[2858]: I0130 14:01:21.423399 2858 factory.go:221] Registration of the systemd container factory successfully
Jan 30 14:01:21.447763 kubelet[2858]: I0130 14:01:21.447523 2858 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 14:01:21.449896 kubelet[2858]: I0130 14:01:21.449856 2858 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 14:01:21.450550 kubelet[2858]: I0130 14:01:21.450027 2858 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 30 14:01:21.450550 kubelet[2858]: I0130 14:01:21.450063 2858 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 30 14:01:21.450550 kubelet[2858]: I0130 14:01:21.450082 2858 kubelet.go:2388] "Starting kubelet main sync loop"
Jan 30 14:01:21.450550 kubelet[2858]: E0130 14:01:21.450152 2858 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 14:01:21.455407 kubelet[2858]: W0130 14:01:21.455077 2858 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.132:6443: connect: connection refused
Jan 30 14:01:21.455407 kubelet[2858]: E0130 14:01:21.455158 2858 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.132:6443: connect: connection refused" logger="UnhandledError"
Jan 30 14:01:21.473371 kubelet[2858]: I0130 14:01:21.473286 2858 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 30 14:01:21.473371 kubelet[2858]: I0130 14:01:21.473356 2858 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 30 14:01:21.473580 kubelet[2858]: I0130 14:01:21.473393 2858 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 14:01:21.480356 kubelet[2858]: I0130 14:01:21.479906 2858 policy_none.go:49] "None policy: Start"
Jan 30 14:01:21.480356 kubelet[2858]: I0130 14:01:21.479957 2858 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 30 14:01:21.480356 kubelet[2858]: I0130 14:01:21.479994 2858 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 14:01:21.490112 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 30 14:01:21.503515 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 30 14:01:21.512592 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 30 14:01:21.513917 kubelet[2858]: E0130 14:01:21.513843 2858 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-25-132\" not found"
Jan 30 14:01:21.523005 kubelet[2858]: I0130 14:01:21.521833 2858 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 14:01:21.523005 kubelet[2858]: I0130 14:01:21.522122 2858 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 30 14:01:21.523005 kubelet[2858]: I0130 14:01:21.522145 2858 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 14:01:21.523005 kubelet[2858]: I0130 14:01:21.522558 2858 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 14:01:21.524764 kubelet[2858]: E0130 14:01:21.524707 2858 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 30 14:01:21.524877 kubelet[2858]: E0130 14:01:21.524798 2858 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-25-132\" not found"
Jan 30 14:01:21.569628 systemd[1]: Created slice kubepods-burstable-podc9aa4c00e85b9e5ecf1b35b2913feaac.slice - libcontainer container kubepods-burstable-podc9aa4c00e85b9e5ecf1b35b2913feaac.slice.
Jan 30 14:01:21.588509 kubelet[2858]: E0130 14:01:21.588380 2858 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-132\" not found" node="ip-172-31-25-132"
Jan 30 14:01:21.594559 systemd[1]: Created slice kubepods-burstable-pod46e34c6e5f2190805aa2e647797adfa7.slice - libcontainer container kubepods-burstable-pod46e34c6e5f2190805aa2e647797adfa7.slice.
Jan 30 14:01:21.608917 kubelet[2858]: E0130 14:01:21.608879 2858 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-132\" not found" node="ip-172-31-25-132"
Jan 30 14:01:21.613505 kubelet[2858]: I0130 14:01:21.613468 2858 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46e34c6e5f2190805aa2e647797adfa7-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-132\" (UID: \"46e34c6e5f2190805aa2e647797adfa7\") " pod="kube-system/kube-apiserver-ip-172-31-25-132"
Jan 30 14:01:21.614133 kubelet[2858]: I0130 14:01:21.614095 2858 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c9aa4c00e85b9e5ecf1b35b2913feaac-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-132\" (UID: \"c9aa4c00e85b9e5ecf1b35b2913feaac\") " pod="kube-system/kube-controller-manager-ip-172-31-25-132"
Jan 30 14:01:21.614366 kubelet[2858]: I0130 14:01:21.614340 2858 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c9aa4c00e85b9e5ecf1b35b2913feaac-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-132\" (UID: \"c9aa4c00e85b9e5ecf1b35b2913feaac\") " pod="kube-system/kube-controller-manager-ip-172-31-25-132"
Jan 30 14:01:21.614500 kubelet[2858]: I0130 14:01:21.614474 2858 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c9aa4c00e85b9e5ecf1b35b2913feaac-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-132\" (UID: \"c9aa4c00e85b9e5ecf1b35b2913feaac\") " pod="kube-system/kube-controller-manager-ip-172-31-25-132"
Jan 30 14:01:21.614653 kubelet[2858]: I0130 14:01:21.614629 2858 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8f14e6ea138f82918d51bc0084d1a6b1-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-132\" (UID: \"8f14e6ea138f82918d51bc0084d1a6b1\") " pod="kube-system/kube-scheduler-ip-172-31-25-132"
Jan 30 14:01:21.615416 kubelet[2858]: I0130 14:01:21.615164 2858 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46e34c6e5f2190805aa2e647797adfa7-ca-certs\") pod \"kube-apiserver-ip-172-31-25-132\" (UID: \"46e34c6e5f2190805aa2e647797adfa7\") " pod="kube-system/kube-apiserver-ip-172-31-25-132"
Jan 30 14:01:21.615416 kubelet[2858]: I0130 14:01:21.615217 2858 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46e34c6e5f2190805aa2e647797adfa7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-132\" (UID: \"46e34c6e5f2190805aa2e647797adfa7\") " pod="kube-system/kube-apiserver-ip-172-31-25-132"
Jan 30 14:01:21.615416 kubelet[2858]: I0130 14:01:21.615261 2858 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c9aa4c00e85b9e5ecf1b35b2913feaac-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-132\" (UID: \"c9aa4c00e85b9e5ecf1b35b2913feaac\") " pod="kube-system/kube-controller-manager-ip-172-31-25-132"
Jan 30 14:01:21.615416 kubelet[2858]: I0130 14:01:21.615329 2858 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c9aa4c00e85b9e5ecf1b35b2913feaac-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-132\" (UID: \"c9aa4c00e85b9e5ecf1b35b2913feaac\") " pod="kube-system/kube-controller-manager-ip-172-31-25-132"
Jan 30 14:01:21.615416 kubelet[2858]: E0130 14:01:21.614993 2858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-132?timeout=10s\": dial tcp 172.31.25.132:6443: connect: connection refused" interval="400ms"
Jan 30 14:01:21.616594 systemd[1]: Created slice kubepods-burstable-pod8f14e6ea138f82918d51bc0084d1a6b1.slice - libcontainer container kubepods-burstable-pod8f14e6ea138f82918d51bc0084d1a6b1.slice.
Jan 30 14:01:21.620241 kubelet[2858]: E0130 14:01:21.620197 2858 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-132\" not found" node="ip-172-31-25-132"
Jan 30 14:01:21.625134 kubelet[2858]: I0130 14:01:21.625051 2858 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-132"
Jan 30 14:01:21.625681 kubelet[2858]: E0130 14:01:21.625633 2858 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.25.132:6443/api/v1/nodes\": dial tcp 172.31.25.132:6443: connect: connection refused" node="ip-172-31-25-132"
Jan 30 14:01:21.829171 kubelet[2858]: I0130 14:01:21.828597 2858 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-132"
Jan 30 14:01:21.829171 kubelet[2858]: E0130 14:01:21.829036 2858 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.25.132:6443/api/v1/nodes\": dial tcp 172.31.25.132:6443: connect: connection refused" node="ip-172-31-25-132"
Jan 30 14:01:21.890860 containerd[2008]: time="2025-01-30T14:01:21.890723965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-132,Uid:c9aa4c00e85b9e5ecf1b35b2913feaac,Namespace:kube-system,Attempt:0,}"
Jan 30 14:01:21.911171 containerd[2008]: time="2025-01-30T14:01:21.910823018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-132,Uid:46e34c6e5f2190805aa2e647797adfa7,Namespace:kube-system,Attempt:0,}"
Jan 30 14:01:21.928200 containerd[2008]: time="2025-01-30T14:01:21.928097587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-132,Uid:8f14e6ea138f82918d51bc0084d1a6b1,Namespace:kube-system,Attempt:0,}"
Jan 30 14:01:22.016687 kubelet[2858]: E0130 14:01:22.016628 2858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-132?timeout=10s\": dial tcp 172.31.25.132:6443: connect: connection refused" interval="800ms"
Jan 30 14:01:22.231824 kubelet[2858]: I0130 14:01:22.231768 2858 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-132"
Jan 30 14:01:22.232421 kubelet[2858]: E0130 14:01:22.232361 2858 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.25.132:6443/api/v1/nodes\": dial tcp 172.31.25.132:6443: connect: connection refused" node="ip-172-31-25-132"
Jan 30 14:01:22.374936 kubelet[2858]: W0130 14:01:22.374837 2858 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.25.132:6443: connect: connection refused
Jan 30 14:01:22.375517 kubelet[2858]: E0130 14:01:22.374938 2858 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.25.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.132:6443: connect: connection refused" logger="UnhandledError"
Jan 30 14:01:22.433640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1903873087.mount: Deactivated successfully.
Jan 30 14:01:22.450870 containerd[2008]: time="2025-01-30T14:01:22.450707341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:01:22.452887 containerd[2008]: time="2025-01-30T14:01:22.452815078Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:01:22.455052 containerd[2008]: time="2025-01-30T14:01:22.454994071Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Jan 30 14:01:22.456805 containerd[2008]: time="2025-01-30T14:01:22.456743850Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 14:01:22.458917 containerd[2008]: time="2025-01-30T14:01:22.458866606Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:01:22.461922 containerd[2008]: time="2025-01-30T14:01:22.461654484Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:01:22.463588 containerd[2008]: time="2025-01-30T14:01:22.463210449Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 14:01:22.467759 containerd[2008]: time="2025-01-30T14:01:22.467672023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:01:22.472140 containerd[2008]: time="2025-01-30T14:01:22.471853461Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.924874ms"
Jan 30 14:01:22.476217 containerd[2008]: time="2025-01-30T14:01:22.476135449Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 585.303994ms"
Jan 30 14:01:22.491662 containerd[2008]: time="2025-01-30T14:01:22.491392998Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 563.148422ms"
Jan 30 14:01:22.541216 kubelet[2858]: W0130 14:01:22.540919 2858 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.132:6443: connect: connection refused
Jan 30 14:01:22.541216 kubelet[2858]: E0130 14:01:22.541033 2858 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.132:6443: connect: connection refused" logger="UnhandledError"
Jan 30 14:01:22.681578 containerd[2008]: time="2025-01-30T14:01:22.681434448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:01:22.681878 containerd[2008]: time="2025-01-30T14:01:22.681806214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:01:22.683085 containerd[2008]: time="2025-01-30T14:01:22.682482368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:01:22.685739 containerd[2008]: time="2025-01-30T14:01:22.685453025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:01:22.687069 containerd[2008]: time="2025-01-30T14:01:22.686625759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:01:22.687069 containerd[2008]: time="2025-01-30T14:01:22.686716812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:01:22.687069 containerd[2008]: time="2025-01-30T14:01:22.686769507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:01:22.687069 containerd[2008]: time="2025-01-30T14:01:22.686966573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:01:22.689665 containerd[2008]: time="2025-01-30T14:01:22.689109920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:01:22.689665 containerd[2008]: time="2025-01-30T14:01:22.689214721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:01:22.689665 containerd[2008]: time="2025-01-30T14:01:22.689263309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:01:22.689665 containerd[2008]: time="2025-01-30T14:01:22.689456281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:01:22.741643 systemd[1]: Started cri-containerd-176a22fddd146a1ad8eec50e29984ec6ff9f96fad48bf231a86b13aff7afee76.scope - libcontainer container 176a22fddd146a1ad8eec50e29984ec6ff9f96fad48bf231a86b13aff7afee76.
Jan 30 14:01:22.746471 systemd[1]: Started cri-containerd-e2ddcf9312772ecc03eb5536f2fdb13eb2657580d2d2dea49279b0c7a749c628.scope - libcontainer container e2ddcf9312772ecc03eb5536f2fdb13eb2657580d2d2dea49279b0c7a749c628.
Jan 30 14:01:22.757200 systemd[1]: Started cri-containerd-eb7fdf6b069a859235b376305445e629706ac71d5f0d07ae28ab8bba3870d1f1.scope - libcontainer container eb7fdf6b069a859235b376305445e629706ac71d5f0d07ae28ab8bba3870d1f1.
Jan 30 14:01:22.819114 kubelet[2858]: E0130 14:01:22.817612 2858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-132?timeout=10s\": dial tcp 172.31.25.132:6443: connect: connection refused" interval="1.6s"
Jan 30 14:01:22.858001 containerd[2008]: time="2025-01-30T14:01:22.857618838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-132,Uid:46e34c6e5f2190805aa2e647797adfa7,Namespace:kube-system,Attempt:0,} returns sandbox id \"176a22fddd146a1ad8eec50e29984ec6ff9f96fad48bf231a86b13aff7afee76\""
Jan 30 14:01:22.866966 containerd[2008]: time="2025-01-30T14:01:22.866881479Z" level=info msg="CreateContainer within sandbox \"176a22fddd146a1ad8eec50e29984ec6ff9f96fad48bf231a86b13aff7afee76\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 30 14:01:22.878376 containerd[2008]: time="2025-01-30T14:01:22.877972009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-132,Uid:c9aa4c00e85b9e5ecf1b35b2913feaac,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb7fdf6b069a859235b376305445e629706ac71d5f0d07ae28ab8bba3870d1f1\""
Jan 30 14:01:22.885252 containerd[2008]: time="2025-01-30T14:01:22.885168262Z" level=info msg="CreateContainer within sandbox \"eb7fdf6b069a859235b376305445e629706ac71d5f0d07ae28ab8bba3870d1f1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 30 14:01:22.888341 containerd[2008]: time="2025-01-30T14:01:22.887844315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-132,Uid:8f14e6ea138f82918d51bc0084d1a6b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2ddcf9312772ecc03eb5536f2fdb13eb2657580d2d2dea49279b0c7a749c628\""
Jan 30 14:01:22.893382 containerd[2008]: time="2025-01-30T14:01:22.893180094Z" level=info msg="CreateContainer within sandbox \"e2ddcf9312772ecc03eb5536f2fdb13eb2657580d2d2dea49279b0c7a749c628\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 30 14:01:22.911842 containerd[2008]: time="2025-01-30T14:01:22.911579169Z" level=info msg="CreateContainer within sandbox \"176a22fddd146a1ad8eec50e29984ec6ff9f96fad48bf231a86b13aff7afee76\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"324a8928bab407026329989113563b1dd1f75b1633703ba666b0cd15957e55ef\""
Jan 30 14:01:22.912688 containerd[2008]: time="2025-01-30T14:01:22.912565546Z" level=info msg="StartContainer for \"324a8928bab407026329989113563b1dd1f75b1633703ba666b0cd15957e55ef\""
Jan 30 14:01:22.929510 kubelet[2858]: W0130 14:01:22.929342 2858 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.25.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-132&limit=500&resourceVersion=0": dial tcp 172.31.25.132:6443: connect: connection refused
Jan 30 14:01:22.929735 kubelet[2858]: E0130 14:01:22.929468 2858 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.25.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-132&limit=500&resourceVersion=0\": dial tcp 172.31.25.132:6443: connect: connection refused" logger="UnhandledError"
Jan 30 14:01:22.930215 containerd[2008]: time="2025-01-30T14:01:22.930149581Z" level=info msg="CreateContainer within sandbox \"eb7fdf6b069a859235b376305445e629706ac71d5f0d07ae28ab8bba3870d1f1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"44bf7088249c3cbee782523eb009f8e12b10660e57a64f40d38deba8a5b67345\""
Jan 30 14:01:22.932347 containerd[2008]: time="2025-01-30T14:01:22.931753427Z" level=info msg="StartContainer for \"44bf7088249c3cbee782523eb009f8e12b10660e57a64f40d38deba8a5b67345\""
Jan 30 14:01:22.939207 containerd[2008]: time="2025-01-30T14:01:22.939137381Z" level=info msg="CreateContainer within sandbox \"e2ddcf9312772ecc03eb5536f2fdb13eb2657580d2d2dea49279b0c7a749c628\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d65b716335e2634906fe9391694bbc7196b945bc260bb49151e41b82b4023019\""
Jan 30 14:01:22.941937 containerd[2008]: time="2025-01-30T14:01:22.941866117Z" level=info msg="StartContainer for \"d65b716335e2634906fe9391694bbc7196b945bc260bb49151e41b82b4023019\""
Jan 30 14:01:22.982869 systemd[1]: Started cri-containerd-324a8928bab407026329989113563b1dd1f75b1633703ba666b0cd15957e55ef.scope - libcontainer container 324a8928bab407026329989113563b1dd1f75b1633703ba666b0cd15957e55ef.
Jan 30 14:01:23.003620 kubelet[2858]: W0130 14:01:23.003200 2858 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.132:6443: connect: connection refused
Jan 30 14:01:23.005217 kubelet[2858]: E0130 14:01:23.004394 2858 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.25.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.132:6443: connect: connection refused" logger="UnhandledError"
Jan 30 14:01:23.006636 systemd[1]: Started cri-containerd-44bf7088249c3cbee782523eb009f8e12b10660e57a64f40d38deba8a5b67345.scope - libcontainer container 44bf7088249c3cbee782523eb009f8e12b10660e57a64f40d38deba8a5b67345.
Jan 30 14:01:23.036123 kubelet[2858]: I0130 14:01:23.035952 2858 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-132"
Jan 30 14:01:23.036977 kubelet[2858]: E0130 14:01:23.036453 2858 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.25.132:6443/api/v1/nodes\": dial tcp 172.31.25.132:6443: connect: connection refused" node="ip-172-31-25-132"
Jan 30 14:01:23.056589 systemd[1]: Started cri-containerd-d65b716335e2634906fe9391694bbc7196b945bc260bb49151e41b82b4023019.scope - libcontainer container d65b716335e2634906fe9391694bbc7196b945bc260bb49151e41b82b4023019.
Jan 30 14:01:23.112121 containerd[2008]: time="2025-01-30T14:01:23.112049727Z" level=info msg="StartContainer for \"324a8928bab407026329989113563b1dd1f75b1633703ba666b0cd15957e55ef\" returns successfully"
Jan 30 14:01:23.147197 containerd[2008]: time="2025-01-30T14:01:23.147128546Z" level=info msg="StartContainer for \"44bf7088249c3cbee782523eb009f8e12b10660e57a64f40d38deba8a5b67345\" returns successfully"
Jan 30 14:01:23.189465 containerd[2008]: time="2025-01-30T14:01:23.189384744Z" level=info msg="StartContainer for \"d65b716335e2634906fe9391694bbc7196b945bc260bb49151e41b82b4023019\" returns successfully"
Jan 30 14:01:23.479170 kubelet[2858]: E0130 14:01:23.479118 2858 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-132\" not found" node="ip-172-31-25-132"
Jan 30 14:01:23.497060 kubelet[2858]: E0130 14:01:23.497000 2858 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-132\" not found" node="ip-172-31-25-132"
Jan 30 14:01:23.519451 kubelet[2858]: E0130 14:01:23.519401 2858 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-132\" not found" node="ip-172-31-25-132"
Jan 30 14:01:24.517877 kubelet[2858]: E0130 14:01:24.517826 2858 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-132\" not found" node="ip-172-31-25-132"
Jan 30 14:01:24.518702 kubelet[2858]: E0130 14:01:24.518665 2858 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-132\" not found" node="ip-172-31-25-132"
Jan 30 14:01:24.639189 kubelet[2858]: I0130 14:01:24.638757 2858 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-132"
Jan 30 14:01:27.616423 update_engine[1995]: I20250130 14:01:27.616334 1995 update_attempter.cc:509] Updating boot flags...
Jan 30 14:01:27.771447 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3148)
Jan 30 14:01:28.221371 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3151)
Jan 30 14:01:28.395926 kubelet[2858]: I0130 14:01:28.393404 2858 apiserver.go:52] "Watching apiserver"
Jan 30 14:01:28.399662 kubelet[2858]: I0130 14:01:28.397375 2858 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-25-132"
Jan 30 14:01:28.399662 kubelet[2858]: E0130 14:01:28.397431 2858 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ip-172-31-25-132\": node \"ip-172-31-25-132\" not found"
Jan 30 14:01:28.476431 kubelet[2858]: E0130 14:01:28.473837 2858 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-25-132.181f7d3cd3c7f7a6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-132,UID:ip-172-31-25-132,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-132,},FirstTimestamp:2025-01-30 14:01:21.392908198 +0000 UTC m=+1.745927384,LastTimestamp:2025-01-30 14:01:21.392908198 +0000 UTC m=+1.745927384,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-132,}"
Jan 30 14:01:28.512716 kubelet[2858]: I0130 14:01:28.512669 2858 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 14:01:28.514593 kubelet[2858]: I0130 14:01:28.514516 2858 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-132"
Jan 30 14:01:28.567125 kubelet[2858]: E0130 14:01:28.563615 2858 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s"
Jan 30 14:01:28.574205 kubelet[2858]: E0130 14:01:28.574046 2858 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-25-132.181f7d3cd4941097 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-132,UID:ip-172-31-25-132,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-25-132,},FirstTimestamp:2025-01-30 14:01:21.406283927 +0000 UTC m=+1.759303029,LastTimestamp:2025-01-30 14:01:21.406283927 +0000 UTC m=+1.759303029,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-132,}"
Jan 30 14:01:28.609177 kubelet[2858]: E0130 14:01:28.608970 2858 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-132\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-25-132"
Jan 30 14:01:28.615767 kubelet[2858]: I0130 14:01:28.615363 2858 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-132"
Jan 30 14:01:28.628049 kubelet[2858]: E0130 14:01:28.627695 2858 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-132\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-25-132"
Jan 30 14:01:28.628049 kubelet[2858]: I0130 14:01:28.627740 2858 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-132"
Jan 30 14:01:28.674347 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3151)
Jan 30 14:01:29.207344 kubelet[2858]: I0130 14:01:29.205040 2858 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-132"
Jan 30 14:01:30.430384 kubelet[2858]: I0130 14:01:30.430025 2858 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-132"
Jan 30 14:01:30.609366 systemd[1]: Reloading requested from client PID 3402 ('systemctl') (unit session-9.scope)...
Jan 30 14:01:30.609398 systemd[1]: Reloading...
Jan 30 14:01:30.775455 zram_generator::config[3445]: No configuration found.
Jan 30 14:01:30.998793 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 14:01:31.204209 systemd[1]: Reloading finished in 594 ms.
Jan 30 14:01:31.281079 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:01:31.300357 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 14:01:31.301429 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:01:31.301520 systemd[1]: kubelet.service: Consumed 2.506s CPU time, 126.7M memory peak, 0B memory swap peak.
Jan 30 14:01:31.313893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:01:31.628588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:01:31.629952 (kubelet)[3502]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 14:01:31.754067 kubelet[3502]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 14:01:31.754067 kubelet[3502]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 30 14:01:31.754067 kubelet[3502]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 14:01:31.754722 kubelet[3502]: I0130 14:01:31.753733 3502 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:01:31.773011 kubelet[3502]: I0130 14:01:31.772960 3502 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 14:01:31.773740 kubelet[3502]: I0130 14:01:31.773196 3502 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:01:31.773740 kubelet[3502]: I0130 14:01:31.773739 3502 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 14:01:31.776748 kubelet[3502]: I0130 14:01:31.776215 3502 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 14:01:31.780646 kubelet[3502]: I0130 14:01:31.780583 3502 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:01:31.789238 kubelet[3502]: E0130 14:01:31.789141 3502 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 14:01:31.789238 kubelet[3502]: I0130 14:01:31.789196 3502 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 14:01:31.793533 kubelet[3502]: I0130 14:01:31.793441 3502 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 14:01:31.794330 kubelet[3502]: I0130 14:01:31.794254 3502 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:01:31.794619 kubelet[3502]: I0130 14:01:31.794330 3502 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-132","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 14:01:31.794765 kubelet[3502]: I0130 14:01:31.794640 3502 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 30 14:01:31.794765 kubelet[3502]: I0130 14:01:31.794661 3502 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 14:01:31.794765 kubelet[3502]: I0130 14:01:31.794747 3502 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:01:31.796835 kubelet[3502]: I0130 14:01:31.796549 3502 kubelet.go:446] "Attempting to sync node with API server" Jan 30 14:01:31.796835 kubelet[3502]: I0130 14:01:31.796590 3502 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:01:31.796835 kubelet[3502]: I0130 14:01:31.796624 3502 kubelet.go:352] "Adding apiserver pod source" Jan 30 14:01:31.796835 kubelet[3502]: I0130 14:01:31.796644 3502 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:01:31.798073 kubelet[3502]: I0130 14:01:31.798040 3502 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:01:31.799034 kubelet[3502]: I0130 14:01:31.799006 3502 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:01:31.803367 kubelet[3502]: I0130 14:01:31.802959 3502 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 14:01:31.803367 kubelet[3502]: I0130 14:01:31.803022 3502 server.go:1287] "Started kubelet" Jan 30 14:01:31.825583 kubelet[3502]: I0130 14:01:31.823016 3502 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:01:31.828815 kubelet[3502]: I0130 14:01:31.828725 3502 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:01:31.829251 kubelet[3502]: I0130 14:01:31.829207 3502 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:01:31.830551 kubelet[3502]: I0130 14:01:31.830521 3502 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:01:31.839089 kubelet[3502]: I0130 14:01:31.839020 3502 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 14:01:31.852738 kubelet[3502]: I0130 14:01:31.830561 3502 server.go:490] "Adding debug handlers to kubelet server" Jan 30 14:01:31.860346 kubelet[3502]: I0130 14:01:31.859011 3502 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 14:01:31.860346 kubelet[3502]: E0130 14:01:31.859439 3502 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-25-132\" not found" Jan 30 14:01:31.860346 kubelet[3502]: I0130 14:01:31.860281 3502 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:01:31.860580 kubelet[3502]: I0130 14:01:31.860525 3502 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:01:31.884586 kubelet[3502]: I0130 14:01:31.883876 3502 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:01:31.909095 kubelet[3502]: I0130 14:01:31.908822 3502 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:01:31.914290 kubelet[3502]: I0130 14:01:31.914257 3502 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:01:31.944473 kubelet[3502]: I0130 14:01:31.944037 3502 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 30 14:01:31.950886 kubelet[3502]: I0130 14:01:31.950822 3502 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 14:01:31.950886 kubelet[3502]: I0130 14:01:31.950876 3502 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 14:01:31.951097 kubelet[3502]: I0130 14:01:31.950915 3502 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 30 14:01:31.951097 kubelet[3502]: I0130 14:01:31.950929 3502 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 14:01:31.951097 kubelet[3502]: E0130 14:01:31.951000 3502 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:01:32.049896 kubelet[3502]: I0130 14:01:32.049861 3502 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 14:01:32.050985 kubelet[3502]: I0130 14:01:32.050363 3502 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 14:01:32.050985 kubelet[3502]: I0130 14:01:32.050407 3502 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:01:32.050985 kubelet[3502]: I0130 14:01:32.050648 3502 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 14:01:32.050985 kubelet[3502]: I0130 14:01:32.050668 3502 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 14:01:32.050985 kubelet[3502]: I0130 14:01:32.050701 3502 policy_none.go:49] "None policy: Start" Jan 30 14:01:32.050985 kubelet[3502]: I0130 14:01:32.050721 3502 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 14:01:32.050985 kubelet[3502]: I0130 14:01:32.050740 3502 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:01:32.050985 kubelet[3502]: I0130 14:01:32.050916 3502 state_mem.go:75] "Updated machine memory state" Jan 30 14:01:32.052106 kubelet[3502]: E0130 14:01:32.052062 3502 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 14:01:32.060354 kubelet[3502]: I0130 14:01:32.060237 3502 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:01:32.060929 kubelet[3502]: I0130 14:01:32.060795 3502 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 14:01:32.060929 kubelet[3502]: I0130 14:01:32.060842 3502 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:01:32.061599 kubelet[3502]: I0130 14:01:32.061543 3502 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:01:32.071059 kubelet[3502]: E0130 14:01:32.066853 3502 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 30 14:01:32.193987 kubelet[3502]: I0130 14:01:32.193776 3502 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-132" Jan 30 14:01:32.219389 kubelet[3502]: I0130 14:01:32.218400 3502 kubelet_node_status.go:125] "Node was previously registered" node="ip-172-31-25-132" Jan 30 14:01:32.219389 kubelet[3502]: I0130 14:01:32.218517 3502 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-25-132" Jan 30 14:01:32.254767 kubelet[3502]: I0130 14:01:32.254731 3502 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-132" Jan 30 14:01:32.256498 kubelet[3502]: I0130 14:01:32.255239 3502 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-132" Jan 30 14:01:32.256875 kubelet[3502]: I0130 14:01:32.255377 3502 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-132" Jan 30 14:01:32.263161 kubelet[3502]: I0130 14:01:32.262232 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46e34c6e5f2190805aa2e647797adfa7-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-132\" (UID: \"46e34c6e5f2190805aa2e647797adfa7\") " pod="kube-system/kube-apiserver-ip-172-31-25-132" Jan 30 14:01:32.263161 kubelet[3502]: I0130 14:01:32.262325 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46e34c6e5f2190805aa2e647797adfa7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-132\" (UID: \"46e34c6e5f2190805aa2e647797adfa7\") " pod="kube-system/kube-apiserver-ip-172-31-25-132" Jan 30 14:01:32.263161 kubelet[3502]: I0130 14:01:32.262371 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c9aa4c00e85b9e5ecf1b35b2913feaac-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-132\" (UID: \"c9aa4c00e85b9e5ecf1b35b2913feaac\") " pod="kube-system/kube-controller-manager-ip-172-31-25-132" Jan 30 14:01:32.263161 kubelet[3502]: I0130 14:01:32.262414 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8f14e6ea138f82918d51bc0084d1a6b1-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-132\" (UID: \"8f14e6ea138f82918d51bc0084d1a6b1\") " pod="kube-system/kube-scheduler-ip-172-31-25-132" Jan 30 14:01:32.263161 kubelet[3502]: I0130 14:01:32.262454 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46e34c6e5f2190805aa2e647797adfa7-ca-certs\") pod \"kube-apiserver-ip-172-31-25-132\" (UID: \"46e34c6e5f2190805aa2e647797adfa7\") " pod="kube-system/kube-apiserver-ip-172-31-25-132" Jan 30 14:01:32.263565 kubelet[3502]: I0130 14:01:32.262506 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c9aa4c00e85b9e5ecf1b35b2913feaac-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-132\" (UID: \"c9aa4c00e85b9e5ecf1b35b2913feaac\") " pod="kube-system/kube-controller-manager-ip-172-31-25-132" Jan 30 14:01:32.263565 kubelet[3502]: I0130 14:01:32.262548 3502 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c9aa4c00e85b9e5ecf1b35b2913feaac-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-132\" (UID: \"c9aa4c00e85b9e5ecf1b35b2913feaac\") " pod="kube-system/kube-controller-manager-ip-172-31-25-132" Jan 30 14:01:32.263565 kubelet[3502]: I0130 14:01:32.262585 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c9aa4c00e85b9e5ecf1b35b2913feaac-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-132\" (UID: \"c9aa4c00e85b9e5ecf1b35b2913feaac\") " pod="kube-system/kube-controller-manager-ip-172-31-25-132" Jan 30 14:01:32.263565 kubelet[3502]: I0130 14:01:32.262625 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c9aa4c00e85b9e5ecf1b35b2913feaac-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-132\" (UID: \"c9aa4c00e85b9e5ecf1b35b2913feaac\") " pod="kube-system/kube-controller-manager-ip-172-31-25-132" Jan 30 14:01:32.283034 kubelet[3502]: E0130 14:01:32.282912 3502 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-25-132\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-25-132" Jan 30 14:01:32.286147 kubelet[3502]: E0130 14:01:32.286090 3502 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-132\" already exists" pod="kube-system/kube-apiserver-ip-172-31-25-132" Jan 30 14:01:32.288363 kubelet[3502]: E0130 14:01:32.288282 3502 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-132\" already exists" pod="kube-system/kube-scheduler-ip-172-31-25-132" Jan 30 14:01:32.811376 kubelet[3502]: I0130 14:01:32.811294 3502 apiserver.go:52] "Watching apiserver" Jan 30 14:01:32.860458 kubelet[3502]: I0130 14:01:32.860409 3502 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 14:01:32.994121 kubelet[3502]: I0130 14:01:32.993137 3502 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-132" Jan 30 14:01:32.994723 kubelet[3502]: I0130 14:01:32.994692 3502 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-132" Jan 30 14:01:33.064129 kubelet[3502]: E0130 14:01:33.063513 3502 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-132\" already exists" pod="kube-system/kube-apiserver-ip-172-31-25-132" Jan 30 14:01:33.079923 kubelet[3502]: E0130 14:01:33.079861 3502 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-132\" already exists" pod="kube-system/kube-scheduler-ip-172-31-25-132" Jan 30 14:01:33.256583 kubelet[3502]: I0130 14:01:33.256456 3502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-25-132" podStartSLOduration=3.25643566 podStartE2EDuration="3.25643566s" podCreationTimestamp="2025-01-30 14:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:01:33.195433222 +0000 UTC m=+1.555662815" watchObservedRunningTime="2025-01-30 14:01:33.25643566 +0000 UTC m=+1.616665217" Jan 30 14:01:33.312336 kubelet[3502]: I0130 14:01:33.311945 
3502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-25-132" podStartSLOduration=5.311921389 podStartE2EDuration="5.311921389s" podCreationTimestamp="2025-01-30 14:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:01:33.257841298 +0000 UTC m=+1.618070856" watchObservedRunningTime="2025-01-30 14:01:33.311921389 +0000 UTC m=+1.672150958" Jan 30 14:01:33.377823 kubelet[3502]: I0130 14:01:33.377366 3502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-25-132" podStartSLOduration=4.377344017 podStartE2EDuration="4.377344017s" podCreationTimestamp="2025-01-30 14:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:01:33.313522786 +0000 UTC m=+1.673752427" watchObservedRunningTime="2025-01-30 14:01:33.377344017 +0000 UTC m=+1.737573598" Jan 30 14:01:36.895566 kubelet[3502]: I0130 14:01:36.895497 3502 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 14:01:36.897668 containerd[2008]: time="2025-01-30T14:01:36.896552750Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 14:01:36.898247 kubelet[3502]: I0130 14:01:36.896870 3502 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 14:01:37.584692 systemd[1]: Created slice kubepods-besteffort-pod41a4229d_2449_4f45_83a1_2297fa1670d7.slice - libcontainer container kubepods-besteffort-pod41a4229d_2449_4f45_83a1_2297fa1670d7.slice. Jan 30 14:01:37.597345 kubelet[3502]: I0130 14:01:37.596415 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41a4229d-2449-4f45-83a1-2297fa1670d7-xtables-lock\") pod \"kube-proxy-k2hb7\" (UID: \"41a4229d-2449-4f45-83a1-2297fa1670d7\") " pod="kube-system/kube-proxy-k2hb7" Jan 30 14:01:37.597877 kubelet[3502]: I0130 14:01:37.597639 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2jvx\" (UniqueName: \"kubernetes.io/projected/41a4229d-2449-4f45-83a1-2297fa1670d7-kube-api-access-x2jvx\") pod \"kube-proxy-k2hb7\" (UID: \"41a4229d-2449-4f45-83a1-2297fa1670d7\") " pod="kube-system/kube-proxy-k2hb7" Jan 30 14:01:37.597877 kubelet[3502]: I0130 14:01:37.597796 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/41a4229d-2449-4f45-83a1-2297fa1670d7-kube-proxy\") pod \"kube-proxy-k2hb7\" (UID: \"41a4229d-2449-4f45-83a1-2297fa1670d7\") " pod="kube-system/kube-proxy-k2hb7" Jan 30 14:01:37.598114 kubelet[3502]: I0130 14:01:37.597845 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41a4229d-2449-4f45-83a1-2297fa1670d7-lib-modules\") pod \"kube-proxy-k2hb7\" (UID: \"41a4229d-2449-4f45-83a1-2297fa1670d7\") " pod="kube-system/kube-proxy-k2hb7" Jan 30 14:01:37.891389 systemd[1]: Created slice kubepods-besteffort-pod287f83cc_a243_4223_b89c_7f445ba2ab50.slice - libcontainer container kubepods-besteffort-pod287f83cc_a243_4223_b89c_7f445ba2ab50.slice. 
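The "Created slice kubepods-besteffort-pod….slice" units directly above encode the kubelet's systemd cgroup layout: the kubepods root, the pod's QoS class (besteffort here), and the pod UID with dashes swapped for underscores so the result is a valid systemd unit name. A small sketch of the derivation (sliceName is a hypothetical helper, not a kubelet function), checked against the kube-proxy pod's UID from the log:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName derives the systemd slice the kubelet creates for a pod,
    // matching the "Created slice kubepods-besteffort-pod....slice" entries.
    func sliceName(qos, uid string) string {
        return "kubepods-" + qos + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
    }

    func main() {
        fmt.Println(sliceName("besteffort", "41a4229d-2449-4f45-83a1-2297fa1670d7"))
        // kubepods-besteffort-pod41a4229d_2449_4f45_83a1_2297fa1670d7.slice
    }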
Jan 30 14:01:37.898399 containerd[2008]: time="2025-01-30T14:01:37.898008935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k2hb7,Uid:41a4229d-2449-4f45-83a1-2297fa1670d7,Namespace:kube-system,Attempt:0,}" Jan 30 14:01:37.900405 kubelet[3502]: I0130 14:01:37.900348 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2s58\" (UniqueName: \"kubernetes.io/projected/287f83cc-a243-4223-b89c-7f445ba2ab50-kube-api-access-f2s58\") pod \"tigera-operator-7d68577dc5-dwc47\" (UID: \"287f83cc-a243-4223-b89c-7f445ba2ab50\") " pod="tigera-operator/tigera-operator-7d68577dc5-dwc47" Jan 30 14:01:37.901382 kubelet[3502]: I0130 14:01:37.900414 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/287f83cc-a243-4223-b89c-7f445ba2ab50-var-lib-calico\") pod \"tigera-operator-7d68577dc5-dwc47\" (UID: \"287f83cc-a243-4223-b89c-7f445ba2ab50\") " pod="tigera-operator/tigera-operator-7d68577dc5-dwc47" Jan 30 14:01:37.956791 containerd[2008]: time="2025-01-30T14:01:37.956161029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:01:37.956791 containerd[2008]: time="2025-01-30T14:01:37.956262336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:01:37.956791 containerd[2008]: time="2025-01-30T14:01:37.956340411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:37.956791 containerd[2008]: time="2025-01-30T14:01:37.956498013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:38.006680 systemd[1]: Started cri-containerd-ff3d42e8d2720dddd9f126509bc4703e3799d5f379ab63c849f8ea9a1975ff6c.scope - libcontainer container ff3d42e8d2720dddd9f126509bc4703e3799d5f379ab63c849f8ea9a1975ff6c. Jan 30 14:01:38.073464 containerd[2008]: time="2025-01-30T14:01:38.073381383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k2hb7,Uid:41a4229d-2449-4f45-83a1-2297fa1670d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff3d42e8d2720dddd9f126509bc4703e3799d5f379ab63c849f8ea9a1975ff6c\"" Jan 30 14:01:38.078912 containerd[2008]: time="2025-01-30T14:01:38.078846118Z" level=info msg="CreateContainer within sandbox \"ff3d42e8d2720dddd9f126509bc4703e3799d5f379ab63c849f8ea9a1975ff6c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 14:01:38.113880 containerd[2008]: time="2025-01-30T14:01:38.113747656Z" level=info msg="CreateContainer within sandbox \"ff3d42e8d2720dddd9f126509bc4703e3799d5f379ab63c849f8ea9a1975ff6c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"04c870716357d8caec4957703ef36219049c66c6279009eef5656ae5fc3b08c8\"" Jan 30 14:01:38.116346 containerd[2008]: time="2025-01-30T14:01:38.114799598Z" level=info msg="StartContainer for \"04c870716357d8caec4957703ef36219049c66c6279009eef5656ae5fc3b08c8\"" Jan 30 14:01:38.163639 systemd[1]: Started cri-containerd-04c870716357d8caec4957703ef36219049c66c6279009eef5656ae5fc3b08c8.scope - libcontainer container 04c870716357d8caec4957703ef36219049c66c6279009eef5656ae5fc3b08c8. 
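The RunPodSandbox → CreateContainer → StartContainer sequence for kube-proxy-k2hb7 above is the ordinary CRI round-trip between the kubelet and containerd. As a hedged sketch, the first of those calls can be issued directly with the published CRI client; the socket path is the containerd default (an assumption, the log does not name it) and error handling is trimmed:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Default containerd socket; assumed rather than taken from this log.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        // Same sandbox metadata the kubelet sends for kube-proxy-k2hb7 above.
        resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "kube-proxy-k2hb7",
                    Uid:       "41a4229d-2449-4f45-83a1-2297fa1670d7",
                    Namespace: "kube-system",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("sandbox:", resp.PodSandboxId) // the ff3d42e8... id in the log
    }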
Jan 30 14:01:38.202152 containerd[2008]: time="2025-01-30T14:01:38.202088813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-dwc47,Uid:287f83cc-a243-4223-b89c-7f445ba2ab50,Namespace:tigera-operator,Attempt:0,}" Jan 30 14:01:38.219971 containerd[2008]: time="2025-01-30T14:01:38.219710368Z" level=info msg="StartContainer for \"04c870716357d8caec4957703ef36219049c66c6279009eef5656ae5fc3b08c8\" returns successfully" Jan 30 14:01:38.255239 containerd[2008]: time="2025-01-30T14:01:38.254533411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:01:38.255239 containerd[2008]: time="2025-01-30T14:01:38.254665177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:01:38.255239 containerd[2008]: time="2025-01-30T14:01:38.254708302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:38.255239 containerd[2008]: time="2025-01-30T14:01:38.254919104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:38.297944 systemd[1]: Started cri-containerd-da750b69db982ee71071ed5d2bb7852e5316e40f586c3d4f6472cdddc1ef8019.scope - libcontainer container da750b69db982ee71071ed5d2bb7852e5316e40f586c3d4f6472cdddc1ef8019. Jan 30 14:01:38.371161 containerd[2008]: time="2025-01-30T14:01:38.370924678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-dwc47,Uid:287f83cc-a243-4223-b89c-7f445ba2ab50,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"da750b69db982ee71071ed5d2bb7852e5316e40f586c3d4f6472cdddc1ef8019\"" Jan 30 14:01:38.376858 containerd[2008]: time="2025-01-30T14:01:38.376793343Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 14:01:39.064651 kubelet[3502]: I0130 14:01:39.064546 3502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k2hb7" podStartSLOduration=2.064520295 podStartE2EDuration="2.064520295s" podCreationTimestamp="2025-01-30 14:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:01:39.063430199 +0000 UTC m=+7.423659792" watchObservedRunningTime="2025-01-30 14:01:39.064520295 +0000 UTC m=+7.424749876" Jan 30 14:01:39.154152 sudo[2366]: pam_unix(sudo:session): session closed for user root Jan 30 14:01:39.176533 sshd[2363]: pam_unix(sshd:session): session closed for user core Jan 30 14:01:39.182101 systemd[1]: sshd@8-172.31.25.132:22-139.178.89.65:34752.service: Deactivated successfully. Jan 30 14:01:39.185232 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 14:01:39.185718 systemd[1]: session-9.scope: Consumed 8.837s CPU time, 151.6M memory peak, 0B memory swap peak. Jan 30 14:01:39.189751 systemd-logind[1994]: Session 9 logged out. Waiting for processes to exit. Jan 30 14:01:39.192494 systemd-logind[1994]: Removed session 9. Jan 30 14:01:40.231018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2046404980.mount: Deactivated successfully. 
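Image pulls, such as the PullImage for quay.io/tigera/operator:v1.36.2 logged above, travel over the companion CRI ImageService rather than the RuntimeService. A matching sketch against the same (assumed) containerd socket:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        img := runtimeapi.NewImageServiceClient(conn)
        // Same image reference the kubelet asks containerd for above.
        resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
            Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.36.2"},
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("image ref:", resp.ImageRef) // the sha256: digest logged on completion
    }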
Jan 30 14:01:40.863360 containerd[2008]: time="2025-01-30T14:01:40.863262770Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:40.865408 containerd[2008]: time="2025-01-30T14:01:40.865334453Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Jan 30 14:01:40.867879 containerd[2008]: time="2025-01-30T14:01:40.867788167Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:40.873002 containerd[2008]: time="2025-01-30T14:01:40.872904524Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:40.874619 containerd[2008]: time="2025-01-30T14:01:40.874406283Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.497547327s" Jan 30 14:01:40.874619 containerd[2008]: time="2025-01-30T14:01:40.874462855Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 30 14:01:40.878791 containerd[2008]: time="2025-01-30T14:01:40.878606126Z" level=info msg="CreateContainer within sandbox \"da750b69db982ee71071ed5d2bb7852e5316e40f586c3d4f6472cdddc1ef8019\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 14:01:40.912086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1216524572.mount: Deactivated successfully. Jan 30 14:01:40.912969 containerd[2008]: time="2025-01-30T14:01:40.912895178Z" level=info msg="CreateContainer within sandbox \"da750b69db982ee71071ed5d2bb7852e5316e40f586c3d4f6472cdddc1ef8019\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5d6bb0131e9674a7912724431b2f19b9617662eb40909c9c6edfaaf4e596b490\"" Jan 30 14:01:40.915001 containerd[2008]: time="2025-01-30T14:01:40.914928166Z" level=info msg="StartContainer for \"5d6bb0131e9674a7912724431b2f19b9617662eb40909c9c6edfaaf4e596b490\"" Jan 30 14:01:40.967619 systemd[1]: Started cri-containerd-5d6bb0131e9674a7912724431b2f19b9617662eb40909c9c6edfaaf4e596b490.scope - libcontainer container 5d6bb0131e9674a7912724431b2f19b9617662eb40909c9c6edfaaf4e596b490. 
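For scale: the operator image that just landed is about 18 MiB, pulled in roughly two and a half seconds, so on the order of 7 MiB/s from quay.io. A quick check from the numbers logged above:

    package main

    import "fmt"

    func main() {
        const bytesRead = 19124160.0 // "bytes read" from the stop-pulling entry
        const pullSecs = 2.497547327 // duration from the "Pulled image" entry
        const mib = bytesRead / (1 << 20)
        fmt.Printf("%.1f MiB at %.1f MiB/s\n", mib, mib/pullSecs) // 18.2 MiB at 7.3 MiB/s
    }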
Jan 30 14:01:41.013940 containerd[2008]: time="2025-01-30T14:01:41.013732756Z" level=info msg="StartContainer for \"5d6bb0131e9674a7912724431b2f19b9617662eb40909c9c6edfaaf4e596b490\" returns successfully" Jan 30 14:01:47.414357 kubelet[3502]: I0130 14:01:47.410276 3502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-dwc47" podStartSLOduration=7.909731961 podStartE2EDuration="10.410253139s" podCreationTimestamp="2025-01-30 14:01:37 +0000 UTC" firstStartedPulling="2025-01-30 14:01:38.375373249 +0000 UTC m=+6.735602806" lastFinishedPulling="2025-01-30 14:01:40.875894426 +0000 UTC m=+9.236123984" observedRunningTime="2025-01-30 14:01:41.071054347 +0000 UTC m=+9.431284012" watchObservedRunningTime="2025-01-30 14:01:47.410253139 +0000 UTC m=+15.770482732" Jan 30 14:01:47.429728 systemd[1]: Created slice kubepods-besteffort-pod79487506_8878_4a6f_b085_a4652a2ec14c.slice - libcontainer container kubepods-besteffort-pod79487506_8878_4a6f_b085_a4652a2ec14c.slice. Jan 30 14:01:47.466597 kubelet[3502]: I0130 14:01:47.466531 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8q2k\" (UniqueName: \"kubernetes.io/projected/79487506-8878-4a6f-b085-a4652a2ec14c-kube-api-access-q8q2k\") pod \"calico-typha-56b88b7f-pvjbt\" (UID: \"79487506-8878-4a6f-b085-a4652a2ec14c\") " pod="calico-system/calico-typha-56b88b7f-pvjbt" Jan 30 14:01:47.466778 kubelet[3502]: I0130 14:01:47.466613 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79487506-8878-4a6f-b085-a4652a2ec14c-tigera-ca-bundle\") pod \"calico-typha-56b88b7f-pvjbt\" (UID: \"79487506-8878-4a6f-b085-a4652a2ec14c\") " pod="calico-system/calico-typha-56b88b7f-pvjbt" Jan 30 14:01:47.466778 kubelet[3502]: I0130 14:01:47.466656 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/79487506-8878-4a6f-b085-a4652a2ec14c-typha-certs\") pod \"calico-typha-56b88b7f-pvjbt\" (UID: \"79487506-8878-4a6f-b085-a4652a2ec14c\") " pod="calico-system/calico-typha-56b88b7f-pvjbt" Jan 30 14:01:47.726170 systemd[1]: Created slice kubepods-besteffort-podbb748fea_aa73_4607_acf7_67ed63ac8813.slice - libcontainer container kubepods-besteffort-podbb748fea_aa73_4607_acf7_67ed63ac8813.slice. 
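tigera-operator-7d68577dc5-dwc47 is the first pod in this boot with a real pull window, and its tracker entry above shows how podStartSLOduration is derived: podStartE2EDuration minus the interval between firstStartedPulling and lastFinishedPulling. (For the control-plane pods those two timestamps were the zero time, so SLO and E2E matched; and containerd itself reported this pull at 2.497547327s, the tracker's slightly wider window adding a few milliseconds of kubelet bookkeeping.) Re-deriving the logged 7.909731961s from the timestamps exactly as printed:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamp layout as printed by pod_startup_latency_tracker above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        started, _ := time.Parse(layout, "2025-01-30 14:01:38.375373249 +0000 UTC")
        finished, _ := time.Parse(layout, "2025-01-30 14:01:40.875894426 +0000 UTC")

        e2e := 10410253139 * time.Nanosecond // podStartE2EDuration=10.410253139s
        pull := finished.Sub(started)        // 2.500521177s spent pulling the image
        fmt.Println(e2e - pull)              // 7.909731962s: the logged SLO duration
                                             // to within a nanosecond of rounding
    }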
Jan 30 14:01:47.741873 containerd[2008]: time="2025-01-30T14:01:47.741808808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56b88b7f-pvjbt,Uid:79487506-8878-4a6f-b085-a4652a2ec14c,Namespace:calico-system,Attempt:0,}" Jan 30 14:01:47.771352 kubelet[3502]: I0130 14:01:47.769483 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb748fea-aa73-4607-acf7-67ed63ac8813-lib-modules\") pod \"calico-node-pm8v8\" (UID: \"bb748fea-aa73-4607-acf7-67ed63ac8813\") " pod="calico-system/calico-node-pm8v8" Jan 30 14:01:47.771352 kubelet[3502]: I0130 14:01:47.769557 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bb748fea-aa73-4607-acf7-67ed63ac8813-var-run-calico\") pod \"calico-node-pm8v8\" (UID: \"bb748fea-aa73-4607-acf7-67ed63ac8813\") " pod="calico-system/calico-node-pm8v8" Jan 30 14:01:47.771352 kubelet[3502]: I0130 14:01:47.770392 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bb748fea-aa73-4607-acf7-67ed63ac8813-cni-log-dir\") pod \"calico-node-pm8v8\" (UID: \"bb748fea-aa73-4607-acf7-67ed63ac8813\") " pod="calico-system/calico-node-pm8v8" Jan 30 14:01:47.771352 kubelet[3502]: I0130 14:01:47.770441 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bb748fea-aa73-4607-acf7-67ed63ac8813-policysync\") pod \"calico-node-pm8v8\" (UID: \"bb748fea-aa73-4607-acf7-67ed63ac8813\") " pod="calico-system/calico-node-pm8v8" Jan 30 14:01:47.771352 kubelet[3502]: I0130 14:01:47.770520 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bb748fea-aa73-4607-acf7-67ed63ac8813-var-lib-calico\") pod \"calico-node-pm8v8\" (UID: \"bb748fea-aa73-4607-acf7-67ed63ac8813\") " pod="calico-system/calico-node-pm8v8" Jan 30 14:01:47.771726 kubelet[3502]: I0130 14:01:47.770565 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bb748fea-aa73-4607-acf7-67ed63ac8813-cni-net-dir\") pod \"calico-node-pm8v8\" (UID: \"bb748fea-aa73-4607-acf7-67ed63ac8813\") " pod="calico-system/calico-node-pm8v8" Jan 30 14:01:47.771726 kubelet[3502]: I0130 14:01:47.770607 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb748fea-aa73-4607-acf7-67ed63ac8813-tigera-ca-bundle\") pod \"calico-node-pm8v8\" (UID: \"bb748fea-aa73-4607-acf7-67ed63ac8813\") " pod="calico-system/calico-node-pm8v8" Jan 30 14:01:47.773054 kubelet[3502]: I0130 14:01:47.770650 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bb748fea-aa73-4607-acf7-67ed63ac8813-flexvol-driver-host\") pod \"calico-node-pm8v8\" (UID: \"bb748fea-aa73-4607-acf7-67ed63ac8813\") " pod="calico-system/calico-node-pm8v8" Jan 30 14:01:47.773054 kubelet[3502]: I0130 14:01:47.772435 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmpcd\" (UniqueName: 
\"kubernetes.io/projected/bb748fea-aa73-4607-acf7-67ed63ac8813-kube-api-access-hmpcd\") pod \"calico-node-pm8v8\" (UID: \"bb748fea-aa73-4607-acf7-67ed63ac8813\") " pod="calico-system/calico-node-pm8v8" Jan 30 14:01:47.773054 kubelet[3502]: I0130 14:01:47.772508 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bb748fea-aa73-4607-acf7-67ed63ac8813-node-certs\") pod \"calico-node-pm8v8\" (UID: \"bb748fea-aa73-4607-acf7-67ed63ac8813\") " pod="calico-system/calico-node-pm8v8" Jan 30 14:01:47.773054 kubelet[3502]: I0130 14:01:47.772550 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bb748fea-aa73-4607-acf7-67ed63ac8813-cni-bin-dir\") pod \"calico-node-pm8v8\" (UID: \"bb748fea-aa73-4607-acf7-67ed63ac8813\") " pod="calico-system/calico-node-pm8v8" Jan 30 14:01:47.773054 kubelet[3502]: I0130 14:01:47.772631 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb748fea-aa73-4607-acf7-67ed63ac8813-xtables-lock\") pod \"calico-node-pm8v8\" (UID: \"bb748fea-aa73-4607-acf7-67ed63ac8813\") " pod="calico-system/calico-node-pm8v8" Jan 30 14:01:47.823642 containerd[2008]: time="2025-01-30T14:01:47.822486667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:01:47.823642 containerd[2008]: time="2025-01-30T14:01:47.822605347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:01:47.823642 containerd[2008]: time="2025-01-30T14:01:47.822646971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:47.824454 containerd[2008]: time="2025-01-30T14:01:47.824214643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:47.883687 kubelet[3502]: E0130 14:01:47.883641 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:47.884017 kubelet[3502]: W0130 14:01:47.883893 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:47.884017 kubelet[3502]: E0130 14:01:47.883969 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:47.889874 kubelet[3502]: E0130 14:01:47.889809 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:47.889874 kubelet[3502]: W0130 14:01:47.889855 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:47.890535 kubelet[3502]: E0130 14:01:47.889896 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:01:47.891018 kubelet[3502]: E0130 14:01:47.890814 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:47.891018 kubelet[3502]: W0130 14:01:47.890847 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:47.891018 kubelet[3502]: E0130 14:01:47.890879 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:47.892486 kubelet[3502]: E0130 14:01:47.891448 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:47.892486 kubelet[3502]: W0130 14:01:47.891481 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:47.892486 kubelet[3502]: E0130 14:01:47.891511 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:47.893202 kubelet[3502]: E0130 14:01:47.892839 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:47.893202 kubelet[3502]: W0130 14:01:47.892867 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:47.893202 kubelet[3502]: E0130 14:01:47.892902 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:47.895073 systemd[1]: Started cri-containerd-a0dc7d5ce628716e0503f41bb67eb27eb8a49c1a71c6c408fc5ef0a231c2e5c8.scope - libcontainer container a0dc7d5ce628716e0503f41bb67eb27eb8a49c1a71c6c408fc5ef0a231c2e5c8. Jan 30 14:01:47.911746 kubelet[3502]: E0130 14:01:47.911692 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:47.911746 kubelet[3502]: W0130 14:01:47.911738 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:47.911967 kubelet[3502]: E0130 14:01:47.911776 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:01:47.934605 kubelet[3502]: E0130 14:01:47.934552 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:47.934605 kubelet[3502]: W0130 14:01:47.934588 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:47.934833 kubelet[3502]: E0130 14:01:47.934622 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:47.985790 kubelet[3502]: E0130 14:01:47.983854 3502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qdvb4" podUID="b56eeb33-9e79-4757-a325-1b7299a49fcc" Jan 30 14:01:48.036864 containerd[2008]: time="2025-01-30T14:01:48.036781907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pm8v8,Uid:bb748fea-aa73-4607-acf7-67ed63ac8813,Namespace:calico-system,Attempt:0,}" Jan 30 14:01:48.069277 kubelet[3502]: E0130 14:01:48.069200 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.069277 kubelet[3502]: W0130 14:01:48.069240 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.069277 kubelet[3502]: E0130 14:01:48.069276 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.070366 kubelet[3502]: E0130 14:01:48.069665 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.070366 kubelet[3502]: W0130 14:01:48.069685 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.070366 kubelet[3502]: E0130 14:01:48.069751 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.070366 kubelet[3502]: E0130 14:01:48.070103 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.070366 kubelet[3502]: W0130 14:01:48.070121 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.070366 kubelet[3502]: E0130 14:01:48.070143 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:01:48.071928 kubelet[3502]: E0130 14:01:48.070515 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.071928 kubelet[3502]: W0130 14:01:48.070533 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.071928 kubelet[3502]: E0130 14:01:48.070556 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.073582 kubelet[3502]: E0130 14:01:48.073414 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.073582 kubelet[3502]: W0130 14:01:48.073563 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.073803 kubelet[3502]: E0130 14:01:48.073598 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.074794 kubelet[3502]: E0130 14:01:48.074721 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.074794 kubelet[3502]: W0130 14:01:48.074762 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.074794 kubelet[3502]: E0130 14:01:48.074796 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.076103 kubelet[3502]: E0130 14:01:48.076049 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.076103 kubelet[3502]: W0130 14:01:48.076088 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.076269 kubelet[3502]: E0130 14:01:48.076122 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.077284 kubelet[3502]: E0130 14:01:48.077106 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.077284 kubelet[3502]: W0130 14:01:48.077136 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.077284 kubelet[3502]: E0130 14:01:48.077167 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:01:48.078325 kubelet[3502]: E0130 14:01:48.078216 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.078540 kubelet[3502]: W0130 14:01:48.078426 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.078540 kubelet[3502]: E0130 14:01:48.078471 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.079907 kubelet[3502]: E0130 14:01:48.079771 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.079907 kubelet[3502]: W0130 14:01:48.079808 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.079907 kubelet[3502]: E0130 14:01:48.079840 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.081540 kubelet[3502]: E0130 14:01:48.081478 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.081540 kubelet[3502]: W0130 14:01:48.081517 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.081745 kubelet[3502]: E0130 14:01:48.081551 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.081997 kubelet[3502]: E0130 14:01:48.081896 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.081997 kubelet[3502]: W0130 14:01:48.081922 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.081997 kubelet[3502]: E0130 14:01:48.081946 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.083430 kubelet[3502]: E0130 14:01:48.083221 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.083430 kubelet[3502]: W0130 14:01:48.083316 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.083430 kubelet[3502]: E0130 14:01:48.083350 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:01:48.085900 kubelet[3502]: E0130 14:01:48.084286 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.085900 kubelet[3502]: W0130 14:01:48.084630 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.085900 kubelet[3502]: E0130 14:01:48.084672 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.086680 kubelet[3502]: E0130 14:01:48.086631 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.086680 kubelet[3502]: W0130 14:01:48.086668 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.086831 kubelet[3502]: E0130 14:01:48.086702 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.087840 kubelet[3502]: E0130 14:01:48.087791 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.087840 kubelet[3502]: W0130 14:01:48.087828 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.088125 kubelet[3502]: E0130 14:01:48.087861 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.088682 kubelet[3502]: E0130 14:01:48.088635 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.088682 kubelet[3502]: W0130 14:01:48.088668 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.088859 kubelet[3502]: E0130 14:01:48.088699 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.090391 kubelet[3502]: E0130 14:01:48.089705 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.090391 kubelet[3502]: W0130 14:01:48.089744 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.090391 kubelet[3502]: E0130 14:01:48.089776 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:01:48.092294 kubelet[3502]: E0130 14:01:48.091911 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.092294 kubelet[3502]: W0130 14:01:48.091949 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.092294 kubelet[3502]: E0130 14:01:48.092005 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.092973 kubelet[3502]: E0130 14:01:48.092520 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.092973 kubelet[3502]: W0130 14:01:48.092544 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.092973 kubelet[3502]: E0130 14:01:48.092571 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.095203 kubelet[3502]: E0130 14:01:48.094779 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.095203 kubelet[3502]: W0130 14:01:48.094819 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.095203 kubelet[3502]: E0130 14:01:48.094855 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.095203 kubelet[3502]: I0130 14:01:48.094909 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b56eeb33-9e79-4757-a325-1b7299a49fcc-socket-dir\") pod \"csi-node-driver-qdvb4\" (UID: \"b56eeb33-9e79-4757-a325-1b7299a49fcc\") " pod="calico-system/csi-node-driver-qdvb4" Jan 30 14:01:48.096622 kubelet[3502]: E0130 14:01:48.096462 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.096622 kubelet[3502]: W0130 14:01:48.096519 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.096622 kubelet[3502]: E0130 14:01:48.096574 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:01:48.096895 kubelet[3502]: I0130 14:01:48.096813 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b56eeb33-9e79-4757-a325-1b7299a49fcc-registration-dir\") pod \"csi-node-driver-qdvb4\" (UID: \"b56eeb33-9e79-4757-a325-1b7299a49fcc\") " pod="calico-system/csi-node-driver-qdvb4" Jan 30 14:01:48.098190 kubelet[3502]: E0130 14:01:48.098129 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.098190 kubelet[3502]: W0130 14:01:48.098176 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.098406 kubelet[3502]: E0130 14:01:48.098221 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.100334 kubelet[3502]: E0130 14:01:48.099773 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.100334 kubelet[3502]: W0130 14:01:48.099810 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.100334 kubelet[3502]: E0130 14:01:48.099936 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.101471 kubelet[3502]: E0130 14:01:48.101380 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.101471 kubelet[3502]: W0130 14:01:48.101457 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.102036 kubelet[3502]: E0130 14:01:48.101801 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.102036 kubelet[3502]: I0130 14:01:48.101868 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b56eeb33-9e79-4757-a325-1b7299a49fcc-varrun\") pod \"csi-node-driver-qdvb4\" (UID: \"b56eeb33-9e79-4757-a325-1b7299a49fcc\") " pod="calico-system/csi-node-driver-qdvb4" Jan 30 14:01:48.103205 kubelet[3502]: E0130 14:01:48.102911 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.103205 kubelet[3502]: W0130 14:01:48.102948 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.103205 kubelet[3502]: E0130 14:01:48.103199 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:01:48.104404 kubelet[3502]: E0130 14:01:48.104350 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.104404 kubelet[3502]: W0130 14:01:48.104390 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.105183 kubelet[3502]: E0130 14:01:48.104805 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.105797 kubelet[3502]: E0130 14:01:48.105755 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.105797 kubelet[3502]: W0130 14:01:48.105789 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.106601 kubelet[3502]: E0130 14:01:48.106527 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.106731 kubelet[3502]: I0130 14:01:48.106601 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b56eeb33-9e79-4757-a325-1b7299a49fcc-kubelet-dir\") pod \"csi-node-driver-qdvb4\" (UID: \"b56eeb33-9e79-4757-a325-1b7299a49fcc\") " pod="calico-system/csi-node-driver-qdvb4" Jan 30 14:01:48.107598 kubelet[3502]: E0130 14:01:48.107545 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.107598 kubelet[3502]: W0130 14:01:48.107583 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.107598 kubelet[3502]: E0130 14:01:48.107629 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.109278 kubelet[3502]: E0130 14:01:48.109183 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.109278 kubelet[3502]: W0130 14:01:48.109211 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.109278 kubelet[3502]: E0130 14:01:48.109244 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:01:48.110436 kubelet[3502]: E0130 14:01:48.110386 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.110436 kubelet[3502]: W0130 14:01:48.110424 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.110704 kubelet[3502]: E0130 14:01:48.110665 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.111136 kubelet[3502]: I0130 14:01:48.110981 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvc6s\" (UniqueName: \"kubernetes.io/projected/b56eeb33-9e79-4757-a325-1b7299a49fcc-kube-api-access-jvc6s\") pod \"csi-node-driver-qdvb4\" (UID: \"b56eeb33-9e79-4757-a325-1b7299a49fcc\") " pod="calico-system/csi-node-driver-qdvb4" Jan 30 14:01:48.112617 kubelet[3502]: E0130 14:01:48.112531 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.112617 kubelet[3502]: W0130 14:01:48.112571 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.112828 kubelet[3502]: E0130 14:01:48.112627 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.114722 kubelet[3502]: E0130 14:01:48.114667 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.114722 kubelet[3502]: W0130 14:01:48.114708 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.115555 kubelet[3502]: E0130 14:01:48.115111 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.117419 kubelet[3502]: E0130 14:01:48.117366 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.117419 kubelet[3502]: W0130 14:01:48.117411 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.118059 kubelet[3502]: E0130 14:01:48.117447 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:01:48.118059 kubelet[3502]: E0130 14:01:48.117812 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.118059 kubelet[3502]: W0130 14:01:48.117832 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.118059 kubelet[3502]: E0130 14:01:48.117855 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.122681 containerd[2008]: time="2025-01-30T14:01:48.122409624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:01:48.123423 containerd[2008]: time="2025-01-30T14:01:48.122761784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:01:48.123423 containerd[2008]: time="2025-01-30T14:01:48.122887403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:48.123726 containerd[2008]: time="2025-01-30T14:01:48.123448875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:48.174277 systemd[1]: Started cri-containerd-f44cc2341dd22ffafabe9076273915aacddd1bb5277d2bd28b30ce70df05bb84.scope - libcontainer container f44cc2341dd22ffafabe9076273915aacddd1bb5277d2bd28b30ce70df05bb84. Jan 30 14:01:48.212623 kubelet[3502]: E0130 14:01:48.212557 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.212623 kubelet[3502]: W0130 14:01:48.212597 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.212623 kubelet[3502]: E0130 14:01:48.212631 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.213263 kubelet[3502]: E0130 14:01:48.213224 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.213263 kubelet[3502]: W0130 14:01:48.213256 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.214087 kubelet[3502]: E0130 14:01:48.213321 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:01:48.214087 kubelet[3502]: E0130 14:01:48.213928 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.214087 kubelet[3502]: W0130 14:01:48.213951 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.214087 kubelet[3502]: E0130 14:01:48.214063 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.215426 kubelet[3502]: E0130 14:01:48.214570 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.215426 kubelet[3502]: W0130 14:01:48.214591 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.215426 kubelet[3502]: E0130 14:01:48.214821 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.215426 kubelet[3502]: E0130 14:01:48.215206 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.215426 kubelet[3502]: W0130 14:01:48.215231 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.215426 kubelet[3502]: E0130 14:01:48.215269 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.216493 kubelet[3502]: E0130 14:01:48.215897 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.216493 kubelet[3502]: W0130 14:01:48.215919 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.216493 kubelet[3502]: E0130 14:01:48.215977 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.216493 kubelet[3502]: E0130 14:01:48.216478 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.217986 kubelet[3502]: W0130 14:01:48.216498 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.217986 kubelet[3502]: E0130 14:01:48.216696 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:01:48.217986 kubelet[3502]: E0130 14:01:48.217228 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.217986 kubelet[3502]: W0130 14:01:48.217282 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.217986 kubelet[3502]: E0130 14:01:48.217398 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.217986 kubelet[3502]: E0130 14:01:48.217844 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.217986 kubelet[3502]: W0130 14:01:48.217863 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.217986 kubelet[3502]: E0130 14:01:48.217912 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.219946 kubelet[3502]: E0130 14:01:48.218444 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.219946 kubelet[3502]: W0130 14:01:48.218467 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.219946 kubelet[3502]: E0130 14:01:48.218619 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.219946 kubelet[3502]: E0130 14:01:48.218973 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.219946 kubelet[3502]: W0130 14:01:48.218993 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.219946 kubelet[3502]: E0130 14:01:48.219393 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:01:48.219946 kubelet[3502]: E0130 14:01:48.219847 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.219946 kubelet[3502]: W0130 14:01:48.219871 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.221443 kubelet[3502]: E0130 14:01:48.220439 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.221443 kubelet[3502]: W0130 14:01:48.220460 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.221443 kubelet[3502]: E0130 14:01:48.220746 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.221443 kubelet[3502]: E0130 14:01:48.220767 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.221443 kubelet[3502]: E0130 14:01:48.220755 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.221443 kubelet[3502]: W0130 14:01:48.220799 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.221443 kubelet[3502]: E0130 14:01:48.220855 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.223268 kubelet[3502]: E0130 14:01:48.222565 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.223268 kubelet[3502]: W0130 14:01:48.222601 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.223268 kubelet[3502]: E0130 14:01:48.222705 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.224510 kubelet[3502]: E0130 14:01:48.224071 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.224510 kubelet[3502]: W0130 14:01:48.224100 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.224510 kubelet[3502]: E0130 14:01:48.224176 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:01:48.226351 kubelet[3502]: E0130 14:01:48.225525 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.226351 kubelet[3502]: W0130 14:01:48.225559 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.226351 kubelet[3502]: E0130 14:01:48.225624 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.228076 kubelet[3502]: E0130 14:01:48.227611 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.228076 kubelet[3502]: W0130 14:01:48.227648 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.228076 kubelet[3502]: E0130 14:01:48.227716 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.230172 kubelet[3502]: E0130 14:01:48.229833 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.230172 kubelet[3502]: W0130 14:01:48.229869 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.230172 kubelet[3502]: E0130 14:01:48.229941 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.232241 kubelet[3502]: E0130 14:01:48.231541 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.232241 kubelet[3502]: W0130 14:01:48.231578 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.232241 kubelet[3502]: E0130 14:01:48.231657 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.235403 kubelet[3502]: E0130 14:01:48.234963 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.235403 kubelet[3502]: W0130 14:01:48.234998 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.235403 kubelet[3502]: E0130 14:01:48.235064 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:01:48.238974 kubelet[3502]: E0130 14:01:48.236417 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.239423 kubelet[3502]: W0130 14:01:48.239370 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.241502 kubelet[3502]: E0130 14:01:48.241443 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.243898 kubelet[3502]: E0130 14:01:48.243846 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.244700 kubelet[3502]: W0130 14:01:48.244349 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.245851 kubelet[3502]: E0130 14:01:48.244989 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.248352 kubelet[3502]: E0130 14:01:48.247597 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.250523 kubelet[3502]: W0130 14:01:48.248828 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.250523 kubelet[3502]: E0130 14:01:48.250433 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.251390 kubelet[3502]: E0130 14:01:48.251084 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.251390 kubelet[3502]: W0130 14:01:48.251121 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.251390 kubelet[3502]: E0130 14:01:48.251152 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:01:48.294293 kubelet[3502]: E0130 14:01:48.294082 3502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:01:48.294293 kubelet[3502]: W0130 14:01:48.294117 3502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:01:48.294293 kubelet[3502]: E0130 14:01:48.294148 3502 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:01:48.310714 containerd[2008]: time="2025-01-30T14:01:48.310112053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pm8v8,Uid:bb748fea-aa73-4607-acf7-67ed63ac8813,Namespace:calico-system,Attempt:0,} returns sandbox id \"f44cc2341dd22ffafabe9076273915aacddd1bb5277d2bd28b30ce70df05bb84\"" Jan 30 14:01:48.316830 containerd[2008]: time="2025-01-30T14:01:48.316535059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 14:01:48.367785 containerd[2008]: time="2025-01-30T14:01:48.367721284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56b88b7f-pvjbt,Uid:79487506-8878-4a6f-b085-a4652a2ec14c,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0dc7d5ce628716e0503f41bb67eb27eb8a49c1a71c6c408fc5ef0a231c2e5c8\"" Jan 30 14:01:49.848567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3262399668.mount: Deactivated successfully. Jan 30 14:01:49.952813 kubelet[3502]: E0130 14:01:49.951918 3502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qdvb4" podUID="b56eeb33-9e79-4757-a325-1b7299a49fcc" Jan 30 14:01:50.080810 containerd[2008]: time="2025-01-30T14:01:50.080754773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:50.083427 containerd[2008]: time="2025-01-30T14:01:50.083365862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Jan 30 14:01:50.086407 containerd[2008]: time="2025-01-30T14:01:50.086086205Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:50.090422 containerd[2008]: time="2025-01-30T14:01:50.090293768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:50.091962 containerd[2008]: time="2025-01-30T14:01:50.091747118Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.775145401s" Jan 30 14:01:50.091962 containerd[2008]: time="2025-01-30T14:01:50.091805924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 30 14:01:50.095540 containerd[2008]: time="2025-01-30T14:01:50.095360192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 14:01:50.097997 containerd[2008]: time="2025-01-30T14:01:50.097687111Z" level=info msg="CreateContainer within sandbox \"f44cc2341dd22ffafabe9076273915aacddd1bb5277d2bd28b30ce70df05bb84\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 14:01:50.136814 containerd[2008]: 
time="2025-01-30T14:01:50.136549320Z" level=info msg="CreateContainer within sandbox \"f44cc2341dd22ffafabe9076273915aacddd1bb5277d2bd28b30ce70df05bb84\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d51912901107eb3a595096721c750d801446d792abc772ca24c59cd30b5621bc\"" Jan 30 14:01:50.138346 containerd[2008]: time="2025-01-30T14:01:50.138174933Z" level=info msg="StartContainer for \"d51912901107eb3a595096721c750d801446d792abc772ca24c59cd30b5621bc\"" Jan 30 14:01:50.198674 systemd[1]: Started cri-containerd-d51912901107eb3a595096721c750d801446d792abc772ca24c59cd30b5621bc.scope - libcontainer container d51912901107eb3a595096721c750d801446d792abc772ca24c59cd30b5621bc. Jan 30 14:01:50.255564 containerd[2008]: time="2025-01-30T14:01:50.255476603Z" level=info msg="StartContainer for \"d51912901107eb3a595096721c750d801446d792abc772ca24c59cd30b5621bc\" returns successfully" Jan 30 14:01:50.284500 systemd[1]: cri-containerd-d51912901107eb3a595096721c750d801446d792abc772ca24c59cd30b5621bc.scope: Deactivated successfully. Jan 30 14:01:50.467945 containerd[2008]: time="2025-01-30T14:01:50.467796075Z" level=info msg="shim disconnected" id=d51912901107eb3a595096721c750d801446d792abc772ca24c59cd30b5621bc namespace=k8s.io Jan 30 14:01:50.467945 containerd[2008]: time="2025-01-30T14:01:50.467899795Z" level=warning msg="cleaning up after shim disconnected" id=d51912901107eb3a595096721c750d801446d792abc772ca24c59cd30b5621bc namespace=k8s.io Jan 30 14:01:50.468892 containerd[2008]: time="2025-01-30T14:01:50.468522390Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:50.796037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d51912901107eb3a595096721c750d801446d792abc772ca24c59cd30b5621bc-rootfs.mount: Deactivated successfully. 
Jan 30 14:01:51.970611 kubelet[3502]: E0130 14:01:51.970541 3502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qdvb4" podUID="b56eeb33-9e79-4757-a325-1b7299a49fcc" Jan 30 14:01:52.678168 containerd[2008]: time="2025-01-30T14:01:52.678088769Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:52.679478 containerd[2008]: time="2025-01-30T14:01:52.679420919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=27861516" Jan 30 14:01:52.681789 containerd[2008]: time="2025-01-30T14:01:52.681596466Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:52.690628 containerd[2008]: time="2025-01-30T14:01:52.690454265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:52.693043 containerd[2008]: time="2025-01-30T14:01:52.692838728Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.597411063s" Jan 30 14:01:52.693422 containerd[2008]: time="2025-01-30T14:01:52.693003438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 30 14:01:52.696626 containerd[2008]: time="2025-01-30T14:01:52.696499958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 14:01:52.720939 containerd[2008]: time="2025-01-30T14:01:52.720880638Z" level=info msg="CreateContainer within sandbox \"a0dc7d5ce628716e0503f41bb67eb27eb8a49c1a71c6c408fc5ef0a231c2e5c8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 14:01:52.741383 containerd[2008]: time="2025-01-30T14:01:52.740642250Z" level=info msg="CreateContainer within sandbox \"a0dc7d5ce628716e0503f41bb67eb27eb8a49c1a71c6c408fc5ef0a231c2e5c8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5a2522d9a0a9c92a6c2b4eadf4a588690aadb76baa193516814bba58e3bbc254\"" Jan 30 14:01:52.746532 containerd[2008]: time="2025-01-30T14:01:52.744223856Z" level=info msg="StartContainer for \"5a2522d9a0a9c92a6c2b4eadf4a588690aadb76baa193516814bba58e3bbc254\"" Jan 30 14:01:52.801600 systemd[1]: Started cri-containerd-5a2522d9a0a9c92a6c2b4eadf4a588690aadb76baa193516814bba58e3bbc254.scope - libcontainer container 5a2522d9a0a9c92a6c2b4eadf4a588690aadb76baa193516814bba58e3bbc254. 
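
[editor's note: the "in 2.597411063s" figure in the typha pull record is simply the wall-clock delta between the start of the pull and the final ImageCreate event, and it can be approximately reproduced from the journal's own timestamps. A stdlib-only sketch; the two RFC3339 values are copied from the PullImage and Pulled records above, and the start is approximate because containerd's internal pull clock starts slightly after the request is logged:]

```go
// pulltime.go: recompute an image-pull duration from two journal timestamps.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Logged when containerd accepted PullImage for calico/typha:v3.29.1.
	start, _ := time.Parse(time.RFC3339Nano, "2025-01-30T14:01:50.095360192Z")
	// Logged when the pulled image was registered ("Pulled image ... in 2.597411063s").
	done, _ := time.Parse(time.RFC3339Nano, "2025-01-30T14:01:52.692838728Z")

	// Prints ~2.597478536s, matching the logged duration to within logging skew.
	fmt.Println(done.Sub(start))
}
```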
Jan 30 14:01:52.868459 containerd[2008]: time="2025-01-30T14:01:52.868397137Z" level=info msg="StartContainer for \"5a2522d9a0a9c92a6c2b4eadf4a588690aadb76baa193516814bba58e3bbc254\" returns successfully" Jan 30 14:01:53.957153 kubelet[3502]: E0130 14:01:53.956731 3502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qdvb4" podUID="b56eeb33-9e79-4757-a325-1b7299a49fcc" Jan 30 14:01:54.083414 kubelet[3502]: I0130 14:01:54.082795 3502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:01:55.952689 kubelet[3502]: E0130 14:01:55.951904 3502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qdvb4" podUID="b56eeb33-9e79-4757-a325-1b7299a49fcc" Jan 30 14:01:57.258839 containerd[2008]: time="2025-01-30T14:01:57.257421211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:57.260559 containerd[2008]: time="2025-01-30T14:01:57.260500473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 30 14:01:57.262101 containerd[2008]: time="2025-01-30T14:01:57.262008751Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:57.265986 containerd[2008]: time="2025-01-30T14:01:57.265886269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:57.267834 containerd[2008]: time="2025-01-30T14:01:57.267515999Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.570934653s" Jan 30 14:01:57.267834 containerd[2008]: time="2025-01-30T14:01:57.267573712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 30 14:01:57.273008 containerd[2008]: time="2025-01-30T14:01:57.272882645Z" level=info msg="CreateContainer within sandbox \"f44cc2341dd22ffafabe9076273915aacddd1bb5277d2bd28b30ce70df05bb84\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 14:01:57.298634 containerd[2008]: time="2025-01-30T14:01:57.298474695Z" level=info msg="CreateContainer within sandbox \"f44cc2341dd22ffafabe9076273915aacddd1bb5277d2bd28b30ce70df05bb84\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9c8bc12bb4edd254c244b18871e05a53adeef81b323a8c5ebccdde36c491f4ac\"" Jan 30 14:01:57.301497 containerd[2008]: time="2025-01-30T14:01:57.301435411Z" level=info msg="StartContainer for \"9c8bc12bb4edd254c244b18871e05a53adeef81b323a8c5ebccdde36c491f4ac\"" Jan 30 14:01:57.364619 systemd[1]: 
Started cri-containerd-9c8bc12bb4edd254c244b18871e05a53adeef81b323a8c5ebccdde36c491f4ac.scope - libcontainer container 9c8bc12bb4edd254c244b18871e05a53adeef81b323a8c5ebccdde36c491f4ac. Jan 30 14:01:57.416542 containerd[2008]: time="2025-01-30T14:01:57.416446348Z" level=info msg="StartContainer for \"9c8bc12bb4edd254c244b18871e05a53adeef81b323a8c5ebccdde36c491f4ac\" returns successfully" Jan 30 14:01:57.960609 kubelet[3502]: E0130 14:01:57.959820 3502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qdvb4" podUID="b56eeb33-9e79-4757-a325-1b7299a49fcc" Jan 30 14:01:58.127237 kubelet[3502]: I0130 14:01:58.127130 3502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-56b88b7f-pvjbt" podStartSLOduration=6.802284045 podStartE2EDuration="11.127107162s" podCreationTimestamp="2025-01-30 14:01:47 +0000 UTC" firstStartedPulling="2025-01-30 14:01:48.370357574 +0000 UTC m=+16.730587131" lastFinishedPulling="2025-01-30 14:01:52.695180691 +0000 UTC m=+21.055410248" observedRunningTime="2025-01-30 14:01:53.100149725 +0000 UTC m=+21.460379318" watchObservedRunningTime="2025-01-30 14:01:58.127107162 +0000 UTC m=+26.487336719" Jan 30 14:01:58.255595 containerd[2008]: time="2025-01-30T14:01:58.255401322Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:01:58.260021 systemd[1]: cri-containerd-9c8bc12bb4edd254c244b18871e05a53adeef81b323a8c5ebccdde36c491f4ac.scope: Deactivated successfully. Jan 30 14:01:58.303973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c8bc12bb4edd254c244b18871e05a53adeef81b323a8c5ebccdde36c491f4ac-rootfs.mount: Deactivated successfully. Jan 30 14:01:58.356359 kubelet[3502]: I0130 14:01:58.356241 3502 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 14:01:58.435017 systemd[1]: Created slice kubepods-burstable-podd3be0557_0e75_487a_81b6_3ddc28a8a3e9.slice - libcontainer container kubepods-burstable-podd3be0557_0e75_487a_81b6_3ddc28a8a3e9.slice. 
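
[editor's note: the reload error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") fires because the install-cni container's write of calico-kubeconfig triggers a fs-change event before a complete network config exists in /etc/cni/net.d. For orientation, a CNI conflist has roughly this shape; this is a generic illustration of the format with example values, not the file Calico actually generates on this node:]

```json
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "datastore_type": "kubernetes",
      "kubernetes": { "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" },
      "ipam": { "type": "calico-ipam" }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```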
Jan 30 14:01:58.441959 kubelet[3502]: I0130 14:01:58.441582 3502 status_manager.go:890] "Failed to get status for pod" podUID="d3be0557-0e75-487a-81b6-3ddc28a8a3e9" pod="kube-system/coredns-668d6bf9bc-xs7nv" err="pods \"coredns-668d6bf9bc-xs7nv\" is forbidden: User \"system:node:ip-172-31-25-132\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-25-132' and this object" Jan 30 14:01:58.441959 kubelet[3502]: W0130 14:01:58.441868 3502 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-25-132" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-132' and this object Jan 30 14:01:58.441959 kubelet[3502]: E0130 14:01:58.441913 3502 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-25-132\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-25-132' and this object" logger="UnhandledError" Jan 30 14:01:58.468605 kubelet[3502]: W0130 14:01:58.467278 3502 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-25-132" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-25-132' and this object Jan 30 14:01:58.468605 kubelet[3502]: E0130 14:01:58.467368 3502 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-25-132\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ip-172-31-25-132' and this object" logger="UnhandledError" Jan 30 14:01:58.468210 systemd[1]: Created slice kubepods-burstable-pod96ccd58a_bb33_448c_b933_754229aff909.slice - libcontainer container kubepods-burstable-pod96ccd58a_bb33_448c_b933_754229aff909.slice. Jan 30 14:01:58.472354 kubelet[3502]: W0130 14:01:58.470161 3502 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-25-132" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-25-132' and this object Jan 30 14:01:58.472354 kubelet[3502]: E0130 14:01:58.470219 3502 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ip-172-31-25-132\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ip-172-31-25-132' and this object" logger="UnhandledError" Jan 30 14:01:58.493113 systemd[1]: Created slice kubepods-besteffort-pod172e4ddf_6abc_4051_bf69_e492ba18c815.slice - libcontainer container kubepods-besteffort-pod172e4ddf_6abc_4051_bf69_e492ba18c815.slice. 
Jan 30 14:01:58.497279 kubelet[3502]: I0130 14:01:58.497198 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8-calico-apiserver-certs\") pod \"calico-apiserver-7c887dfbf4-2c2xq\" (UID: \"0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8\") " pod="calico-apiserver/calico-apiserver-7c887dfbf4-2c2xq" Jan 30 14:01:58.497279 kubelet[3502]: I0130 14:01:58.497280 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw7dv\" (UniqueName: \"kubernetes.io/projected/d3be0557-0e75-487a-81b6-3ddc28a8a3e9-kube-api-access-gw7dv\") pod \"coredns-668d6bf9bc-xs7nv\" (UID: \"d3be0557-0e75-487a-81b6-3ddc28a8a3e9\") " pod="kube-system/coredns-668d6bf9bc-xs7nv" Jan 30 14:01:58.499979 kubelet[3502]: I0130 14:01:58.499512 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3be0557-0e75-487a-81b6-3ddc28a8a3e9-config-volume\") pod \"coredns-668d6bf9bc-xs7nv\" (UID: \"d3be0557-0e75-487a-81b6-3ddc28a8a3e9\") " pod="kube-system/coredns-668d6bf9bc-xs7nv" Jan 30 14:01:58.499979 kubelet[3502]: I0130 14:01:58.499641 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npzc5\" (UniqueName: \"kubernetes.io/projected/96ccd58a-bb33-448c-b933-754229aff909-kube-api-access-npzc5\") pod \"coredns-668d6bf9bc-whttc\" (UID: \"96ccd58a-bb33-448c-b933-754229aff909\") " pod="kube-system/coredns-668d6bf9bc-whttc" Jan 30 14:01:58.499979 kubelet[3502]: I0130 14:01:58.499698 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/172e4ddf-6abc-4051-bf69-e492ba18c815-tigera-ca-bundle\") pod \"calico-kube-controllers-5cf55bf796-kxvtl\" (UID: \"172e4ddf-6abc-4051-bf69-e492ba18c815\") " pod="calico-system/calico-kube-controllers-5cf55bf796-kxvtl" Jan 30 14:01:58.499979 kubelet[3502]: I0130 14:01:58.499772 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq78l\" (UniqueName: \"kubernetes.io/projected/0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8-kube-api-access-dq78l\") pod \"calico-apiserver-7c887dfbf4-2c2xq\" (UID: \"0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8\") " pod="calico-apiserver/calico-apiserver-7c887dfbf4-2c2xq" Jan 30 14:01:58.499979 kubelet[3502]: I0130 14:01:58.499956 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrmkx\" (UniqueName: \"kubernetes.io/projected/172e4ddf-6abc-4051-bf69-e492ba18c815-kube-api-access-jrmkx\") pod \"calico-kube-controllers-5cf55bf796-kxvtl\" (UID: \"172e4ddf-6abc-4051-bf69-e492ba18c815\") " pod="calico-system/calico-kube-controllers-5cf55bf796-kxvtl" Jan 30 14:01:58.500404 kubelet[3502]: I0130 14:01:58.500231 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96ccd58a-bb33-448c-b933-754229aff909-config-volume\") pod \"coredns-668d6bf9bc-whttc\" (UID: \"96ccd58a-bb33-448c-b933-754229aff909\") " pod="kube-system/coredns-668d6bf9bc-whttc" Jan 30 14:01:58.516037 systemd[1]: Created slice kubepods-besteffort-pod0a1653fd_bac3_4c9d_83aa_3ff2020f5cf8.slice - libcontainer container 
kubepods-besteffort-pod0a1653fd_bac3_4c9d_83aa_3ff2020f5cf8.slice. Jan 30 14:01:58.531254 systemd[1]: Created slice kubepods-besteffort-pod5d7b309e_de44_4e46_a503_7451528eddd3.slice - libcontainer container kubepods-besteffort-pod5d7b309e_de44_4e46_a503_7451528eddd3.slice. Jan 30 14:01:58.603000 kubelet[3502]: I0130 14:01:58.601860 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5d7b309e-de44-4e46-a503-7451528eddd3-calico-apiserver-certs\") pod \"calico-apiserver-7c887dfbf4-j5ncj\" (UID: \"5d7b309e-de44-4e46-a503-7451528eddd3\") " pod="calico-apiserver/calico-apiserver-7c887dfbf4-j5ncj" Jan 30 14:01:58.603000 kubelet[3502]: I0130 14:01:58.601952 3502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlg99\" (UniqueName: \"kubernetes.io/projected/5d7b309e-de44-4e46-a503-7451528eddd3-kube-api-access-tlg99\") pod \"calico-apiserver-7c887dfbf4-j5ncj\" (UID: \"5d7b309e-de44-4e46-a503-7451528eddd3\") " pod="calico-apiserver/calico-apiserver-7c887dfbf4-j5ncj" Jan 30 14:01:58.806344 containerd[2008]: time="2025-01-30T14:01:58.806113496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cf55bf796-kxvtl,Uid:172e4ddf-6abc-4051-bf69-e492ba18c815,Namespace:calico-system,Attempt:0,}" Jan 30 14:01:59.573785 containerd[2008]: time="2025-01-30T14:01:59.573478625Z" level=error msg="Failed to destroy network for sandbox \"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:01:59.577109 containerd[2008]: time="2025-01-30T14:01:59.576941071Z" level=error msg="encountered an error cleaning up failed sandbox \"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:01:59.577618 containerd[2008]: time="2025-01-30T14:01:59.577073797Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cf55bf796-kxvtl,Uid:172e4ddf-6abc-4051-bf69-e492ba18c815,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:01:59.578654 kubelet[3502]: E0130 14:01:59.577981 3502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:01:59.578654 kubelet[3502]: E0130 14:01:59.578089 3502 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cf55bf796-kxvtl" Jan 30 14:01:59.578654 kubelet[3502]: E0130 14:01:59.578124 3502 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cf55bf796-kxvtl" Jan 30 14:01:59.579349 kubelet[3502]: E0130 14:01:59.578188 3502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5cf55bf796-kxvtl_calico-system(172e4ddf-6abc-4051-bf69-e492ba18c815)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5cf55bf796-kxvtl_calico-system(172e4ddf-6abc-4051-bf69-e492ba18c815)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cf55bf796-kxvtl" podUID="172e4ddf-6abc-4051-bf69-e492ba18c815" Jan 30 14:01:59.580379 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d-shm.mount: Deactivated successfully. Jan 30 14:01:59.601856 kubelet[3502]: E0130 14:01:59.601791 3502 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 30 14:01:59.602032 kubelet[3502]: E0130 14:01:59.601928 3502 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3be0557-0e75-487a-81b6-3ddc28a8a3e9-config-volume podName:d3be0557-0e75-487a-81b6-3ddc28a8a3e9 nodeName:}" failed. No retries permitted until 2025-01-30 14:02:00.10189487 +0000 UTC m=+28.462124439 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d3be0557-0e75-487a-81b6-3ddc28a8a3e9-config-volume") pod "coredns-668d6bf9bc-xs7nv" (UID: "d3be0557-0e75-487a-81b6-3ddc28a8a3e9") : failed to sync configmap cache: timed out waiting for the condition Jan 30 14:01:59.603484 kubelet[3502]: E0130 14:01:59.603380 3502 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 30 14:01:59.603484 kubelet[3502]: E0130 14:01:59.603486 3502 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/96ccd58a-bb33-448c-b933-754229aff909-config-volume podName:96ccd58a-bb33-448c-b933-754229aff909 nodeName:}" failed. No retries permitted until 2025-01-30 14:02:00.103461449 +0000 UTC m=+28.463691006 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/96ccd58a-bb33-448c-b933-754229aff909-config-volume") pod "coredns-668d6bf9bc-whttc" (UID: "96ccd58a-bb33-448c-b933-754229aff909") : failed to sync configmap cache: timed out waiting for the condition Jan 30 14:01:59.603764 kubelet[3502]: E0130 14:01:59.603386 3502 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 30 14:01:59.603764 kubelet[3502]: E0130 14:01:59.603564 3502 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8-calico-apiserver-certs podName:0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8 nodeName:}" failed. No retries permitted until 2025-01-30 14:02:00.103549165 +0000 UTC m=+28.463778722 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8-calico-apiserver-certs") pod "calico-apiserver-7c887dfbf4-2c2xq" (UID: "0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8") : failed to sync secret cache: timed out waiting for the condition Jan 30 14:01:59.652197 kubelet[3502]: E0130 14:01:59.652140 3502 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 30 14:01:59.652197 kubelet[3502]: E0130 14:01:59.652195 3502 projected.go:194] Error preparing data for projected volume kube-api-access-dq78l for pod calico-apiserver/calico-apiserver-7c887dfbf4-2c2xq: failed to sync configmap cache: timed out waiting for the condition Jan 30 14:01:59.652453 kubelet[3502]: E0130 14:01:59.652283 3502 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8-kube-api-access-dq78l podName:0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8 nodeName:}" failed. No retries permitted until 2025-01-30 14:02:00.152256691 +0000 UTC m=+28.512486236 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dq78l" (UniqueName: "kubernetes.io/projected/0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8-kube-api-access-dq78l") pod "calico-apiserver-7c887dfbf4-2c2xq" (UID: "0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8") : failed to sync configmap cache: timed out waiting for the condition Jan 30 14:01:59.664545 containerd[2008]: time="2025-01-30T14:01:59.664444017Z" level=info msg="shim disconnected" id=9c8bc12bb4edd254c244b18871e05a53adeef81b323a8c5ebccdde36c491f4ac namespace=k8s.io Jan 30 14:01:59.664545 containerd[2008]: time="2025-01-30T14:01:59.664540557Z" level=warning msg="cleaning up after shim disconnected" id=9c8bc12bb4edd254c244b18871e05a53adeef81b323a8c5ebccdde36c491f4ac namespace=k8s.io Jan 30 14:01:59.664759 containerd[2008]: time="2025-01-30T14:01:59.664563657Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:59.703892 kubelet[3502]: E0130 14:01:59.703828 3502 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 30 14:01:59.704131 kubelet[3502]: E0130 14:01:59.703939 3502 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d7b309e-de44-4e46-a503-7451528eddd3-calico-apiserver-certs podName:5d7b309e-de44-4e46-a503-7451528eddd3 nodeName:}" failed. No retries permitted until 2025-01-30 14:02:00.203911906 +0000 UTC m=+28.564141463 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/5d7b309e-de44-4e46-a503-7451528eddd3-calico-apiserver-certs") pod "calico-apiserver-7c887dfbf4-j5ncj" (UID: "5d7b309e-de44-4e46-a503-7451528eddd3") : failed to sync secret cache: timed out waiting for the condition Jan 30 14:01:59.714360 kubelet[3502]: E0130 14:01:59.713911 3502 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 30 14:01:59.714360 kubelet[3502]: E0130 14:01:59.713962 3502 projected.go:194] Error preparing data for projected volume kube-api-access-tlg99 for pod calico-apiserver/calico-apiserver-7c887dfbf4-j5ncj: failed to sync configmap cache: timed out waiting for the condition Jan 30 14:01:59.714360 kubelet[3502]: E0130 14:01:59.714040 3502 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d7b309e-de44-4e46-a503-7451528eddd3-kube-api-access-tlg99 podName:5d7b309e-de44-4e46-a503-7451528eddd3 nodeName:}" failed. No retries permitted until 2025-01-30 14:02:00.21401457 +0000 UTC m=+28.574244115 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tlg99" (UniqueName: "kubernetes.io/projected/5d7b309e-de44-4e46-a503-7451528eddd3-kube-api-access-tlg99") pod "calico-apiserver-7c887dfbf4-j5ncj" (UID: "5d7b309e-de44-4e46-a503-7451528eddd3") : failed to sync configmap cache: timed out waiting for the condition Jan 30 14:01:59.966481 systemd[1]: Created slice kubepods-besteffort-podb56eeb33_9e79_4757_a325_1b7299a49fcc.slice - libcontainer container kubepods-besteffort-podb56eeb33_9e79_4757_a325_1b7299a49fcc.slice. Jan 30 14:01:59.971032 containerd[2008]: time="2025-01-30T14:01:59.970945205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qdvb4,Uid:b56eeb33-9e79-4757-a325-1b7299a49fcc,Namespace:calico-system,Attempt:0,}" Jan 30 14:02:00.077965 containerd[2008]: time="2025-01-30T14:02:00.077735707Z" level=error msg="Failed to destroy network for sandbox \"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.078866 containerd[2008]: time="2025-01-30T14:02:00.078728459Z" level=error msg="encountered an error cleaning up failed sandbox \"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.079185 containerd[2008]: time="2025-01-30T14:02:00.078842096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qdvb4,Uid:b56eeb33-9e79-4757-a325-1b7299a49fcc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.081768 kubelet[3502]: E0130 14:02:00.079506 3502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.081768 kubelet[3502]: E0130 14:02:00.079586 3502 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qdvb4" Jan 30 14:02:00.081768 kubelet[3502]: E0130 14:02:00.079621 3502 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qdvb4" Jan 30 14:02:00.082108 kubelet[3502]: E0130 14:02:00.079687 3502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qdvb4_calico-system(b56eeb33-9e79-4757-a325-1b7299a49fcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qdvb4_calico-system(b56eeb33-9e79-4757-a325-1b7299a49fcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qdvb4" podUID="b56eeb33-9e79-4757-a325-1b7299a49fcc" Jan 30 14:02:00.110146 containerd[2008]: time="2025-01-30T14:02:00.110015375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 14:02:00.115903 kubelet[3502]: I0130 14:02:00.114252 3502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Jan 30 14:02:00.123611 containerd[2008]: time="2025-01-30T14:02:00.121673717Z" level=info msg="StopPodSandbox for \"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\"" Jan 30 14:02:00.124333 containerd[2008]: time="2025-01-30T14:02:00.124151767Z" level=info msg="Ensure that sandbox 8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a in task-service has been cleanup successfully" Jan 30 14:02:00.131806 kubelet[3502]: I0130 14:02:00.131720 3502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Jan 30 14:02:00.135783 containerd[2008]: time="2025-01-30T14:02:00.133942048Z" level=info msg="StopPodSandbox for \"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\"" Jan 30 14:02:00.139104 containerd[2008]: time="2025-01-30T14:02:00.137030544Z" level=info msg="Ensure that sandbox 555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d in task-service has been cleanup successfully" Jan 30 14:02:00.207989 containerd[2008]: time="2025-01-30T14:02:00.207794689Z" level=error msg="StopPodSandbox for 
\"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\" failed" error="failed to destroy network for sandbox \"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.208965 kubelet[3502]: E0130 14:02:00.208839 3502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Jan 30 14:02:00.209375 containerd[2008]: time="2025-01-30T14:02:00.209115012Z" level=error msg="StopPodSandbox for \"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\" failed" error="failed to destroy network for sandbox \"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.209596 kubelet[3502]: E0130 14:02:00.209161 3502 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d"} Jan 30 14:02:00.209596 kubelet[3502]: E0130 14:02:00.209362 3502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Jan 30 14:02:00.209596 kubelet[3502]: E0130 14:02:00.209407 3502 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a"} Jan 30 14:02:00.209596 kubelet[3502]: E0130 14:02:00.209455 3502 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b56eeb33-9e79-4757-a325-1b7299a49fcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:02:00.209596 kubelet[3502]: E0130 14:02:00.209295 3502 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"172e4ddf-6abc-4051-bf69-e492ba18c815\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:02:00.209963 kubelet[3502]: E0130 14:02:00.209494 
3502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b56eeb33-9e79-4757-a325-1b7299a49fcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qdvb4" podUID="b56eeb33-9e79-4757-a325-1b7299a49fcc" Jan 30 14:02:00.210118 kubelet[3502]: E0130 14:02:00.209525 3502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"172e4ddf-6abc-4051-bf69-e492ba18c815\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cf55bf796-kxvtl" podUID="172e4ddf-6abc-4051-bf69-e492ba18c815" Jan 30 14:02:00.259449 containerd[2008]: time="2025-01-30T14:02:00.259158758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xs7nv,Uid:d3be0557-0e75-487a-81b6-3ddc28a8a3e9,Namespace:kube-system,Attempt:0,}" Jan 30 14:02:00.289006 containerd[2008]: time="2025-01-30T14:02:00.288585176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-whttc,Uid:96ccd58a-bb33-448c-b933-754229aff909,Namespace:kube-system,Attempt:0,}" Jan 30 14:02:00.327583 containerd[2008]: time="2025-01-30T14:02:00.326928126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c887dfbf4-2c2xq,Uid:0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8,Namespace:calico-apiserver,Attempt:0,}" Jan 30 14:02:00.343375 containerd[2008]: time="2025-01-30T14:02:00.343168403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c887dfbf4-j5ncj,Uid:5d7b309e-de44-4e46-a503-7451528eddd3,Namespace:calico-apiserver,Attempt:0,}" Jan 30 14:02:00.452975 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a-shm.mount: Deactivated successfully. Jan 30 14:02:00.496239 containerd[2008]: time="2025-01-30T14:02:00.496070516Z" level=error msg="Failed to destroy network for sandbox \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.501249 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6-shm.mount: Deactivated successfully. 
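Every failure above is the same failure: before any add or delete, the Calico CNI plugin stats /var/lib/calico/nodename, and that file does not exist until the calico/node container (whose image pull begins at 14:02:00.110) starts and writes it. A minimal Go sketch of that readiness gate, grounded only in the path and error text shown in the log; the function names are ours, not Calico's:

    package main

    import (
        "fmt"
        "os"
    )

    // Path the plugin stats, taken from the error text in the log.
    const nodenameFile = "/var/lib/calico/nodename"

    // calicoNodeReady fails closed until calico/node has written the file,
    // which is why every add and delete above is being rejected.
    func calicoNodeReady() (string, error) {
        b, err := os.ReadFile(nodenameFile)
        if os.IsNotExist(err) {
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        if err != nil {
            return "", err
        }
        return string(b), nil
    }

    func main() {
        node, err := calicoNodeReady()
        if err != nil {
            fmt.Println("CNI would refuse to run:", err)
            return
        }
        fmt.Println("node name:", node)
    }

Failing closed here is deliberate: without a node name, IPAM could not attribute allocations to the correct host.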
Jan 30 14:02:00.506133 containerd[2008]: time="2025-01-30T14:02:00.506066112Z" level=error msg="encountered an error cleaning up failed sandbox \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.508107 containerd[2008]: time="2025-01-30T14:02:00.506376455Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xs7nv,Uid:d3be0557-0e75-487a-81b6-3ddc28a8a3e9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.508272 kubelet[3502]: E0130 14:02:00.506669 3502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.508272 kubelet[3502]: E0130 14:02:00.506751 3502 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xs7nv" Jan 30 14:02:00.508272 kubelet[3502]: E0130 14:02:00.506785 3502 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xs7nv" Jan 30 14:02:00.508501 kubelet[3502]: E0130 14:02:00.506848 3502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xs7nv_kube-system(d3be0557-0e75-487a-81b6-3ddc28a8a3e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xs7nv_kube-system(d3be0557-0e75-487a-81b6-3ddc28a8a3e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xs7nv" podUID="d3be0557-0e75-487a-81b6-3ddc28a8a3e9" Jan 30 14:02:00.551447 containerd[2008]: time="2025-01-30T14:02:00.547923530Z" level=error msg="Failed to destroy network for sandbox \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jan 30 14:02:00.554672 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6-shm.mount: Deactivated successfully. Jan 30 14:02:00.555644 containerd[2008]: time="2025-01-30T14:02:00.555462278Z" level=error msg="encountered an error cleaning up failed sandbox \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.555978 containerd[2008]: time="2025-01-30T14:02:00.555768851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-whttc,Uid:96ccd58a-bb33-448c-b933-754229aff909,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.556805 kubelet[3502]: E0130 14:02:00.556619 3502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.556805 kubelet[3502]: E0130 14:02:00.556703 3502 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-whttc" Jan 30 14:02:00.556805 kubelet[3502]: E0130 14:02:00.556738 3502 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-whttc" Jan 30 14:02:00.557072 kubelet[3502]: E0130 14:02:00.556801 3502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-whttc_kube-system(96ccd58a-bb33-448c-b933-754229aff909)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-whttc_kube-system(96ccd58a-bb33-448c-b933-754229aff909)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-whttc" podUID="96ccd58a-bb33-448c-b933-754229aff909" Jan 30 14:02:00.573872 containerd[2008]: time="2025-01-30T14:02:00.573794191Z" level=error msg="Failed to destroy 
network for sandbox \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.575839 containerd[2008]: time="2025-01-30T14:02:00.575758721Z" level=error msg="encountered an error cleaning up failed sandbox \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.580659 containerd[2008]: time="2025-01-30T14:02:00.575865214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c887dfbf4-2c2xq,Uid:0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.580953 kubelet[3502]: E0130 14:02:00.576158 3502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.580953 kubelet[3502]: E0130 14:02:00.576231 3502 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c887dfbf4-2c2xq" Jan 30 14:02:00.580953 kubelet[3502]: E0130 14:02:00.576264 3502 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c887dfbf4-2c2xq" Jan 30 14:02:00.584965 kubelet[3502]: E0130 14:02:00.577155 3502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c887dfbf4-2c2xq_calico-apiserver(0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c887dfbf4-2c2xq_calico-apiserver(0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c887dfbf4-2c2xq" 
podUID="0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8" Jan 30 14:02:00.583175 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e-shm.mount: Deactivated successfully. Jan 30 14:02:00.619072 containerd[2008]: time="2025-01-30T14:02:00.618965842Z" level=error msg="Failed to destroy network for sandbox \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.622717 containerd[2008]: time="2025-01-30T14:02:00.620541918Z" level=error msg="encountered an error cleaning up failed sandbox \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.622717 containerd[2008]: time="2025-01-30T14:02:00.622464223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c887dfbf4-j5ncj,Uid:5d7b309e-de44-4e46-a503-7451528eddd3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.622985 kubelet[3502]: E0130 14:02:00.622785 3502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:00.622985 kubelet[3502]: E0130 14:02:00.622866 3502 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c887dfbf4-j5ncj" Jan 30 14:02:00.622985 kubelet[3502]: E0130 14:02:00.622903 3502 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c887dfbf4-j5ncj" Jan 30 14:02:00.623167 kubelet[3502]: E0130 14:02:00.622966 3502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c887dfbf4-j5ncj_calico-apiserver(5d7b309e-de44-4e46-a503-7451528eddd3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c887dfbf4-j5ncj_calico-apiserver(5d7b309e-de44-4e46-a503-7451528eddd3)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c887dfbf4-j5ncj" podUID="5d7b309e-de44-4e46-a503-7451528eddd3" Jan 30 14:02:00.625619 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb-shm.mount: Deactivated successfully. Jan 30 14:02:01.136908 kubelet[3502]: I0130 14:02:01.135915 3502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Jan 30 14:02:01.137050 containerd[2008]: time="2025-01-30T14:02:01.136822973Z" level=info msg="StopPodSandbox for \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\"" Jan 30 14:02:01.137554 containerd[2008]: time="2025-01-30T14:02:01.137126413Z" level=info msg="Ensure that sandbox af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb in task-service has been cleanup successfully" Jan 30 14:02:01.140000 kubelet[3502]: I0130 14:02:01.139935 3502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Jan 30 14:02:01.141039 containerd[2008]: time="2025-01-30T14:02:01.140975356Z" level=info msg="StopPodSandbox for \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\"" Jan 30 14:02:01.142389 containerd[2008]: time="2025-01-30T14:02:01.142239648Z" level=info msg="Ensure that sandbox 4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6 in task-service has been cleanup successfully" Jan 30 14:02:01.158927 kubelet[3502]: I0130 14:02:01.158869 3502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Jan 30 14:02:01.164355 containerd[2008]: time="2025-01-30T14:02:01.163748829Z" level=info msg="StopPodSandbox for \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\"" Jan 30 14:02:01.164355 containerd[2008]: time="2025-01-30T14:02:01.164028377Z" level=info msg="Ensure that sandbox 6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6 in task-service has been cleanup successfully" Jan 30 14:02:01.166998 kubelet[3502]: I0130 14:02:01.166947 3502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Jan 30 14:02:01.172988 containerd[2008]: time="2025-01-30T14:02:01.172914930Z" level=info msg="StopPodSandbox for \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\"" Jan 30 14:02:01.173345 containerd[2008]: time="2025-01-30T14:02:01.173213159Z" level=info msg="Ensure that sandbox ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e in task-service has been cleanup successfully" Jan 30 14:02:01.267396 containerd[2008]: time="2025-01-30T14:02:01.267262317Z" level=error msg="StopPodSandbox for \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\" failed" error="failed to destroy network for sandbox \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 
14:02:01.268668 kubelet[3502]: E0130 14:02:01.268434 3502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Jan 30 14:02:01.268668 kubelet[3502]: E0130 14:02:01.268510 3502 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6"} Jan 30 14:02:01.268668 kubelet[3502]: E0130 14:02:01.268576 3502 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3be0557-0e75-487a-81b6-3ddc28a8a3e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:02:01.268668 kubelet[3502]: E0130 14:02:01.268616 3502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3be0557-0e75-487a-81b6-3ddc28a8a3e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xs7nv" podUID="d3be0557-0e75-487a-81b6-3ddc28a8a3e9" Jan 30 14:02:01.271633 containerd[2008]: time="2025-01-30T14:02:01.271567236Z" level=error msg="StopPodSandbox for \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\" failed" error="failed to destroy network for sandbox \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:01.272861 kubelet[3502]: E0130 14:02:01.272283 3502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Jan 30 14:02:01.272861 kubelet[3502]: E0130 14:02:01.272442 3502 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb"} Jan 30 14:02:01.272861 kubelet[3502]: E0130 14:02:01.272526 3502 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d7b309e-de44-4e46-a503-7451528eddd3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:02:01.272861 kubelet[3502]: E0130 14:02:01.272573 3502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d7b309e-de44-4e46-a503-7451528eddd3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c887dfbf4-j5ncj" podUID="5d7b309e-de44-4e46-a503-7451528eddd3" Jan 30 14:02:01.276253 containerd[2008]: time="2025-01-30T14:02:01.276128820Z" level=error msg="StopPodSandbox for \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\" failed" error="failed to destroy network for sandbox \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:01.276879 kubelet[3502]: E0130 14:02:01.276496 3502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Jan 30 14:02:01.276879 kubelet[3502]: E0130 14:02:01.276566 3502 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6"} Jan 30 14:02:01.276879 kubelet[3502]: E0130 14:02:01.276620 3502 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"96ccd58a-bb33-448c-b933-754229aff909\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:02:01.276879 kubelet[3502]: E0130 14:02:01.276663 3502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"96ccd58a-bb33-448c-b933-754229aff909\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-whttc" podUID="96ccd58a-bb33-448c-b933-754229aff909" Jan 30 14:02:01.280904 kubelet[3502]: I0130 14:02:01.280108 3502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:02:01.296922 containerd[2008]: 
time="2025-01-30T14:02:01.296804365Z" level=error msg="StopPodSandbox for \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\" failed" error="failed to destroy network for sandbox \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:02:01.297445 kubelet[3502]: E0130 14:02:01.297329 3502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Jan 30 14:02:01.297445 kubelet[3502]: E0130 14:02:01.297407 3502 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e"} Jan 30 14:02:01.297600 kubelet[3502]: E0130 14:02:01.297464 3502 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:02:01.297600 kubelet[3502]: E0130 14:02:01.297509 3502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c887dfbf4-2c2xq" podUID="0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8" Jan 30 14:02:08.074749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount555745854.mount: Deactivated successfully. 
Jan 30 14:02:08.161148 containerd[2008]: time="2025-01-30T14:02:08.159733550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:08.161989 containerd[2008]: time="2025-01-30T14:02:08.161943543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 30 14:02:08.163542 containerd[2008]: time="2025-01-30T14:02:08.163495823Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:08.167088 containerd[2008]: time="2025-01-30T14:02:08.167034759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:08.168538 containerd[2008]: time="2025-01-30T14:02:08.168471025Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 8.05836402s" Jan 30 14:02:08.168649 containerd[2008]: time="2025-01-30T14:02:08.168542473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 30 14:02:08.208726 containerd[2008]: time="2025-01-30T14:02:08.208675794Z" level=info msg="CreateContainer within sandbox \"f44cc2341dd22ffafabe9076273915aacddd1bb5277d2bd28b30ce70df05bb84\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 14:02:08.235770 containerd[2008]: time="2025-01-30T14:02:08.235686028Z" level=info msg="CreateContainer within sandbox \"f44cc2341dd22ffafabe9076273915aacddd1bb5277d2bd28b30ce70df05bb84\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5c95b25b13ca27b9a769be4d1f0241166ff30d34bfe6ce7aa6bff39118980aa1\"" Jan 30 14:02:08.237535 containerd[2008]: time="2025-01-30T14:02:08.237364959Z" level=info msg="StartContainer for \"5c95b25b13ca27b9a769be4d1f0241166ff30d34bfe6ce7aa6bff39118980aa1\"" Jan 30 14:02:08.289636 systemd[1]: Started cri-containerd-5c95b25b13ca27b9a769be4d1f0241166ff30d34bfe6ce7aa6bff39118980aa1.scope - libcontainer container 5c95b25b13ca27b9a769be4d1f0241166ff30d34bfe6ce7aa6bff39118980aa1. Jan 30 14:02:08.351971 containerd[2008]: time="2025-01-30T14:02:08.351718880Z" level=info msg="StartContainer for \"5c95b25b13ca27b9a769be4d1f0241166ff30d34bfe6ce7aa6bff39118980aa1\" returns successfully" Jan 30 14:02:08.479415 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 14:02:08.479600 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 30 14:02:10.428425 kernel: bpftool[4763]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 14:02:10.972905 systemd-networkd[1917]: vxlan.calico: Link UP Jan 30 14:02:10.972920 systemd-networkd[1917]: vxlan.calico: Gained carrier Jan 30 14:02:10.972991 (udev-worker)[4594]: Network interface NamePolicy= disabled on kernel command line. Jan 30 14:02:11.021955 (udev-worker)[4597]: Network interface NamePolicy= disabled on kernel command line.
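A quick sanity check on the pull that just finished: containerd reports 137671762 bytes read for ghcr.io/flatcar/calico/node:v3.29.1 in 8.05836402s, roughly 16.3 MiB/s. The figures below are copied from the entries above; the arithmetic helper is ours:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const bytesRead = 137671762 // "bytes read" reported by containerd
        elapsed, err := time.ParseDuration("8.05836402s") // pull duration from the log
        if err != nil {
            panic(err)
        }
        mibPerSec := float64(bytesRead) / elapsed.Seconds() / (1 << 20)
        fmt.Printf("pull throughput: %.1f MiB/s\n", mibPerSec) // ~16.3
    }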
Jan 30 14:02:11.954964 containerd[2008]: time="2025-01-30T14:02:11.954739123Z" level=info msg="StopPodSandbox for \"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\"" Jan 30 14:02:12.120172 kubelet[3502]: I0130 14:02:12.119606 3502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pm8v8" podStartSLOduration=5.263301767 podStartE2EDuration="25.1195737s" podCreationTimestamp="2025-01-30 14:01:47 +0000 UTC" firstStartedPulling="2025-01-30 14:01:48.313777497 +0000 UTC m=+16.674007054" lastFinishedPulling="2025-01-30 14:02:08.170049442 +0000 UTC m=+36.530278987" observedRunningTime="2025-01-30 14:02:09.246681268 +0000 UTC m=+37.606910849" watchObservedRunningTime="2025-01-30 14:02:12.1195737 +0000 UTC m=+40.479803270" Jan 30 14:02:12.189749 containerd[2008]: 2025-01-30 14:02:12.118 [INFO][4889] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Jan 30 14:02:12.189749 containerd[2008]: 2025-01-30 14:02:12.120 [INFO][4889] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" iface="eth0" netns="/var/run/netns/cni-3931ab84-4060-e327-e0d4-2d92b9a6b590" Jan 30 14:02:12.189749 containerd[2008]: 2025-01-30 14:02:12.123 [INFO][4889] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" iface="eth0" netns="/var/run/netns/cni-3931ab84-4060-e327-e0d4-2d92b9a6b590" Jan 30 14:02:12.189749 containerd[2008]: 2025-01-30 14:02:12.126 [INFO][4889] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" iface="eth0" netns="/var/run/netns/cni-3931ab84-4060-e327-e0d4-2d92b9a6b590" Jan 30 14:02:12.189749 containerd[2008]: 2025-01-30 14:02:12.127 [INFO][4889] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Jan 30 14:02:12.189749 containerd[2008]: 2025-01-30 14:02:12.127 [INFO][4889] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Jan 30 14:02:12.189749 containerd[2008]: 2025-01-30 14:02:12.166 [INFO][4897] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" HandleID="k8s-pod-network.555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Workload="ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0" Jan 30 14:02:12.189749 containerd[2008]: 2025-01-30 14:02:12.167 [INFO][4897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:12.189749 containerd[2008]: 2025-01-30 14:02:12.167 [INFO][4897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:02:12.189749 containerd[2008]: 2025-01-30 14:02:12.179 [WARNING][4897] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" HandleID="k8s-pod-network.555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Workload="ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0" Jan 30 14:02:12.189749 containerd[2008]: 2025-01-30 14:02:12.179 [INFO][4897] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" HandleID="k8s-pod-network.555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Workload="ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0" Jan 30 14:02:12.189749 containerd[2008]: 2025-01-30 14:02:12.182 [INFO][4897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:02:12.189749 containerd[2008]: 2025-01-30 14:02:12.187 [INFO][4889] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Jan 30 14:02:12.191206 containerd[2008]: time="2025-01-30T14:02:12.189921982Z" level=info msg="TearDown network for sandbox \"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\" successfully" Jan 30 14:02:12.191206 containerd[2008]: time="2025-01-30T14:02:12.189964123Z" level=info msg="StopPodSandbox for \"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\" returns successfully" Jan 30 14:02:12.197838 containerd[2008]: time="2025-01-30T14:02:12.195657164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cf55bf796-kxvtl,Uid:172e4ddf-6abc-4051-bf69-e492ba18c815,Namespace:calico-system,Attempt:1,}" Jan 30 14:02:12.195824 systemd[1]: run-netns-cni\x2d3931ab84\x2d4060\x2de327\x2de0d4\x2d2d92b9a6b590.mount: Deactivated successfully. 
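The [WARNING] "Asked to release address but it doesn't exist. Ignoring" above is the healthy path: the failed adds at 14:02:00 never reached IPAM, so the delete finds no allocation and still succeeds, letting TearDown complete. A sketch of that treat-not-found-as-success convention for a CNI delete (the types and names are ours, not Calico's):

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("allocation not found")

    // store is a stand-in for the IPAM datastore; handles map to addresses.
    type store struct{ byHandle map[string]string }

    func (s *store) releaseByHandle(handle string) error {
        if _, ok := s.byHandle[handle]; !ok {
            return errNotFound
        }
        delete(s.byHandle, handle)
        return nil
    }

    // cniDel must succeed on a sandbox whose add never completed, so a
    // missing allocation is logged and ignored rather than returned.
    func cniDel(s *store, handle string) error {
        err := s.releaseByHandle(handle)
        if errors.Is(err, errNotFound) {
            fmt.Println("asked to release address but it doesn't exist; ignoring")
            return nil
        }
        return err
    }

    func main() {
        _ = cniDel(&store{byHandle: map[string]string{}}, "k8s-pod-network.example")
    }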
Jan 30 14:02:12.346793 systemd-networkd[1917]: vxlan.calico: Gained IPv6LL Jan 30 14:02:12.563703 systemd-networkd[1917]: cali0c935a718e2: Link UP Jan 30 14:02:12.564578 systemd-networkd[1917]: cali0c935a718e2: Gained carrier Jan 30 14:02:12.602605 containerd[2008]: 2025-01-30 14:02:12.304 [INFO][4904] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0 calico-kube-controllers-5cf55bf796- calico-system 172e4ddf-6abc-4051-bf69-e492ba18c815 777 0 2025-01-30 14:01:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5cf55bf796 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-25-132 calico-kube-controllers-5cf55bf796-kxvtl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0c935a718e2 [] []}} ContainerID="a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2" Namespace="calico-system" Pod="calico-kube-controllers-5cf55bf796-kxvtl" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-" Jan 30 14:02:12.602605 containerd[2008]: 2025-01-30 14:02:12.307 [INFO][4904] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2" Namespace="calico-system" Pod="calico-kube-controllers-5cf55bf796-kxvtl" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0" Jan 30 14:02:12.602605 containerd[2008]: 2025-01-30 14:02:12.385 [INFO][4915] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2" HandleID="k8s-pod-network.a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2" Workload="ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0" Jan 30 14:02:12.602605 containerd[2008]: 2025-01-30 14:02:12.403 [INFO][4915] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2" HandleID="k8s-pod-network.a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2" Workload="ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003849a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-132", "pod":"calico-kube-controllers-5cf55bf796-kxvtl", "timestamp":"2025-01-30 14:02:12.385156195 +0000 UTC"}, Hostname:"ip-172-31-25-132", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:02:12.602605 containerd[2008]: 2025-01-30 14:02:12.403 [INFO][4915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:12.602605 containerd[2008]: 2025-01-30 14:02:12.403 [INFO][4915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:02:12.602605 containerd[2008]: 2025-01-30 14:02:12.403 [INFO][4915] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-132' Jan 30 14:02:12.602605 containerd[2008]: 2025-01-30 14:02:12.416 [INFO][4915] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2" host="ip-172-31-25-132" Jan 30 14:02:12.602605 containerd[2008]: 2025-01-30 14:02:12.500 [INFO][4915] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-132" Jan 30 14:02:12.602605 containerd[2008]: 2025-01-30 14:02:12.512 [INFO][4915] ipam/ipam.go 489: Trying affinity for 192.168.35.192/26 host="ip-172-31-25-132" Jan 30 14:02:12.602605 containerd[2008]: 2025-01-30 14:02:12.515 [INFO][4915] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.192/26 host="ip-172-31-25-132" Jan 30 14:02:12.602605 containerd[2008]: 2025-01-30 14:02:12.520 [INFO][4915] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="ip-172-31-25-132" Jan 30 14:02:12.602605 containerd[2008]: 2025-01-30 14:02:12.520 [INFO][4915] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2" host="ip-172-31-25-132" Jan 30 14:02:12.602605 containerd[2008]: 2025-01-30 14:02:12.524 [INFO][4915] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2 Jan 30 14:02:12.602605 containerd[2008]: 2025-01-30 14:02:12.533 [INFO][4915] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2" host="ip-172-31-25-132" Jan 30 14:02:12.602605 containerd[2008]: 2025-01-30 14:02:12.547 [INFO][4915] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.193/26] block=192.168.35.192/26 handle="k8s-pod-network.a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2" host="ip-172-31-25-132" Jan 30 14:02:12.602605 containerd[2008]: 2025-01-30 14:02:12.548 [INFO][4915] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.193/26] handle="k8s-pod-network.a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2" host="ip-172-31-25-132" Jan 30 14:02:12.602605 containerd[2008]: 2025-01-30 14:02:12.548 [INFO][4915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:02:12.602605 containerd[2008]: 2025-01-30 14:02:12.548 [INFO][4915] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.193/26] IPv6=[] ContainerID="a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2" HandleID="k8s-pod-network.a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2" Workload="ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0" Jan 30 14:02:12.610724 containerd[2008]: 2025-01-30 14:02:12.552 [INFO][4904] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2" Namespace="calico-system" Pod="calico-kube-controllers-5cf55bf796-kxvtl" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0", GenerateName:"calico-kube-controllers-5cf55bf796-", Namespace:"calico-system", SelfLink:"", UID:"172e4ddf-6abc-4051-bf69-e492ba18c815", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cf55bf796", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"", Pod:"calico-kube-controllers-5cf55bf796-kxvtl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c935a718e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:12.610724 containerd[2008]: 2025-01-30 14:02:12.553 [INFO][4904] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.193/32] ContainerID="a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2" Namespace="calico-system" Pod="calico-kube-controllers-5cf55bf796-kxvtl" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0" Jan 30 14:02:12.610724 containerd[2008]: 2025-01-30 14:02:12.553 [INFO][4904] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c935a718e2 ContainerID="a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2" Namespace="calico-system" Pod="calico-kube-controllers-5cf55bf796-kxvtl" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0" Jan 30 14:02:12.610724 containerd[2008]: 2025-01-30 14:02:12.565 [INFO][4904] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2" Namespace="calico-system" Pod="calico-kube-controllers-5cf55bf796-kxvtl" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0" Jan 30 14:02:12.610724 containerd[2008]: 2025-01-30 14:02:12.566 [INFO][4904] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2" Namespace="calico-system" Pod="calico-kube-controllers-5cf55bf796-kxvtl" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0", GenerateName:"calico-kube-controllers-5cf55bf796-", Namespace:"calico-system", SelfLink:"", UID:"172e4ddf-6abc-4051-bf69-e492ba18c815", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cf55bf796", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2", Pod:"calico-kube-controllers-5cf55bf796-kxvtl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c935a718e2", MAC:"a2:f1:65:bc:3f:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:12.610724 containerd[2008]: 2025-01-30 14:02:12.592 [INFO][4904] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2" Namespace="calico-system" Pod="calico-kube-controllers-5cf55bf796-kxvtl" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0" Jan 30 14:02:12.705124 containerd[2008]: time="2025-01-30T14:02:12.701658958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:02:12.705124 containerd[2008]: time="2025-01-30T14:02:12.701751128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:02:12.705124 containerd[2008]: time="2025-01-30T14:02:12.701783292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:12.705124 containerd[2008]: time="2025-01-30T14:02:12.701930365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:12.801382 systemd[1]: Started cri-containerd-a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2.scope - libcontainer container a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2. 
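The IPAM trace above is plain block arithmetic: ip-172-31-25-132 holds an affinity for the /26 block 192.168.35.192/26 (64 addresses) and claims 192.168.35.193 as its first assignment. The same numbers reproduced with the Go standard library, values taken from the log:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block from the "Trying affinity for 192.168.35.192/26" lines.
        block := netip.MustParsePrefix("192.168.35.192/26")
        fmt.Println("addresses per block:", 1<<(32-block.Bits())) // 64

        // Claims come out of the block in order: .193 here, and .194 for
        // csi-node-driver-qdvb4 a second later.
        for a, n := block.Addr(), 0; n < 3; a, n = a.Next(), n+1 {
            fmt.Println(a, "in block:", block.Contains(a))
        }
    }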
Jan 30 14:02:12.947739 containerd[2008]: time="2025-01-30T14:02:12.947618186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cf55bf796-kxvtl,Uid:172e4ddf-6abc-4051-bf69-e492ba18c815,Namespace:calico-system,Attempt:1,} returns sandbox id \"a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2\"" Jan 30 14:02:12.951352 containerd[2008]: time="2025-01-30T14:02:12.951171217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 14:02:12.954849 containerd[2008]: time="2025-01-30T14:02:12.954547881Z" level=info msg="StopPodSandbox for \"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\"" Jan 30 14:02:13.044075 systemd[1]: Started sshd@9-172.31.25.132:22-139.178.89.65:35532.service - OpenSSH per-connection server daemon (139.178.89.65:35532). Jan 30 14:02:13.167470 containerd[2008]: 2025-01-30 14:02:13.091 [INFO][4994] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Jan 30 14:02:13.167470 containerd[2008]: 2025-01-30 14:02:13.091 [INFO][4994] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" iface="eth0" netns="/var/run/netns/cni-924958d2-6a9b-b228-f0f3-3ef27417d573" Jan 30 14:02:13.167470 containerd[2008]: 2025-01-30 14:02:13.091 [INFO][4994] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" iface="eth0" netns="/var/run/netns/cni-924958d2-6a9b-b228-f0f3-3ef27417d573" Jan 30 14:02:13.167470 containerd[2008]: 2025-01-30 14:02:13.092 [INFO][4994] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" iface="eth0" netns="/var/run/netns/cni-924958d2-6a9b-b228-f0f3-3ef27417d573" Jan 30 14:02:13.167470 containerd[2008]: 2025-01-30 14:02:13.092 [INFO][4994] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Jan 30 14:02:13.167470 containerd[2008]: 2025-01-30 14:02:13.092 [INFO][4994] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Jan 30 14:02:13.167470 containerd[2008]: 2025-01-30 14:02:13.145 [INFO][5002] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" HandleID="k8s-pod-network.8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Workload="ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0" Jan 30 14:02:13.167470 containerd[2008]: 2025-01-30 14:02:13.146 [INFO][5002] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:13.167470 containerd[2008]: 2025-01-30 14:02:13.146 [INFO][5002] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:02:13.167470 containerd[2008]: 2025-01-30 14:02:13.159 [WARNING][5002] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" HandleID="k8s-pod-network.8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Workload="ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0" Jan 30 14:02:13.167470 containerd[2008]: 2025-01-30 14:02:13.159 [INFO][5002] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" HandleID="k8s-pod-network.8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Workload="ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0" Jan 30 14:02:13.167470 containerd[2008]: 2025-01-30 14:02:13.162 [INFO][5002] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:02:13.167470 containerd[2008]: 2025-01-30 14:02:13.164 [INFO][4994] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Jan 30 14:02:13.169908 containerd[2008]: time="2025-01-30T14:02:13.167742386Z" level=info msg="TearDown network for sandbox \"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\" successfully" Jan 30 14:02:13.169908 containerd[2008]: time="2025-01-30T14:02:13.167781790Z" level=info msg="StopPodSandbox for \"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\" returns successfully" Jan 30 14:02:13.169908 containerd[2008]: time="2025-01-30T14:02:13.168965005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qdvb4,Uid:b56eeb33-9e79-4757-a325-1b7299a49fcc,Namespace:calico-system,Attempt:1,}" Jan 30 14:02:13.197129 systemd[1]: run-netns-cni\x2d924958d2\x2d6a9b\x2db228\x2df0f3\x2d3ef27417d573.mount: Deactivated successfully. Jan 30 14:02:13.310989 sshd[5001]: Accepted publickey for core from 139.178.89.65 port 35532 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:13.317886 sshd[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:13.331230 systemd-logind[1994]: New session 10 of user core. Jan 30 14:02:13.340655 systemd[1]: Started session-10.scope - Session 10 of User core. 
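The shim activity above (the io.containerd.runc.v2 plugin loads, the cri-containerd-*.scope units) is all driven through containerd's API, and a pull like the one requested at 14:02:12.951 can be reproduced against the same socket. A sketch using the containerd Go client, assuming the pre-2.0 module path github.com/containerd/containerd and the CRI plugin's "k8s.io" namespace:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Talk to the daemon writing these log lines. The CRI plugin keeps
        // Kubernetes images in the "k8s.io" namespace.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Same reference the kubelet requested at 14:02:12.951.
        img, err := client.Pull(ctx,
            "ghcr.io/flatcar/calico/kube-controllers:v3.29.1",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        size, _ := img.Size(ctx)
        fmt.Println("pulled", img.Name(), "size", size)
    }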
Jan 30 14:02:13.588282 systemd-networkd[1917]: calie8cbc2de079: Link UP Jan 30 14:02:13.588712 systemd-networkd[1917]: calie8cbc2de079: Gained carrier Jan 30 14:02:13.646570 containerd[2008]: 2025-01-30 14:02:13.401 [INFO][5010] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0 csi-node-driver- calico-system b56eeb33-9e79-4757-a325-1b7299a49fcc 816 0 2025-01-30 14:01:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-25-132 csi-node-driver-qdvb4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie8cbc2de079 [] []}} ContainerID="740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7" Namespace="calico-system" Pod="csi-node-driver-qdvb4" WorkloadEndpoint="ip--172--31--25--132-k8s-csi--node--driver--qdvb4-" Jan 30 14:02:13.646570 containerd[2008]: 2025-01-30 14:02:13.401 [INFO][5010] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7" Namespace="calico-system" Pod="csi-node-driver-qdvb4" WorkloadEndpoint="ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0" Jan 30 14:02:13.646570 containerd[2008]: 2025-01-30 14:02:13.477 [INFO][5023] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7" HandleID="k8s-pod-network.740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7" Workload="ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0" Jan 30 14:02:13.646570 containerd[2008]: 2025-01-30 14:02:13.506 [INFO][5023] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7" HandleID="k8s-pod-network.740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7" Workload="ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000319430), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-132", "pod":"csi-node-driver-qdvb4", "timestamp":"2025-01-30 14:02:13.477553713 +0000 UTC"}, Hostname:"ip-172-31-25-132", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:02:13.646570 containerd[2008]: 2025-01-30 14:02:13.506 [INFO][5023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:13.646570 containerd[2008]: 2025-01-30 14:02:13.506 [INFO][5023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:02:13.646570 containerd[2008]: 2025-01-30 14:02:13.506 [INFO][5023] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-132' Jan 30 14:02:13.646570 containerd[2008]: 2025-01-30 14:02:13.509 [INFO][5023] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7" host="ip-172-31-25-132" Jan 30 14:02:13.646570 containerd[2008]: 2025-01-30 14:02:13.518 [INFO][5023] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-132" Jan 30 14:02:13.646570 containerd[2008]: 2025-01-30 14:02:13.527 [INFO][5023] ipam/ipam.go 489: Trying affinity for 192.168.35.192/26 host="ip-172-31-25-132" Jan 30 14:02:13.646570 containerd[2008]: 2025-01-30 14:02:13.532 [INFO][5023] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.192/26 host="ip-172-31-25-132" Jan 30 14:02:13.646570 containerd[2008]: 2025-01-30 14:02:13.538 [INFO][5023] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="ip-172-31-25-132" Jan 30 14:02:13.646570 containerd[2008]: 2025-01-30 14:02:13.538 [INFO][5023] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7" host="ip-172-31-25-132" Jan 30 14:02:13.646570 containerd[2008]: 2025-01-30 14:02:13.541 [INFO][5023] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7 Jan 30 14:02:13.646570 containerd[2008]: 2025-01-30 14:02:13.555 [INFO][5023] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7" host="ip-172-31-25-132" Jan 30 14:02:13.646570 containerd[2008]: 2025-01-30 14:02:13.574 [INFO][5023] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.194/26] block=192.168.35.192/26 handle="k8s-pod-network.740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7" host="ip-172-31-25-132" Jan 30 14:02:13.646570 containerd[2008]: 2025-01-30 14:02:13.574 [INFO][5023] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.194/26] handle="k8s-pod-network.740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7" host="ip-172-31-25-132" Jan 30 14:02:13.646570 containerd[2008]: 2025-01-30 14:02:13.574 [INFO][5023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
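The sequence above (look up the host's affinities, try the affinity for 192.168.35.192/26, load the block, claim the next address, write the block back) is the core of Calico's block-based IPAM. A simplified, self-contained sketch of claiming the lowest free address in a host-affine /26; real Calico also persists the block and handles write races, which this omits. The two pre-used addresses are assumptions chosen so the scan lands on .194, matching the log:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree scans a host-affine block for the lowest unallocated address.
// A /26 holds 64 addresses: 192.168.35.192 through 192.168.35.255.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.35.192/26")
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.35.192"): true, // assumed reserved
		netip.MustParseAddr("192.168.35.193"): true, // assumed claimed earlier
	}
	if ip, ok := nextFree(block, used); ok {
		fmt.Printf("claimed %s from block %s\n", ip, block) // 192.168.35.194
	}
}
```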
Jan 30 14:02:13.646570 containerd[2008]: 2025-01-30 14:02:13.574 [INFO][5023] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.194/26] IPv6=[] ContainerID="740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7" HandleID="k8s-pod-network.740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7" Workload="ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0" Jan 30 14:02:13.652683 containerd[2008]: 2025-01-30 14:02:13.578 [INFO][5010] cni-plugin/k8s.go 386: Populated endpoint ContainerID="740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7" Namespace="calico-system" Pod="csi-node-driver-qdvb4" WorkloadEndpoint="ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b56eeb33-9e79-4757-a325-1b7299a49fcc", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"", Pod:"csi-node-driver-qdvb4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8cbc2de079", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:13.652683 containerd[2008]: 2025-01-30 14:02:13.578 [INFO][5010] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.194/32] ContainerID="740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7" Namespace="calico-system" Pod="csi-node-driver-qdvb4" WorkloadEndpoint="ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0" Jan 30 14:02:13.652683 containerd[2008]: 2025-01-30 14:02:13.578 [INFO][5010] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8cbc2de079 ContainerID="740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7" Namespace="calico-system" Pod="csi-node-driver-qdvb4" WorkloadEndpoint="ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0" Jan 30 14:02:13.652683 containerd[2008]: 2025-01-30 14:02:13.589 [INFO][5010] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7" Namespace="calico-system" Pod="csi-node-driver-qdvb4" WorkloadEndpoint="ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0" Jan 30 14:02:13.652683 containerd[2008]: 2025-01-30 14:02:13.590 [INFO][5010] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7" Namespace="calico-system" 
Pod="csi-node-driver-qdvb4" WorkloadEndpoint="ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b56eeb33-9e79-4757-a325-1b7299a49fcc", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7", Pod:"csi-node-driver-qdvb4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8cbc2de079", MAC:"86:6e:0e:36:8b:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:13.652683 containerd[2008]: 2025-01-30 14:02:13.632 [INFO][5010] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7" Namespace="calico-system" Pod="csi-node-driver-qdvb4" WorkloadEndpoint="ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0" Jan 30 14:02:13.716900 containerd[2008]: time="2025-01-30T14:02:13.716685092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:02:13.718241 containerd[2008]: time="2025-01-30T14:02:13.716814253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:02:13.718241 containerd[2008]: time="2025-01-30T14:02:13.717948688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:13.718241 containerd[2008]: time="2025-01-30T14:02:13.718158637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:13.719662 sshd[5001]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:13.744051 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 14:02:13.754207 systemd[1]: sshd@9-172.31.25.132:22-139.178.89.65:35532.service: Deactivated successfully. Jan 30 14:02:13.778139 systemd-logind[1994]: Session 10 logged out. Waiting for processes to exit. Jan 30 14:02:13.791037 systemd[1]: Started cri-containerd-740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7.scope - libcontainer container 740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7. Jan 30 14:02:13.793198 systemd-logind[1994]: Removed session 10. 
Jan 30 14:02:13.845160 containerd[2008]: time="2025-01-30T14:02:13.845001739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qdvb4,Uid:b56eeb33-9e79-4757-a325-1b7299a49fcc,Namespace:calico-system,Attempt:1,} returns sandbox id \"740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7\"" Jan 30 14:02:13.958334 containerd[2008]: time="2025-01-30T14:02:13.957798650Z" level=info msg="StopPodSandbox for \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\"" Jan 30 14:02:14.117062 containerd[2008]: 2025-01-30 14:02:14.046 [INFO][5105] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Jan 30 14:02:14.117062 containerd[2008]: 2025-01-30 14:02:14.050 [INFO][5105] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" iface="eth0" netns="/var/run/netns/cni-de15fc9e-1997-66bf-9eb5-34251175b738" Jan 30 14:02:14.117062 containerd[2008]: 2025-01-30 14:02:14.050 [INFO][5105] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" iface="eth0" netns="/var/run/netns/cni-de15fc9e-1997-66bf-9eb5-34251175b738" Jan 30 14:02:14.117062 containerd[2008]: 2025-01-30 14:02:14.051 [INFO][5105] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" iface="eth0" netns="/var/run/netns/cni-de15fc9e-1997-66bf-9eb5-34251175b738" Jan 30 14:02:14.117062 containerd[2008]: 2025-01-30 14:02:14.051 [INFO][5105] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Jan 30 14:02:14.117062 containerd[2008]: 2025-01-30 14:02:14.051 [INFO][5105] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Jan 30 14:02:14.117062 containerd[2008]: 2025-01-30 14:02:14.091 [INFO][5111] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" HandleID="k8s-pod-network.4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0" Jan 30 14:02:14.117062 containerd[2008]: 2025-01-30 14:02:14.091 [INFO][5111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:14.117062 containerd[2008]: 2025-01-30 14:02:14.091 [INFO][5111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:02:14.117062 containerd[2008]: 2025-01-30 14:02:14.107 [WARNING][5111] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" HandleID="k8s-pod-network.4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0" Jan 30 14:02:14.117062 containerd[2008]: 2025-01-30 14:02:14.107 [INFO][5111] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" HandleID="k8s-pod-network.4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0" Jan 30 14:02:14.117062 containerd[2008]: 2025-01-30 14:02:14.109 [INFO][5111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:02:14.117062 containerd[2008]: 2025-01-30 14:02:14.112 [INFO][5105] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Jan 30 14:02:14.118609 containerd[2008]: time="2025-01-30T14:02:14.118141243Z" level=info msg="TearDown network for sandbox \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\" successfully" Jan 30 14:02:14.118609 containerd[2008]: time="2025-01-30T14:02:14.118191092Z" level=info msg="StopPodSandbox for \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\" returns successfully" Jan 30 14:02:14.119784 containerd[2008]: time="2025-01-30T14:02:14.119462107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-whttc,Uid:96ccd58a-bb33-448c-b933-754229aff909,Namespace:kube-system,Attempt:1,}" Jan 30 14:02:14.202152 systemd[1]: run-netns-cni\x2dde15fc9e\x2d1997\x2d66bf\x2d9eb5\x2d34251175b738.mount: Deactivated successfully. Jan 30 14:02:14.368794 systemd-networkd[1917]: cali425738c1df1: Link UP Jan 30 14:02:14.374836 systemd-networkd[1917]: cali425738c1df1: Gained carrier Jan 30 14:02:14.409352 containerd[2008]: 2025-01-30 14:02:14.229 [INFO][5118] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0 coredns-668d6bf9bc- kube-system 96ccd58a-bb33-448c-b933-754229aff909 824 0 2025-01-30 14:01:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-132 coredns-668d6bf9bc-whttc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali425738c1df1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120" Namespace="kube-system" Pod="coredns-668d6bf9bc-whttc" WorkloadEndpoint="ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-" Jan 30 14:02:14.409352 containerd[2008]: 2025-01-30 14:02:14.230 [INFO][5118] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120" Namespace="kube-system" Pod="coredns-668d6bf9bc-whttc" WorkloadEndpoint="ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0" Jan 30 14:02:14.409352 containerd[2008]: 2025-01-30 14:02:14.282 [INFO][5128] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120" HandleID="k8s-pod-network.427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120" 
Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0" Jan 30 14:02:14.409352 containerd[2008]: 2025-01-30 14:02:14.298 [INFO][5128] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120" HandleID="k8s-pod-network.427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d2d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-132", "pod":"coredns-668d6bf9bc-whttc", "timestamp":"2025-01-30 14:02:14.28215096 +0000 UTC"}, Hostname:"ip-172-31-25-132", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:02:14.409352 containerd[2008]: 2025-01-30 14:02:14.298 [INFO][5128] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:14.409352 containerd[2008]: 2025-01-30 14:02:14.298 [INFO][5128] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:02:14.409352 containerd[2008]: 2025-01-30 14:02:14.299 [INFO][5128] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-132' Jan 30 14:02:14.409352 containerd[2008]: 2025-01-30 14:02:14.303 [INFO][5128] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120" host="ip-172-31-25-132" Jan 30 14:02:14.409352 containerd[2008]: 2025-01-30 14:02:14.309 [INFO][5128] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-132" Jan 30 14:02:14.409352 containerd[2008]: 2025-01-30 14:02:14.317 [INFO][5128] ipam/ipam.go 489: Trying affinity for 192.168.35.192/26 host="ip-172-31-25-132" Jan 30 14:02:14.409352 containerd[2008]: 2025-01-30 14:02:14.320 [INFO][5128] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.192/26 host="ip-172-31-25-132" Jan 30 14:02:14.409352 containerd[2008]: 2025-01-30 14:02:14.324 [INFO][5128] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="ip-172-31-25-132" Jan 30 14:02:14.409352 containerd[2008]: 2025-01-30 14:02:14.324 [INFO][5128] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120" host="ip-172-31-25-132" Jan 30 14:02:14.409352 containerd[2008]: 2025-01-30 14:02:14.327 [INFO][5128] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120 Jan 30 14:02:14.409352 containerd[2008]: 2025-01-30 14:02:14.337 [INFO][5128] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120" host="ip-172-31-25-132" Jan 30 14:02:14.409352 containerd[2008]: 2025-01-30 14:02:14.347 [INFO][5128] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.195/26] block=192.168.35.192/26 handle="k8s-pod-network.427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120" host="ip-172-31-25-132" Jan 30 14:02:14.409352 containerd[2008]: 2025-01-30 14:02:14.347 [INFO][5128] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.195/26] handle="k8s-pod-network.427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120" host="ip-172-31-25-132" 
Jan 30 14:02:14.409352 containerd[2008]: 2025-01-30 14:02:14.347 [INFO][5128] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:02:14.409352 containerd[2008]: 2025-01-30 14:02:14.347 [INFO][5128] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.195/26] IPv6=[] ContainerID="427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120" HandleID="k8s-pod-network.427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0" Jan 30 14:02:14.412090 containerd[2008]: 2025-01-30 14:02:14.351 [INFO][5118] cni-plugin/k8s.go 386: Populated endpoint ContainerID="427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120" Namespace="kube-system" Pod="coredns-668d6bf9bc-whttc" WorkloadEndpoint="ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"96ccd58a-bb33-448c-b933-754229aff909", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"", Pod:"coredns-668d6bf9bc-whttc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali425738c1df1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:14.412090 containerd[2008]: 2025-01-30 14:02:14.352 [INFO][5118] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.195/32] ContainerID="427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120" Namespace="kube-system" Pod="coredns-668d6bf9bc-whttc" WorkloadEndpoint="ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0" Jan 30 14:02:14.412090 containerd[2008]: 2025-01-30 14:02:14.352 [INFO][5118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali425738c1df1 ContainerID="427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120" Namespace="kube-system" Pod="coredns-668d6bf9bc-whttc" WorkloadEndpoint="ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0" Jan 30 14:02:14.412090 containerd[2008]: 2025-01-30 14:02:14.374 [INFO][5118] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120" Namespace="kube-system" Pod="coredns-668d6bf9bc-whttc" WorkloadEndpoint="ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0" Jan 30 14:02:14.412090 containerd[2008]: 2025-01-30 14:02:14.382 [INFO][5118] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120" Namespace="kube-system" Pod="coredns-668d6bf9bc-whttc" WorkloadEndpoint="ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"96ccd58a-bb33-448c-b933-754229aff909", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120", Pod:"coredns-668d6bf9bc-whttc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali425738c1df1", MAC:"9a:74:82:27:3d:c6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:14.412090 containerd[2008]: 2025-01-30 14:02:14.404 [INFO][5118] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120" Namespace="kube-system" Pod="coredns-668d6bf9bc-whttc" WorkloadEndpoint="ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0" Jan 30 14:02:14.474521 containerd[2008]: time="2025-01-30T14:02:14.473808838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:02:14.474521 containerd[2008]: time="2025-01-30T14:02:14.473903818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:02:14.474521 containerd[2008]: time="2025-01-30T14:02:14.473929775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:14.474521 containerd[2008]: time="2025-01-30T14:02:14.474087185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:14.521571 systemd-networkd[1917]: cali0c935a718e2: Gained IPv6LL Jan 30 14:02:14.539797 systemd[1]: run-containerd-runc-k8s.io-427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120-runc.Vp4yBe.mount: Deactivated successfully. Jan 30 14:02:14.553645 systemd[1]: Started cri-containerd-427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120.scope - libcontainer container 427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120. Jan 30 14:02:14.640799 containerd[2008]: time="2025-01-30T14:02:14.640546967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-whttc,Uid:96ccd58a-bb33-448c-b933-754229aff909,Namespace:kube-system,Attempt:1,} returns sandbox id \"427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120\"" Jan 30 14:02:14.674679 containerd[2008]: time="2025-01-30T14:02:14.674581084Z" level=info msg="CreateContainer within sandbox \"427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:02:14.737796 containerd[2008]: time="2025-01-30T14:02:14.737618695Z" level=info msg="CreateContainer within sandbox \"427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6c312ba84ebdb9998d375b53f57eb18997a9789cbd822bd07abfab3c3bf125d8\"" Jan 30 14:02:14.741105 containerd[2008]: time="2025-01-30T14:02:14.741028795Z" level=info msg="StartContainer for \"6c312ba84ebdb9998d375b53f57eb18997a9789cbd822bd07abfab3c3bf125d8\"" Jan 30 14:02:14.803110 systemd[1]: Started cri-containerd-6c312ba84ebdb9998d375b53f57eb18997a9789cbd822bd07abfab3c3bf125d8.scope - libcontainer container 6c312ba84ebdb9998d375b53f57eb18997a9789cbd822bd07abfab3c3bf125d8. 
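These entries trace the CRI call sequence end to end: RunPodSandbox returns a sandbox ID, CreateContainer within that sandbox returns a container ID, and StartContainer brings it up, with each step spawning a cri-containerd systemd scope. A minimal sketch of that ordering against a hypothetical client with CRI-shaped methods; the real API is gRPC (k8s.io/cri-api), not these stubs:

```go
package main

import "fmt"

// runtime models the three CRI RuntimeService calls visible in the log;
// this stub only fixes the ordering, not the real signatures.
type runtime struct{}

func (runtime) RunPodSandbox(name string) string        { return "sandbox-" + name }
func (runtime) CreateContainer(sb, name string) string  { return "ctr-" + name }
func (runtime) StartContainer(id string)                { fmt.Println("started", id) }

func main() {
	var rt runtime
	// 1. Sandbox first: the pod's network namespace and pause container.
	sb := rt.RunPodSandbox("coredns-668d6bf9bc-whttc")
	// 2. The application container is created inside that sandbox...
	id := rt.CreateContainer(sb, "coredns")
	// 3. ...and only then started, matching the StartContainer entries.
	rt.StartContainer(id)
}
```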
Jan 30 14:02:14.861534 containerd[2008]: time="2025-01-30T14:02:14.861412430Z" level=info msg="StartContainer for \"6c312ba84ebdb9998d375b53f57eb18997a9789cbd822bd07abfab3c3bf125d8\" returns successfully" Jan 30 14:02:14.954928 containerd[2008]: time="2025-01-30T14:02:14.954857782Z" level=info msg="StopPodSandbox for \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\"" Jan 30 14:02:14.956550 containerd[2008]: time="2025-01-30T14:02:14.956375412Z" level=info msg="StopPodSandbox for \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\"" Jan 30 14:02:14.963920 containerd[2008]: time="2025-01-30T14:02:14.963824547Z" level=info msg="StopPodSandbox for \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\"" Jan 30 14:02:15.326739 kubelet[3502]: I0130 14:02:15.326155 3502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-whttc" podStartSLOduration=38.326124713 podStartE2EDuration="38.326124713s" podCreationTimestamp="2025-01-30 14:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:02:15.319596174 +0000 UTC m=+43.679825767" watchObservedRunningTime="2025-01-30 14:02:15.326124713 +0000 UTC m=+43.686354294" Jan 30 14:02:15.418230 systemd-networkd[1917]: calie8cbc2de079: Gained IPv6LL Jan 30 14:02:15.494762 containerd[2008]: 2025-01-30 14:02:15.178 [INFO][5275] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Jan 30 14:02:15.494762 containerd[2008]: 2025-01-30 14:02:15.181 [INFO][5275] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" iface="eth0" netns="/var/run/netns/cni-b53f8ec7-8b6d-4a59-f70b-b3f353dfa182" Jan 30 14:02:15.494762 containerd[2008]: 2025-01-30 14:02:15.185 [INFO][5275] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" iface="eth0" netns="/var/run/netns/cni-b53f8ec7-8b6d-4a59-f70b-b3f353dfa182" Jan 30 14:02:15.494762 containerd[2008]: 2025-01-30 14:02:15.186 [INFO][5275] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" iface="eth0" netns="/var/run/netns/cni-b53f8ec7-8b6d-4a59-f70b-b3f353dfa182" Jan 30 14:02:15.494762 containerd[2008]: 2025-01-30 14:02:15.186 [INFO][5275] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Jan 30 14:02:15.494762 containerd[2008]: 2025-01-30 14:02:15.186 [INFO][5275] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Jan 30 14:02:15.494762 containerd[2008]: 2025-01-30 14:02:15.411 [INFO][5291] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" HandleID="k8s-pod-network.af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0" Jan 30 14:02:15.494762 containerd[2008]: 2025-01-30 14:02:15.411 [INFO][5291] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
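The pod_startup_latency_tracker entry above is plain time arithmetic: podStartSLOduration=38.326124713 is the observation time (14:02:15.326124713) minus the pod creation timestamp (14:01:37), with firstStartedPulling/lastFinishedPulling left at the zero time because no image pull was observed. Verified in a few lines:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	created := time.Date(2025, time.January, 30, 14, 1, 37, 0, time.UTC)
	observed := time.Date(2025, time.January, 30, 14, 2, 15, 326124713, time.UTC)
	// 14:02:15.326124713 - 14:01:37 = 38.326124713s, the logged SLO duration.
	fmt.Println(observed.Sub(created).Seconds())
}
```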
Jan 30 14:02:15.494762 containerd[2008]: 2025-01-30 14:02:15.412 [INFO][5291] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:02:15.494762 containerd[2008]: 2025-01-30 14:02:15.470 [WARNING][5291] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" HandleID="k8s-pod-network.af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0" Jan 30 14:02:15.494762 containerd[2008]: 2025-01-30 14:02:15.472 [INFO][5291] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" HandleID="k8s-pod-network.af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0" Jan 30 14:02:15.494762 containerd[2008]: 2025-01-30 14:02:15.481 [INFO][5291] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:02:15.494762 containerd[2008]: 2025-01-30 14:02:15.484 [INFO][5275] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Jan 30 14:02:15.498455 containerd[2008]: time="2025-01-30T14:02:15.495475047Z" level=info msg="TearDown network for sandbox \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\" successfully" Jan 30 14:02:15.498455 containerd[2008]: time="2025-01-30T14:02:15.495516492Z" level=info msg="StopPodSandbox for \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\" returns successfully" Jan 30 14:02:15.511075 systemd[1]: run-netns-cni\x2db53f8ec7\x2d8b6d\x2d4a59\x2df70b\x2db3f353dfa182.mount: Deactivated successfully. Jan 30 14:02:15.541889 containerd[2008]: time="2025-01-30T14:02:15.539093278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c887dfbf4-j5ncj,Uid:5d7b309e-de44-4e46-a503-7451528eddd3,Namespace:calico-apiserver,Attempt:1,}" Jan 30 14:02:15.575333 containerd[2008]: 2025-01-30 14:02:15.269 [INFO][5281] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Jan 30 14:02:15.575333 containerd[2008]: 2025-01-30 14:02:15.269 [INFO][5281] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" iface="eth0" netns="/var/run/netns/cni-0ec02a84-ff8a-ebff-1701-368e35489a7f" Jan 30 14:02:15.575333 containerd[2008]: 2025-01-30 14:02:15.271 [INFO][5281] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" iface="eth0" netns="/var/run/netns/cni-0ec02a84-ff8a-ebff-1701-368e35489a7f" Jan 30 14:02:15.575333 containerd[2008]: 2025-01-30 14:02:15.275 [INFO][5281] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" iface="eth0" netns="/var/run/netns/cni-0ec02a84-ff8a-ebff-1701-368e35489a7f" Jan 30 14:02:15.575333 containerd[2008]: 2025-01-30 14:02:15.275 [INFO][5281] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Jan 30 14:02:15.575333 containerd[2008]: 2025-01-30 14:02:15.275 [INFO][5281] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Jan 30 14:02:15.575333 containerd[2008]: 2025-01-30 14:02:15.462 [INFO][5297] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" HandleID="k8s-pod-network.ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0" Jan 30 14:02:15.575333 containerd[2008]: 2025-01-30 14:02:15.463 [INFO][5297] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:15.575333 containerd[2008]: 2025-01-30 14:02:15.482 [INFO][5297] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:02:15.575333 containerd[2008]: 2025-01-30 14:02:15.523 [WARNING][5297] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" HandleID="k8s-pod-network.ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0" Jan 30 14:02:15.575333 containerd[2008]: 2025-01-30 14:02:15.545 [INFO][5297] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" HandleID="k8s-pod-network.ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0" Jan 30 14:02:15.575333 containerd[2008]: 2025-01-30 14:02:15.552 [INFO][5297] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:02:15.575333 containerd[2008]: 2025-01-30 14:02:15.564 [INFO][5281] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Jan 30 14:02:15.577478 containerd[2008]: time="2025-01-30T14:02:15.576130131Z" level=info msg="TearDown network for sandbox \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\" successfully" Jan 30 14:02:15.577478 containerd[2008]: time="2025-01-30T14:02:15.576913739Z" level=info msg="StopPodSandbox for \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\" returns successfully" Jan 30 14:02:15.583781 containerd[2008]: time="2025-01-30T14:02:15.583690597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c887dfbf4-2c2xq,Uid:0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8,Namespace:calico-apiserver,Attempt:1,}" Jan 30 14:02:15.632533 systemd[1]: run-netns-cni\x2d0ec02a84\x2dff8a\x2debff\x2d1701\x2d368e35489a7f.mount: Deactivated successfully. Jan 30 14:02:15.641176 containerd[2008]: 2025-01-30 14:02:15.288 [INFO][5274] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Jan 30 14:02:15.641176 containerd[2008]: 2025-01-30 14:02:15.288 [INFO][5274] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" iface="eth0" netns="/var/run/netns/cni-50245632-e6a7-29cf-7dfa-f7a9ccfc5c7e" Jan 30 14:02:15.641176 containerd[2008]: 2025-01-30 14:02:15.290 [INFO][5274] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" iface="eth0" netns="/var/run/netns/cni-50245632-e6a7-29cf-7dfa-f7a9ccfc5c7e" Jan 30 14:02:15.641176 containerd[2008]: 2025-01-30 14:02:15.291 [INFO][5274] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" iface="eth0" netns="/var/run/netns/cni-50245632-e6a7-29cf-7dfa-f7a9ccfc5c7e" Jan 30 14:02:15.641176 containerd[2008]: 2025-01-30 14:02:15.291 [INFO][5274] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Jan 30 14:02:15.641176 containerd[2008]: 2025-01-30 14:02:15.291 [INFO][5274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Jan 30 14:02:15.641176 containerd[2008]: 2025-01-30 14:02:15.469 [INFO][5302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" HandleID="k8s-pod-network.6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0" Jan 30 14:02:15.641176 containerd[2008]: 2025-01-30 14:02:15.470 [INFO][5302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:15.641176 containerd[2008]: 2025-01-30 14:02:15.553 [INFO][5302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:02:15.641176 containerd[2008]: 2025-01-30 14:02:15.602 [WARNING][5302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" HandleID="k8s-pod-network.6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0" Jan 30 14:02:15.641176 containerd[2008]: 2025-01-30 14:02:15.603 [INFO][5302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" HandleID="k8s-pod-network.6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0" Jan 30 14:02:15.641176 containerd[2008]: 2025-01-30 14:02:15.609 [INFO][5302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:02:15.641176 containerd[2008]: 2025-01-30 14:02:15.623 [INFO][5274] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Jan 30 14:02:15.641176 containerd[2008]: time="2025-01-30T14:02:15.637600359Z" level=info msg="TearDown network for sandbox \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\" successfully" Jan 30 14:02:15.641176 containerd[2008]: time="2025-01-30T14:02:15.637645034Z" level=info msg="StopPodSandbox for \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\" returns successfully" Jan 30 14:02:15.656686 containerd[2008]: time="2025-01-30T14:02:15.656609363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xs7nv,Uid:d3be0557-0e75-487a-81b6-3ddc28a8a3e9,Namespace:kube-system,Attempt:1,}" Jan 30 14:02:15.664835 systemd[1]: run-netns-cni\x2d50245632\x2de6a7\x2d29cf\x2d7dfa\x2df7a9ccfc5c7e.mount: Deactivated successfully. Jan 30 14:02:16.315237 systemd-networkd[1917]: cali0f3552d4e4b: Link UP Jan 30 14:02:16.316120 systemd-networkd[1917]: cali0f3552d4e4b: Gained carrier Jan 30 14:02:16.389990 containerd[2008]: 2025-01-30 14:02:15.810 [INFO][5317] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0 calico-apiserver-7c887dfbf4- calico-apiserver 5d7b309e-de44-4e46-a503-7451528eddd3 840 0 2025-01-30 14:01:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c887dfbf4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-25-132 calico-apiserver-7c887dfbf4-j5ncj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0f3552d4e4b [] []}} ContainerID="6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b" Namespace="calico-apiserver" Pod="calico-apiserver-7c887dfbf4-j5ncj" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-" Jan 30 14:02:16.389990 containerd[2008]: 2025-01-30 14:02:15.811 [INFO][5317] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b" Namespace="calico-apiserver" Pod="calico-apiserver-7c887dfbf4-j5ncj" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0" Jan 30 14:02:16.389990 containerd[2008]: 2025-01-30 14:02:16.075 [INFO][5354] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b" HandleID="k8s-pod-network.6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0" Jan 30 14:02:16.389990 containerd[2008]: 2025-01-30 14:02:16.155 [INFO][5354] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b" HandleID="k8s-pod-network.6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000317f60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-25-132", "pod":"calico-apiserver-7c887dfbf4-j5ncj", "timestamp":"2025-01-30 14:02:16.075673477 +0000 UTC"}, Hostname:"ip-172-31-25-132", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:02:16.389990 containerd[2008]: 2025-01-30 14:02:16.155 [INFO][5354] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:16.389990 containerd[2008]: 2025-01-30 14:02:16.159 [INFO][5354] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:02:16.389990 containerd[2008]: 2025-01-30 14:02:16.159 [INFO][5354] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-132' Jan 30 14:02:16.389990 containerd[2008]: 2025-01-30 14:02:16.173 [INFO][5354] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b" host="ip-172-31-25-132" Jan 30 14:02:16.389990 containerd[2008]: 2025-01-30 14:02:16.194 [INFO][5354] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-132" Jan 30 14:02:16.389990 containerd[2008]: 2025-01-30 14:02:16.220 [INFO][5354] ipam/ipam.go 489: Trying affinity for 192.168.35.192/26 host="ip-172-31-25-132" Jan 30 14:02:16.389990 containerd[2008]: 2025-01-30 14:02:16.226 [INFO][5354] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.192/26 host="ip-172-31-25-132" Jan 30 14:02:16.389990 containerd[2008]: 2025-01-30 14:02:16.238 [INFO][5354] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="ip-172-31-25-132" Jan 30 14:02:16.389990 containerd[2008]: 2025-01-30 14:02:16.238 [INFO][5354] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b" host="ip-172-31-25-132" Jan 30 14:02:16.389990 containerd[2008]: 2025-01-30 14:02:16.246 [INFO][5354] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b Jan 30 14:02:16.389990 containerd[2008]: 2025-01-30 14:02:16.262 [INFO][5354] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b" host="ip-172-31-25-132" Jan 30 14:02:16.389990 containerd[2008]: 2025-01-30 14:02:16.289 [INFO][5354] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.196/26] block=192.168.35.192/26 handle="k8s-pod-network.6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b" host="ip-172-31-25-132" Jan 30 14:02:16.389990 containerd[2008]: 2025-01-30 14:02:16.289 [INFO][5354] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.196/26] handle="k8s-pod-network.6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b" host="ip-172-31-25-132" Jan 30 14:02:16.389990 containerd[2008]: 2025-01-30 14:02:16.290 [INFO][5354] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
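Throughout this stretch, every IPAM goroutine ([5291], [5297], [5302], [5354], [5359]) logs "About to acquire host-wide IPAM lock" and proceeds only after the previous holder logs the release, so concurrent sandbox teardowns and setups on the node serialize on a single per-host lock. A toy model of that discipline with one mutex; the operation labels are invented shorthand for the pods in the log:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var hostWide sync.Mutex // one lock per node, like Calico's host-wide IPAM lock
	var wg sync.WaitGroup

	ops := []string{
		"teardown coredns-whttc",
		"teardown apiserver-j5ncj",
		"assign apiserver-j5ncj .196",
		"assign apiserver-2c2xq .197",
	}
	for _, op := range ops {
		wg.Add(1)
		go func(op string) {
			defer wg.Done()
			fmt.Println("about to acquire host-wide IPAM lock:", op)
			hostWide.Lock()
			fmt.Println("acquired:", op) // block reads/writes happen here
			hostWide.Unlock()
			fmt.Println("released:", op)
		}(op)
	}
	wg.Wait()
}
```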
Jan 30 14:02:16.389990 containerd[2008]: 2025-01-30 14:02:16.290 [INFO][5354] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.196/26] IPv6=[] ContainerID="6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b" HandleID="k8s-pod-network.6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0" Jan 30 14:02:16.393412 containerd[2008]: 2025-01-30 14:02:16.304 [INFO][5317] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b" Namespace="calico-apiserver" Pod="calico-apiserver-7c887dfbf4-j5ncj" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0", GenerateName:"calico-apiserver-7c887dfbf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d7b309e-de44-4e46-a503-7451528eddd3", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c887dfbf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"", Pod:"calico-apiserver-7c887dfbf4-j5ncj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f3552d4e4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:16.393412 containerd[2008]: 2025-01-30 14:02:16.306 [INFO][5317] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.196/32] ContainerID="6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b" Namespace="calico-apiserver" Pod="calico-apiserver-7c887dfbf4-j5ncj" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0" Jan 30 14:02:16.393412 containerd[2008]: 2025-01-30 14:02:16.307 [INFO][5317] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f3552d4e4b ContainerID="6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b" Namespace="calico-apiserver" Pod="calico-apiserver-7c887dfbf4-j5ncj" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0" Jan 30 14:02:16.393412 containerd[2008]: 2025-01-30 14:02:16.317 [INFO][5317] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b" Namespace="calico-apiserver" Pod="calico-apiserver-7c887dfbf4-j5ncj" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0" Jan 30 14:02:16.393412 containerd[2008]: 2025-01-30 14:02:16.318 [INFO][5317] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b" Namespace="calico-apiserver" Pod="calico-apiserver-7c887dfbf4-j5ncj" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0", GenerateName:"calico-apiserver-7c887dfbf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d7b309e-de44-4e46-a503-7451528eddd3", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c887dfbf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b", Pod:"calico-apiserver-7c887dfbf4-j5ncj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f3552d4e4b", MAC:"3e:0f:66:78:32:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:16.393412 containerd[2008]: 2025-01-30 14:02:16.355 [INFO][5317] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b" Namespace="calico-apiserver" Pod="calico-apiserver-7c887dfbf4-j5ncj" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0" Jan 30 14:02:16.443358 systemd-networkd[1917]: cali425738c1df1: Gained IPv6LL Jan 30 14:02:16.481711 systemd-networkd[1917]: calice6f1e04d89: Link UP Jan 30 14:02:16.484248 systemd-networkd[1917]: calice6f1e04d89: Gained carrier Jan 30 14:02:16.582727 containerd[2008]: 2025-01-30 14:02:15.893 [INFO][5326] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0 calico-apiserver-7c887dfbf4- calico-apiserver 0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8 841 0 2025-01-30 14:01:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c887dfbf4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-25-132 calico-apiserver-7c887dfbf4-2c2xq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calice6f1e04d89 [] []}} ContainerID="ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a" Namespace="calico-apiserver" Pod="calico-apiserver-7c887dfbf4-2c2xq" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-" Jan 30 14:02:16.582727 containerd[2008]: 2025-01-30 14:02:15.893 [INFO][5326] cni-plugin/k8s.go 77: Extracted identifiers for 
CmdAddK8s ContainerID="ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a" Namespace="calico-apiserver" Pod="calico-apiserver-7c887dfbf4-2c2xq" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0" Jan 30 14:02:16.582727 containerd[2008]: 2025-01-30 14:02:16.152 [INFO][5359] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a" HandleID="k8s-pod-network.ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0" Jan 30 14:02:16.582727 containerd[2008]: 2025-01-30 14:02:16.215 [INFO][5359] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a" HandleID="k8s-pod-network.ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a2120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-25-132", "pod":"calico-apiserver-7c887dfbf4-2c2xq", "timestamp":"2025-01-30 14:02:16.152980412 +0000 UTC"}, Hostname:"ip-172-31-25-132", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:02:16.582727 containerd[2008]: 2025-01-30 14:02:16.215 [INFO][5359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:16.582727 containerd[2008]: 2025-01-30 14:02:16.291 [INFO][5359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:02:16.582727 containerd[2008]: 2025-01-30 14:02:16.291 [INFO][5359] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-132' Jan 30 14:02:16.582727 containerd[2008]: 2025-01-30 14:02:16.303 [INFO][5359] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a" host="ip-172-31-25-132" Jan 30 14:02:16.582727 containerd[2008]: 2025-01-30 14:02:16.330 [INFO][5359] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-132" Jan 30 14:02:16.582727 containerd[2008]: 2025-01-30 14:02:16.358 [INFO][5359] ipam/ipam.go 489: Trying affinity for 192.168.35.192/26 host="ip-172-31-25-132" Jan 30 14:02:16.582727 containerd[2008]: 2025-01-30 14:02:16.366 [INFO][5359] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.192/26 host="ip-172-31-25-132" Jan 30 14:02:16.582727 containerd[2008]: 2025-01-30 14:02:16.385 [INFO][5359] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="ip-172-31-25-132" Jan 30 14:02:16.582727 containerd[2008]: 2025-01-30 14:02:16.385 [INFO][5359] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a" host="ip-172-31-25-132" Jan 30 14:02:16.582727 containerd[2008]: 2025-01-30 14:02:16.396 [INFO][5359] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a Jan 30 14:02:16.582727 containerd[2008]: 2025-01-30 14:02:16.412 [INFO][5359] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.192/26 
handle="k8s-pod-network.ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a" host="ip-172-31-25-132" Jan 30 14:02:16.582727 containerd[2008]: 2025-01-30 14:02:16.446 [INFO][5359] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.197/26] block=192.168.35.192/26 handle="k8s-pod-network.ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a" host="ip-172-31-25-132" Jan 30 14:02:16.582727 containerd[2008]: 2025-01-30 14:02:16.446 [INFO][5359] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.197/26] handle="k8s-pod-network.ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a" host="ip-172-31-25-132" Jan 30 14:02:16.582727 containerd[2008]: 2025-01-30 14:02:16.446 [INFO][5359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:02:16.582727 containerd[2008]: 2025-01-30 14:02:16.446 [INFO][5359] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.197/26] IPv6=[] ContainerID="ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a" HandleID="k8s-pod-network.ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0" Jan 30 14:02:16.586383 containerd[2008]: 2025-01-30 14:02:16.464 [INFO][5326] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a" Namespace="calico-apiserver" Pod="calico-apiserver-7c887dfbf4-2c2xq" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0", GenerateName:"calico-apiserver-7c887dfbf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c887dfbf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"", Pod:"calico-apiserver-7c887dfbf4-2c2xq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calice6f1e04d89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:16.586383 containerd[2008]: 2025-01-30 14:02:16.464 [INFO][5326] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.197/32] ContainerID="ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a" Namespace="calico-apiserver" Pod="calico-apiserver-7c887dfbf4-2c2xq" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0" Jan 30 14:02:16.586383 containerd[2008]: 2025-01-30 14:02:16.466 [INFO][5326] cni-plugin/dataplane_linux.go 69: Setting the host side veth 
name to calice6f1e04d89 ContainerID="ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a" Namespace="calico-apiserver" Pod="calico-apiserver-7c887dfbf4-2c2xq" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0" Jan 30 14:02:16.586383 containerd[2008]: 2025-01-30 14:02:16.488 [INFO][5326] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a" Namespace="calico-apiserver" Pod="calico-apiserver-7c887dfbf4-2c2xq" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0" Jan 30 14:02:16.586383 containerd[2008]: 2025-01-30 14:02:16.492 [INFO][5326] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a" Namespace="calico-apiserver" Pod="calico-apiserver-7c887dfbf4-2c2xq" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0", GenerateName:"calico-apiserver-7c887dfbf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c887dfbf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a", Pod:"calico-apiserver-7c887dfbf4-2c2xq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calice6f1e04d89", MAC:"e6:4b:74:06:fc:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:16.586383 containerd[2008]: 2025-01-30 14:02:16.563 [INFO][5326] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a" Namespace="calico-apiserver" Pod="calico-apiserver-7c887dfbf4-2c2xq" WorkloadEndpoint="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0" Jan 30 14:02:16.620994 containerd[2008]: time="2025-01-30T14:02:16.620797987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:02:16.624770 containerd[2008]: time="2025-01-30T14:02:16.620898718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:02:16.624770 containerd[2008]: time="2025-01-30T14:02:16.620926619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:16.624770 containerd[2008]: time="2025-01-30T14:02:16.621096156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:16.704406 systemd-networkd[1917]: califa80a80456d: Link UP Jan 30 14:02:16.725248 systemd-networkd[1917]: califa80a80456d: Gained carrier Jan 30 14:02:16.750929 systemd[1]: Started cri-containerd-6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b.scope - libcontainer container 6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b. Jan 30 14:02:16.790156 containerd[2008]: time="2025-01-30T14:02:16.789641644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:02:16.790156 containerd[2008]: time="2025-01-30T14:02:16.789783603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:02:16.790156 containerd[2008]: time="2025-01-30T14:02:16.789824940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:16.790156 containerd[2008]: time="2025-01-30T14:02:16.790003757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:16.800245 containerd[2008]: 2025-01-30 14:02:15.893 [INFO][5338] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0 coredns-668d6bf9bc- kube-system d3be0557-0e75-487a-81b6-3ddc28a8a3e9 842 0 2025-01-30 14:01:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-132 coredns-668d6bf9bc-xs7nv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califa80a80456d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718" Namespace="kube-system" Pod="coredns-668d6bf9bc-xs7nv" WorkloadEndpoint="ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-" Jan 30 14:02:16.800245 containerd[2008]: 2025-01-30 14:02:15.894 [INFO][5338] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718" Namespace="kube-system" Pod="coredns-668d6bf9bc-xs7nv" WorkloadEndpoint="ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0" Jan 30 14:02:16.800245 containerd[2008]: 2025-01-30 14:02:16.198 [INFO][5360] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718" HandleID="k8s-pod-network.3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0" Jan 30 14:02:16.800245 containerd[2008]: 2025-01-30 14:02:16.237 [INFO][5360] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718" HandleID="k8s-pod-network.3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x400011abf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-132", "pod":"coredns-668d6bf9bc-xs7nv", "timestamp":"2025-01-30 14:02:16.197961143 +0000 UTC"}, Hostname:"ip-172-31-25-132", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:02:16.800245 containerd[2008]: 2025-01-30 14:02:16.237 [INFO][5360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:16.800245 containerd[2008]: 2025-01-30 14:02:16.447 [INFO][5360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:02:16.800245 containerd[2008]: 2025-01-30 14:02:16.453 [INFO][5360] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-132' Jan 30 14:02:16.800245 containerd[2008]: 2025-01-30 14:02:16.462 [INFO][5360] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718" host="ip-172-31-25-132" Jan 30 14:02:16.800245 containerd[2008]: 2025-01-30 14:02:16.496 [INFO][5360] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-132" Jan 30 14:02:16.800245 containerd[2008]: 2025-01-30 14:02:16.520 [INFO][5360] ipam/ipam.go 489: Trying affinity for 192.168.35.192/26 host="ip-172-31-25-132" Jan 30 14:02:16.800245 containerd[2008]: 2025-01-30 14:02:16.542 [INFO][5360] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.192/26 host="ip-172-31-25-132" Jan 30 14:02:16.800245 containerd[2008]: 2025-01-30 14:02:16.569 [INFO][5360] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="ip-172-31-25-132" Jan 30 14:02:16.800245 containerd[2008]: 2025-01-30 14:02:16.569 [INFO][5360] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718" host="ip-172-31-25-132" Jan 30 14:02:16.800245 containerd[2008]: 2025-01-30 14:02:16.580 [INFO][5360] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718 Jan 30 14:02:16.800245 containerd[2008]: 2025-01-30 14:02:16.608 [INFO][5360] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718" host="ip-172-31-25-132" Jan 30 14:02:16.800245 containerd[2008]: 2025-01-30 14:02:16.657 [INFO][5360] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.198/26] block=192.168.35.192/26 handle="k8s-pod-network.3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718" host="ip-172-31-25-132" Jan 30 14:02:16.800245 containerd[2008]: 2025-01-30 14:02:16.658 [INFO][5360] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.198/26] handle="k8s-pod-network.3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718" host="ip-172-31-25-132" Jan 30 14:02:16.800245 containerd[2008]: 2025-01-30 14:02:16.658 [INFO][5360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
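
The ipam/ipam.go records above trace Calico's block-affinity allocation path end to end: acquire the host-wide IPAM lock, look up the node's affine blocks, try the affinity for 192.168.35.192/26, load the block, claim the next free address under a new handle, write the block back, and release the lock. A minimal toy sketch of that loop, assuming an in-memory block store (the real allocator in projectcalico/calico persists blocks to the datastore with compare-and-swap writes):

    package main

    import (
    	"fmt"
    	"net"
    	"sync"
    )

    // block models one /26 IPAM block with a per-address allocation bitmap.
    type block struct {
    	cidr net.IPNet
    	used [64]bool // a /26 holds 64 addresses
    }

    type allocator struct {
    	mu       sync.Mutex          // stands in for the host-wide IPAM lock in the log
    	affinity map[string][]*block // host -> blocks affine to that host
    }

    // autoAssign mirrors the logged sequence: acquire the lock, walk the
    // host's affine blocks, and claim the first free address.
    func (a *allocator) autoAssign(host string) (net.IP, error) {
    	a.mu.Lock()         // "Acquired host-wide IPAM lock."
    	defer a.mu.Unlock() // "Released host-wide IPAM lock."

    	for _, b := range a.affinity[host] { // "Trying affinity for 192.168.35.192/26"
    		for i := range b.used { // "Attempting to assign 1 addresses from block"
    			if !b.used[i] {
    				b.used[i] = true // "Writing block in order to claim IPs"
    				base := b.cidr.IP.To4()
    				return net.IPv4(base[0], base[1], base[2], base[3]+byte(i)), nil
    			}
    		}
    	}
    	return nil, fmt.Errorf("no affine block with free addresses for %s", host)
    }

    func main() {
    	_, cidr, _ := net.ParseCIDR("192.168.35.192/26")
    	a := &allocator{affinity: map[string][]*block{
    		"ip-172-31-25-132": {{cidr: *cidr}},
    	}}
    	// The log shows .196, .197 and .198 claimed in turn; pre-marking the
    	// first five addresses makes the next claim land on .197 as logged.
    	b := a.affinity["ip-172-31-25-132"][0]
    	for i := 0; i < 5; i++ {
    		b.used[i] = true
    	}
    	ip, _ := a.autoAssign("ip-172-31-25-132")
    	fmt.Println(ip) // 192.168.35.197
    }

The per-host lock also explains the wait visible above: the coredns request [5360] logs "About to acquire host-wide IPAM lock" at 14:02:16.237 but only acquires it at 14:02:16.447, right after the apiserver request [5359] releases it at 14:02:16.446.
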
Jan 30 14:02:16.800245 containerd[2008]: 2025-01-30 14:02:16.658 [INFO][5360] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.198/26] IPv6=[] ContainerID="3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718" HandleID="k8s-pod-network.3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0" Jan 30 14:02:16.802724 containerd[2008]: 2025-01-30 14:02:16.681 [INFO][5338] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718" Namespace="kube-system" Pod="coredns-668d6bf9bc-xs7nv" WorkloadEndpoint="ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d3be0557-0e75-487a-81b6-3ddc28a8a3e9", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"", Pod:"coredns-668d6bf9bc-xs7nv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa80a80456d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:16.802724 containerd[2008]: 2025-01-30 14:02:16.681 [INFO][5338] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.198/32] ContainerID="3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718" Namespace="kube-system" Pod="coredns-668d6bf9bc-xs7nv" WorkloadEndpoint="ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0" Jan 30 14:02:16.802724 containerd[2008]: 2025-01-30 14:02:16.681 [INFO][5338] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa80a80456d ContainerID="3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718" Namespace="kube-system" Pod="coredns-668d6bf9bc-xs7nv" WorkloadEndpoint="ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0" Jan 30 14:02:16.802724 containerd[2008]: 2025-01-30 14:02:16.741 [INFO][5338] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718" Namespace="kube-system" Pod="coredns-668d6bf9bc-xs7nv" 
WorkloadEndpoint="ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0" Jan 30 14:02:16.802724 containerd[2008]: 2025-01-30 14:02:16.742 [INFO][5338] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718" Namespace="kube-system" Pod="coredns-668d6bf9bc-xs7nv" WorkloadEndpoint="ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d3be0557-0e75-487a-81b6-3ddc28a8a3e9", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718", Pod:"coredns-668d6bf9bc-xs7nv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa80a80456d", MAC:"22:e9:2a:d9:e1:f1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:16.802724 containerd[2008]: 2025-01-30 14:02:16.782 [INFO][5338] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718" Namespace="kube-system" Pod="coredns-668d6bf9bc-xs7nv" WorkloadEndpoint="ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0" Jan 30 14:02:16.897611 systemd[1]: Started cri-containerd-ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a.scope - libcontainer container ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a. Jan 30 14:02:16.994277 containerd[2008]: time="2025-01-30T14:02:16.991740588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:02:16.994277 containerd[2008]: time="2025-01-30T14:02:16.991874275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:02:16.994277 containerd[2008]: time="2025-01-30T14:02:16.991912322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:16.994277 containerd[2008]: time="2025-01-30T14:02:16.992100240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:17.089625 systemd[1]: Started cri-containerd-3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718.scope - libcontainer container 3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718. Jan 30 14:02:17.202026 systemd[1]: run-containerd-runc-k8s.io-ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a-runc.fOHegH.mount: Deactivated successfully. Jan 30 14:02:17.208761 containerd[2008]: time="2025-01-30T14:02:17.208533172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c887dfbf4-j5ncj,Uid:5d7b309e-de44-4e46-a503-7451528eddd3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b\"" Jan 30 14:02:17.327496 containerd[2008]: time="2025-01-30T14:02:17.326124331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xs7nv,Uid:d3be0557-0e75-487a-81b6-3ddc28a8a3e9,Namespace:kube-system,Attempt:1,} returns sandbox id \"3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718\"" Jan 30 14:02:17.341852 containerd[2008]: time="2025-01-30T14:02:17.341772843Z" level=info msg="CreateContainer within sandbox \"3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:02:17.361151 containerd[2008]: time="2025-01-30T14:02:17.360948191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c887dfbf4-2c2xq,Uid:0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a\"" Jan 30 14:02:17.411913 containerd[2008]: time="2025-01-30T14:02:17.411207168Z" level=info msg="CreateContainer within sandbox \"3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"60df392b77f703af42964a3092d7016e83f14a3f49e920cf4288769bea058c0f\"" Jan 30 14:02:17.417340 containerd[2008]: time="2025-01-30T14:02:17.415586332Z" level=info msg="StartContainer for \"60df392b77f703af42964a3092d7016e83f14a3f49e920cf4288769bea058c0f\"" Jan 30 14:02:17.534631 systemd[1]: Started cri-containerd-60df392b77f703af42964a3092d7016e83f14a3f49e920cf4288769bea058c0f.scope - libcontainer container 60df392b77f703af42964a3092d7016e83f14a3f49e920cf4288769bea058c0f. 
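
The RunPodSandbox / CreateContainer / StartContainer records are the server side of the CRI calls the kubelet makes into containerd. A hedged client-side sketch of the same sequence against the published CRI v1 gRPC API; the socket path, pod metadata, and image reference are illustrative, the image is assumed already present (normally pulled first through the ImageService), and error handling is minimal:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// containerd serves CRI on its main socket.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx := context.Background()

    	sandboxCfg := &runtimeapi.PodSandboxConfig{
    		Metadata: &runtimeapi.PodSandboxMetadata{
    			Name: "coredns-668d6bf9bc-xs7nv", Namespace: "kube-system",
    			Uid: "d3be0557-0e75-487a-81b6-3ddc28a8a3e9", Attempt: 1,
    		},
    	}

    	// 1. RunPodSandbox -> the "returns sandbox id" record in the log
    	//    (this is where the Calico CNI ADD above runs).
    	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
    	if err != nil {
    		log.Fatal(err)
    	}

    	// 2. CreateContainer within the sandbox (image ref is illustrative).
    	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId:  sb.PodSandboxId,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "coredns"},
    			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/coredns/coredns:v1.11.3"},
    		},
    		SandboxConfig: sandboxCfg,
    	})
    	if err != nil {
    		log.Fatal(err)
    	}

    	// 3. StartContainer -> "StartContainer for ... returns successfully".
    	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
    		ContainerId: created.ContainerId,
    	}); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("started", created.ContainerId)
    }

The four "loading plugin" lines repeated before each start are the io.containerd.runc.v2 shim booting for that container; one shim process is spawned per sandbox, which is why the same quartet recurs for every cri-containerd scope above.
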
Jan 30 14:02:17.665356 containerd[2008]: time="2025-01-30T14:02:17.663278710Z" level=info msg="StartContainer for \"60df392b77f703af42964a3092d7016e83f14a3f49e920cf4288769bea058c0f\" returns successfully" Jan 30 14:02:17.806696 containerd[2008]: time="2025-01-30T14:02:17.803627747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:17.814776 containerd[2008]: time="2025-01-30T14:02:17.814721015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 30 14:02:17.820107 containerd[2008]: time="2025-01-30T14:02:17.820047345Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:17.831702 containerd[2008]: time="2025-01-30T14:02:17.831623530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:17.836179 containerd[2008]: time="2025-01-30T14:02:17.836120137Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 4.884770799s" Jan 30 14:02:17.836609 containerd[2008]: time="2025-01-30T14:02:17.836430576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 30 14:02:17.840997 containerd[2008]: time="2025-01-30T14:02:17.840623804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 14:02:17.862516 containerd[2008]: time="2025-01-30T14:02:17.862459224Z" level=info msg="CreateContainer within sandbox \"a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 14:02:17.895817 containerd[2008]: time="2025-01-30T14:02:17.895759270Z" level=info msg="CreateContainer within sandbox \"a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"49915b6c73f4eef7f3d5f1db176d706a04952946ba43ad752a3011fe3255c77b\"" Jan 30 14:02:17.897774 containerd[2008]: time="2025-01-30T14:02:17.897712214Z" level=info msg="StartContainer for \"49915b6c73f4eef7f3d5f1db176d706a04952946ba43ad752a3011fe3255c77b\"" Jan 30 14:02:17.969466 systemd[1]: Started cri-containerd-49915b6c73f4eef7f3d5f1db176d706a04952946ba43ad752a3011fe3255c77b.scope - libcontainer container 49915b6c73f4eef7f3d5f1db176d706a04952946ba43ad752a3011fe3255c77b. 
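
Each completed pull above pairs the repo tag with the resolved content digest and reports wall-clock time ("in 4.884770799s" for kube-controllers). Roughly the same information can be obtained directly from containerd's Go client; a sketch, assuming the default socket and the k8s.io namespace that CRI-managed images live in:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// CRI-managed images live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	start := time.Now()
    	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/kube-controllers:v3.29.1",
    		containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Mirrors the log's 'repo tag ... repo digest ... in <duration>' line.
    	fmt.Printf("pulled %s (%s) in %s\n",
    		img.Name(), img.Target().Digest, time.Since(start))
    }
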
Jan 30 14:02:18.105562 systemd-networkd[1917]: cali0f3552d4e4b: Gained IPv6LL Jan 30 14:02:18.186722 containerd[2008]: time="2025-01-30T14:02:18.186644770Z" level=info msg="StartContainer for \"49915b6c73f4eef7f3d5f1db176d706a04952946ba43ad752a3011fe3255c77b\" returns successfully" Jan 30 14:02:18.380861 kubelet[3502]: I0130 14:02:18.380653 3502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5cf55bf796-kxvtl" podStartSLOduration=25.492577277 podStartE2EDuration="30.380629375s" podCreationTimestamp="2025-01-30 14:01:48 +0000 UTC" firstStartedPulling="2025-01-30 14:02:12.950279496 +0000 UTC m=+41.310509053" lastFinishedPulling="2025-01-30 14:02:17.83833151 +0000 UTC m=+46.198561151" observedRunningTime="2025-01-30 14:02:18.350770741 +0000 UTC m=+46.711000334" watchObservedRunningTime="2025-01-30 14:02:18.380629375 +0000 UTC m=+46.740858932" Jan 30 14:02:18.489593 systemd-networkd[1917]: calice6f1e04d89: Gained IPv6LL Jan 30 14:02:18.746824 systemd-networkd[1917]: califa80a80456d: Gained IPv6LL Jan 30 14:02:18.770744 systemd[1]: Started sshd@10-172.31.25.132:22-139.178.89.65:35540.service - OpenSSH per-connection server daemon (139.178.89.65:35540). Jan 30 14:02:18.821161 kubelet[3502]: I0130 14:02:18.821043 3502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xs7nv" podStartSLOduration=41.821020132 podStartE2EDuration="41.821020132s" podCreationTimestamp="2025-01-30 14:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:02:18.382125095 +0000 UTC m=+46.742354652" watchObservedRunningTime="2025-01-30 14:02:18.821020132 +0000 UTC m=+47.181249689" Jan 30 14:02:18.993831 sshd[5631]: Accepted publickey for core from 139.178.89.65 port 35540 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:18.997062 sshd[5631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:19.006656 systemd-logind[1994]: New session 11 of user core. Jan 30 14:02:19.014618 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 14:02:19.312679 sshd[5631]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:19.326847 systemd[1]: sshd@10-172.31.25.132:22-139.178.89.65:35540.service: Deactivated successfully. Jan 30 14:02:19.337008 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 14:02:19.341343 systemd-logind[1994]: Session 11 logged out. Waiting for processes to exit. Jan 30 14:02:19.346554 systemd-logind[1994]: Removed session 11. 
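
The pod_startup_latency_tracker line above encodes a fixed relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling, most precisely read from the monotonic m=+ offsets). Checking the calico-kube-controllers numbers:

    package main

    import "fmt"

    func main() {
    	// Monotonic offsets (m=+...) from the kubelet record above.
    	firstStartedPulling := 41.310509053
    	lastFinishedPulling := 46.198561151
    	e2e := 30.380629375 // watchObservedRunningTime - podCreationTimestamp

    	pull := lastFinishedPulling - firstStartedPulling // time spent pulling images
    	slo := e2e - pull                                 // startup time excluding pulls
    	fmt.Printf("pull=%.9fs slo=%.9fs\n", pull, slo)
    }

This reproduces the logged values exactly: 30.380629375 - (46.198561151 - 41.310509053) = 25.492577277, the podStartSLOduration kubelet reports. The coredns line that follows shows the degenerate case: both pull timestamps are the zero time ("0001-01-01 00:00:00"), so no pull happened and SLO duration equals E2E duration (41.821020132s).
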
Jan 30 14:02:20.279938 containerd[2008]: time="2025-01-30T14:02:20.279879131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:20.282570 containerd[2008]: time="2025-01-30T14:02:20.282527486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 30 14:02:20.284715 containerd[2008]: time="2025-01-30T14:02:20.284669909Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:20.290129 containerd[2008]: time="2025-01-30T14:02:20.289987150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:20.291647 containerd[2008]: time="2025-01-30T14:02:20.291582700Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 2.450892263s" Jan 30 14:02:20.291857 containerd[2008]: time="2025-01-30T14:02:20.291822664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 30 14:02:20.295536 containerd[2008]: time="2025-01-30T14:02:20.294936648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 14:02:20.302593 containerd[2008]: time="2025-01-30T14:02:20.302487293Z" level=info msg="CreateContainer within sandbox \"740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 14:02:20.336572 containerd[2008]: time="2025-01-30T14:02:20.336481887Z" level=info msg="CreateContainer within sandbox \"740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9578d5cd1c14c24e4465042936f8d0f8128cd5fdbdf7db8a9774b93aafe53609\"" Jan 30 14:02:20.339113 containerd[2008]: time="2025-01-30T14:02:20.338357344Z" level=info msg="StartContainer for \"9578d5cd1c14c24e4465042936f8d0f8128cd5fdbdf7db8a9774b93aafe53609\"" Jan 30 14:02:20.412902 systemd[1]: run-containerd-runc-k8s.io-9578d5cd1c14c24e4465042936f8d0f8128cd5fdbdf7db8a9774b93aafe53609-runc.WlWQ9V.mount: Deactivated successfully. Jan 30 14:02:20.422653 systemd[1]: Started cri-containerd-9578d5cd1c14c24e4465042936f8d0f8128cd5fdbdf7db8a9774b93aafe53609.scope - libcontainer container 9578d5cd1c14c24e4465042936f8d0f8128cd5fdbdf7db8a9774b93aafe53609. 
Jan 30 14:02:20.480907 containerd[2008]: time="2025-01-30T14:02:20.480736199Z" level=info msg="StartContainer for \"9578d5cd1c14c24e4465042936f8d0f8128cd5fdbdf7db8a9774b93aafe53609\" returns successfully" Jan 30 14:02:21.378963 ntpd[1989]: Listen normally on 7 vxlan.calico 192.168.35.192:123 Jan 30 14:02:21.379099 ntpd[1989]: Listen normally on 8 vxlan.calico [fe80::640a:9bff:fef4:f65b%4]:123 Jan 30 14:02:21.379182 ntpd[1989]: Listen normally on 9 cali0c935a718e2 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 30 14:02:21.379252 ntpd[1989]: Listen normally on 10 calie8cbc2de079 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 30 14:02:21.379355 ntpd[1989]: Listen normally on 11 cali425738c1df1 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 30 14:02:21.379428 ntpd[1989]: Listen normally on 12 cali0f3552d4e4b [fe80::ecee:eeff:feee:eeee%10]:123 Jan 30 14:02:21.379497 ntpd[1989]: Listen normally on 13 calice6f1e04d89 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 30 14:02:21.379564 ntpd[1989]: Listen normally on 14 califa80a80456d [fe80::ecee:eeff:feee:eeee%12]:123 Jan 30 14:02:23.103505 containerd[2008]: time="2025-01-30T14:02:23.103367701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:23.105035 containerd[2008]: time="2025-01-30T14:02:23.104960021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 30 14:02:23.106073 containerd[2008]: time="2025-01-30T14:02:23.105954070Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:23.110195 containerd[2008]: time="2025-01-30T14:02:23.110109851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:23.111950 containerd[2008]: time="2025-01-30T14:02:23.111714081Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 2.816710355s" Jan 30 14:02:23.111950
containerd[2008]: time="2025-01-30T14:02:23.111775695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 14:02:23.115031 containerd[2008]: time="2025-01-30T14:02:23.114607274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 14:02:23.117647 containerd[2008]: time="2025-01-30T14:02:23.117404588Z" level=info msg="CreateContainer within sandbox \"6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 14:02:23.153436 containerd[2008]: time="2025-01-30T14:02:23.153375453Z" level=info msg="CreateContainer within sandbox \"6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f2994e7c3bf2f7b4c6fa654503f83c4cda9e40d1905c09d5f2144c2fd8139397\"" Jan 30 14:02:23.157568 containerd[2008]: time="2025-01-30T14:02:23.156866318Z" level=info msg="StartContainer for \"f2994e7c3bf2f7b4c6fa654503f83c4cda9e40d1905c09d5f2144c2fd8139397\"" Jan 30 14:02:23.222669 systemd[1]: Started cri-containerd-f2994e7c3bf2f7b4c6fa654503f83c4cda9e40d1905c09d5f2144c2fd8139397.scope - libcontainer container f2994e7c3bf2f7b4c6fa654503f83c4cda9e40d1905c09d5f2144c2fd8139397. Jan 30 14:02:23.317693 containerd[2008]: time="2025-01-30T14:02:23.317567889Z" level=info msg="StartContainer for \"f2994e7c3bf2f7b4c6fa654503f83c4cda9e40d1905c09d5f2144c2fd8139397\" returns successfully" Jan 30 14:02:23.448180 containerd[2008]: time="2025-01-30T14:02:23.446697859Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:23.450538 containerd[2008]: time="2025-01-30T14:02:23.450491611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 14:02:23.454834 containerd[2008]: time="2025-01-30T14:02:23.454760500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 340.086509ms" Jan 30 14:02:23.455024 containerd[2008]: time="2025-01-30T14:02:23.454994918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 14:02:23.459407 containerd[2008]: time="2025-01-30T14:02:23.459359975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 14:02:23.461644 containerd[2008]: time="2025-01-30T14:02:23.461326654Z" level=info msg="CreateContainer within sandbox \"ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 14:02:23.489670 containerd[2008]: time="2025-01-30T14:02:23.488628730Z" level=info msg="CreateContainer within sandbox \"ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"53e26a073130b7cc8ee6380016ca3ff3155e2c65c7f09cdf7aabba80d2bdd6d3\"" Jan 30 14:02:23.494035 containerd[2008]: 
time="2025-01-30T14:02:23.493980921Z" level=info msg="StartContainer for \"53e26a073130b7cc8ee6380016ca3ff3155e2c65c7f09cdf7aabba80d2bdd6d3\"" Jan 30 14:02:23.504847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2615759714.mount: Deactivated successfully. Jan 30 14:02:23.559774 systemd[1]: Started cri-containerd-53e26a073130b7cc8ee6380016ca3ff3155e2c65c7f09cdf7aabba80d2bdd6d3.scope - libcontainer container 53e26a073130b7cc8ee6380016ca3ff3155e2c65c7f09cdf7aabba80d2bdd6d3. Jan 30 14:02:23.647703 containerd[2008]: time="2025-01-30T14:02:23.647212924Z" level=info msg="StartContainer for \"53e26a073130b7cc8ee6380016ca3ff3155e2c65c7f09cdf7aabba80d2bdd6d3\" returns successfully" Jan 30 14:02:24.363737 systemd[1]: Started sshd@11-172.31.25.132:22-139.178.89.65:40376.service - OpenSSH per-connection server daemon (139.178.89.65:40376). Jan 30 14:02:24.393425 kubelet[3502]: I0130 14:02:24.391594 3502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:02:24.429404 kubelet[3502]: I0130 14:02:24.428858 3502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c887dfbf4-j5ncj" podStartSLOduration=33.531502439 podStartE2EDuration="39.428833088s" podCreationTimestamp="2025-01-30 14:01:45 +0000 UTC" firstStartedPulling="2025-01-30 14:02:17.216479271 +0000 UTC m=+45.576708816" lastFinishedPulling="2025-01-30 14:02:23.113809896 +0000 UTC m=+51.474039465" observedRunningTime="2025-01-30 14:02:23.397563484 +0000 UTC m=+51.757793077" watchObservedRunningTime="2025-01-30 14:02:24.428833088 +0000 UTC m=+52.789062633" Jan 30 14:02:24.586634 sshd[5804]: Accepted publickey for core from 139.178.89.65 port 40376 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:24.590073 sshd[5804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:24.607784 systemd-logind[1994]: New session 12 of user core. Jan 30 14:02:24.610631 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 14:02:25.036858 sshd[5804]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:25.051294 systemd[1]: sshd@11-172.31.25.132:22-139.178.89.65:40376.service: Deactivated successfully. Jan 30 14:02:25.061394 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 14:02:25.065323 systemd-logind[1994]: Session 12 logged out. Waiting for processes to exit. Jan 30 14:02:25.069153 systemd-logind[1994]: Removed session 12. 
Jan 30 14:02:25.293597 containerd[2008]: time="2025-01-30T14:02:25.293440586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:25.297811 containerd[2008]: time="2025-01-30T14:02:25.296448941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 30 14:02:25.302919 containerd[2008]: time="2025-01-30T14:02:25.302226204Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:25.311948 containerd[2008]: time="2025-01-30T14:02:25.311770710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:25.317721 containerd[2008]: time="2025-01-30T14:02:25.317661682Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.857817973s" Jan 30 14:02:25.318087 containerd[2008]: time="2025-01-30T14:02:25.317882341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 30 14:02:25.322731 containerd[2008]: time="2025-01-30T14:02:25.322677765Z" level=info msg="CreateContainer within sandbox \"740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 14:02:25.359170 containerd[2008]: time="2025-01-30T14:02:25.359088662Z" level=info msg="CreateContainer within sandbox \"740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ce874152012b54a8191f607d34e63fb00c0f9fbf7322ccf254dd5c5ba956fad8\"" Jan 30 14:02:25.363621 containerd[2008]: time="2025-01-30T14:02:25.360445100Z" level=info msg="StartContainer for \"ce874152012b54a8191f607d34e63fb00c0f9fbf7322ccf254dd5c5ba956fad8\"" Jan 30 14:02:25.418391 kubelet[3502]: I0130 14:02:25.418339 3502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:02:25.461329 systemd[1]: Started cri-containerd-ce874152012b54a8191f607d34e63fb00c0f9fbf7322ccf254dd5c5ba956fad8.scope - libcontainer container ce874152012b54a8191f607d34e63fb00c0f9fbf7322ccf254dd5c5ba956fad8. 
Jan 30 14:02:25.576692 containerd[2008]: time="2025-01-30T14:02:25.576439284Z" level=info msg="StartContainer for \"ce874152012b54a8191f607d34e63fb00c0f9fbf7322ccf254dd5c5ba956fad8\" returns successfully" Jan 30 14:02:26.094855 kubelet[3502]: I0130 14:02:26.094807 3502 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 14:02:26.094855 kubelet[3502]: I0130 14:02:26.094856 3502 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 14:02:26.454991 kubelet[3502]: I0130 14:02:26.454885 3502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c887dfbf4-2c2xq" podStartSLOduration=35.366972372 podStartE2EDuration="41.454862723s" podCreationTimestamp="2025-01-30 14:01:45 +0000 UTC" firstStartedPulling="2025-01-30 14:02:17.368457163 +0000 UTC m=+45.728686708" lastFinishedPulling="2025-01-30 14:02:23.456347514 +0000 UTC m=+51.816577059" observedRunningTime="2025-01-30 14:02:24.430827657 +0000 UTC m=+52.791057238" watchObservedRunningTime="2025-01-30 14:02:26.454862723 +0000 UTC m=+54.815092280" Jan 30 14:02:26.456079 kubelet[3502]: I0130 14:02:26.455997 3502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qdvb4" podStartSLOduration=27.985702949 podStartE2EDuration="39.455974947s" podCreationTimestamp="2025-01-30 14:01:47 +0000 UTC" firstStartedPulling="2025-01-30 14:02:13.849555111 +0000 UTC m=+42.209784668" lastFinishedPulling="2025-01-30 14:02:25.319827109 +0000 UTC m=+53.680056666" observedRunningTime="2025-01-30 14:02:26.454088708 +0000 UTC m=+54.814318289" watchObservedRunningTime="2025-01-30 14:02:26.455974947 +0000 UTC m=+54.816204696" Jan 30 14:02:30.084866 systemd[1]: Started sshd@12-172.31.25.132:22-139.178.89.65:40392.service - OpenSSH per-connection server daemon (139.178.89.65:40392). Jan 30 14:02:30.277647 sshd[5864]: Accepted publickey for core from 139.178.89.65 port 40392 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:30.280888 sshd[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:30.289590 systemd-logind[1994]: New session 13 of user core. Jan 30 14:02:30.298582 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 14:02:30.553770 sshd[5864]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:30.560265 systemd[1]: sshd@12-172.31.25.132:22-139.178.89.65:40392.service: Deactivated successfully. Jan 30 14:02:30.564014 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 14:02:30.566383 systemd-logind[1994]: Session 13 logged out. Waiting for processes to exit. Jan 30 14:02:30.568991 systemd-logind[1994]: Removed session 13. Jan 30 14:02:30.595920 systemd[1]: Started sshd@13-172.31.25.132:22-139.178.89.65:40408.service - OpenSSH per-connection server daemon (139.178.89.65:40408). Jan 30 14:02:30.778746 sshd[5878]: Accepted publickey for core from 139.178.89.65 port 40408 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:30.781464 sshd[5878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:30.789404 systemd-logind[1994]: New session 14 of user core. Jan 30 14:02:30.799580 systemd[1]: Started session-14.scope - Session 14 of User core. 
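
The csi_plugin.go lines above record kubelet validating and then registering the Tigera CSI driver through the socket that the node-driver-registrar container (pulled just before) publishes under /var/lib/kubelet/plugins/. A driver behind such a socket can be probed with the standard CSI Identity service; a sketch using the CSI spec's Go bindings, with the socket path taken from the log (whether the socket is reachable from outside kubelet's environment depends on the host setup):

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	csi "github.com/container-storage-interface/spec/lib/go/csi"
    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    )

    func main() {
    	// Socket path as registered in the kubelet record above.
    	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	id := csi.NewIdentityClient(conn)
    	info, err := id.GetPluginInfo(context.Background(), &csi.GetPluginInfoRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Should report name "csi.tigera.io", matching the registration record.
    	fmt.Println(info.GetName(), info.GetVendorVersion())
    }
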
Jan 30 14:02:31.112039 sshd[5878]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:31.121188 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 14:02:31.122955 systemd[1]: sshd@13-172.31.25.132:22-139.178.89.65:40408.service: Deactivated successfully. Jan 30 14:02:31.138881 systemd-logind[1994]: Session 14 logged out. Waiting for processes to exit. Jan 30 14:02:31.162834 systemd[1]: Started sshd@14-172.31.25.132:22-139.178.89.65:44346.service - OpenSSH per-connection server daemon (139.178.89.65:44346). Jan 30 14:02:31.165528 systemd-logind[1994]: Removed session 14. Jan 30 14:02:31.344151 sshd[5889]: Accepted publickey for core from 139.178.89.65 port 44346 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:31.346856 sshd[5889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:31.356097 systemd-logind[1994]: New session 15 of user core. Jan 30 14:02:31.361655 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 14:02:31.600799 sshd[5889]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:31.607994 systemd[1]: sshd@14-172.31.25.132:22-139.178.89.65:44346.service: Deactivated successfully. Jan 30 14:02:31.611271 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 14:02:31.613486 systemd-logind[1994]: Session 15 logged out. Waiting for processes to exit. Jan 30 14:02:31.615505 systemd-logind[1994]: Removed session 15. Jan 30 14:02:31.938457 containerd[2008]: time="2025-01-30T14:02:31.938330352Z" level=info msg="StopPodSandbox for \"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\"" Jan 30 14:02:32.072039 containerd[2008]: 2025-01-30 14:02:32.013 [WARNING][5919] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b56eeb33-9e79-4757-a325-1b7299a49fcc", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7", Pod:"csi-node-driver-qdvb4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8cbc2de079", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:32.072039 containerd[2008]: 2025-01-30 14:02:32.013 [INFO][5919] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Jan 30 14:02:32.072039 containerd[2008]: 2025-01-30 14:02:32.013 [INFO][5919] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" iface="eth0" netns="" Jan 30 14:02:32.072039 containerd[2008]: 2025-01-30 14:02:32.013 [INFO][5919] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Jan 30 14:02:32.072039 containerd[2008]: 2025-01-30 14:02:32.013 [INFO][5919] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Jan 30 14:02:32.072039 containerd[2008]: 2025-01-30 14:02:32.051 [INFO][5927] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" HandleID="k8s-pod-network.8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Workload="ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0" Jan 30 14:02:32.072039 containerd[2008]: 2025-01-30 14:02:32.052 [INFO][5927] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:32.072039 containerd[2008]: 2025-01-30 14:02:32.052 [INFO][5927] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:02:32.072039 containerd[2008]: 2025-01-30 14:02:32.064 [WARNING][5927] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" HandleID="k8s-pod-network.8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Workload="ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0" Jan 30 14:02:32.072039 containerd[2008]: 2025-01-30 14:02:32.064 [INFO][5927] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" HandleID="k8s-pod-network.8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Workload="ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0" Jan 30 14:02:32.072039 containerd[2008]: 2025-01-30 14:02:32.067 [INFO][5927] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:02:32.072039 containerd[2008]: 2025-01-30 14:02:32.069 [INFO][5919] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Jan 30 14:02:32.073492 containerd[2008]: time="2025-01-30T14:02:32.072141438Z" level=info msg="TearDown network for sandbox \"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\" successfully" Jan 30 14:02:32.073492 containerd[2008]: time="2025-01-30T14:02:32.072218864Z" level=info msg="StopPodSandbox for \"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\" returns successfully" Jan 30 14:02:32.074471 containerd[2008]: time="2025-01-30T14:02:32.073925793Z" level=info msg="RemovePodSandbox for \"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\"" Jan 30 14:02:32.074471 containerd[2008]: time="2025-01-30T14:02:32.073986448Z" level=info msg="Forcibly stopping sandbox \"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\"" Jan 30 14:02:32.207484 containerd[2008]: 2025-01-30 14:02:32.151 [WARNING][5945] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b56eeb33-9e79-4757-a325-1b7299a49fcc", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"740df4ab57572414d76928460b98dd1759fdb0b3fb7231bbb781b093dd6cd3a7", Pod:"csi-node-driver-qdvb4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8cbc2de079", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:32.207484 containerd[2008]: 2025-01-30 14:02:32.151 [INFO][5945] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Jan 30 14:02:32.207484 containerd[2008]: 2025-01-30 14:02:32.151 [INFO][5945] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" iface="eth0" netns="" Jan 30 14:02:32.207484 containerd[2008]: 2025-01-30 14:02:32.151 [INFO][5945] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Jan 30 14:02:32.207484 containerd[2008]: 2025-01-30 14:02:32.151 [INFO][5945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Jan 30 14:02:32.207484 containerd[2008]: 2025-01-30 14:02:32.186 [INFO][5951] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" HandleID="k8s-pod-network.8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Workload="ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0" Jan 30 14:02:32.207484 containerd[2008]: 2025-01-30 14:02:32.186 [INFO][5951] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:32.207484 containerd[2008]: 2025-01-30 14:02:32.186 [INFO][5951] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:02:32.207484 containerd[2008]: 2025-01-30 14:02:32.199 [WARNING][5951] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" HandleID="k8s-pod-network.8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Workload="ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0" Jan 30 14:02:32.207484 containerd[2008]: 2025-01-30 14:02:32.199 [INFO][5951] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" HandleID="k8s-pod-network.8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Workload="ip--172--31--25--132-k8s-csi--node--driver--qdvb4-eth0" Jan 30 14:02:32.207484 containerd[2008]: 2025-01-30 14:02:32.202 [INFO][5951] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:02:32.207484 containerd[2008]: 2025-01-30 14:02:32.204 [INFO][5945] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a" Jan 30 14:02:32.207484 containerd[2008]: time="2025-01-30T14:02:32.207444124Z" level=info msg="TearDown network for sandbox \"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\" successfully" Jan 30 14:02:32.213579 containerd[2008]: time="2025-01-30T14:02:32.213499314Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:02:32.213735 containerd[2008]: time="2025-01-30T14:02:32.213609805Z" level=info msg="RemovePodSandbox \"8376fa91e18a80a7274ef1b55d2b206fa85f7e86613d77cac765c9687aa41d2a\" returns successfully" Jan 30 14:02:32.215052 containerd[2008]: time="2025-01-30T14:02:32.214498345Z" level=info msg="StopPodSandbox for \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\"" Jan 30 14:02:32.337885 containerd[2008]: 2025-01-30 14:02:32.278 [WARNING][5969] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d3be0557-0e75-487a-81b6-3ddc28a8a3e9", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718", Pod:"coredns-668d6bf9bc-xs7nv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa80a80456d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:32.337885 containerd[2008]: 2025-01-30 14:02:32.278 [INFO][5969] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Jan 30 14:02:32.337885 containerd[2008]: 2025-01-30 14:02:32.278 [INFO][5969] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" iface="eth0" netns="" Jan 30 14:02:32.337885 containerd[2008]: 2025-01-30 14:02:32.279 [INFO][5969] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Jan 30 14:02:32.337885 containerd[2008]: 2025-01-30 14:02:32.279 [INFO][5969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Jan 30 14:02:32.337885 containerd[2008]: 2025-01-30 14:02:32.319 [INFO][5976] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" HandleID="k8s-pod-network.6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0" Jan 30 14:02:32.337885 containerd[2008]: 2025-01-30 14:02:32.319 [INFO][5976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:32.337885 containerd[2008]: 2025-01-30 14:02:32.319 [INFO][5976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:02:32.337885 containerd[2008]: 2025-01-30 14:02:32.330 [WARNING][5976] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" HandleID="k8s-pod-network.6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0" Jan 30 14:02:32.337885 containerd[2008]: 2025-01-30 14:02:32.330 [INFO][5976] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" HandleID="k8s-pod-network.6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0" Jan 30 14:02:32.337885 containerd[2008]: 2025-01-30 14:02:32.333 [INFO][5976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:02:32.337885 containerd[2008]: 2025-01-30 14:02:32.335 [INFO][5969] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Jan 30 14:02:32.338766 containerd[2008]: time="2025-01-30T14:02:32.338508560Z" level=info msg="TearDown network for sandbox \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\" successfully" Jan 30 14:02:32.338766 containerd[2008]: time="2025-01-30T14:02:32.338548840Z" level=info msg="StopPodSandbox for \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\" returns successfully" Jan 30 14:02:32.339359 containerd[2008]: time="2025-01-30T14:02:32.339236964Z" level=info msg="RemovePodSandbox for \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\"" Jan 30 14:02:32.339587 containerd[2008]: time="2025-01-30T14:02:32.339295890Z" level=info msg="Forcibly stopping sandbox \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\"" Jan 30 14:02:32.501633 containerd[2008]: 2025-01-30 14:02:32.410 [WARNING][5994] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d3be0557-0e75-487a-81b6-3ddc28a8a3e9", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"3cbce58e59c3a62fee31056ba6c31dc367e839418d494d52c92b6437420d9718", Pod:"coredns-668d6bf9bc-xs7nv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa80a80456d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:32.501633 containerd[2008]: 2025-01-30 14:02:32.410 [INFO][5994] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Jan 30 14:02:32.501633 containerd[2008]: 2025-01-30 14:02:32.411 [INFO][5994] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" iface="eth0" netns="" Jan 30 14:02:32.501633 containerd[2008]: 2025-01-30 14:02:32.411 [INFO][5994] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Jan 30 14:02:32.501633 containerd[2008]: 2025-01-30 14:02:32.411 [INFO][5994] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Jan 30 14:02:32.501633 containerd[2008]: 2025-01-30 14:02:32.480 [INFO][6001] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" HandleID="k8s-pod-network.6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0" Jan 30 14:02:32.501633 containerd[2008]: 2025-01-30 14:02:32.480 [INFO][6001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:32.501633 containerd[2008]: 2025-01-30 14:02:32.480 [INFO][6001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:02:32.501633 containerd[2008]: 2025-01-30 14:02:32.493 [WARNING][6001] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" HandleID="k8s-pod-network.6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0" Jan 30 14:02:32.501633 containerd[2008]: 2025-01-30 14:02:32.493 [INFO][6001] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" HandleID="k8s-pod-network.6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--xs7nv-eth0" Jan 30 14:02:32.501633 containerd[2008]: 2025-01-30 14:02:32.497 [INFO][6001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:02:32.501633 containerd[2008]: 2025-01-30 14:02:32.499 [INFO][5994] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6" Jan 30 14:02:32.502987 containerd[2008]: time="2025-01-30T14:02:32.502700432Z" level=info msg="TearDown network for sandbox \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\" successfully" Jan 30 14:02:32.510544 containerd[2008]: time="2025-01-30T14:02:32.510483850Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:02:32.510926 containerd[2008]: time="2025-01-30T14:02:32.510768248Z" level=info msg="RemovePodSandbox \"6400a6307c56ca5966fd9eba24e5263f93f7e90ea0bf9dca06026033b3ee94f6\" returns successfully" Jan 30 14:02:32.511415 containerd[2008]: time="2025-01-30T14:02:32.511379354Z" level=info msg="StopPodSandbox for \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\"" Jan 30 14:02:32.656170 containerd[2008]: 2025-01-30 14:02:32.575 [WARNING][6024] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"96ccd58a-bb33-448c-b933-754229aff909", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120", Pod:"coredns-668d6bf9bc-whttc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali425738c1df1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:32.656170 containerd[2008]: 2025-01-30 14:02:32.575 [INFO][6024] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Jan 30 14:02:32.656170 containerd[2008]: 2025-01-30 14:02:32.576 [INFO][6024] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" iface="eth0" netns="" Jan 30 14:02:32.656170 containerd[2008]: 2025-01-30 14:02:32.576 [INFO][6024] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Jan 30 14:02:32.656170 containerd[2008]: 2025-01-30 14:02:32.576 [INFO][6024] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Jan 30 14:02:32.656170 containerd[2008]: 2025-01-30 14:02:32.618 [INFO][6030] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" HandleID="k8s-pod-network.4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0" Jan 30 14:02:32.656170 containerd[2008]: 2025-01-30 14:02:32.618 [INFO][6030] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:32.656170 containerd[2008]: 2025-01-30 14:02:32.618 [INFO][6030] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:02:32.656170 containerd[2008]: 2025-01-30 14:02:32.635 [WARNING][6030] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" HandleID="k8s-pod-network.4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0" Jan 30 14:02:32.656170 containerd[2008]: 2025-01-30 14:02:32.636 [INFO][6030] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" HandleID="k8s-pod-network.4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0" Jan 30 14:02:32.656170 containerd[2008]: 2025-01-30 14:02:32.644 [INFO][6030] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:02:32.656170 containerd[2008]: 2025-01-30 14:02:32.649 [INFO][6024] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Jan 30 14:02:32.658877 containerd[2008]: time="2025-01-30T14:02:32.656206496Z" level=info msg="TearDown network for sandbox \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\" successfully" Jan 30 14:02:32.658877 containerd[2008]: time="2025-01-30T14:02:32.656244387Z" level=info msg="StopPodSandbox for \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\" returns successfully" Jan 30 14:02:32.658877 containerd[2008]: time="2025-01-30T14:02:32.657974548Z" level=info msg="RemovePodSandbox for \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\"" Jan 30 14:02:32.658877 containerd[2008]: time="2025-01-30T14:02:32.658024012Z" level=info msg="Forcibly stopping sandbox \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\"" Jan 30 14:02:32.805232 containerd[2008]: 2025-01-30 14:02:32.741 [WARNING][6063] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"96ccd58a-bb33-448c-b933-754229aff909", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"427c7a8605db6cd6aec55cae3ef68e3e70233969653b8d3e86197a88246d9120", Pod:"coredns-668d6bf9bc-whttc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali425738c1df1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:32.805232 containerd[2008]: 2025-01-30 14:02:32.741 [INFO][6063] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Jan 30 14:02:32.805232 containerd[2008]: 2025-01-30 14:02:32.741 [INFO][6063] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" iface="eth0" netns="" Jan 30 14:02:32.805232 containerd[2008]: 2025-01-30 14:02:32.741 [INFO][6063] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Jan 30 14:02:32.805232 containerd[2008]: 2025-01-30 14:02:32.741 [INFO][6063] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Jan 30 14:02:32.805232 containerd[2008]: 2025-01-30 14:02:32.784 [INFO][6072] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" HandleID="k8s-pod-network.4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0" Jan 30 14:02:32.805232 containerd[2008]: 2025-01-30 14:02:32.784 [INFO][6072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:32.805232 containerd[2008]: 2025-01-30 14:02:32.784 [INFO][6072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:02:32.805232 containerd[2008]: 2025-01-30 14:02:32.796 [WARNING][6072] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" HandleID="k8s-pod-network.4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0" Jan 30 14:02:32.805232 containerd[2008]: 2025-01-30 14:02:32.796 [INFO][6072] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" HandleID="k8s-pod-network.4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Workload="ip--172--31--25--132-k8s-coredns--668d6bf9bc--whttc-eth0" Jan 30 14:02:32.805232 containerd[2008]: 2025-01-30 14:02:32.800 [INFO][6072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:02:32.805232 containerd[2008]: 2025-01-30 14:02:32.802 [INFO][6063] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6" Jan 30 14:02:32.807151 containerd[2008]: time="2025-01-30T14:02:32.806134748Z" level=info msg="TearDown network for sandbox \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\" successfully" Jan 30 14:02:32.812617 containerd[2008]: time="2025-01-30T14:02:32.812551955Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:02:32.812617 containerd[2008]: time="2025-01-30T14:02:32.812654282Z" level=info msg="RemovePodSandbox \"4b4d0f3ccd5260397e2078b652d2c845a2fa35c21e3585251a0ca769ebad1fa6\" returns successfully" Jan 30 14:02:32.813484 containerd[2008]: time="2025-01-30T14:02:32.813394644Z" level=info msg="StopPodSandbox for \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\"" Jan 30 14:02:32.936953 containerd[2008]: 2025-01-30 14:02:32.875 [WARNING][6091] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0", GenerateName:"calico-apiserver-7c887dfbf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d7b309e-de44-4e46-a503-7451528eddd3", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c887dfbf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b", Pod:"calico-apiserver-7c887dfbf4-j5ncj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f3552d4e4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:32.936953 containerd[2008]: 2025-01-30 14:02:32.875 [INFO][6091] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Jan 30 14:02:32.936953 containerd[2008]: 2025-01-30 14:02:32.875 [INFO][6091] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" iface="eth0" netns="" Jan 30 14:02:32.936953 containerd[2008]: 2025-01-30 14:02:32.876 [INFO][6091] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Jan 30 14:02:32.936953 containerd[2008]: 2025-01-30 14:02:32.876 [INFO][6091] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Jan 30 14:02:32.936953 containerd[2008]: 2025-01-30 14:02:32.913 [INFO][6098] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" HandleID="k8s-pod-network.af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0" Jan 30 14:02:32.936953 containerd[2008]: 2025-01-30 14:02:32.913 [INFO][6098] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:32.936953 containerd[2008]: 2025-01-30 14:02:32.913 [INFO][6098] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:02:32.936953 containerd[2008]: 2025-01-30 14:02:32.928 [WARNING][6098] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" HandleID="k8s-pod-network.af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0" Jan 30 14:02:32.936953 containerd[2008]: 2025-01-30 14:02:32.928 [INFO][6098] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" HandleID="k8s-pod-network.af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0" Jan 30 14:02:32.936953 containerd[2008]: 2025-01-30 14:02:32.932 [INFO][6098] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:02:32.936953 containerd[2008]: 2025-01-30 14:02:32.934 [INFO][6091] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Jan 30 14:02:32.936953 containerd[2008]: time="2025-01-30T14:02:32.936884051Z" level=info msg="TearDown network for sandbox \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\" successfully" Jan 30 14:02:32.937871 containerd[2008]: time="2025-01-30T14:02:32.936921017Z" level=info msg="StopPodSandbox for \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\" returns successfully" Jan 30 14:02:32.938710 containerd[2008]: time="2025-01-30T14:02:32.938570966Z" level=info msg="RemovePodSandbox for \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\"" Jan 30 14:02:32.938710 containerd[2008]: time="2025-01-30T14:02:32.938643758Z" level=info msg="Forcibly stopping sandbox \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\"" Jan 30 14:02:33.094817 containerd[2008]: 2025-01-30 14:02:33.010 [WARNING][6116] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0", GenerateName:"calico-apiserver-7c887dfbf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d7b309e-de44-4e46-a503-7451528eddd3", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c887dfbf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"6a3c58a9e04e91ea2d36b05f45b2651de4fc501ffc0ad241893e3441d23f0b8b", Pod:"calico-apiserver-7c887dfbf4-j5ncj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f3552d4e4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:33.094817 containerd[2008]: 2025-01-30 14:02:33.011 [INFO][6116] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Jan 30 14:02:33.094817 containerd[2008]: 2025-01-30 14:02:33.011 [INFO][6116] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" iface="eth0" netns="" Jan 30 14:02:33.094817 containerd[2008]: 2025-01-30 14:02:33.011 [INFO][6116] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Jan 30 14:02:33.094817 containerd[2008]: 2025-01-30 14:02:33.011 [INFO][6116] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Jan 30 14:02:33.094817 containerd[2008]: 2025-01-30 14:02:33.060 [INFO][6122] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" HandleID="k8s-pod-network.af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0" Jan 30 14:02:33.094817 containerd[2008]: 2025-01-30 14:02:33.060 [INFO][6122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:33.094817 containerd[2008]: 2025-01-30 14:02:33.060 [INFO][6122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:02:33.094817 containerd[2008]: 2025-01-30 14:02:33.080 [WARNING][6122] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" HandleID="k8s-pod-network.af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0" Jan 30 14:02:33.094817 containerd[2008]: 2025-01-30 14:02:33.080 [INFO][6122] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" HandleID="k8s-pod-network.af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--j5ncj-eth0" Jan 30 14:02:33.094817 containerd[2008]: 2025-01-30 14:02:33.086 [INFO][6122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:02:33.094817 containerd[2008]: 2025-01-30 14:02:33.091 [INFO][6116] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb" Jan 30 14:02:33.094817 containerd[2008]: time="2025-01-30T14:02:33.094798982Z" level=info msg="TearDown network for sandbox \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\" successfully" Jan 30 14:02:33.102463 containerd[2008]: time="2025-01-30T14:02:33.102354046Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:02:33.102748 containerd[2008]: time="2025-01-30T14:02:33.102480685Z" level=info msg="RemovePodSandbox \"af8a2a77cef435f0700bce861baa99b93549d6e87347b78206e07a8930f3bbbb\" returns successfully" Jan 30 14:02:33.103324 containerd[2008]: time="2025-01-30T14:02:33.103091838Z" level=info msg="StopPodSandbox for \"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\"" Jan 30 14:02:33.326627 containerd[2008]: 2025-01-30 14:02:33.222 [WARNING][6141] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0", GenerateName:"calico-kube-controllers-5cf55bf796-", Namespace:"calico-system", SelfLink:"", UID:"172e4ddf-6abc-4051-bf69-e492ba18c815", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cf55bf796", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2", Pod:"calico-kube-controllers-5cf55bf796-kxvtl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c935a718e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:33.326627 containerd[2008]: 2025-01-30 14:02:33.223 [INFO][6141] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Jan 30 14:02:33.326627 containerd[2008]: 2025-01-30 14:02:33.223 [INFO][6141] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" iface="eth0" netns="" Jan 30 14:02:33.326627 containerd[2008]: 2025-01-30 14:02:33.223 [INFO][6141] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Jan 30 14:02:33.326627 containerd[2008]: 2025-01-30 14:02:33.223 [INFO][6141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Jan 30 14:02:33.326627 containerd[2008]: 2025-01-30 14:02:33.294 [INFO][6147] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" HandleID="k8s-pod-network.555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Workload="ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0" Jan 30 14:02:33.326627 containerd[2008]: 2025-01-30 14:02:33.296 [INFO][6147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:33.326627 containerd[2008]: 2025-01-30 14:02:33.296 [INFO][6147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:02:33.326627 containerd[2008]: 2025-01-30 14:02:33.311 [WARNING][6147] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" HandleID="k8s-pod-network.555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Workload="ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0" Jan 30 14:02:33.326627 containerd[2008]: 2025-01-30 14:02:33.311 [INFO][6147] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" HandleID="k8s-pod-network.555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Workload="ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0" Jan 30 14:02:33.326627 containerd[2008]: 2025-01-30 14:02:33.319 [INFO][6147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:02:33.326627 containerd[2008]: 2025-01-30 14:02:33.323 [INFO][6141] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Jan 30 14:02:33.329365 containerd[2008]: time="2025-01-30T14:02:33.327729070Z" level=info msg="TearDown network for sandbox \"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\" successfully" Jan 30 14:02:33.329365 containerd[2008]: time="2025-01-30T14:02:33.327797360Z" level=info msg="StopPodSandbox for \"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\" returns successfully" Jan 30 14:02:33.329365 containerd[2008]: time="2025-01-30T14:02:33.328720538Z" level=info msg="RemovePodSandbox for \"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\"" Jan 30 14:02:33.329365 containerd[2008]: time="2025-01-30T14:02:33.328808770Z" level=info msg="Forcibly stopping sandbox \"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\"" Jan 30 14:02:33.536713 containerd[2008]: 2025-01-30 14:02:33.434 [WARNING][6168] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0", GenerateName:"calico-kube-controllers-5cf55bf796-", Namespace:"calico-system", SelfLink:"", UID:"172e4ddf-6abc-4051-bf69-e492ba18c815", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cf55bf796", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"a8ca2530e4297efc2612b6569fd9f4a42ad19752fb6e973054c6284e29d02ff2", Pod:"calico-kube-controllers-5cf55bf796-kxvtl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c935a718e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:33.536713 containerd[2008]: 2025-01-30 14:02:33.435 [INFO][6168] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Jan 30 14:02:33.536713 containerd[2008]: 2025-01-30 14:02:33.435 [INFO][6168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" iface="eth0" netns="" Jan 30 14:02:33.536713 containerd[2008]: 2025-01-30 14:02:33.435 [INFO][6168] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Jan 30 14:02:33.536713 containerd[2008]: 2025-01-30 14:02:33.435 [INFO][6168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Jan 30 14:02:33.536713 containerd[2008]: 2025-01-30 14:02:33.512 [INFO][6174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" HandleID="k8s-pod-network.555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Workload="ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0" Jan 30 14:02:33.536713 containerd[2008]: 2025-01-30 14:02:33.513 [INFO][6174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:33.536713 containerd[2008]: 2025-01-30 14:02:33.513 [INFO][6174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:02:33.536713 containerd[2008]: 2025-01-30 14:02:33.529 [WARNING][6174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" HandleID="k8s-pod-network.555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Workload="ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0" Jan 30 14:02:33.536713 containerd[2008]: 2025-01-30 14:02:33.529 [INFO][6174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" HandleID="k8s-pod-network.555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Workload="ip--172--31--25--132-k8s-calico--kube--controllers--5cf55bf796--kxvtl-eth0" Jan 30 14:02:33.536713 containerd[2008]: 2025-01-30 14:02:33.531 [INFO][6174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:02:33.536713 containerd[2008]: 2025-01-30 14:02:33.534 [INFO][6168] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d" Jan 30 14:02:33.537542 containerd[2008]: time="2025-01-30T14:02:33.536801052Z" level=info msg="TearDown network for sandbox \"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\" successfully" Jan 30 14:02:33.544414 containerd[2008]: time="2025-01-30T14:02:33.544289974Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:02:33.544572 containerd[2008]: time="2025-01-30T14:02:33.544435175Z" level=info msg="RemovePodSandbox \"555d69efb812584d3025b02f1b65b47ad8aeecdbb922619d43c3b4f6258f881d\" returns successfully" Jan 30 14:02:33.545202 containerd[2008]: time="2025-01-30T14:02:33.545047133Z" level=info msg="StopPodSandbox for \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\"" Jan 30 14:02:33.692277 containerd[2008]: 2025-01-30 14:02:33.608 [WARNING][6193] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0", GenerateName:"calico-apiserver-7c887dfbf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c887dfbf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a", Pod:"calico-apiserver-7c887dfbf4-2c2xq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calice6f1e04d89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:33.692277 containerd[2008]: 2025-01-30 14:02:33.609 [INFO][6193] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Jan 30 14:02:33.692277 containerd[2008]: 2025-01-30 14:02:33.609 [INFO][6193] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" iface="eth0" netns="" Jan 30 14:02:33.692277 containerd[2008]: 2025-01-30 14:02:33.609 [INFO][6193] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Jan 30 14:02:33.692277 containerd[2008]: 2025-01-30 14:02:33.609 [INFO][6193] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Jan 30 14:02:33.692277 containerd[2008]: 2025-01-30 14:02:33.661 [INFO][6200] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" HandleID="k8s-pod-network.ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0" Jan 30 14:02:33.692277 containerd[2008]: 2025-01-30 14:02:33.663 [INFO][6200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:33.692277 containerd[2008]: 2025-01-30 14:02:33.663 [INFO][6200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:02:33.692277 containerd[2008]: 2025-01-30 14:02:33.685 [WARNING][6200] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" HandleID="k8s-pod-network.ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0" Jan 30 14:02:33.692277 containerd[2008]: 2025-01-30 14:02:33.685 [INFO][6200] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" HandleID="k8s-pod-network.ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0" Jan 30 14:02:33.692277 containerd[2008]: 2025-01-30 14:02:33.688 [INFO][6200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:02:33.692277 containerd[2008]: 2025-01-30 14:02:33.689 [INFO][6193] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Jan 30 14:02:33.693650 containerd[2008]: time="2025-01-30T14:02:33.692454773Z" level=info msg="TearDown network for sandbox \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\" successfully" Jan 30 14:02:33.693650 containerd[2008]: time="2025-01-30T14:02:33.692520722Z" level=info msg="StopPodSandbox for \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\" returns successfully" Jan 30 14:02:33.693650 containerd[2008]: time="2025-01-30T14:02:33.693227227Z" level=info msg="RemovePodSandbox for \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\"" Jan 30 14:02:33.693650 containerd[2008]: time="2025-01-30T14:02:33.693289226Z" level=info msg="Forcibly stopping sandbox \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\"" Jan 30 14:02:33.824158 containerd[2008]: 2025-01-30 14:02:33.759 [WARNING][6218] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0", GenerateName:"calico-apiserver-7c887dfbf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"0a1653fd-bac3-4c9d-83aa-3ff2020f5cf8", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c887dfbf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-132", ContainerID:"ad18d36aedc862b62569136ef27338152a2d12f8e97a3443faa283e529e58d1a", Pod:"calico-apiserver-7c887dfbf4-2c2xq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calice6f1e04d89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:02:33.824158 containerd[2008]: 2025-01-30 14:02:33.759 [INFO][6218] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Jan 30 14:02:33.824158 containerd[2008]: 2025-01-30 14:02:33.759 [INFO][6218] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" iface="eth0" netns="" Jan 30 14:02:33.824158 containerd[2008]: 2025-01-30 14:02:33.759 [INFO][6218] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Jan 30 14:02:33.824158 containerd[2008]: 2025-01-30 14:02:33.760 [INFO][6218] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Jan 30 14:02:33.824158 containerd[2008]: 2025-01-30 14:02:33.797 [INFO][6224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" HandleID="k8s-pod-network.ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0" Jan 30 14:02:33.824158 containerd[2008]: 2025-01-30 14:02:33.797 [INFO][6224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:02:33.824158 containerd[2008]: 2025-01-30 14:02:33.797 [INFO][6224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:02:33.824158 containerd[2008]: 2025-01-30 14:02:33.813 [WARNING][6224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" HandleID="k8s-pod-network.ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0" Jan 30 14:02:33.824158 containerd[2008]: 2025-01-30 14:02:33.813 [INFO][6224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" HandleID="k8s-pod-network.ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Workload="ip--172--31--25--132-k8s-calico--apiserver--7c887dfbf4--2c2xq-eth0" Jan 30 14:02:33.824158 containerd[2008]: 2025-01-30 14:02:33.818 [INFO][6224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:02:33.824158 containerd[2008]: 2025-01-30 14:02:33.821 [INFO][6218] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e" Jan 30 14:02:33.826385 containerd[2008]: time="2025-01-30T14:02:33.824136317Z" level=info msg="TearDown network for sandbox \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\" successfully" Jan 30 14:02:33.831523 containerd[2008]: time="2025-01-30T14:02:33.831450913Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:02:33.831757 containerd[2008]: time="2025-01-30T14:02:33.831726367Z" level=info msg="RemovePodSandbox \"ab740fa45680a371302ad246fca91098ab962329a83eed17371dd2d22b3e248e\" returns successfully" Jan 30 14:02:36.651525 systemd[1]: Started sshd@15-172.31.25.132:22-139.178.89.65:44354.service - OpenSSH per-connection server daemon (139.178.89.65:44354). Jan 30 14:02:36.832858 sshd[6231]: Accepted publickey for core from 139.178.89.65 port 44354 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:36.835513 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:36.844650 systemd-logind[1994]: New session 16 of user core. Jan 30 14:02:36.853593 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 14:02:37.097668 sshd[6231]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:37.104971 systemd[1]: sshd@15-172.31.25.132:22-139.178.89.65:44354.service: Deactivated successfully. Jan 30 14:02:37.108965 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 14:02:37.111379 systemd-logind[1994]: Session 16 logged out. Waiting for processes to exit. Jan 30 14:02:37.113909 systemd-logind[1994]: Removed session 16. Jan 30 14:02:42.139842 systemd[1]: Started sshd@16-172.31.25.132:22-139.178.89.65:37240.service - OpenSSH per-connection server daemon (139.178.89.65:37240). Jan 30 14:02:42.327724 sshd[6270]: Accepted publickey for core from 139.178.89.65 port 37240 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:42.330956 sshd[6270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:42.340424 systemd-logind[1994]: New session 17 of user core. Jan 30 14:02:42.347577 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 14:02:42.631563 sshd[6270]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:42.642545 systemd[1]: sshd@16-172.31.25.132:22-139.178.89.65:37240.service: Deactivated successfully. 
Jan 30 14:02:42.647077 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 14:02:42.649771 systemd-logind[1994]: Session 17 logged out. Waiting for processes to exit.
Jan 30 14:02:42.652488 systemd-logind[1994]: Removed session 17.
Jan 30 14:02:43.580210 kubelet[3502]: I0130 14:02:43.579867 3502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 14:02:44.554341 kubelet[3502]: I0130 14:02:44.553509 3502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 14:02:47.678894 systemd[1]: Started sshd@17-172.31.25.132:22-139.178.89.65:37248.service - OpenSSH per-connection server daemon (139.178.89.65:37248).
Jan 30 14:02:47.883720 sshd[6289]: Accepted publickey for core from 139.178.89.65 port 37248 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:02:47.887798 sshd[6289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:02:47.897780 systemd-logind[1994]: New session 18 of user core.
Jan 30 14:02:47.906714 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 14:02:48.199200 sshd[6289]: pam_unix(sshd:session): session closed for user core
Jan 30 14:02:48.207449 systemd[1]: sshd@17-172.31.25.132:22-139.178.89.65:37248.service: Deactivated successfully.
Jan 30 14:02:48.207925 systemd-logind[1994]: Session 18 logged out. Waiting for processes to exit.
Jan 30 14:02:48.216749 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 14:02:48.226771 systemd-logind[1994]: Removed session 18.
Jan 30 14:02:48.824985 systemd[1]: run-containerd-runc-k8s.io-49915b6c73f4eef7f3d5f1db176d706a04952946ba43ad752a3011fe3255c77b-runc.KZ1wUs.mount: Deactivated successfully.
Jan 30 14:02:53.242854 systemd[1]: Started sshd@18-172.31.25.132:22-139.178.89.65:39510.service - OpenSSH per-connection server daemon (139.178.89.65:39510).
Jan 30 14:02:53.421505 sshd[6328]: Accepted publickey for core from 139.178.89.65 port 39510 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:02:53.424541 sshd[6328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:02:53.432213 systemd-logind[1994]: New session 19 of user core.
Jan 30 14:02:53.443567 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 14:02:53.711949 sshd[6328]: pam_unix(sshd:session): session closed for user core
Jan 30 14:02:53.718566 systemd[1]: sshd@18-172.31.25.132:22-139.178.89.65:39510.service: Deactivated successfully.
Jan 30 14:02:53.723396 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 14:02:53.728620 systemd-logind[1994]: Session 19 logged out. Waiting for processes to exit.
Jan 30 14:02:53.730490 systemd-logind[1994]: Removed session 19.
Jan 30 14:02:58.758943 systemd[1]: Started sshd@19-172.31.25.132:22-139.178.89.65:39516.service - OpenSSH per-connection server daemon (139.178.89.65:39516).
Jan 30 14:02:58.952154 sshd[6342]: Accepted publickey for core from 139.178.89.65 port 39516 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:02:58.954837 sshd[6342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:02:58.963432 systemd-logind[1994]: New session 20 of user core.
Jan 30 14:02:58.972576 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 14:02:59.211831 sshd[6342]: pam_unix(sshd:session): session closed for user core
Jan 30 14:02:59.218075 systemd[1]: sshd@19-172.31.25.132:22-139.178.89.65:39516.service: Deactivated successfully.
Jan 30 14:02:59.222046 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 14:02:59.223564 systemd-logind[1994]: Session 20 logged out. Waiting for processes to exit.
Jan 30 14:02:59.225413 systemd-logind[1994]: Removed session 20.
Jan 30 14:02:59.256753 systemd[1]: Started sshd@20-172.31.25.132:22-139.178.89.65:39520.service - OpenSSH per-connection server daemon (139.178.89.65:39520).
Jan 30 14:02:59.422371 sshd[6355]: Accepted publickey for core from 139.178.89.65 port 39520 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:02:59.425035 sshd[6355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:02:59.433097 systemd-logind[1994]: New session 21 of user core.
Jan 30 14:02:59.438574 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 14:02:59.923267 sshd[6355]: pam_unix(sshd:session): session closed for user core
Jan 30 14:02:59.929472 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 14:02:59.932097 systemd[1]: sshd@20-172.31.25.132:22-139.178.89.65:39520.service: Deactivated successfully.
Jan 30 14:02:59.937446 systemd-logind[1994]: Session 21 logged out. Waiting for processes to exit.
Jan 30 14:02:59.939480 systemd-logind[1994]: Removed session 21.
Jan 30 14:02:59.963839 systemd[1]: Started sshd@21-172.31.25.132:22-139.178.89.65:39530.service - OpenSSH per-connection server daemon (139.178.89.65:39530).
Jan 30 14:03:00.153658 sshd[6366]: Accepted publickey for core from 139.178.89.65 port 39530 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:03:00.156445 sshd[6366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:03:00.166137 systemd-logind[1994]: New session 22 of user core.
Jan 30 14:03:00.175607 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 14:03:01.401861 sshd[6366]: pam_unix(sshd:session): session closed for user core
Jan 30 14:03:01.410824 systemd[1]: sshd@21-172.31.25.132:22-139.178.89.65:39530.service: Deactivated successfully.
Jan 30 14:03:01.418679 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 14:03:01.424833 systemd-logind[1994]: Session 22 logged out. Waiting for processes to exit.
Jan 30 14:03:01.457008 systemd[1]: Started sshd@22-172.31.25.132:22-139.178.89.65:51992.service - OpenSSH per-connection server daemon (139.178.89.65:51992).
Jan 30 14:03:01.460930 systemd-logind[1994]: Removed session 22.
Jan 30 14:03:01.635943 sshd[6386]: Accepted publickey for core from 139.178.89.65 port 51992 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:03:01.640001 sshd[6386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:03:01.658248 systemd-logind[1994]: New session 23 of user core.
Jan 30 14:03:01.660882 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 14:03:02.184357 sshd[6386]: pam_unix(sshd:session): session closed for user core
Jan 30 14:03:02.192170 systemd[1]: sshd@22-172.31.25.132:22-139.178.89.65:51992.service: Deactivated successfully.
Jan 30 14:03:02.195766 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 14:03:02.198791 systemd-logind[1994]: Session 23 logged out. Waiting for processes to exit.
Jan 30 14:03:02.201457 systemd-logind[1994]: Removed session 23.
Jan 30 14:03:02.225831 systemd[1]: Started sshd@23-172.31.25.132:22-139.178.89.65:51996.service - OpenSSH per-connection server daemon (139.178.89.65:51996).
Jan 30 14:03:02.398492 sshd[6399]: Accepted publickey for core from 139.178.89.65 port 51996 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:03:02.401252 sshd[6399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:03:02.410295 systemd-logind[1994]: New session 24 of user core.
Jan 30 14:03:02.417595 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 14:03:02.689979 sshd[6399]: pam_unix(sshd:session): session closed for user core
Jan 30 14:03:02.696783 systemd[1]: sshd@23-172.31.25.132:22-139.178.89.65:51996.service: Deactivated successfully.
Jan 30 14:03:02.701612 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 14:03:02.704668 systemd-logind[1994]: Session 24 logged out. Waiting for processes to exit.
Jan 30 14:03:02.707022 systemd-logind[1994]: Removed session 24.
Jan 30 14:03:07.734848 systemd[1]: Started sshd@24-172.31.25.132:22-139.178.89.65:52002.service - OpenSSH per-connection server daemon (139.178.89.65:52002).
Jan 30 14:03:07.922668 sshd[6412]: Accepted publickey for core from 139.178.89.65 port 52002 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:03:07.926036 sshd[6412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:03:07.935587 systemd-logind[1994]: New session 25 of user core.
Jan 30 14:03:07.943567 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 14:03:08.188948 sshd[6412]: pam_unix(sshd:session): session closed for user core
Jan 30 14:03:08.194642 systemd[1]: sshd@24-172.31.25.132:22-139.178.89.65:52002.service: Deactivated successfully.
Jan 30 14:03:08.195571 systemd-logind[1994]: Session 25 logged out. Waiting for processes to exit.
Jan 30 14:03:08.198953 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 14:03:08.203579 systemd-logind[1994]: Removed session 25.
Jan 30 14:03:13.238905 systemd[1]: Started sshd@25-172.31.25.132:22-139.178.89.65:49852.service - OpenSSH per-connection server daemon (139.178.89.65:49852).
Jan 30 14:03:13.437719 sshd[6453]: Accepted publickey for core from 139.178.89.65 port 49852 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:03:13.441841 sshd[6453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:03:13.454625 systemd-logind[1994]: New session 26 of user core.
Jan 30 14:03:13.466573 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 30 14:03:13.787710 sshd[6453]: pam_unix(sshd:session): session closed for user core
Jan 30 14:03:13.797618 systemd[1]: sshd@25-172.31.25.132:22-139.178.89.65:49852.service: Deactivated successfully.
Jan 30 14:03:13.805636 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 14:03:13.811015 systemd-logind[1994]: Session 26 logged out. Waiting for processes to exit.
Jan 30 14:03:13.815979 systemd-logind[1994]: Removed session 26.
Jan 30 14:03:18.841177 systemd[1]: Started sshd@26-172.31.25.132:22-139.178.89.65:49856.service - OpenSSH per-connection server daemon (139.178.89.65:49856).
Jan 30 14:03:19.027896 sshd[6482]: Accepted publickey for core from 139.178.89.65 port 49856 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:03:19.030650 sshd[6482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:03:19.038885 systemd-logind[1994]: New session 27 of user core.
Jan 30 14:03:19.047743 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 30 14:03:19.292551 sshd[6482]: pam_unix(sshd:session): session closed for user core
Jan 30 14:03:19.299853 systemd[1]: sshd@26-172.31.25.132:22-139.178.89.65:49856.service: Deactivated successfully.
Jan 30 14:03:19.303178 systemd[1]: session-27.scope: Deactivated successfully.
Jan 30 14:03:19.305541 systemd-logind[1994]: Session 27 logged out. Waiting for processes to exit.
Jan 30 14:03:19.307902 systemd-logind[1994]: Removed session 27.
Jan 30 14:03:24.339868 systemd[1]: Started sshd@27-172.31.25.132:22-139.178.89.65:38548.service - OpenSSH per-connection server daemon (139.178.89.65:38548).
Jan 30 14:03:24.528477 sshd[6498]: Accepted publickey for core from 139.178.89.65 port 38548 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:03:24.531099 sshd[6498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:03:24.540275 systemd-logind[1994]: New session 28 of user core.
Jan 30 14:03:24.546581 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 30 14:03:24.799438 sshd[6498]: pam_unix(sshd:session): session closed for user core
Jan 30 14:03:24.804684 systemd[1]: sshd@27-172.31.25.132:22-139.178.89.65:38548.service: Deactivated successfully.
Jan 30 14:03:24.809226 systemd[1]: session-28.scope: Deactivated successfully.
Jan 30 14:03:24.814688 systemd-logind[1994]: Session 28 logged out. Waiting for processes to exit.
Jan 30 14:03:24.816748 systemd-logind[1994]: Removed session 28.
Jan 30 14:03:29.838842 systemd[1]: Started sshd@28-172.31.25.132:22-139.178.89.65:38562.service - OpenSSH per-connection server daemon (139.178.89.65:38562).
Jan 30 14:03:30.023828 sshd[6511]: Accepted publickey for core from 139.178.89.65 port 38562 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:03:30.027051 sshd[6511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:03:30.034900 systemd-logind[1994]: New session 29 of user core.
Jan 30 14:03:30.041626 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 30 14:03:30.292974 sshd[6511]: pam_unix(sshd:session): session closed for user core
Jan 30 14:03:30.300119 systemd[1]: sshd@28-172.31.25.132:22-139.178.89.65:38562.service: Deactivated successfully.
Jan 30 14:03:30.305272 systemd[1]: session-29.scope: Deactivated successfully.
Jan 30 14:03:30.307089 systemd-logind[1994]: Session 29 logged out. Waiting for processes to exit.
Jan 30 14:03:30.310743 systemd-logind[1994]: Removed session 29.
Jan 30 14:03:32.634255 systemd[1]: run-containerd-runc-k8s.io-49915b6c73f4eef7f3d5f1db176d706a04952946ba43ad752a3011fe3255c77b-runc.FQ53X5.mount: Deactivated successfully.
Jan 30 14:03:35.333823 systemd[1]: Started sshd@29-172.31.25.132:22-139.178.89.65:58992.service - OpenSSH per-connection server daemon (139.178.89.65:58992).
Jan 30 14:03:35.515721 sshd[6552]: Accepted publickey for core from 139.178.89.65 port 58992 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:03:35.518410 sshd[6552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:03:35.526404 systemd-logind[1994]: New session 30 of user core.
Jan 30 14:03:35.535559 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 30 14:03:35.780669 sshd[6552]: pam_unix(sshd:session): session closed for user core
Jan 30 14:03:35.786430 systemd[1]: sshd@29-172.31.25.132:22-139.178.89.65:58992.service: Deactivated successfully.
Jan 30 14:03:35.791625 systemd[1]: session-30.scope: Deactivated successfully.
Jan 30 14:03:35.793102 systemd-logind[1994]: Session 30 logged out. Waiting for processes to exit.
Jan 30 14:03:35.795381 systemd-logind[1994]: Removed session 30.
Jan 30 14:03:50.263248 systemd[1]: cri-containerd-5d6bb0131e9674a7912724431b2f19b9617662eb40909c9c6edfaaf4e596b490.scope: Deactivated successfully.
Jan 30 14:03:50.265562 systemd[1]: cri-containerd-5d6bb0131e9674a7912724431b2f19b9617662eb40909c9c6edfaaf4e596b490.scope: Consumed 7.148s CPU time.
Jan 30 14:03:50.301956 containerd[2008]: time="2025-01-30T14:03:50.301698285Z" level=info msg="shim disconnected" id=5d6bb0131e9674a7912724431b2f19b9617662eb40909c9c6edfaaf4e596b490 namespace=k8s.io
Jan 30 14:03:50.301956 containerd[2008]: time="2025-01-30T14:03:50.301927623Z" level=warning msg="cleaning up after shim disconnected" id=5d6bb0131e9674a7912724431b2f19b9617662eb40909c9c6edfaaf4e596b490 namespace=k8s.io
Jan 30 14:03:50.302625 containerd[2008]: time="2025-01-30T14:03:50.301983895Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:03:50.309955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d6bb0131e9674a7912724431b2f19b9617662eb40909c9c6edfaaf4e596b490-rootfs.mount: Deactivated successfully.
Jan 30 14:03:50.684320 systemd[1]: cri-containerd-44bf7088249c3cbee782523eb009f8e12b10660e57a64f40d38deba8a5b67345.scope: Deactivated successfully.
Jan 30 14:03:50.685614 systemd[1]: cri-containerd-44bf7088249c3cbee782523eb009f8e12b10660e57a64f40d38deba8a5b67345.scope: Consumed 4.834s CPU time, 17.5M memory peak, 0B memory swap peak.
Jan 30 14:03:50.715744 kubelet[3502]: I0130 14:03:50.715564 3502 scope.go:117] "RemoveContainer" containerID="5d6bb0131e9674a7912724431b2f19b9617662eb40909c9c6edfaaf4e596b490"
Jan 30 14:03:50.720494 containerd[2008]: time="2025-01-30T14:03:50.720422689Z" level=info msg="CreateContainer within sandbox \"da750b69db982ee71071ed5d2bb7852e5316e40f586c3d4f6472cdddc1ef8019\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 30 14:03:50.749560 containerd[2008]: time="2025-01-30T14:03:50.749468997Z" level=info msg="CreateContainer within sandbox \"da750b69db982ee71071ed5d2bb7852e5316e40f586c3d4f6472cdddc1ef8019\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b2c44702d573551b660744baf36596d6b5bcf3be4b97e2673c1370b41fa886a6\""
Jan 30 14:03:50.752176 containerd[2008]: time="2025-01-30T14:03:50.750719146Z" level=info msg="StartContainer for \"b2c44702d573551b660744baf36596d6b5bcf3be4b97e2673c1370b41fa886a6\""
Jan 30 14:03:50.768895 containerd[2008]: time="2025-01-30T14:03:50.768643072Z" level=info msg="shim disconnected" id=44bf7088249c3cbee782523eb009f8e12b10660e57a64f40d38deba8a5b67345 namespace=k8s.io
Jan 30 14:03:50.768886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44bf7088249c3cbee782523eb009f8e12b10660e57a64f40d38deba8a5b67345-rootfs.mount: Deactivated successfully.
Jan 30 14:03:50.769545 containerd[2008]: time="2025-01-30T14:03:50.769287278Z" level=warning msg="cleaning up after shim disconnected" id=44bf7088249c3cbee782523eb009f8e12b10660e57a64f40d38deba8a5b67345 namespace=k8s.io
Jan 30 14:03:50.769834 containerd[2008]: time="2025-01-30T14:03:50.769798745Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:03:50.826609 systemd[1]: Started cri-containerd-b2c44702d573551b660744baf36596d6b5bcf3be4b97e2673c1370b41fa886a6.scope - libcontainer container b2c44702d573551b660744baf36596d6b5bcf3be4b97e2673c1370b41fa886a6.
Jan 30 14:03:50.879796 containerd[2008]: time="2025-01-30T14:03:50.879709834Z" level=info msg="StartContainer for \"b2c44702d573551b660744baf36596d6b5bcf3be4b97e2673c1370b41fa886a6\" returns successfully"
Jan 30 14:03:51.720330 kubelet[3502]: I0130 14:03:51.719884 3502 scope.go:117] "RemoveContainer" containerID="44bf7088249c3cbee782523eb009f8e12b10660e57a64f40d38deba8a5b67345"
Jan 30 14:03:51.726141 containerd[2008]: time="2025-01-30T14:03:51.725841862Z" level=info msg="CreateContainer within sandbox \"eb7fdf6b069a859235b376305445e629706ac71d5f0d07ae28ab8bba3870d1f1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 30 14:03:51.761073 containerd[2008]: time="2025-01-30T14:03:51.760990940Z" level=info msg="CreateContainer within sandbox \"eb7fdf6b069a859235b376305445e629706ac71d5f0d07ae28ab8bba3870d1f1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"bb14a9dba60e3f8d3470e20767a9d7664e0b75518136d2fd0463fe569a9157e4\""
Jan 30 14:03:51.762124 containerd[2008]: time="2025-01-30T14:03:51.762001197Z" level=info msg="StartContainer for \"bb14a9dba60e3f8d3470e20767a9d7664e0b75518136d2fd0463fe569a9157e4\""
Jan 30 14:03:51.829643 systemd[1]: Started cri-containerd-bb14a9dba60e3f8d3470e20767a9d7664e0b75518136d2fd0463fe569a9157e4.scope - libcontainer container bb14a9dba60e3f8d3470e20767a9d7664e0b75518136d2fd0463fe569a9157e4.
Jan 30 14:03:51.897542 containerd[2008]: time="2025-01-30T14:03:51.897426165Z" level=info msg="StartContainer for \"bb14a9dba60e3f8d3470e20767a9d7664e0b75518136d2fd0463fe569a9157e4\" returns successfully"
Jan 30 14:03:54.331691 kubelet[3502]: E0130 14:03:54.330808 3502 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-132?timeout=10s\": context deadline exceeded"
Jan 30 14:03:55.743196 systemd[1]: cri-containerd-d65b716335e2634906fe9391694bbc7196b945bc260bb49151e41b82b4023019.scope: Deactivated successfully.
Jan 30 14:03:55.745844 systemd[1]: cri-containerd-d65b716335e2634906fe9391694bbc7196b945bc260bb49151e41b82b4023019.scope: Consumed 5.232s CPU time, 15.9M memory peak, 0B memory swap peak.
Jan 30 14:03:55.784856 containerd[2008]: time="2025-01-30T14:03:55.784728596Z" level=info msg="shim disconnected" id=d65b716335e2634906fe9391694bbc7196b945bc260bb49151e41b82b4023019 namespace=k8s.io
Jan 30 14:03:55.784856 containerd[2008]: time="2025-01-30T14:03:55.784807992Z" level=warning msg="cleaning up after shim disconnected" id=d65b716335e2634906fe9391694bbc7196b945bc260bb49151e41b82b4023019 namespace=k8s.io
Jan 30 14:03:55.784856 containerd[2008]: time="2025-01-30T14:03:55.784832232Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:03:55.790589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d65b716335e2634906fe9391694bbc7196b945bc260bb49151e41b82b4023019-rootfs.mount: Deactivated successfully.
Jan 30 14:03:56.755499 kubelet[3502]: I0130 14:03:56.755439 3502 scope.go:117] "RemoveContainer" containerID="d65b716335e2634906fe9391694bbc7196b945bc260bb49151e41b82b4023019"
Jan 30 14:03:56.758399 containerd[2008]: time="2025-01-30T14:03:56.758284409Z" level=info msg="CreateContainer within sandbox \"e2ddcf9312772ecc03eb5536f2fdb13eb2657580d2d2dea49279b0c7a749c628\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 30 14:03:56.785880 containerd[2008]: time="2025-01-30T14:03:56.785737089Z" level=info msg="CreateContainer within sandbox \"e2ddcf9312772ecc03eb5536f2fdb13eb2657580d2d2dea49279b0c7a749c628\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"7b1e5b0e30a7187dd6b1c4f4d019cf11ae3c52f9f376b7c363eebc1616c2cabb\""
Jan 30 14:03:56.786777 containerd[2008]: time="2025-01-30T14:03:56.786511572Z" level=info msg="StartContainer for \"7b1e5b0e30a7187dd6b1c4f4d019cf11ae3c52f9f376b7c363eebc1616c2cabb\""
Jan 30 14:03:56.845621 systemd[1]: Started cri-containerd-7b1e5b0e30a7187dd6b1c4f4d019cf11ae3c52f9f376b7c363eebc1616c2cabb.scope - libcontainer container 7b1e5b0e30a7187dd6b1c4f4d019cf11ae3c52f9f376b7c363eebc1616c2cabb.
Jan 30 14:03:56.918699 containerd[2008]: time="2025-01-30T14:03:56.918529117Z" level=info msg="StartContainer for \"7b1e5b0e30a7187dd6b1c4f4d019cf11ae3c52f9f376b7c363eebc1616c2cabb\" returns successfully"
Jan 30 14:04:04.331726 kubelet[3502]: E0130 14:04:04.331643 3502 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-132?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"