Jan 29 11:50:17.265211 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 29 11:50:17.265259 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 29 11:50:17.265285 kernel: KASLR disabled due to lack of seed
Jan 29 11:50:17.265303 kernel: efi: EFI v2.7 by EDK II
Jan 29 11:50:17.265319 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Jan 29 11:50:17.265335 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:50:17.265354 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 29 11:50:17.265370 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 29 11:50:17.265386 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 29 11:50:17.265402 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 29 11:50:17.265424 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 29 11:50:17.265440 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 29 11:50:17.265456 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 29 11:50:17.265472 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 29 11:50:17.265491 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 29 11:50:17.265513 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 29 11:50:17.265530 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 29 11:50:17.265547 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 29 11:50:17.265564 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 29 11:50:17.265581 kernel: printk: bootconsole [uart0] enabled
Jan 29 11:50:17.265597 kernel: NUMA: Failed to initialise from firmware
Jan 29 11:50:17.265614 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 29 11:50:17.265663 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 29 11:50:17.265687 kernel: Zone ranges:
Jan 29 11:50:17.265705 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 29 11:50:17.265723 kernel: DMA32 empty
Jan 29 11:50:17.265749 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 29 11:50:17.265767 kernel: Movable zone start for each node
Jan 29 11:50:17.265784 kernel: Early memory node ranges
Jan 29 11:50:17.265803 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 29 11:50:17.265820 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 29 11:50:17.265837 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 29 11:50:17.265855 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 29 11:50:17.265872 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 29 11:50:17.265890 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 29 11:50:17.265906 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 29 11:50:17.265923 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 29 11:50:17.265940 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 29 11:50:17.265963 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 29 11:50:17.265982 kernel: psci: probing for conduit method from ACPI.
Jan 29 11:50:17.266007 kernel: psci: PSCIv1.0 detected in firmware.
Jan 29 11:50:17.266025 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 11:50:17.266043 kernel: psci: Trusted OS migration not required
Jan 29 11:50:17.266066 kernel: psci: SMC Calling Convention v1.1
Jan 29 11:50:17.266085 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 11:50:17.266103 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 11:50:17.266123 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 29 11:50:17.266141 kernel: Detected PIPT I-cache on CPU0
Jan 29 11:50:17.266159 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 11:50:17.266177 kernel: CPU features: detected: Spectre-v2
Jan 29 11:50:17.266195 kernel: CPU features: detected: Spectre-v3a
Jan 29 11:50:17.266212 kernel: CPU features: detected: Spectre-BHB
Jan 29 11:50:17.266231 kernel: CPU features: detected: ARM erratum 1742098
Jan 29 11:50:17.266249 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 29 11:50:17.266272 kernel: alternatives: applying boot alternatives
Jan 29 11:50:17.266294 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 29 11:50:17.266314 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:50:17.266333 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:50:17.266351 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:50:17.266368 kernel: Fallback order for Node 0: 0
Jan 29 11:50:17.266387 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 29 11:50:17.266404 kernel: Policy zone: Normal
Jan 29 11:50:17.266422 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:50:17.266440 kernel: software IO TLB: area num 2.
Jan 29 11:50:17.266457 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 29 11:50:17.266483 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Jan 29 11:50:17.266501 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 11:50:17.266519 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:50:17.266538 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:50:17.266558 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 11:50:17.266577 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:50:17.266595 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:50:17.266614 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:50:17.268701 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 11:50:17.268736 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 11:50:17.268756 kernel: GICv3: 96 SPIs implemented
Jan 29 11:50:17.268787 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 11:50:17.268806 kernel: Root IRQ handler: gic_handle_irq
Jan 29 11:50:17.268824 kernel: GICv3: GICv3 features: 16 PPIs
Jan 29 11:50:17.268843 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 29 11:50:17.268861 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 29 11:50:17.268880 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 11:50:17.268899 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 11:50:17.268918 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 29 11:50:17.268936 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 29 11:50:17.268954 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 29 11:50:17.268972 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:50:17.268990 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 29 11:50:17.269014 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 29 11:50:17.269032 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 29 11:50:17.269050 kernel: Console: colour dummy device 80x25
Jan 29 11:50:17.269068 kernel: printk: console [tty1] enabled
Jan 29 11:50:17.269086 kernel: ACPI: Core revision 20230628
Jan 29 11:50:17.269105 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 29 11:50:17.269123 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:50:17.269142 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:50:17.269160 kernel: landlock: Up and running.
Jan 29 11:50:17.269183 kernel: SELinux: Initializing.
Jan 29 11:50:17.269202 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:50:17.269220 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:50:17.269238 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:50:17.269256 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:50:17.269274 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:50:17.269294 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:50:17.269314 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 29 11:50:17.269332 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 29 11:50:17.269355 kernel: Remapping and enabling EFI services.
Jan 29 11:50:17.269374 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:50:17.269393 kernel: Detected PIPT I-cache on CPU1
Jan 29 11:50:17.269411 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 29 11:50:17.269429 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 29 11:50:17.269447 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 29 11:50:17.269465 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 11:50:17.269483 kernel: SMP: Total of 2 processors activated.
Jan 29 11:50:17.269501 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 11:50:17.269523 kernel: CPU features: detected: 32-bit EL1 Support
Jan 29 11:50:17.269542 kernel: CPU features: detected: CRC32 instructions
Jan 29 11:50:17.269561 kernel: CPU: All CPU(s) started at EL1
Jan 29 11:50:17.269592 kernel: alternatives: applying system-wide alternatives
Jan 29 11:50:17.269615 kernel: devtmpfs: initialized
Jan 29 11:50:17.269666 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:50:17.269691 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 11:50:17.269711 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:50:17.269730 kernel: SMBIOS 3.0.0 present.
Jan 29 11:50:17.269749 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 29 11:50:17.269774 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:50:17.269793 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 11:50:17.269812 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 11:50:17.269831 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 11:50:17.269849 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:50:17.269868 kernel: audit: type=2000 audit(0.289:1): state=initialized audit_enabled=0 res=1
Jan 29 11:50:17.269886 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:50:17.269910 kernel: cpuidle: using governor menu
Jan 29 11:50:17.269929 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 11:50:17.269947 kernel: ASID allocator initialised with 65536 entries
Jan 29 11:50:17.269966 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:50:17.269984 kernel: Serial: AMBA PL011 UART driver
Jan 29 11:50:17.270003 kernel: Modules: 17520 pages in range for non-PLT usage
Jan 29 11:50:17.270021 kernel: Modules: 509040 pages in range for PLT usage
Jan 29 11:50:17.270040 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:50:17.270059 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:50:17.270082 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 11:50:17.270100 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 11:50:17.270119 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:50:17.270138 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:50:17.270157 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 11:50:17.270175 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 11:50:17.270194 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:50:17.270212 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:50:17.270230 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:50:17.270253 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:50:17.270272 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:50:17.270290 kernel: ACPI: Interpreter enabled
Jan 29 11:50:17.270309 kernel: ACPI: Using GIC for interrupt routing
Jan 29 11:50:17.270329 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 11:50:17.270348 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 29 11:50:17.277847 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:50:17.278176 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 11:50:17.278430 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 11:50:17.280785 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 29 11:50:17.281072 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 29 11:50:17.281106 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 29 11:50:17.281126 kernel: acpiphp: Slot [1] registered
Jan 29 11:50:17.281146 kernel: acpiphp: Slot [2] registered
Jan 29 11:50:17.281166 kernel: acpiphp: Slot [3] registered
Jan 29 11:50:17.281187 kernel: acpiphp: Slot [4] registered
Jan 29 11:50:17.281225 kernel: acpiphp: Slot [5] registered
Jan 29 11:50:17.281246 kernel: acpiphp: Slot [6] registered
Jan 29 11:50:17.281265 kernel: acpiphp: Slot [7] registered
Jan 29 11:50:17.281284 kernel: acpiphp: Slot [8] registered
Jan 29 11:50:17.281303 kernel: acpiphp: Slot [9] registered
Jan 29 11:50:17.281322 kernel: acpiphp: Slot [10] registered
Jan 29 11:50:17.281341 kernel: acpiphp: Slot [11] registered
Jan 29 11:50:17.281360 kernel: acpiphp: Slot [12] registered
Jan 29 11:50:17.281379 kernel: acpiphp: Slot [13] registered
Jan 29 11:50:17.281398 kernel: acpiphp: Slot [14] registered
Jan 29 11:50:17.281425 kernel: acpiphp: Slot [15] registered
Jan 29 11:50:17.281445 kernel: acpiphp: Slot [16] registered
Jan 29 11:50:17.281463 kernel: acpiphp: Slot [17] registered
Jan 29 11:50:17.281482 kernel: acpiphp: Slot [18] registered
Jan 29 11:50:17.281501 kernel: acpiphp: Slot [19] registered
Jan 29 11:50:17.281522 kernel: acpiphp: Slot [20] registered
Jan 29 11:50:17.281542 kernel: acpiphp: Slot [21] registered
Jan 29 11:50:17.281562 kernel: acpiphp: Slot [22] registered
Jan 29 11:50:17.281581 kernel: acpiphp: Slot [23] registered
Jan 29 11:50:17.281607 kernel: acpiphp: Slot [24] registered
Jan 29 11:50:17.282078 kernel: acpiphp: Slot [25] registered
Jan 29 11:50:17.282120 kernel: acpiphp: Slot [26] registered
Jan 29 11:50:17.282161 kernel: acpiphp: Slot [27] registered
Jan 29 11:50:17.282186 kernel: acpiphp: Slot [28] registered
Jan 29 11:50:17.282205 kernel: acpiphp: Slot [29] registered
Jan 29 11:50:17.282225 kernel: acpiphp: Slot [30] registered
Jan 29 11:50:17.282243 kernel: acpiphp: Slot [31] registered
Jan 29 11:50:17.282262 kernel: PCI host bridge to bus 0000:00
Jan 29 11:50:17.282529 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 29 11:50:17.284610 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 11:50:17.284920 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 29 11:50:17.285143 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 29 11:50:17.285445 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 29 11:50:17.285796 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 29 11:50:17.286049 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 29 11:50:17.286326 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 29 11:50:17.286557 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 29 11:50:17.286833 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 29 11:50:17.287086 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 29 11:50:17.287424 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 29 11:50:17.289043 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 29 11:50:17.289324 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 29 11:50:17.289557 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 29 11:50:17.289903 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 29 11:50:17.290160 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 29 11:50:17.290409 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 29 11:50:17.290885 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 29 11:50:17.291159 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 29 11:50:17.291390 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 29 11:50:17.291624 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 11:50:17.291925 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 29 11:50:17.291962 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 11:50:17.291983 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 11:50:17.292004 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 11:50:17.292105 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 11:50:17.292169 kernel: iommu: Default domain type: Translated
Jan 29 11:50:17.292193 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 11:50:17.292226 kernel: efivars: Registered efivars operations
Jan 29 11:50:17.292245 kernel: vgaarb: loaded
Jan 29 11:50:17.292264 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 11:50:17.292283 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:50:17.292302 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:50:17.292322 kernel: pnp: PnP ACPI init
Jan 29 11:50:17.292615 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 29 11:50:17.293849 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 11:50:17.293948 kernel: NET: Registered PF_INET protocol family
Jan 29 11:50:17.293971 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:50:17.293990 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:50:17.294010 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:50:17.294029 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:50:17.294049 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:50:17.294068 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:50:17.294087 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:50:17.294107 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:50:17.294133 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:50:17.294152 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:50:17.294171 kernel: kvm [1]: HYP mode not available
Jan 29 11:50:17.294190 kernel: Initialise system trusted keyrings
Jan 29 11:50:17.294210 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:50:17.294229 kernel: Key type asymmetric registered
Jan 29 11:50:17.294249 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:50:17.294268 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 11:50:17.294287 kernel: io scheduler mq-deadline registered
Jan 29 11:50:17.294313 kernel: io scheduler kyber registered
Jan 29 11:50:17.294333 kernel: io scheduler bfq registered
Jan 29 11:50:17.294713 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 29 11:50:17.294765 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 11:50:17.294788 kernel: ACPI: button: Power Button [PWRB]
Jan 29 11:50:17.294808 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 29 11:50:17.294828 kernel: ACPI: button: Sleep Button [SLPB]
Jan 29 11:50:17.294848 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:50:17.294885 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 29 11:50:17.295165 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 29 11:50:17.295201 kernel: printk: console [ttyS0] disabled
Jan 29 11:50:17.295221 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 29 11:50:17.295240 kernel: printk: console [ttyS0] enabled
Jan 29 11:50:17.295259 kernel: printk: bootconsole [uart0] disabled
Jan 29 11:50:17.295278 kernel: thunder_xcv, ver 1.0
Jan 29 11:50:17.295298 kernel: thunder_bgx, ver 1.0
Jan 29 11:50:17.295317 kernel: nicpf, ver 1.0
Jan 29 11:50:17.295346 kernel: nicvf, ver 1.0
Jan 29 11:50:17.295618 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 11:50:17.296031 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T11:50:16 UTC (1738151416)
Jan 29 11:50:17.296067 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 11:50:17.296087 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 29 11:50:17.296107 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 11:50:17.296126 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 11:50:17.296145 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:50:17.296179 kernel: Segment Routing with IPv6
Jan 29 11:50:17.296199 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:50:17.296218 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:50:17.296237 kernel: Key type dns_resolver registered
Jan 29 11:50:17.296256 kernel: registered taskstats version 1
Jan 29 11:50:17.296275 kernel: Loading compiled-in X.509 certificates
Jan 29 11:50:17.296295 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 29 11:50:17.296315 kernel: Key type .fscrypt registered
Jan 29 11:50:17.296333 kernel: Key type fscrypt-provisioning registered
Jan 29 11:50:17.296358 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:50:17.296378 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:50:17.296396 kernel: ima: No architecture policies found
Jan 29 11:50:17.296416 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 11:50:17.296435 kernel: clk: Disabling unused clocks
Jan 29 11:50:17.296454 kernel: Freeing unused kernel memory: 39360K
Jan 29 11:50:17.296472 kernel: Run /init as init process
Jan 29 11:50:17.296491 kernel: with arguments:
Jan 29 11:50:17.296510 kernel: /init
Jan 29 11:50:17.296528 kernel: with environment:
Jan 29 11:50:17.296553 kernel: HOME=/
Jan 29 11:50:17.296571 kernel: TERM=linux
Jan 29 11:50:17.296590 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:50:17.296615 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:50:17.297433 systemd[1]: Detected virtualization amazon.
Jan 29 11:50:17.299078 systemd[1]: Detected architecture arm64.
Jan 29 11:50:17.299103 systemd[1]: Running in initrd.
Jan 29 11:50:17.299138 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:50:17.299160 systemd[1]: Hostname set to <localhost>.
Jan 29 11:50:17.299183 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:50:17.299205 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:50:17.299226 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:50:17.299248 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:50:17.299271 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:50:17.299293 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:50:17.299322 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:50:17.299344 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:50:17.299370 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:50:17.299392 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:50:17.299413 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:50:17.299434 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:50:17.299455 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:50:17.299481 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:50:17.299502 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:50:17.299523 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:50:17.299544 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:50:17.299565 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:50:17.299585 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:50:17.299607 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:50:17.300142 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:50:17.300187 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:50:17.300223 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:50:17.300245 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:50:17.300266 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:50:17.300289 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:50:17.300310 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:50:17.300333 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:50:17.300355 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:50:17.300376 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:50:17.300406 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:50:17.300428 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:50:17.300449 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:50:17.300470 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:50:17.300492 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:50:17.300582 systemd-journald[251]: Collecting audit messages is disabled.
Jan 29 11:50:17.300679 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:50:17.300709 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:50:17.300741 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:50:17.300763 kernel: Bridge firewalling registered
Jan 29 11:50:17.300785 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:50:17.300808 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:50:17.300830 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:50:17.300852 systemd-journald[251]: Journal started
Jan 29 11:50:17.300897 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2561de25da29c659723d9601bfb846) is 8.0M, max 75.3M, 67.3M free.
Jan 29 11:50:17.226334 systemd-modules-load[252]: Inserted module 'overlay'
Jan 29 11:50:17.284332 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 29 11:50:17.329424 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:50:17.334677 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:50:17.334828 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:50:17.353219 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:50:17.386205 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:50:17.394740 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:50:17.399529 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:50:17.417985 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:50:17.427989 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:50:17.454345 dracut-cmdline[287]: dracut-dracut-053
Jan 29 11:50:17.463943 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 29 11:50:17.506801 systemd-resolved[288]: Positive Trust Anchors:
Jan 29 11:50:17.506836 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:50:17.506901 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:50:17.632667 kernel: SCSI subsystem initialized
Jan 29 11:50:17.639768 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:50:17.653583 kernel: iscsi: registered transport (tcp)
Jan 29 11:50:17.674680 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:50:17.674750 kernel: QLogic iSCSI HBA Driver
Jan 29 11:50:17.743665 kernel: random: crng init done
Jan 29 11:50:17.743849 systemd-resolved[288]: Defaulting to hostname 'linux'.
Jan 29 11:50:17.747070 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:50:17.749310 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:50:17.774160 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:50:17.784006 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:50:17.822877 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:50:17.822953 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:50:17.822992 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:50:17.893710 kernel: raid6: neonx8 gen() 6618 MB/s
Jan 29 11:50:17.910689 kernel: raid6: neonx4 gen() 6344 MB/s
Jan 29 11:50:17.927698 kernel: raid6: neonx2 gen() 5336 MB/s
Jan 29 11:50:17.944697 kernel: raid6: neonx1 gen() 3867 MB/s
Jan 29 11:50:17.961696 kernel: raid6: int64x8 gen() 3742 MB/s
Jan 29 11:50:17.978699 kernel: raid6: int64x4 gen() 3659 MB/s
Jan 29 11:50:17.995697 kernel: raid6: int64x2 gen() 3539 MB/s
Jan 29 11:50:18.013486 kernel: raid6: int64x1 gen() 2739 MB/s
Jan 29 11:50:18.013556 kernel: raid6: using algorithm neonx8 gen() 6618 MB/s
Jan 29 11:50:18.031452 kernel: raid6: .... xor() 4908 MB/s, rmw enabled
Jan 29 11:50:18.031527 kernel: raid6: using neon recovery algorithm
Jan 29 11:50:18.040296 kernel: xor: measuring software checksum speed
Jan 29 11:50:18.040368 kernel: 8regs : 11022 MB/sec
Jan 29 11:50:18.041485 kernel: 32regs : 11967 MB/sec
Jan 29 11:50:18.043636 kernel: arm64_neon : 8869 MB/sec
Jan 29 11:50:18.043689 kernel: xor: using function: 32regs (11967 MB/sec)
Jan 29 11:50:18.129713 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:50:18.151793 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:50:18.162975 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:50:18.208029 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Jan 29 11:50:18.217748 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:50:18.231211 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:50:18.267660 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Jan 29 11:50:18.329613 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:50:18.349944 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:50:18.470737 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:50:18.486972 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:50:18.539534 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:50:18.548261 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:50:18.558288 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:50:18.571962 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:50:18.586265 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:50:18.632662 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:50:18.676776 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 29 11:50:18.676839 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 29 11:50:18.703701 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 29 11:50:18.704054 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 29 11:50:18.704330 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:27:23:b0:f7:8f
Jan 29 11:50:18.710949 (udev-worker)[540]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 11:50:18.729971 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:50:18.734430 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:50:18.739607 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:50:18.744163 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:50:18.746917 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:50:18.756415 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:50:18.767366 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 29 11:50:18.767432 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 29 11:50:18.767982 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:50:18.780704 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 29 11:50:18.790792 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:50:18.790860 kernel: GPT:9289727 != 16777215
Jan 29 11:50:18.790887 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:50:18.791916 kernel: GPT:9289727 != 16777215
Jan 29 11:50:18.791999 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:50:18.792029 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 11:50:18.803582 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:50:18.822833 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:50:18.865008 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:50:18.915673 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (527)
Jan 29 11:50:18.928684 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (532)
Jan 29 11:50:18.973290 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 29 11:50:19.025910 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 29 11:50:19.043216 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 29 11:50:19.045815 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 29 11:50:19.076777 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 29 11:50:19.088948 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:50:19.105535 disk-uuid[661]: Primary Header is updated.
Jan 29 11:50:19.105535 disk-uuid[661]: Secondary Entries is updated.
Jan 29 11:50:19.105535 disk-uuid[661]: Secondary Header is updated.
Jan 29 11:50:19.115672 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 11:50:19.123696 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 11:50:19.134673 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 11:50:20.137675 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 11:50:20.137742 disk-uuid[662]: The operation has completed successfully.
Jan 29 11:50:20.333663 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:50:20.334249 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:50:20.371936 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:50:20.383555 sh[1009]: Success
Jan 29 11:50:20.407677 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 11:50:20.511885 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:50:20.522912 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:50:20.532183 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:50:20.569668 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08
Jan 29 11:50:20.569730 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:50:20.569768 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:50:20.571272 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:50:20.572502 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:50:20.705673 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 29 11:50:20.728850 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:50:20.732694 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:50:20.744865 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:50:20.752909 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:50:20.778058 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 11:50:20.778144 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:50:20.778177 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 29 11:50:20.786677 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 29 11:50:20.804243 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:50:20.808756 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 11:50:20.820015 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:50:20.829985 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:50:20.962356 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:50:20.999918 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:50:21.043032 systemd-networkd[1201]: lo: Link UP
Jan 29 11:50:21.043046 systemd-networkd[1201]: lo: Gained carrier
Jan 29 11:50:21.047327 systemd-networkd[1201]: Enumeration completed
Jan 29 11:50:21.048109 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:50:21.048116 systemd-networkd[1201]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:50:21.049764 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:50:21.058412 systemd[1]: Reached target network.target - Network.
Jan 29 11:50:21.060893 systemd-networkd[1201]: eth0: Link UP
Jan 29 11:50:21.060901 systemd-networkd[1201]: eth0: Gained carrier
Jan 29 11:50:21.060919 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:50:21.079809 systemd-networkd[1201]: eth0: DHCPv4 address 172.31.25.252/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 29 11:50:21.246499 ignition[1104]: Ignition 2.19.0
Jan 29 11:50:21.247007 ignition[1104]: Stage: fetch-offline
Jan 29 11:50:21.247550 ignition[1104]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:50:21.247575 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 11:50:21.248084 ignition[1104]: Ignition finished successfully
Jan 29 11:50:21.257720 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:50:21.269005 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 11:50:21.294272 ignition[1212]: Ignition 2.19.0
Jan 29 11:50:21.294302 ignition[1212]: Stage: fetch
Jan 29 11:50:21.295947 ignition[1212]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:50:21.295974 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 11:50:21.297003 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 11:50:21.308492 ignition[1212]: PUT result: OK
Jan 29 11:50:21.311812 ignition[1212]: parsed url from cmdline: ""
Jan 29 11:50:21.311834 ignition[1212]: no config URL provided
Jan 29 11:50:21.311849 ignition[1212]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:50:21.311903 ignition[1212]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:50:21.311937 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 11:50:21.315335 ignition[1212]: PUT result: OK
Jan 29 11:50:21.315418 ignition[1212]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 29 11:50:21.322786 ignition[1212]: GET result: OK
Jan 29 11:50:21.323135 ignition[1212]: parsing config with SHA512: 8ecb7283864f65364e9772d1772cb9c5df4f598cd9419fca72d2c59123fe7df0ad4c89d9f2b1932262bcbe62514279aea0355dc9e6606e891e12c73614534ef2
Jan 29 11:50:21.332328 unknown[1212]: fetched base config from "system"
Jan 29 11:50:21.333006 ignition[1212]: fetch: fetch complete
Jan 29 11:50:21.332351 unknown[1212]: fetched base config from "system"
Jan 29 11:50:21.333017 ignition[1212]: fetch: fetch passed
Jan 29 11:50:21.332364 unknown[1212]: fetched user config from "aws"
Jan 29 11:50:21.333097 ignition[1212]: Ignition finished successfully
Jan 29 11:50:21.344796 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 11:50:21.365050 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:50:21.388503 ignition[1218]: Ignition 2.19.0
Jan 29 11:50:21.388532 ignition[1218]: Stage: kargs
Jan 29 11:50:21.390224 ignition[1218]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:50:21.390473 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 11:50:21.390806 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 11:50:21.397030 ignition[1218]: PUT result: OK
Jan 29 11:50:21.403330 ignition[1218]: kargs: kargs passed
Jan 29 11:50:21.403426 ignition[1218]: Ignition finished successfully
Jan 29 11:50:21.408336 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:50:21.417927 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:50:21.455492 ignition[1224]: Ignition 2.19.0
Jan 29 11:50:21.455522 ignition[1224]: Stage: disks
Jan 29 11:50:21.457180 ignition[1224]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:50:21.457212 ignition[1224]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 11:50:21.457376 ignition[1224]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 11:50:21.460707 ignition[1224]: PUT result: OK
Jan 29 11:50:21.468730 ignition[1224]: disks: disks passed
Jan 29 11:50:21.468827 ignition[1224]: Ignition finished successfully
Jan 29 11:50:21.472789 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:50:21.475406 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:50:21.477933 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:50:21.486157 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:50:21.488010 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:50:21.489835 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:50:21.505941 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:50:21.549838 systemd-fsck[1232]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:50:21.554441 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:50:21.574018 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:50:21.659688 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none.
Jan 29 11:50:21.662108 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:50:21.665173 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:50:21.683841 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:50:21.689873 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:50:21.699155 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:50:21.699252 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:50:21.699306 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:50:21.715266 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:50:21.721519 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:50:21.752680 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1251)
Jan 29 11:50:21.757255 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 11:50:21.757331 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:50:21.758579 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 29 11:50:21.773692 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 29 11:50:21.776585 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:50:22.088253 initrd-setup-root[1275]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:50:22.123262 initrd-setup-root[1282]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:50:22.132088 initrd-setup-root[1289]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:50:22.141174 initrd-setup-root[1296]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:50:22.520564 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:50:22.535956 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:50:22.542615 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:50:22.560296 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:50:22.562241 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 11:50:22.608813 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:50:22.614318 ignition[1364]: INFO : Ignition 2.19.0
Jan 29 11:50:22.614318 ignition[1364]: INFO : Stage: mount
Jan 29 11:50:22.619552 ignition[1364]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:50:22.619552 ignition[1364]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 11:50:22.619552 ignition[1364]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 11:50:22.626778 ignition[1364]: INFO : PUT result: OK
Jan 29 11:50:22.631275 ignition[1364]: INFO : mount: mount passed
Jan 29 11:50:22.632984 ignition[1364]: INFO : Ignition finished successfully
Jan 29 11:50:22.637143 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:50:22.647999 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:50:22.674870 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:50:22.697734 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1375)
Jan 29 11:50:22.701850 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 11:50:22.701914 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:50:22.701942 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 29 11:50:22.708724 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 29 11:50:22.712935 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:50:22.760262 ignition[1392]: INFO : Ignition 2.19.0
Jan 29 11:50:22.760262 ignition[1392]: INFO : Stage: files
Jan 29 11:50:22.763799 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:50:22.763799 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 11:50:22.763799 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 11:50:22.770482 ignition[1392]: INFO : PUT result: OK
Jan 29 11:50:22.776317 ignition[1392]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:50:22.790083 ignition[1392]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:50:22.790083 ignition[1392]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:50:22.815826 systemd-networkd[1201]: eth0: Gained IPv6LL
Jan 29 11:50:22.846892 ignition[1392]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:50:22.849787 ignition[1392]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:50:22.852923 unknown[1392]: wrote ssh authorized keys file for user: core
Jan 29 11:50:22.855169 ignition[1392]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:50:22.859073 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 29 11:50:22.859073 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 29 11:50:22.947898 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 11:50:23.092782 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 29 11:50:23.092782 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:50:23.099578 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:50:23.099578 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:50:23.099578 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:50:23.099578 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:50:23.099578 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:50:23.099578 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:50:23.099578 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:50:23.099578 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:50:23.099578 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:50:23.099578 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:50:23.099578 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:50:23.099578 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:50:23.099578 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Jan 29 11:50:23.491942 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 29 11:50:23.856577 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:50:23.856577 ignition[1392]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 29 11:50:23.863348 ignition[1392]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:50:23.863348 ignition[1392]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:50:23.863348 ignition[1392]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 29 11:50:23.863348 ignition[1392]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:50:23.863348 ignition[1392]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 11:50:23.863348 ignition[1392]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:50:23.863348 ignition[1392]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:50:23.863348 ignition[1392]: INFO : files: files passed
Jan 29 11:50:23.863348 ignition[1392]: INFO : Ignition finished successfully
Jan 29 11:50:23.889690 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:50:23.910140 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:50:23.916970 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:50:23.928845 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:50:23.929074 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:50:23.951449 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:50:23.954919 initrd-setup-root-after-ignition[1420]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:50:23.958543 initrd-setup-root-after-ignition[1424]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:50:23.964019 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:50:23.967192 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:50:23.987069 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:50:24.041800 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:50:24.042971 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:50:24.049858 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:50:24.051997 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:50:24.055759 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:50:24.073073 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:50:24.100122 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:50:24.117956 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:50:24.145036 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:50:24.149172 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:50:24.152412 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:50:24.155472 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:50:24.155745 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:50:24.158466 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:50:24.161752 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:50:24.163935 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:50:24.166117 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:50:24.168471 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:50:24.170788 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:50:24.172919 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:50:24.175392 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:50:24.177548 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:50:24.179673 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:50:24.181404 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:50:24.181666 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:50:24.184274 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:50:24.186565 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:50:24.189004 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:50:24.205729 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:50:24.208520 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:50:24.208804 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:50:24.211314 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:50:24.211560 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:50:24.283578 ignition[1444]: INFO : Ignition 2.19.0
Jan 29 11:50:24.283578 ignition[1444]: INFO : Stage: umount
Jan 29 11:50:24.283578 ignition[1444]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:50:24.283578 ignition[1444]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 11:50:24.283578 ignition[1444]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 11:50:24.214313 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:50:24.306888 ignition[1444]: INFO : PUT result: OK
Jan 29 11:50:24.306888 ignition[1444]: INFO : umount: umount passed
Jan 29 11:50:24.306888 ignition[1444]: INFO : Ignition finished successfully
Jan 29 11:50:24.214531 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:50:24.227274 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:50:24.253037 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:50:24.274793 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:50:24.275132 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:50:24.287206 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:50:24.287443 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:50:24.313052 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:50:24.315251 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:50:24.320406 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:50:24.320623 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:50:24.324446 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:50:24.324545 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:50:24.329595 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:50:24.329783 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:50:24.333775 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 11:50:24.333884 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 11:50:24.335883 systemd[1]: Stopped target network.target - Network.
Jan 29 11:50:24.338424 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:50:24.340695 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:50:24.353394 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:50:24.353466 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:50:24.363760 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:50:24.367309 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:50:24.371005 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:50:24.374963 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:50:24.375049 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:50:24.379533 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:50:24.379622 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:50:24.399212 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:50:24.399325 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:50:24.401825 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:50:24.401922 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:50:24.404207 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:50:24.406670 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:50:24.412509 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:50:24.414110 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:50:24.414333 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:50:24.419446 systemd-networkd[1201]: eth0: DHCPv6 lease lost
Jan 29 11:50:24.438870 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:50:24.439113 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:50:24.443509 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:50:24.444173 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:50:24.451036 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 11:50:24.451162 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 11:50:24.465133 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:50:24.478370 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:50:24.478486 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:50:24.481058 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:50:24.484734 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:50:24.484953 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:50:24.530203 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:50:24.531605 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:50:24.539943 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:50:24.540078 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:50:24.542479 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:50:24.542596 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:50:24.557781 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:50:24.559915 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:50:24.564319 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 11:50:24.564479 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:50:24.569346 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 11:50:24.569444 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:50:24.572248 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 11:50:24.572367 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:50:24.586509 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 11:50:24.586624 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:50:24.589178 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:50:24.589290 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:50:24.608422 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 11:50:24.612543 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 11:50:24.612867 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:50:24.624212 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 11:50:24.624342 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:50:24.628815 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:50:24.628927 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:50:24.630066 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:50:24.630166 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:50:24.634218 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 11:50:24.635699 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 11:50:24.637118 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 11:50:24.637347 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 11:50:24.639434 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 11:50:24.642961 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 11:50:24.673252 systemd[1]: Switching root.
Jan 29 11:50:24.723128 systemd-journald[251]: Journal stopped
Jan 29 11:50:27.344290 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Jan 29 11:50:27.344438 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 11:50:27.344485 kernel: SELinux: policy capability open_perms=1
Jan 29 11:50:27.344518 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 11:50:27.344550 kernel: SELinux: policy capability always_check_network=0
Jan 29 11:50:27.344582 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 11:50:27.344615 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 11:50:27.344689 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 11:50:27.344732 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 11:50:27.344766 kernel: audit: type=1403 audit(1738151425.371:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 11:50:27.344811 systemd[1]: Successfully loaded SELinux policy in 61.991ms.
Jan 29 11:50:27.344861 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.042ms.
Jan 29 11:50:27.344899 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:50:27.344939 systemd[1]: Detected virtualization amazon.
Jan 29 11:50:27.344973 systemd[1]: Detected architecture arm64.
Jan 29 11:50:27.345007 systemd[1]: Detected first boot.
Jan 29 11:50:27.345043 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:50:27.345083 zram_generator::config[1486]: No configuration found.
Jan 29 11:50:27.345120 systemd[1]: Populated /etc with preset unit settings.
Jan 29 11:50:27.345152 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 11:50:27.345188 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 11:50:27.345221 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:50:27.345258 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 11:50:27.345293 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 11:50:27.345326 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 11:50:27.345364 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 11:50:27.345397 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 11:50:27.345429 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 11:50:27.345461 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 11:50:27.345493 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 11:50:27.345525 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:50:27.345557 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:50:27.345589 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 11:50:27.345621 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 11:50:27.349789 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 11:50:27.349870 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:50:27.349903 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 11:50:27.349935 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:50:27.349969 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 11:50:27.350002 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 11:50:27.350036 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:50:27.350073 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 11:50:27.350108 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:50:27.350144 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:50:27.350178 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:50:27.350210 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:50:27.350243 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 11:50:27.350281 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 11:50:27.350316 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:50:27.350351 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:50:27.350392 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:50:27.350423 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 11:50:27.350465 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 11:50:27.350496 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 11:50:27.350528 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 11:50:27.350562 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 11:50:27.350595 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 11:50:27.350710 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 11:50:27.350757 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 11:50:27.351840 systemd[1]: Reached target machines.target - Containers.
Jan 29 11:50:27.351885 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 11:50:27.351920 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:50:27.351955 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:50:27.351990 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 11:50:27.352021 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:50:27.352054 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:50:27.352085 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:50:27.352118 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 11:50:27.352158 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:50:27.352192 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 11:50:27.352223 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 11:50:27.352257 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 11:50:27.352293 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 11:50:27.352324 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 11:50:27.352354 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:50:27.352384 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:50:27.352421 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 11:50:27.352452 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 11:50:27.352483 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:50:27.352515 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 11:50:27.352550 systemd[1]: Stopped verity-setup.service.
Jan 29 11:50:27.352582 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 11:50:27.352616 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 11:50:27.359951 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 11:50:27.360000 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 11:50:27.360033 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 11:50:27.360065 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 11:50:27.360103 kernel: fuse: init (API version 7.39)
Jan 29 11:50:27.360135 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:50:27.360167 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:50:27.360208 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 11:50:27.360241 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:50:27.360272 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:50:27.360303 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:50:27.360337 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:50:27.360367 kernel: ACPI: bus type drm_connector registered
Jan 29 11:50:27.360397 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 11:50:27.360428 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 11:50:27.360458 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:50:27.360496 kernel: loop: module loaded
Jan 29 11:50:27.360530 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:50:27.360562 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:50:27.360593 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:50:27.360624 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:50:27.360706 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 11:50:27.360741 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 11:50:27.360772 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 11:50:27.360871 systemd-journald[1575]: Collecting audit messages is disabled.
Jan 29 11:50:27.360925 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 11:50:27.360962 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 11:50:27.360996 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 11:50:27.361032 systemd-journald[1575]: Journal started
Jan 29 11:50:27.361079 systemd-journald[1575]: Runtime Journal (/run/log/journal/ec2561de25da29c659723d9601bfb846) is 8.0M, max 75.3M, 67.3M free.
Jan 29 11:50:26.631409 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 11:50:27.367858 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:50:27.367941 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 11:50:26.709239 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 29 11:50:26.710364 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 11:50:27.387725 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 11:50:27.402701 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 11:50:27.406675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:50:27.418722 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 11:50:27.418817 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:50:27.436924 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 11:50:27.437026 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:50:27.451643 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:50:27.463067 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 11:50:27.476878 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:50:27.486584 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:50:27.489745 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 11:50:27.493220 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 11:50:27.496758 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 11:50:27.499802 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 11:50:27.517779 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 11:50:27.572469 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 11:50:27.602933 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 11:50:27.609502 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 11:50:27.653796 kernel: loop0: detected capacity change from 0 to 52536
Jan 29 11:50:27.668944 systemd-journald[1575]: Time spent on flushing to /var/log/journal/ec2561de25da29c659723d9601bfb846 is 68.114ms for 912 entries.
Jan 29 11:50:27.668944 systemd-journald[1575]: System Journal (/var/log/journal/ec2561de25da29c659723d9601bfb846) is 8.0M, max 195.6M, 187.6M free.
Jan 29 11:50:27.753740 systemd-journald[1575]: Received client request to flush runtime journal.
Jan 29 11:50:27.672058 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
Jan 29 11:50:27.672083 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
Jan 29 11:50:27.672341 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 11:50:27.674356 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 11:50:27.694816 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:50:27.698143 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:50:27.715984 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 11:50:27.761604 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 11:50:27.791348 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 11:50:27.804561 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:50:27.820080 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 11:50:27.825862 kernel: loop1: detected capacity change from 0 to 201592
Jan 29 11:50:27.860002 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 11:50:27.873075 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:50:27.902870 udevadm[1635]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
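systemd-journal-flush.service above is the hand-off from the runtime journal in /run/log/journal to the persistent one in /var/log/journal (the 8.0M/195.6M size lines are journald's accounting for the two stores). A sketch of reading the persisted entries back, assuming the python-systemd bindings are available on the host:

    from systemd import journal

    reader = journal.Reader()  # opens runtime + persistent journals
    reader.this_boot()         # restrict to the current boot
    # e.g. pull the ignition[1392]/ignition[1444] entries shown earlier
    reader.add_match(SYSLOG_IDENTIFIER="ignition")

    for entry in reader:
        # __REALTIME_TIMESTAMP is a datetime; MESSAGE is the log text
        print(entry["__REALTIME_TIMESTAMP"], entry.get("MESSAGE", ""))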
Jan 29 11:50:27.932700 kernel: loop2: detected capacity change from 0 to 114328
Jan 29 11:50:27.936476 systemd-tmpfiles[1638]: ACLs are not supported, ignoring.
Jan 29 11:50:27.936512 systemd-tmpfiles[1638]: ACLs are not supported, ignoring.
Jan 29 11:50:27.948056 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:50:28.065438 kernel: loop3: detected capacity change from 0 to 114432
Jan 29 11:50:28.178667 kernel: loop4: detected capacity change from 0 to 52536
Jan 29 11:50:28.192105 kernel: loop5: detected capacity change from 0 to 201592
Jan 29 11:50:28.231694 kernel: loop6: detected capacity change from 0 to 114328
Jan 29 11:50:28.250718 kernel: loop7: detected capacity change from 0 to 114432
Jan 29 11:50:28.262323 (sd-merge)[1643]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 29 11:50:28.263550 (sd-merge)[1643]: Merged extensions into '/usr'.
Jan 29 11:50:28.270874 systemd[1]: Reloading requested from client PID 1597 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 11:50:28.270907 systemd[1]: Reloading...
Jan 29 11:50:28.477674 zram_generator::config[1672]: No configuration found.
Jan 29 11:50:28.863838 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:50:28.992006 systemd[1]: Reloading finished in 720 ms.
Jan 29 11:50:29.030355 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 11:50:29.033500 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 11:50:29.055161 systemd[1]: Starting ensure-sysext.service...
Jan 29 11:50:29.061035 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:50:29.068064 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:50:29.092779 systemd[1]: Reloading requested from client PID 1721 ('systemctl') (unit ensure-sysext.service)...
Jan 29 11:50:29.092829 systemd[1]: Reloading...
Jan 29 11:50:29.192726 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 11:50:29.193459 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 11:50:29.205063 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 11:50:29.205879 systemd-tmpfiles[1722]: ACLs are not supported, ignoring.
Jan 29 11:50:29.206204 systemd-tmpfiles[1722]: ACLs are not supported, ignoring.
Jan 29 11:50:29.219774 systemd-tmpfiles[1722]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:50:29.219801 systemd-tmpfiles[1722]: Skipping /boot
Jan 29 11:50:29.252532 systemd-tmpfiles[1722]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:50:29.252749 systemd-tmpfiles[1722]: Skipping /boot
Jan 29 11:50:29.269526 systemd-udevd[1723]: Using default interface naming scheme 'v255'.
Jan 29 11:50:29.331849 ldconfig[1593]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 11:50:29.348703 zram_generator::config[1749]: No configuration found.
Jan 29 11:50:29.598039 (udev-worker)[1763]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 11:50:29.768554 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:50:29.809688 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1790)
Jan 29 11:50:29.940255 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 11:50:29.942959 systemd[1]: Reloading finished in 849 ms.
Jan 29 11:50:30.004089 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:50:30.007270 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 11:50:30.024819 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:50:30.073578 systemd[1]: Finished ensure-sysext.service.
Jan 29 11:50:30.110179 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 29 11:50:30.125078 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 11:50:30.127696 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:50:30.133987 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:50:30.141928 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:50:30.148048 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:50:30.160089 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:50:30.162475 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:50:30.178817 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 11:50:30.190020 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:50:30.200983 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:50:30.203083 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 11:50:30.210978 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 11:50:30.218987 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:50:30.249986 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:50:30.253841 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:50:30.275573 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 29 11:50:30.288865 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 11:50:30.307443 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 11:50:30.333925 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 11:50:30.348303 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 11:50:30.360483 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 11:50:30.374936 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:50:30.375265 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:50:30.385076 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:50:30.386272 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:50:30.388188 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:50:30.395772 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:50:30.398042 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:50:30.400959 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:50:30.427829 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 11:50:30.435459 augenrules[1954]: No rules
Jan 29 11:50:30.443709 lvm[1944]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:50:30.440991 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 11:50:30.446759 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 29 11:50:30.467202 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 11:50:30.513454 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 11:50:30.522093 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 11:50:30.524935 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:50:30.541007 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 11:50:30.551938 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 11:50:30.554715 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:50:30.571178 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 11:50:30.580295 lvm[1966]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:50:30.646251 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:50:30.659796 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 11:50:30.722463 systemd-resolved[1932]: Positive Trust Anchors:
Jan 29 11:50:30.722505 systemd-resolved[1932]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:50:30.722568 systemd-resolved[1932]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:50:30.730828 systemd-resolved[1932]: Defaulting to hostname 'linux'.
Jan 29 11:50:30.733948 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:50:30.736367 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:50:30.738882 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:50:30.738943 systemd-networkd[1931]: lo: Link UP
Jan 29 11:50:30.738953 systemd-networkd[1931]: lo: Gained carrier
Jan 29 11:50:30.741935 systemd-networkd[1931]: Enumeration completed
Jan 29 11:50:30.742814 systemd-networkd[1931]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:50:30.742822 systemd-networkd[1931]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:50:30.745923 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 11:50:30.747254 systemd-networkd[1931]: eth0: Link UP
Jan 29 11:50:30.747761 systemd-networkd[1931]: eth0: Gained carrier
Jan 29 11:50:30.747799 systemd-networkd[1931]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:50:30.760547 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 11:50:30.763215 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 11:50:30.765607 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 11:50:30.768215 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 11:50:30.770578 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 11:50:30.770661 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:50:30.772412 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:50:30.775601 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 11:50:30.780547 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 11:50:30.787894 systemd-networkd[1931]: eth0: DHCPv4 address 172.31.25.252/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 29 11:50:30.791740 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 11:50:30.794928 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:50:30.797546 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 11:50:30.799994 systemd[1]: Reached target network.target - Network.
Jan 29 11:50:30.801805 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:50:30.803597 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:50:30.805405 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:50:30.805460 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:50:30.817074 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 11:50:30.828819 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 11:50:30.834055 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 11:50:30.840951 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 11:50:30.852995 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
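The repeated "based on potentially unpredictable interface name" notes above are systemd-networkd warning that the catch-all zz-default.network matched eth0 by its kernel-assigned name (harmless here, since net.ifnames=0 on the kernel command line keeps that name stable). A hedged sketch of pinning the match explicitly with a drop-in unit; the MAC address and file name are illustrative, not values taken from this log:

    from pathlib import Path

    # Hypothetical pinned unit: match by MAC instead of the interface name.
    unit = "\n".join([
        "[Match]",
        "MACAddress=06:27:23:b0:f7:8f",  # illustrative value
        "",
        "[Network]",
        "DHCP=yes",
        "",
    ])
    Path("/etc/systemd/network/10-eth0.network").write_text(unit)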
Jan 29 11:50:30.855993 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 11:50:30.866044 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 11:50:30.872891 systemd[1]: Started ntpd.service - Network Time Service.
Jan 29 11:50:30.890941 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 11:50:30.902200 jq[1984]: false
Jan 29 11:50:30.907864 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 29 11:50:30.924391 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 11:50:30.930944 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 11:50:30.942957 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 11:50:30.950002 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 11:50:30.953500 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 11:50:30.955466 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 11:50:30.977131 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 11:50:30.982596 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 11:50:30.995002 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 11:50:31.009569 dbus-daemon[1983]: [system] SELinux support is enabled
Jan 29 11:50:31.014852 extend-filesystems[1985]: Found loop4
Jan 29 11:50:31.014852 extend-filesystems[1985]: Found loop5
Jan 29 11:50:31.014852 extend-filesystems[1985]: Found loop6
Jan 29 11:50:31.014852 extend-filesystems[1985]: Found loop7
Jan 29 11:50:31.014852 extend-filesystems[1985]: Found nvme0n1
Jan 29 11:50:31.014852 extend-filesystems[1985]: Found nvme0n1p1
Jan 29 11:50:31.014852 extend-filesystems[1985]: Found nvme0n1p2
Jan 29 11:50:31.014852 extend-filesystems[1985]: Found nvme0n1p3
Jan 29 11:50:31.014852 extend-filesystems[1985]: Found usr
Jan 29 11:50:31.014852 extend-filesystems[1985]: Found nvme0n1p4
Jan 29 11:50:31.014852 extend-filesystems[1985]: Found nvme0n1p6
Jan 29 11:50:31.014852 extend-filesystems[1985]: Found nvme0n1p7
Jan 29 11:50:31.014852 extend-filesystems[1985]: Found nvme0n1p9
Jan 29 11:50:31.014852 extend-filesystems[1985]: Checking size of /dev/nvme0n1p9
Jan 29 11:50:30.997755 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 11:50:31.021375 dbus-daemon[1983]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1931 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 29 11:50:31.086402 extend-filesystems[1985]: Resized partition /dev/nvme0n1p9
Jan 29 11:50:31.093970 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 11:50:31.104414 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 11:50:31.104791 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 11:50:31.111662 extend-filesystems[2016]: resize2fs 1.47.1 (20-May-2024)
Jan 29 11:50:31.126679 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jan 29 11:50:31.127845 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:57 UTC 2025 (1): Starting
Jan 29 11:50:31.127908 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 29 11:50:31.128410 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:57 UTC 2025 (1): Starting
Jan 29 11:50:31.128410 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 29 11:50:31.128410 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: ----------------------------------------------------
Jan 29 11:50:31.128410 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: ntp-4 is maintained by Network Time Foundation,
Jan 29 11:50:31.128410 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 29 11:50:31.128410 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: corporation. Support and training for ntp-4 are
Jan 29 11:50:31.128410 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: available at https://www.nwtime.org/support
Jan 29 11:50:31.128410 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: ----------------------------------------------------
Jan 29 11:50:31.127930 ntpd[1987]: ----------------------------------------------------
Jan 29 11:50:31.130529 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 11:50:31.127950 ntpd[1987]: ntp-4 is maintained by Network Time Foundation,
Jan 29 11:50:31.127970 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 29 11:50:31.127988 ntpd[1987]: corporation. Support and training for ntp-4 are
Jan 29 11:50:31.128007 ntpd[1987]: available at https://www.nwtime.org/support
Jan 29 11:50:31.128026 ntpd[1987]: ----------------------------------------------------
Jan 29 11:50:31.137986 jq[1999]: true
Jan 29 11:50:31.132112 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 11:50:31.143483 ntpd[1987]: proto: precision = 0.108 usec (-23)
Jan 29 11:50:31.143940 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: proto: precision = 0.108 usec (-23)
Jan 29 11:50:31.144042 ntpd[1987]: basedate set to 2025-01-17
Jan 29 11:50:31.144822 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: basedate set to 2025-01-17
Jan 29 11:50:31.144822 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: gps base set to 2025-01-19 (week 2350)
Jan 29 11:50:31.144087 ntpd[1987]: gps base set to 2025-01-19 (week 2350)
Jan 29 11:50:31.167358 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123
Jan 29 11:50:31.169032 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123
Jan 29 11:50:31.169032 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 29 11:50:31.167475 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 29 11:50:31.173216 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 11:50:31.192559 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123
Jan 29 11:50:31.192559 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: Listen normally on 3 eth0 172.31.25.252:123
Jan 29 11:50:31.192559 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: Listen normally on 4 lo [::1]:123
Jan 29 11:50:31.192559 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: bind(21) AF_INET6 fe80::427:23ff:feb0:f78f%2#123 flags 0x11 failed: Cannot assign requested address
Jan 29 11:50:31.192559 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: unable to create socket on eth0 (5) for fe80::427:23ff:feb0:f78f%2#123
Jan 29 11:50:31.192559 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: failed to init interface for address fe80::427:23ff:feb0:f78f%2
Jan 29 11:50:31.192559 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: Listening on routing socket on fd #21 for interface updates
Jan 29 11:50:31.191985 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123
Jan 29 11:50:31.176014 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 11:50:31.192075 ntpd[1987]: Listen normally on 3 eth0 172.31.25.252:123
Jan 29 11:50:31.176067 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 11:50:31.192145 ntpd[1987]: Listen normally on 4 lo [::1]:123
Jan 29 11:50:31.179884 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 11:50:31.192231 ntpd[1987]: bind(21) AF_INET6 fe80::427:23ff:feb0:f78f%2#123 flags 0x11 failed: Cannot assign requested address
Jan 29 11:50:31.179930 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 11:50:31.192283 ntpd[1987]: unable to create socket on eth0 (5) for fe80::427:23ff:feb0:f78f%2#123
Jan 29 11:50:31.192320 ntpd[1987]: failed to init interface for address fe80::427:23ff:feb0:f78f%2
Jan 29 11:50:31.192394 ntpd[1987]: Listening on routing socket on fd #21 for interface updates
Jan 29 11:50:31.199161 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 29 11:50:31.210221 update_engine[1997]: I20250129 11:50:31.203457 1997 main.cc:92] Flatcar Update Engine starting
Jan 29 11:50:31.224882 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 29 11:50:31.234731 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 29 11:50:31.235012 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 29 11:50:31.235012 ntpd[1987]: 29 Jan 11:50:31 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 29 11:50:31.234807 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 29 11:50:31.239849 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jan 29 11:50:31.242675 update_engine[1997]: I20250129 11:50:31.240886 1997 update_check_scheduler.cc:74] Next update check in 8m49s
Jan 29 11:50:31.250887 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 11:50:31.258788 systemd[1]: Started locksmithd.service - Cluster reboot manager.
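The resize that extend-filesystems kicked off above is an online ext4 grow: the root partition (nvme0n1p9) is enlarged first, then the mounted filesystem is resized from 553472 to 1489915 4 KiB blocks. A small sketch of the arithmetic plus the same grow step; resize2fs needs root, and with no explicit size argument it expands the filesystem to fill the partition:

    import subprocess

    # Block counts reported by resize2fs/the kernel above (4 KiB blocks).
    OLD_BLOCKS, NEW_BLOCKS, BLOCK_SIZE = 553_472, 1_489_915, 4096
    print(f"{OLD_BLOCKS * BLOCK_SIZE / 2**30:.2f} GiB -> "
          f"{NEW_BLOCKS * BLOCK_SIZE / 2**30:.2f} GiB")  # ~2.11 GiB -> ~5.68 GiB

    # Online grow of the mounted root filesystem, as extend-filesystems does.
    subprocess.run(["resize2fs", "/dev/nvme0n1p9"], check=True)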
Jan 29 11:50:31.263683 tar[2002]: linux-arm64/LICENSE
Jan 29 11:50:31.263683 tar[2002]: linux-arm64/helm
Jan 29 11:50:31.264449 (ntainerd)[2021]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 11:50:31.276976 extend-filesystems[2016]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jan 29 11:50:31.276976 extend-filesystems[2016]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 29 11:50:31.276976 extend-filesystems[2016]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jan 29 11:50:31.269393 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 11:50:31.285354 extend-filesystems[1985]: Resized filesystem in /dev/nvme0n1p9
Jan 29 11:50:31.273937 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 11:50:31.305855 jq[2023]: true
Jan 29 11:50:31.343758 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 29 11:50:31.493099 coreos-metadata[1982]: Jan 29 11:50:31.492 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 29 11:50:31.493099 coreos-metadata[1982]: Jan 29 11:50:31.492 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 29 11:50:31.493099 coreos-metadata[1982]: Jan 29 11:50:31.492 INFO Fetch successful
Jan 29 11:50:31.493099 coreos-metadata[1982]: Jan 29 11:50:31.492 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 29 11:50:31.497413 coreos-metadata[1982]: Jan 29 11:50:31.494 INFO Fetch successful
Jan 29 11:50:31.497413 coreos-metadata[1982]: Jan 29 11:50:31.494 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 29 11:50:31.499662 coreos-metadata[1982]: Jan 29 11:50:31.498 INFO Fetch successful
Jan 29 11:50:31.499662 coreos-metadata[1982]: Jan 29 11:50:31.498 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 29 11:50:31.503709 coreos-metadata[1982]: Jan 29 11:50:31.503 INFO Fetch successful
Jan 29 11:50:31.503709 coreos-metadata[1982]: Jan 29 11:50:31.503 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 29 11:50:31.503952 coreos-metadata[1982]: Jan 29 11:50:31.503 INFO Fetch failed with 404: resource not found
Jan 29 11:50:31.503952 coreos-metadata[1982]: Jan 29 11:50:31.503 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 29 11:50:31.504615 coreos-metadata[1982]: Jan 29 11:50:31.504 INFO Fetch successful
Jan 29 11:50:31.504615 coreos-metadata[1982]: Jan 29 11:50:31.504 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 29 11:50:31.511545 coreos-metadata[1982]: Jan 29 11:50:31.511 INFO Fetch successful
Jan 29 11:50:31.511545 coreos-metadata[1982]: Jan 29 11:50:31.511 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 29 11:50:31.513140 coreos-metadata[1982]: Jan 29 11:50:31.512 INFO Fetch successful
Jan 29 11:50:31.513140 coreos-metadata[1982]: Jan 29 11:50:31.512 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jan 29 11:50:31.513140 coreos-metadata[1982]: Jan 29 11:50:31.512 INFO Fetch successful
Jan 29 11:50:31.513140 coreos-metadata[1982]: Jan 29 11:50:31.513 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jan 29 11:50:31.522578 coreos-metadata[1982]: Jan 29 11:50:31.516 INFO Fetch successful
Jan 29 11:50:31.534661 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1778)
Jan 29 11:50:31.565658 bash[2069]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 11:50:31.571771 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 11:50:31.584049 systemd[1]: Starting sshkeys.service...
Jan 29 11:50:31.629722 systemd-logind[1995]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 29 11:50:31.629766 systemd-logind[1995]: Watching system buttons on /dev/input/event1 (Sleep Button)
Jan 29 11:50:31.635096 systemd-logind[1995]: New seat seat0.
Jan 29 11:50:31.710996 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 11:50:31.775730 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 29 11:50:31.778672 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 11:50:31.797134 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 29 11:50:31.813339 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 29 11:50:31.894223 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 29 11:50:31.894526 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 29 11:50:31.901898 dbus-daemon[1983]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2033 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 29 11:50:31.913969 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 29 11:50:31.979156 containerd[2021]: time="2025-01-29T11:50:31.974913780Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 29 11:50:31.977519 locksmithd[2036]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 11:50:32.069216 polkitd[2129]: Started polkitd version 121
Jan 29 11:50:32.096402 polkitd[2129]: Loading rules from directory /etc/polkit-1/rules.d
Jan 29 11:50:32.096544 polkitd[2129]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 29 11:50:32.104693 polkitd[2129]: Finished loading, compiling and executing 2 rules
Jan 29 11:50:32.107469 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 29 11:50:32.107843 systemd[1]: Started polkit.service - Authorization Manager.
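Both ignition and coreos-metadata above talk to the EC2 instance metadata service with IMDSv2 semantics: first a PUT to /latest/api/token (the "PUT result: OK" lines), then GETs carrying the returned token in a header. A minimal sketch of the same exchange; the TTL value and the metadata path queried here are just examples:

    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: request a session token (mirrors the PUT attempts in the log).
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req, timeout=2).read().decode()

    # Step 2: fetch metadata with the token; coreos-metadata uses the
    # 2021-01-03 API version for the same paths.
    md_req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/local-ipv4",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(md_req, timeout=2).read().decode())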
Jan 29 11:50:32.109978 polkitd[2129]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 11:50:32.129257 ntpd[1987]: bind(24) AF_INET6 fe80::427:23ff:feb0:f78f%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 11:50:32.132542 ntpd[1987]: 29 Jan 11:50:32 ntpd[1987]: bind(24) AF_INET6 fe80::427:23ff:feb0:f78f%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 11:50:32.132542 ntpd[1987]: 29 Jan 11:50:32 ntpd[1987]: unable to create socket on eth0 (6) for fe80::427:23ff:feb0:f78f%2#123 Jan 29 11:50:32.132542 ntpd[1987]: 29 Jan 11:50:32 ntpd[1987]: failed to init interface for address fe80::427:23ff:feb0:f78f%2 Jan 29 11:50:32.129330 ntpd[1987]: unable to create socket on eth0 (6) for fe80::427:23ff:feb0:f78f%2#123 Jan 29 11:50:32.129360 ntpd[1987]: failed to init interface for address fe80::427:23ff:feb0:f78f%2 Jan 29 11:50:32.204812 systemd-hostnamed[2033]: Hostname set to (transient) Jan 29 11:50:32.205004 systemd-resolved[1932]: System hostname changed to 'ip-172-31-25-252'. Jan 29 11:50:32.210215 coreos-metadata[2121]: Jan 29 11:50:32.209 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 11:50:32.212468 coreos-metadata[2121]: Jan 29 11:50:32.211 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 29 11:50:32.214996 coreos-metadata[2121]: Jan 29 11:50:32.213 INFO Fetch successful Jan 29 11:50:32.214996 coreos-metadata[2121]: Jan 29 11:50:32.214 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 11:50:32.216534 coreos-metadata[2121]: Jan 29 11:50:32.216 INFO Fetch successful Jan 29 11:50:32.219573 unknown[2121]: wrote ssh authorized keys file for user: core Jan 29 11:50:32.224295 containerd[2021]: time="2025-01-29T11:50:32.223096678Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:50:32.239919 containerd[2021]: time="2025-01-29T11:50:32.238728874Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:50:32.242774 containerd[2021]: time="2025-01-29T11:50:32.240153490Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:50:32.242774 containerd[2021]: time="2025-01-29T11:50:32.240220126Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:50:32.242774 containerd[2021]: time="2025-01-29T11:50:32.241299358Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:50:32.242774 containerd[2021]: time="2025-01-29T11:50:32.241370074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:50:32.242774 containerd[2021]: time="2025-01-29T11:50:32.241537246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:50:32.242774 containerd[2021]: time="2025-01-29T11:50:32.241573582Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:50:32.242774 containerd[2021]: time="2025-01-29T11:50:32.242006854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:50:32.242774 containerd[2021]: time="2025-01-29T11:50:32.242050882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:50:32.242774 containerd[2021]: time="2025-01-29T11:50:32.242082994Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:50:32.242774 containerd[2021]: time="2025-01-29T11:50:32.242110006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:50:32.242774 containerd[2021]: time="2025-01-29T11:50:32.242333098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:50:32.246930 containerd[2021]: time="2025-01-29T11:50:32.246863950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:50:32.254549 containerd[2021]: time="2025-01-29T11:50:32.252745222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:50:32.254549 containerd[2021]: time="2025-01-29T11:50:32.254402938Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:50:32.257826 containerd[2021]: time="2025-01-29T11:50:32.257085478Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:50:32.258320 containerd[2021]: time="2025-01-29T11:50:32.258237982Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:50:32.270534 containerd[2021]: time="2025-01-29T11:50:32.270466750Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:50:32.270858 containerd[2021]: time="2025-01-29T11:50:32.270799354Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:50:32.274664 containerd[2021]: time="2025-01-29T11:50:32.272721790Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:50:32.274664 containerd[2021]: time="2025-01-29T11:50:32.272785726Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:50:32.274664 containerd[2021]: time="2025-01-29T11:50:32.272823610Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:50:32.274664 containerd[2021]: time="2025-01-29T11:50:32.273131314Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:50:32.276249 containerd[2021]: time="2025-01-29T11:50:32.276158986Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 29 11:50:32.278711 containerd[2021]: time="2025-01-29T11:50:32.277443562Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:50:32.278711 containerd[2021]: time="2025-01-29T11:50:32.277529458Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:50:32.278711 containerd[2021]: time="2025-01-29T11:50:32.277575310Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:50:32.278711 containerd[2021]: time="2025-01-29T11:50:32.277620430Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:50:32.286610 containerd[2021]: time="2025-01-29T11:50:32.282755998Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:50:32.286610 containerd[2021]: time="2025-01-29T11:50:32.282858790Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:50:32.286610 containerd[2021]: time="2025-01-29T11:50:32.282942706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:50:32.286610 containerd[2021]: time="2025-01-29T11:50:32.282994450Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:50:32.286610 containerd[2021]: time="2025-01-29T11:50:32.283041646Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:50:32.286610 containerd[2021]: time="2025-01-29T11:50:32.283088698Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:50:32.286610 containerd[2021]: time="2025-01-29T11:50:32.283121770Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:50:32.286610 containerd[2021]: time="2025-01-29T11:50:32.283194934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:50:32.286610 containerd[2021]: time="2025-01-29T11:50:32.283235554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:50:32.286610 containerd[2021]: time="2025-01-29T11:50:32.283276942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:50:32.286610 containerd[2021]: time="2025-01-29T11:50:32.283321630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:50:32.286610 containerd[2021]: time="2025-01-29T11:50:32.283363690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:50:32.286610 containerd[2021]: time="2025-01-29T11:50:32.283410514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:50:32.286610 containerd[2021]: time="2025-01-29T11:50:32.283455538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:50:32.287339 containerd[2021]: time="2025-01-29T11:50:32.283491370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 29 11:50:32.287339 containerd[2021]: time="2025-01-29T11:50:32.283534894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:50:32.287339 containerd[2021]: time="2025-01-29T11:50:32.283585906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:50:32.287339 containerd[2021]: time="2025-01-29T11:50:32.283673722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:50:32.287339 containerd[2021]: time="2025-01-29T11:50:32.283724614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:50:32.287339 containerd[2021]: time="2025-01-29T11:50:32.283787902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:50:32.287339 containerd[2021]: time="2025-01-29T11:50:32.283855330Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:50:32.287339 containerd[2021]: time="2025-01-29T11:50:32.283922482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:50:32.287339 containerd[2021]: time="2025-01-29T11:50:32.283963846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:50:32.287339 containerd[2021]: time="2025-01-29T11:50:32.284001346Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:50:32.287339 containerd[2021]: time="2025-01-29T11:50:32.284145862Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:50:32.287339 containerd[2021]: time="2025-01-29T11:50:32.284198194Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:50:32.287339 containerd[2021]: time="2025-01-29T11:50:32.284238862Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:50:32.289117 containerd[2021]: time="2025-01-29T11:50:32.284283442Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:50:32.289117 containerd[2021]: time="2025-01-29T11:50:32.284319310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:50:32.289117 containerd[2021]: time="2025-01-29T11:50:32.284355634Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:50:32.289117 containerd[2021]: time="2025-01-29T11:50:32.284392198Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:50:32.289117 containerd[2021]: time="2025-01-29T11:50:32.284428366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 11:50:32.289343 containerd[2021]: time="2025-01-29T11:50:32.288256474Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:50:32.289343 containerd[2021]: time="2025-01-29T11:50:32.288485890Z" level=info msg="Connect containerd service" Jan 29 11:50:32.289343 containerd[2021]: time="2025-01-29T11:50:32.288663742Z" level=info msg="using legacy CRI server" Jan 29 11:50:32.289343 containerd[2021]: time="2025-01-29T11:50:32.288702790Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:50:32.289343 containerd[2021]: time="2025-01-29T11:50:32.288939298Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:50:32.295153 containerd[2021]: time="2025-01-29T11:50:32.294131998Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:50:32.295153 
containerd[2021]: time="2025-01-29T11:50:32.294815194Z" level=info msg="Start subscribing containerd event" Jan 29 11:50:32.295153 containerd[2021]: time="2025-01-29T11:50:32.294920158Z" level=info msg="Start recovering state" Jan 29 11:50:32.297190 containerd[2021]: time="2025-01-29T11:50:32.295943242Z" level=info msg="Start event monitor" Jan 29 11:50:32.297190 containerd[2021]: time="2025-01-29T11:50:32.295992790Z" level=info msg="Start snapshots syncer" Jan 29 11:50:32.297190 containerd[2021]: time="2025-01-29T11:50:32.296017222Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:50:32.297190 containerd[2021]: time="2025-01-29T11:50:32.296036566Z" level=info msg="Start streaming server" Jan 29 11:50:32.297190 containerd[2021]: time="2025-01-29T11:50:32.294925042Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:50:32.297190 containerd[2021]: time="2025-01-29T11:50:32.296376442Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:50:32.301431 containerd[2021]: time="2025-01-29T11:50:32.300792130Z" level=info msg="containerd successfully booted in 0.339776s" Jan 29 11:50:32.306459 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:50:32.319139 update-ssh-keys[2178]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:50:32.321715 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 11:50:32.332145 systemd[1]: Finished sshkeys.service. Jan 29 11:50:32.435607 sshd_keygen[2018]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:50:32.479871 systemd-networkd[1931]: eth0: Gained IPv6LL Jan 29 11:50:32.485544 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:50:32.490400 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:50:32.497815 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:50:32.509147 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 29 11:50:32.519179 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:50:32.527053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:50:32.539124 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:50:32.544223 systemd[1]: Started sshd@0-172.31.25.252:22-139.178.89.65:38804.service - OpenSSH per-connection server daemon (139.178.89.65:38804). Jan 29 11:50:32.601181 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:50:32.601586 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:50:32.616791 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:50:32.692186 amazon-ssm-agent[2197]: Initializing new seelog logger Jan 29 11:50:32.692186 amazon-ssm-agent[2197]: New Seelog Logger Creation Complete Jan 29 11:50:32.692186 amazon-ssm-agent[2197]: 2025/01/29 11:50:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:50:32.692186 amazon-ssm-agent[2197]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:50:32.694562 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:50:32.699856 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
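[Editor's note] The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is normal at this stage: the CRI plugin's config dump shows NetworkPluginConfDir:/etc/cni/net.d, and it defers pod networking until a conflist appears there, which a CNI daemonset or kubeadm would normally install later. Purely as an illustration of the file format it is waiting for (the bridge/host-local values below are hypothetical examples, not anything this host installs):

    # Illustrative only: writes a minimal CNI conflist of the kind containerd's
    # CRI plugin looks for in /etc/cni/net.d. Plugin values are example choices.
    import json, pathlib

    conflist = {
        "cniVersion": "0.4.0",
        "name": "examplenet",                  # hypothetical network name
        "plugins": [
            {
                "type": "bridge",               # standard CNI bridge plugin
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "subnet": "10.88.0.0/16",   # example pod subnet
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    path = pathlib.Path("/etc/cni/net.d/10-examplenet.conflist")
    path.write_text(json.dumps(conflist, indent=2))
    print("wrote", path)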
Jan 29 11:50:32.705360 amazon-ssm-agent[2197]: 2025/01/29 11:50:32 processing appconfig overrides Jan 29 11:50:32.712212 amazon-ssm-agent[2197]: 2025/01/29 11:50:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:50:32.712212 amazon-ssm-agent[2197]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:50:32.714669 amazon-ssm-agent[2197]: 2025/01/29 11:50:32 processing appconfig overrides Jan 29 11:50:32.714669 amazon-ssm-agent[2197]: 2025-01-29 11:50:32 INFO Proxy environment variables: Jan 29 11:50:32.714669 amazon-ssm-agent[2197]: 2025/01/29 11:50:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:50:32.714669 amazon-ssm-agent[2197]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:50:32.714669 amazon-ssm-agent[2197]: 2025/01/29 11:50:32 processing appconfig overrides Jan 29 11:50:32.720947 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:50:32.729674 amazon-ssm-agent[2197]: 2025/01/29 11:50:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:50:32.729674 amazon-ssm-agent[2197]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:50:32.729674 amazon-ssm-agent[2197]: 2025/01/29 11:50:32 processing appconfig overrides Jan 29 11:50:32.732241 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:50:32.737237 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:50:32.813800 amazon-ssm-agent[2197]: 2025-01-29 11:50:32 INFO http_proxy: Jan 29 11:50:32.911438 sshd[2201]: Accepted publickey for core from 139.178.89.65 port 38804 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:50:32.913969 amazon-ssm-agent[2197]: 2025-01-29 11:50:32 INFO no_proxy: Jan 29 11:50:32.918859 sshd[2201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:50:32.950028 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:50:32.959162 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:50:32.970502 systemd-logind[1995]: New session 1 of user core. Jan 29 11:50:33.011711 amazon-ssm-agent[2197]: 2025-01-29 11:50:32 INFO https_proxy: Jan 29 11:50:33.012745 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:50:33.029867 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:50:33.054896 (systemd)[2226]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:50:33.110067 amazon-ssm-agent[2197]: 2025-01-29 11:50:32 INFO Checking if agent identity type OnPrem can be assumed Jan 29 11:50:33.210786 amazon-ssm-agent[2197]: 2025-01-29 11:50:32 INFO Checking if agent identity type EC2 can be assumed Jan 29 11:50:33.312662 amazon-ssm-agent[2197]: 2025-01-29 11:50:32 INFO Agent will take identity from EC2 Jan 29 11:50:33.392866 systemd[2226]: Queued start job for default target default.target. Jan 29 11:50:33.399302 systemd[2226]: Created slice app.slice - User Application Slice. Jan 29 11:50:33.399359 systemd[2226]: Reached target paths.target - Paths. Jan 29 11:50:33.399392 systemd[2226]: Reached target timers.target - Timers. Jan 29 11:50:33.414733 amazon-ssm-agent[2197]: 2025-01-29 11:50:32 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 11:50:33.412957 systemd[2226]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 29 11:50:33.446986 systemd[2226]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:50:33.447227 systemd[2226]: Reached target sockets.target - Sockets. Jan 29 11:50:33.447260 systemd[2226]: Reached target basic.target - Basic System. Jan 29 11:50:33.447347 systemd[2226]: Reached target default.target - Main User Target. Jan 29 11:50:33.447412 systemd[2226]: Startup finished in 377ms. Jan 29 11:50:33.448478 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:50:33.458962 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:50:33.505796 tar[2002]: linux-arm64/README.md Jan 29 11:50:33.510597 amazon-ssm-agent[2197]: 2025-01-29 11:50:32 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 11:50:33.542726 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:50:33.610718 amazon-ssm-agent[2197]: 2025-01-29 11:50:32 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 11:50:33.637156 systemd[1]: Started sshd@1-172.31.25.252:22-139.178.89.65:37850.service - OpenSSH per-connection server daemon (139.178.89.65:37850). Jan 29 11:50:33.708942 amazon-ssm-agent[2197]: 2025-01-29 11:50:32 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 29 11:50:33.809945 amazon-ssm-agent[2197]: 2025-01-29 11:50:32 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 29 11:50:33.876666 sshd[2242]: Accepted publickey for core from 139.178.89.65 port 37850 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:50:33.879606 sshd[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:50:33.895072 systemd-logind[1995]: New session 2 of user core. Jan 29 11:50:33.899948 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:50:33.911147 amazon-ssm-agent[2197]: 2025-01-29 11:50:32 INFO [amazon-ssm-agent] Starting Core Agent Jan 29 11:50:34.014900 amazon-ssm-agent[2197]: 2025-01-29 11:50:32 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 29 11:50:34.045373 sshd[2242]: pam_unix(sshd:session): session closed for user core Jan 29 11:50:34.051939 systemd[1]: sshd@1-172.31.25.252:22-139.178.89.65:37850.service: Deactivated successfully. Jan 29 11:50:34.055247 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:50:34.059588 systemd-logind[1995]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:50:34.062160 systemd-logind[1995]: Removed session 2. Jan 29 11:50:34.087142 systemd[1]: Started sshd@2-172.31.25.252:22-139.178.89.65:37856.service - OpenSSH per-connection server daemon (139.178.89.65:37856). Jan 29 11:50:34.093027 amazon-ssm-agent[2197]: 2025-01-29 11:50:32 INFO [Registrar] Starting registrar module Jan 29 11:50:34.093027 amazon-ssm-agent[2197]: 2025-01-29 11:50:32 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 29 11:50:34.093214 amazon-ssm-agent[2197]: 2025-01-29 11:50:34 INFO [EC2Identity] EC2 registration was successful. 
Jan 29 11:50:34.093214 amazon-ssm-agent[2197]: 2025-01-29 11:50:34 INFO [CredentialRefresher] credentialRefresher has started Jan 29 11:50:34.093214 amazon-ssm-agent[2197]: 2025-01-29 11:50:34 INFO [CredentialRefresher] Starting credentials refresher loop Jan 29 11:50:34.093214 amazon-ssm-agent[2197]: 2025-01-29 11:50:34 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 29 11:50:34.115212 amazon-ssm-agent[2197]: 2025-01-29 11:50:34 INFO [CredentialRefresher] Next credential rotation will be in 31.5999861572 minutes Jan 29 11:50:34.268008 sshd[2249]: Accepted publickey for core from 139.178.89.65 port 37856 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:50:34.270843 sshd[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:50:34.282411 systemd-logind[1995]: New session 3 of user core. Jan 29 11:50:34.287625 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:50:34.420006 sshd[2249]: pam_unix(sshd:session): session closed for user core Jan 29 11:50:34.426664 systemd[1]: sshd@2-172.31.25.252:22-139.178.89.65:37856.service: Deactivated successfully. Jan 29 11:50:34.430456 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:50:34.434234 systemd-logind[1995]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:50:34.440601 systemd-logind[1995]: Removed session 3. Jan 29 11:50:34.475573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:50:34.478567 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:50:34.481801 systemd[1]: Startup finished in 1.203s (kernel) + 8.555s (initrd) + 9.169s (userspace) = 18.929s. Jan 29 11:50:34.499848 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:50:35.124829 amazon-ssm-agent[2197]: 2025-01-29 11:50:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 29 11:50:35.128570 ntpd[1987]: Listen normally on 7 eth0 [fe80::427:23ff:feb0:f78f%2]:123 Jan 29 11:50:35.131521 ntpd[1987]: 29 Jan 11:50:35 ntpd[1987]: Listen normally on 7 eth0 [fe80::427:23ff:feb0:f78f%2]:123 Jan 29 11:50:35.226578 amazon-ssm-agent[2197]: 2025-01-29 11:50:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2272) started Jan 29 11:50:35.328718 amazon-ssm-agent[2197]: 2025-01-29 11:50:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 29 11:50:35.428052 kubelet[2260]: E0129 11:50:35.427899 2260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:50:35.432605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:50:35.433161 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:50:35.433918 systemd[1]: kubelet.service: Consumed 1.309s CPU time. Jan 29 11:50:37.985182 systemd-resolved[1932]: Clock change detected. Flushing caches. 
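[Editor's note] The ntpd bind(24) failure at 11:50:32.129 and the "Listen normally on 7 eth0" entry at 11:50:35.128 above bracket a common timing pattern: ntpd tried to bind the fe80:: link-local address fractionally before systemd-networkd logged "eth0: Gained IPv6LL" (11:50:32.479), and succeeded on its next interface rescan. Binding link-local also requires the interface scope id. A small stdlib sketch of scoped binding, with placeholder address and interface rather than this host's values:

    # Binding an IPv6 link-local address needs the interface scope id, and it
    # fails with "Cannot assign requested address" (as ntpd logged) while the
    # address is still absent or tentative. Address/interface are placeholders.
    import socket

    host = "fe80::1%eth0"   # '%eth0' carries the scope id

    # getaddrinfo turns the zone suffix into the sockaddr's scope_id field
    family, socktype, proto, _, sockaddr = socket.getaddrinfo(
        host, 0, socket.AF_INET6, socket.SOCK_DGRAM)[0]

    s = socket.socket(family, socktype, proto)
    try:
        s.bind(sockaddr)    # sockaddr is (host, port, flowinfo, scope_id)
        print("bound to", s.getsockname())
    except OSError as exc:
        print("bind failed:", exc)
    finally:
        s.close()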
Jan 29 11:50:44.318539 systemd[1]: Started sshd@3-172.31.25.252:22-139.178.89.65:52892.service - OpenSSH per-connection server daemon (139.178.89.65:52892). Jan 29 11:50:44.488393 sshd[2284]: Accepted publickey for core from 139.178.89.65 port 52892 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:50:44.491253 sshd[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:50:44.499577 systemd-logind[1995]: New session 4 of user core. Jan 29 11:50:44.512435 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:50:44.641236 sshd[2284]: pam_unix(sshd:session): session closed for user core Jan 29 11:50:44.647859 systemd[1]: sshd@3-172.31.25.252:22-139.178.89.65:52892.service: Deactivated successfully. Jan 29 11:50:44.652437 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:50:44.654009 systemd-logind[1995]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:50:44.655904 systemd-logind[1995]: Removed session 4. Jan 29 11:50:44.680291 systemd[1]: Started sshd@4-172.31.25.252:22-139.178.89.65:52894.service - OpenSSH per-connection server daemon (139.178.89.65:52894). Jan 29 11:50:44.871436 sshd[2291]: Accepted publickey for core from 139.178.89.65 port 52894 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:50:44.874451 sshd[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:50:44.884352 systemd-logind[1995]: New session 5 of user core. Jan 29 11:50:44.887385 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:50:45.008940 sshd[2291]: pam_unix(sshd:session): session closed for user core Jan 29 11:50:45.014409 systemd[1]: sshd@4-172.31.25.252:22-139.178.89.65:52894.service: Deactivated successfully. Jan 29 11:50:45.017354 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:50:45.020471 systemd-logind[1995]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:50:45.022713 systemd-logind[1995]: Removed session 5. Jan 29 11:50:45.043529 systemd[1]: Started sshd@5-172.31.25.252:22-139.178.89.65:52896.service - OpenSSH per-connection server daemon (139.178.89.65:52896). Jan 29 11:50:45.227971 sshd[2298]: Accepted publickey for core from 139.178.89.65 port 52896 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:50:45.230665 sshd[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:50:45.238643 systemd-logind[1995]: New session 6 of user core. Jan 29 11:50:45.250426 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:50:45.355644 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:50:45.374759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:50:45.381389 sshd[2298]: pam_unix(sshd:session): session closed for user core Jan 29 11:50:45.388017 systemd[1]: sshd@5-172.31.25.252:22-139.178.89.65:52896.service: Deactivated successfully. Jan 29 11:50:45.397570 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:50:45.402192 systemd-logind[1995]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:50:45.421671 systemd[1]: Started sshd@6-172.31.25.252:22-139.178.89.65:52910.service - OpenSSH per-connection server daemon (139.178.89.65:52910). Jan 29 11:50:45.424387 systemd-logind[1995]: Removed session 6. 
Jan 29 11:50:45.600188 sshd[2308]: Accepted publickey for core from 139.178.89.65 port 52910 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:50:45.603885 sshd[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:50:45.611819 systemd-logind[1995]: New session 7 of user core. Jan 29 11:50:45.627412 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:50:45.718415 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:50:45.720949 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:50:45.794210 sudo[2317]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:50:45.794911 sudo[2317]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:50:45.813579 sudo[2317]: pam_unix(sudo:session): session closed for user root Jan 29 11:50:45.816981 kubelet[2316]: E0129 11:50:45.816468 2316 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:50:45.824202 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:50:45.824537 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:50:45.838538 sshd[2308]: pam_unix(sshd:session): session closed for user core Jan 29 11:50:45.844472 systemd[1]: sshd@6-172.31.25.252:22-139.178.89.65:52910.service: Deactivated successfully. Jan 29 11:50:45.849465 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:50:45.853115 systemd-logind[1995]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:50:45.855330 systemd-logind[1995]: Removed session 7. Jan 29 11:50:45.876348 systemd[1]: Started sshd@7-172.31.25.252:22-139.178.89.65:52916.service - OpenSSH per-connection server daemon (139.178.89.65:52916). Jan 29 11:50:46.064415 sshd[2328]: Accepted publickey for core from 139.178.89.65 port 52916 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:50:46.067238 sshd[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:50:46.075719 systemd-logind[1995]: New session 8 of user core. Jan 29 11:50:46.084419 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:50:46.191372 sudo[2332]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:50:46.192674 sudo[2332]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:50:46.199532 sudo[2332]: pam_unix(sudo:session): session closed for user root Jan 29 11:50:46.210822 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 11:50:46.212170 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:50:46.235634 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 11:50:46.252012 auditctl[2335]: No rules Jan 29 11:50:46.252860 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:50:46.253297 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. 
Jan 29 11:50:46.264139 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:50:46.316510 augenrules[2353]: No rules Jan 29 11:50:46.318429 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:50:46.321800 sudo[2331]: pam_unix(sudo:session): session closed for user root Jan 29 11:50:46.347027 sshd[2328]: pam_unix(sshd:session): session closed for user core Jan 29 11:50:46.352297 systemd[1]: sshd@7-172.31.25.252:22-139.178.89.65:52916.service: Deactivated successfully. Jan 29 11:50:46.355714 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:50:46.358944 systemd-logind[1995]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:50:46.361842 systemd-logind[1995]: Removed session 8. Jan 29 11:50:46.383624 systemd[1]: Started sshd@8-172.31.25.252:22-139.178.89.65:52926.service - OpenSSH per-connection server daemon (139.178.89.65:52926). Jan 29 11:50:46.553330 sshd[2361]: Accepted publickey for core from 139.178.89.65 port 52926 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:50:46.556313 sshd[2361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:50:46.564448 systemd-logind[1995]: New session 9 of user core. Jan 29 11:50:46.575382 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:50:46.679435 sudo[2364]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:50:46.680139 sudo[2364]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:50:47.246589 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:50:47.250604 (dockerd)[2381]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:50:47.743194 dockerd[2381]: time="2025-01-29T11:50:47.742997956Z" level=info msg="Starting up" Jan 29 11:50:47.902756 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3229707466-merged.mount: Deactivated successfully. Jan 29 11:50:47.935224 dockerd[2381]: time="2025-01-29T11:50:47.935131745Z" level=info msg="Loading containers: start." Jan 29 11:50:48.083152 kernel: Initializing XFRM netlink socket Jan 29 11:50:48.116039 (udev-worker)[2404]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:50:48.213208 systemd-networkd[1931]: docker0: Link UP Jan 29 11:50:48.239617 dockerd[2381]: time="2025-01-29T11:50:48.239555918Z" level=info msg="Loading containers: done." Jan 29 11:50:48.264768 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4103476706-merged.mount: Deactivated successfully. Jan 29 11:50:48.267801 dockerd[2381]: time="2025-01-29T11:50:48.267725559Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:50:48.267955 dockerd[2381]: time="2025-01-29T11:50:48.267899019Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 11:50:48.268238 dockerd[2381]: time="2025-01-29T11:50:48.268194363Z" level=info msg="Daemon has completed initialization" Jan 29 11:50:48.332322 dockerd[2381]: time="2025-01-29T11:50:48.332203371Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:50:48.333906 systemd[1]: Started docker.service - Docker Application Container Engine. 
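[Editor's note] "API listen on /run/docker.sock" above means the daemon is now reachable over plain HTTP on a UNIX socket. A stdlib sketch that pings it; /_ping is a standard Docker Engine API endpoint, and the caller needs permission on the socket:

    # Talks plain HTTP over the UNIX socket the log says Docker listens on.
    # /_ping is a standard Docker Engine API endpoint that returns "OK".
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")   # host is unused; socket is explicit
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/_ping")
    resp = conn.getresponse()
    print(resp.status, resp.read().decode())   # expect: 200 OK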
Jan 29 11:50:49.289596 containerd[2021]: time="2025-01-29T11:50:49.289190308Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 29 11:50:49.919989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3853554597.mount: Deactivated successfully. Jan 29 11:50:51.262019 containerd[2021]: time="2025-01-29T11:50:51.261914393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:50:51.264776 containerd[2021]: time="2025-01-29T11:50:51.264362741Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=26220948" Jan 29 11:50:51.265878 containerd[2021]: time="2025-01-29T11:50:51.265756061Z" level=info msg="ImageCreate event name:\"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:50:51.272438 containerd[2021]: time="2025-01-29T11:50:51.272326529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:50:51.275539 containerd[2021]: time="2025-01-29T11:50:51.275040750Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"26217748\" in 1.985777578s" Jan 29 11:50:51.275539 containerd[2021]: time="2025-01-29T11:50:51.275167842Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\"" Jan 29 11:50:51.276281 containerd[2021]: time="2025-01-29T11:50:51.276232374Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 29 11:50:52.634530 containerd[2021]: time="2025-01-29T11:50:52.634437656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:50:52.636855 containerd[2021]: time="2025-01-29T11:50:52.636787976Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=22527107" Jan 29 11:50:52.637496 containerd[2021]: time="2025-01-29T11:50:52.637285568Z" level=info msg="ImageCreate event name:\"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:50:52.645483 containerd[2021]: time="2025-01-29T11:50:52.645375836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:50:52.648055 containerd[2021]: time="2025-01-29T11:50:52.647843804Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"23968433\" in 1.371375462s" Jan 29 
11:50:52.648055 containerd[2021]: time="2025-01-29T11:50:52.647913272Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\"" Jan 29 11:50:52.649204 containerd[2021]: time="2025-01-29T11:50:52.649136900Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 29 11:50:53.812233 containerd[2021]: time="2025-01-29T11:50:53.812139622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:50:53.814558 containerd[2021]: time="2025-01-29T11:50:53.814464118Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=17481113" Jan 29 11:50:53.814952 containerd[2021]: time="2025-01-29T11:50:53.814854226Z" level=info msg="ImageCreate event name:\"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:50:53.821311 containerd[2021]: time="2025-01-29T11:50:53.821211706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:50:53.823920 containerd[2021]: time="2025-01-29T11:50:53.823848802Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"18922457\" in 1.174639914s" Jan 29 11:50:53.824332 containerd[2021]: time="2025-01-29T11:50:53.824160670Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\"" Jan 29 11:50:53.825484 containerd[2021]: time="2025-01-29T11:50:53.825177994Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 29 11:50:55.121687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount433575334.mount: Deactivated successfully. 
Jan 29 11:50:55.684435 containerd[2021]: time="2025-01-29T11:50:55.684374975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:50:55.687095 containerd[2021]: time="2025-01-29T11:50:55.686270567Z" level=info msg="ImageCreate event name:\"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:50:55.687095 containerd[2021]: time="2025-01-29T11:50:55.686371571Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=27364397" Jan 29 11:50:55.690120 containerd[2021]: time="2025-01-29T11:50:55.690044099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:50:55.691651 containerd[2021]: time="2025-01-29T11:50:55.691593275Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"27363416\" in 1.866352869s" Jan 29 11:50:55.691736 containerd[2021]: time="2025-01-29T11:50:55.691651967Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\"" Jan 29 11:50:55.692347 containerd[2021]: time="2025-01-29T11:50:55.692295191Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 29 11:50:56.074934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:50:56.081915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:50:56.273265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1890792773.mount: Deactivated successfully. Jan 29 11:50:56.485357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:50:56.499705 (kubelet)[2610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:50:56.628643 kubelet[2610]: E0129 11:50:56.625730 2610 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:50:56.629398 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:50:56.629696 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
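[Editor's note] Every kubelet crash in this log is the same error: /var/lib/kubelet/config.yaml does not exist. On a kubeadm-style node that file is generated by kubeadm init/join rather than shipped with the OS image, so the restart loop persists until the node joins a cluster (the image pulls interleaved here are the control-plane images it will need then). For illustration of the file's shape only, a sketch that writes a minimal KubeletConfiguration; the apiVersion and kind are the documented ones, while the field values are example choices, not what kubeadm would generate for this host:

    # Illustrative only: kubeadm normally generates /var/lib/kubelet/config.yaml
    # when the node joins a cluster. This writes a minimal KubeletConfiguration.
    import pathlib

    CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd          # matches SystemdCgroup:true in containerd above
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      anonymous:
        enabled: false
    """

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(CONFIG)
    print("wrote", path)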
Jan 29 11:50:57.775010 containerd[2021]: time="2025-01-29T11:50:57.774915386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:50:57.777746 containerd[2021]: time="2025-01-29T11:50:57.777667838Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 29 11:50:57.780241 containerd[2021]: time="2025-01-29T11:50:57.780148706Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:50:57.786953 containerd[2021]: time="2025-01-29T11:50:57.786869786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:50:57.789570 containerd[2021]: time="2025-01-29T11:50:57.789351086Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.096997071s" Jan 29 11:50:57.789570 containerd[2021]: time="2025-01-29T11:50:57.789409802Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 29 11:50:57.790507 containerd[2021]: time="2025-01-29T11:50:57.790316150Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:50:58.283246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3384032411.mount: Deactivated successfully. 
Jan 29 11:50:58.296117 containerd[2021]: time="2025-01-29T11:50:58.296025216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:50:58.298113 containerd[2021]: time="2025-01-29T11:50:58.298005816Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 29 11:50:58.300857 containerd[2021]: time="2025-01-29T11:50:58.300761616Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:50:58.307632 containerd[2021]: time="2025-01-29T11:50:58.307518168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:50:58.309461 containerd[2021]: time="2025-01-29T11:50:58.309215928Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 518.841494ms" Jan 29 11:50:58.309461 containerd[2021]: time="2025-01-29T11:50:58.309282528Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 29 11:50:58.310287 containerd[2021]: time="2025-01-29T11:50:58.310215576Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 29 11:50:58.910350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount319456240.mount: Deactivated successfully. Jan 29 11:51:01.177547 containerd[2021]: time="2025-01-29T11:51:01.177446463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:51:01.180033 containerd[2021]: time="2025-01-29T11:51:01.179943099Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812429" Jan 29 11:51:01.182327 containerd[2021]: time="2025-01-29T11:51:01.182203011Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:51:01.189202 containerd[2021]: time="2025-01-29T11:51:01.189142815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:51:01.192445 containerd[2021]: time="2025-01-29T11:51:01.192155523Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.881871571s" Jan 29 11:51:01.192445 containerd[2021]: time="2025-01-29T11:51:01.192230319Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 29 11:51:02.096321 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 29 11:51:06.748876 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 11:51:06.757537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:51:07.073494 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:51:07.077402 (kubelet)[2746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:51:07.156137 kubelet[2746]: E0129 11:51:07.154952 2746 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:51:07.161374 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:51:07.161682 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:51:10.662365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:51:10.670614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:51:10.735320 systemd[1]: Reloading requested from client PID 2760 ('systemctl') (unit session-9.scope)... Jan 29 11:51:10.735367 systemd[1]: Reloading... Jan 29 11:51:10.985123 zram_generator::config[2801]: No configuration found. Jan 29 11:51:11.244060 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:51:11.419867 systemd[1]: Reloading finished in 683 ms. Jan 29 11:51:11.517598 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:51:11.517796 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:51:11.518452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:51:11.525821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:51:12.141399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:51:12.142554 (kubelet)[2863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:51:12.229953 kubelet[2863]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:51:12.229953 kubelet[2863]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 11:51:12.229953 kubelet[2863]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:51:12.230616 kubelet[2863]: I0129 11:51:12.230166 2863 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:51:12.897742 kubelet[2863]: I0129 11:51:12.897184 2863 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 11:51:12.897742 kubelet[2863]: I0129 11:51:12.897237 2863 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:51:12.899256 kubelet[2863]: I0129 11:51:12.899216 2863 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 11:51:12.936214 kubelet[2863]: E0129 11:51:12.936153 2863 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.25.252:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.252:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:51:12.946890 kubelet[2863]: I0129 11:51:12.946611 2863 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:51:12.958775 kubelet[2863]: E0129 11:51:12.958703 2863 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:51:12.958775 kubelet[2863]: I0129 11:51:12.958762 2863 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:51:12.963668 kubelet[2863]: I0129 11:51:12.963620 2863 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:51:12.964205 kubelet[2863]: I0129 11:51:12.964151 2863 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:51:12.964487 kubelet[2863]: I0129 11:51:12.964204 2863 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-252","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:51:12.964667 kubelet[2863]: I0129 11:51:12.964521 2863 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:51:12.964667 kubelet[2863]: I0129 11:51:12.964543 2863 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 11:51:12.964779 kubelet[2863]: I0129 11:51:12.964760 2863 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:51:12.970953 kubelet[2863]: I0129 11:51:12.970887 2863 kubelet.go:446] "Attempting to sync node with API server" Jan 29 11:51:12.970953 kubelet[2863]: I0129 11:51:12.970935 2863 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:51:12.971367 kubelet[2863]: I0129 11:51:12.970972 2863 kubelet.go:352] "Adding apiserver pod source" Jan 29 11:51:12.971367 kubelet[2863]: I0129 11:51:12.970996 2863 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:51:12.979475 kubelet[2863]: W0129 11:51:12.979394 2863 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.252:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.25.252:6443: connect: connection refused Jan 29 11:51:12.979638 kubelet[2863]: E0129 11:51:12.979491 2863 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.25.252:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.252:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:51:12.982146 kubelet[2863]: W0129 
11:51:12.980100 2863 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.25.252:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-252&limit=500&resourceVersion=0": dial tcp 172.31.25.252:6443: connect: connection refused Jan 29 11:51:12.982146 kubelet[2863]: E0129 11:51:12.980187 2863 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.25.252:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-252&limit=500&resourceVersion=0\": dial tcp 172.31.25.252:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:51:12.982146 kubelet[2863]: I0129 11:51:12.980501 2863 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:51:12.982146 kubelet[2863]: I0129 11:51:12.981330 2863 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:51:12.982146 kubelet[2863]: W0129 11:51:12.981450 2863 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:51:12.983593 kubelet[2863]: I0129 11:51:12.983538 2863 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 11:51:12.983734 kubelet[2863]: I0129 11:51:12.983618 2863 server.go:1287] "Started kubelet" Jan 29 11:51:12.994398 kubelet[2863]: E0129 11:51:12.994123 2863 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.25.252:6443/api/v1/namespaces/default/events\": dial tcp 172.31.25.252:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-25-252.181f278e39020679 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-252,UID:ip-172-31-25-252,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-252,},FirstTimestamp:2025-01-29 11:51:12.983574137 +0000 UTC m=+0.829148417,LastTimestamp:2025-01-29 11:51:12.983574137 +0000 UTC m=+0.829148417,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-252,}" Jan 29 11:51:12.996388 kubelet[2863]: I0129 11:51:12.996257 2863 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:51:12.997258 kubelet[2863]: I0129 11:51:12.997226 2863 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:51:12.999128 kubelet[2863]: I0129 11:51:12.997487 2863 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:51:12.999128 kubelet[2863]: I0129 11:51:12.998005 2863 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:51:13.000988 kubelet[2863]: I0129 11:51:13.000939 2863 server.go:490] "Adding debug handlers to kubelet server" Jan 29 11:51:13.005918 kubelet[2863]: I0129 11:51:13.005856 2863 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:51:13.008934 kubelet[2863]: I0129 11:51:13.008884 2863 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 11:51:13.009387 kubelet[2863]: E0129 11:51:13.009338 2863 kubelet_node_status.go:467] "Error getting the 
current node from lister" err="node \"ip-172-31-25-252\" not found" Jan 29 11:51:13.013247 kubelet[2863]: I0129 11:51:13.013101 2863 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:51:13.014179 kubelet[2863]: E0129 11:51:13.013773 2863 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-252?timeout=10s\": dial tcp 172.31.25.252:6443: connect: connection refused" interval="200ms" Jan 29 11:51:13.015652 kubelet[2863]: I0129 11:51:13.015615 2863 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:51:13.015859 kubelet[2863]: I0129 11:51:13.015838 2863 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:51:13.017714 kubelet[2863]: E0129 11:51:13.017649 2863 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:51:13.017836 kubelet[2863]: I0129 11:51:13.017762 2863 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:51:13.017911 kubelet[2863]: I0129 11:51:13.017841 2863 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:51:13.038557 kubelet[2863]: I0129 11:51:13.038485 2863 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:51:13.040751 kubelet[2863]: I0129 11:51:13.040682 2863 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:51:13.040751 kubelet[2863]: I0129 11:51:13.040732 2863 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 11:51:13.040960 kubelet[2863]: I0129 11:51:13.040771 2863 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
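
Every connection-refused entry above points at the same address, 172.31.25.252:6443: the kubelet is trying to reach an API server that it is itself about to launch as a static pod, so until that pod comes up every list/watch, lease request, and event post fails at the TCP dial. A minimal Go reachability probe for that endpoint, assuming only the host:port from the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The API server endpoint the kubelet is retrying in the log above.
        const addr = "172.31.25.252:6443"
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            fmt.Printf("%s unreachable: %v\n", addr, err) // e.g. connect: connection refused
            return
        }
        conn.Close()
        fmt.Printf("%s reachable\n", addr)
    }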
Jan 29 11:51:13.040960 kubelet[2863]: I0129 11:51:13.040789 2863 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 11:51:13.040960 kubelet[2863]: E0129 11:51:13.040860 2863 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:51:13.049827 kubelet[2863]: W0129 11:51:13.049599 2863 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.252:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.252:6443: connect: connection refused Jan 29 11:51:13.051312 kubelet[2863]: E0129 11:51:13.051262 2863 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.25.252:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.252:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:51:13.053763 kubelet[2863]: W0129 11:51:13.053691 2863 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.252:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.252:6443: connect: connection refused Jan 29 11:51:13.054343 kubelet[2863]: E0129 11:51:13.053960 2863 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.252:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.252:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:51:13.065307 kubelet[2863]: I0129 11:51:13.065254 2863 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 11:51:13.065307 kubelet[2863]: I0129 11:51:13.065289 2863 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 11:51:13.065497 kubelet[2863]: I0129 11:51:13.065322 2863 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:51:13.070253 kubelet[2863]: I0129 11:51:13.070200 2863 policy_none.go:49] "None policy: Start" Jan 29 11:51:13.070253 kubelet[2863]: I0129 11:51:13.070245 2863 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 11:51:13.070444 kubelet[2863]: I0129 11:51:13.070271 2863 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:51:13.083945 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:51:13.097352 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:51:13.104604 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
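
The nodeConfig dump a few entries back includes the default hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, and so on). A short Go sketch that decodes two of those logged entries; the struct types here are stand-ins chosen to mirror the JSON shape in the log, not the kubelet's actual types:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Stand-in types mirroring the HardEvictionThresholds entries above.
    type thresholdValue struct {
        Quantity   *string `json:"Quantity"` // nil when the threshold is percentage-based
        Percentage float64 `json:"Percentage"`
    }

    type threshold struct {
        Signal   string         `json:"Signal"`
        Operator string         `json:"Operator"`
        Value    thresholdValue `json:"Value"`
    }

    func main() {
        // Two entries copied from the logged nodeConfig.
        raw := `[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
                 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}]`
        var ts []threshold
        if err := json.Unmarshal([]byte(raw), &ts); err != nil {
            panic(err)
        }
        for _, t := range ts {
            if t.Value.Quantity != nil {
                fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
            } else {
                fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
            }
        }
    }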
Jan 29 11:51:13.110368 kubelet[2863]: E0129 11:51:13.110316 2863 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-25-252\" not found" Jan 29 11:51:13.116105 kubelet[2863]: I0129 11:51:13.115123 2863 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:51:13.116105 kubelet[2863]: I0129 11:51:13.115432 2863 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:51:13.116105 kubelet[2863]: I0129 11:51:13.115453 2863 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:51:13.116105 kubelet[2863]: I0129 11:51:13.115787 2863 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:51:13.118736 kubelet[2863]: E0129 11:51:13.118686 2863 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 29 11:51:13.118908 kubelet[2863]: E0129 11:51:13.118754 2863 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-25-252\" not found" Jan 29 11:51:13.159933 systemd[1]: Created slice kubepods-burstable-podfd3490071e4fdf80f44df2001815066f.slice - libcontainer container kubepods-burstable-podfd3490071e4fdf80f44df2001815066f.slice. Jan 29 11:51:13.172210 kubelet[2863]: E0129 11:51:13.171793 2863 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-252\" not found" node="ip-172-31-25-252" Jan 29 11:51:13.180816 systemd[1]: Created slice kubepods-burstable-pod9e4b26049607ae90577fe1fe21011557.slice - libcontainer container kubepods-burstable-pod9e4b26049607ae90577fe1fe21011557.slice. Jan 29 11:51:13.196298 kubelet[2863]: E0129 11:51:13.196248 2863 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-252\" not found" node="ip-172-31-25-252" Jan 29 11:51:13.200677 systemd[1]: Created slice kubepods-burstable-pod7ca1547dc0c2a69d9a3850ec6c26b12d.slice - libcontainer container kubepods-burstable-pod7ca1547dc0c2a69d9a3850ec6c26b12d.slice. 
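
The "Failed to ensure lease exists, will retry" entries are the kubelet reading (and, once registered, renewing) its Lease object in the kube-node-lease namespace; the URL in the error is a plain GET against the coordination API. A hedged client-go sketch of that same read, assuming roughly client-go v0.18+ and a kubeconfig at the conventional kubelet path (both assumptions; only the node name comes from the log):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path; the node name is taken from the log above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        lease, err := cs.CoordinationV1().Leases("kube-node-lease").
            Get(context.Background(), "ip-172-31-25-252", metav1.GetOptions{})
        if err != nil {
            // While the API server is still down this fails exactly like the log:
            // dial tcp 172.31.25.252:6443: connect: connection refused
            fmt.Println("lease read failed:", err)
            return
        }
        if lease.Spec.HolderIdentity != nil {
            fmt.Println("lease holder:", *lease.Spec.HolderIdentity)
        }
    }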
Jan 29 11:51:13.204653 kubelet[2863]: E0129 11:51:13.204595 2863 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-252\" not found" node="ip-172-31-25-252" Jan 29 11:51:13.214803 kubelet[2863]: E0129 11:51:13.214729 2863 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-252?timeout=10s\": dial tcp 172.31.25.252:6443: connect: connection refused" interval="400ms" Jan 29 11:51:13.217914 kubelet[2863]: I0129 11:51:13.217867 2863 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-252" Jan 29 11:51:13.218544 kubelet[2863]: E0129 11:51:13.218424 2863 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.25.252:6443/api/v1/nodes\": dial tcp 172.31.25.252:6443: connect: connection refused" node="ip-172-31-25-252" Jan 29 11:51:13.319335 kubelet[2863]: I0129 11:51:13.319168 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd3490071e4fdf80f44df2001815066f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-252\" (UID: \"fd3490071e4fdf80f44df2001815066f\") " pod="kube-system/kube-controller-manager-ip-172-31-25-252" Jan 29 11:51:13.319335 kubelet[2863]: I0129 11:51:13.319229 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd3490071e4fdf80f44df2001815066f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-252\" (UID: \"fd3490071e4fdf80f44df2001815066f\") " pod="kube-system/kube-controller-manager-ip-172-31-25-252" Jan 29 11:51:13.319335 kubelet[2863]: I0129 11:51:13.319273 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd3490071e4fdf80f44df2001815066f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-252\" (UID: \"fd3490071e4fdf80f44df2001815066f\") " pod="kube-system/kube-controller-manager-ip-172-31-25-252" Jan 29 11:51:13.319335 kubelet[2863]: I0129 11:51:13.319325 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9e4b26049607ae90577fe1fe21011557-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-252\" (UID: \"9e4b26049607ae90577fe1fe21011557\") " pod="kube-system/kube-scheduler-ip-172-31-25-252" Jan 29 11:51:13.320105 kubelet[2863]: I0129 11:51:13.319364 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ca1547dc0c2a69d9a3850ec6c26b12d-ca-certs\") pod \"kube-apiserver-ip-172-31-25-252\" (UID: \"7ca1547dc0c2a69d9a3850ec6c26b12d\") " pod="kube-system/kube-apiserver-ip-172-31-25-252" Jan 29 11:51:13.320105 kubelet[2863]: I0129 11:51:13.319403 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ca1547dc0c2a69d9a3850ec6c26b12d-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-252\" (UID: \"7ca1547dc0c2a69d9a3850ec6c26b12d\") " pod="kube-system/kube-apiserver-ip-172-31-25-252" Jan 29 11:51:13.320105 kubelet[2863]: I0129 11:51:13.319438 2863 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd3490071e4fdf80f44df2001815066f-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-252\" (UID: \"fd3490071e4fdf80f44df2001815066f\") " pod="kube-system/kube-controller-manager-ip-172-31-25-252" Jan 29 11:51:13.320105 kubelet[2863]: I0129 11:51:13.319475 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7ca1547dc0c2a69d9a3850ec6c26b12d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-252\" (UID: \"7ca1547dc0c2a69d9a3850ec6c26b12d\") " pod="kube-system/kube-apiserver-ip-172-31-25-252" Jan 29 11:51:13.320105 kubelet[2863]: I0129 11:51:13.319511 2863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd3490071e4fdf80f44df2001815066f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-252\" (UID: \"fd3490071e4fdf80f44df2001815066f\") " pod="kube-system/kube-controller-manager-ip-172-31-25-252" Jan 29 11:51:13.421160 kubelet[2863]: I0129 11:51:13.420920 2863 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-252" Jan 29 11:51:13.421591 kubelet[2863]: E0129 11:51:13.421511 2863 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.25.252:6443/api/v1/nodes\": dial tcp 172.31.25.252:6443: connect: connection refused" node="ip-172-31-25-252" Jan 29 11:51:13.474463 containerd[2021]: time="2025-01-29T11:51:13.474050068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-252,Uid:fd3490071e4fdf80f44df2001815066f,Namespace:kube-system,Attempt:0,}" Jan 29 11:51:13.498108 containerd[2021]: time="2025-01-29T11:51:13.498004420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-252,Uid:9e4b26049607ae90577fe1fe21011557,Namespace:kube-system,Attempt:0,}" Jan 29 11:51:13.510595 containerd[2021]: time="2025-01-29T11:51:13.510347512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-252,Uid:7ca1547dc0c2a69d9a3850ec6c26b12d,Namespace:kube-system,Attempt:0,}" Jan 29 11:51:13.615445 kubelet[2863]: E0129 11:51:13.615393 2863 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-252?timeout=10s\": dial tcp 172.31.25.252:6443: connect: connection refused" interval="800ms" Jan 29 11:51:13.824438 kubelet[2863]: I0129 11:51:13.824381 2863 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-252" Jan 29 11:51:13.824992 kubelet[2863]: E0129 11:51:13.824947 2863 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.25.252:6443/api/v1/nodes\": dial tcp 172.31.25.252:6443: connect: connection refused" node="ip-172-31-25-252" Jan 29 11:51:13.905742 kubelet[2863]: W0129 11:51:13.905660 2863 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.25.252:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-252&limit=500&resourceVersion=0": dial tcp 172.31.25.252:6443: connect: connection refused Jan 29 11:51:13.906165 kubelet[2863]: E0129 11:51:13.905762 2863 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed 
to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.25.252:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-252&limit=500&resourceVersion=0\": dial tcp 172.31.25.252:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:51:14.026933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount638647080.mount: Deactivated successfully. Jan 29 11:51:14.046209 containerd[2021]: time="2025-01-29T11:51:14.046054359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:51:14.048573 containerd[2021]: time="2025-01-29T11:51:14.048481947Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:51:14.051028 containerd[2021]: time="2025-01-29T11:51:14.050946843Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 29 11:51:14.052866 containerd[2021]: time="2025-01-29T11:51:14.052660035Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:51:14.055924 containerd[2021]: time="2025-01-29T11:51:14.054871479Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:51:14.058033 containerd[2021]: time="2025-01-29T11:51:14.057807891Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:51:14.059788 containerd[2021]: time="2025-01-29T11:51:14.059388411Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:51:14.064954 containerd[2021]: time="2025-01-29T11:51:14.064886067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:51:14.069618 containerd[2021]: time="2025-01-29T11:51:14.069549687Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 571.390983ms" Jan 29 11:51:14.074376 containerd[2021]: time="2025-01-29T11:51:14.074279307Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 600.094551ms" Jan 29 11:51:14.089362 containerd[2021]: time="2025-01-29T11:51:14.089022723Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 578.561283ms" Jan 29 11:51:14.211459 kubelet[2863]: W0129 11:51:14.211310 2863 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.252:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.252:6443: connect: connection refused Jan 29 11:51:14.211459 kubelet[2863]: E0129 11:51:14.211389 2863 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.25.252:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.252:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:51:14.286854 containerd[2021]: time="2025-01-29T11:51:14.286598800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:51:14.290376 containerd[2021]: time="2025-01-29T11:51:14.287957920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:51:14.290376 containerd[2021]: time="2025-01-29T11:51:14.288058492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:51:14.290376 containerd[2021]: time="2025-01-29T11:51:14.288376732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:51:14.290376 containerd[2021]: time="2025-01-29T11:51:14.289940020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:51:14.290376 containerd[2021]: time="2025-01-29T11:51:14.290060596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:51:14.291919 containerd[2021]: time="2025-01-29T11:51:14.290182504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:51:14.293243 containerd[2021]: time="2025-01-29T11:51:14.292494748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:51:14.293524 containerd[2021]: time="2025-01-29T11:51:14.292652836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:51:14.293524 containerd[2021]: time="2025-01-29T11:51:14.292757092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:51:14.293524 containerd[2021]: time="2025-01-29T11:51:14.292797868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:51:14.293524 containerd[2021]: time="2025-01-29T11:51:14.292997992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:51:14.349532 systemd[1]: Started cri-containerd-f36f788cdfb749ce522fd6119e000ec2f645dc65b55d0cc8bef9ffe2427656f5.scope - libcontainer container f36f788cdfb749ce522fd6119e000ec2f645dc65b55d0cc8bef9ffe2427656f5. 
Jan 29 11:51:14.364506 systemd[1]: Started cri-containerd-b9f97330e8fbaa3fc001378ec045ef92f1e159d1b42b7ed32c4d0a4aef33a3e6.scope - libcontainer container b9f97330e8fbaa3fc001378ec045ef92f1e159d1b42b7ed32c4d0a4aef33a3e6. Jan 29 11:51:14.385996 systemd[1]: Started cri-containerd-47586099fcb5f2812f5639e5ce1b196bef23580942b662b1592b41f246444a3e.scope - libcontainer container 47586099fcb5f2812f5639e5ce1b196bef23580942b662b1592b41f246444a3e. Jan 29 11:51:14.417434 kubelet[2863]: E0129 11:51:14.417258 2863 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-252?timeout=10s\": dial tcp 172.31.25.252:6443: connect: connection refused" interval="1.6s" Jan 29 11:51:14.423133 kubelet[2863]: W0129 11:51:14.422927 2863 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.252:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.25.252:6443: connect: connection refused Jan 29 11:51:14.423133 kubelet[2863]: E0129 11:51:14.423026 2863 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.25.252:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.252:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:51:14.484879 containerd[2021]: time="2025-01-29T11:51:14.484638101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-252,Uid:fd3490071e4fdf80f44df2001815066f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f36f788cdfb749ce522fd6119e000ec2f645dc65b55d0cc8bef9ffe2427656f5\"" Jan 29 11:51:14.500636 containerd[2021]: time="2025-01-29T11:51:14.500413817Z" level=info msg="CreateContainer within sandbox \"f36f788cdfb749ce522fd6119e000ec2f645dc65b55d0cc8bef9ffe2427656f5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:51:14.513250 containerd[2021]: time="2025-01-29T11:51:14.511591193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-252,Uid:7ca1547dc0c2a69d9a3850ec6c26b12d,Namespace:kube-system,Attempt:0,} returns sandbox id \"47586099fcb5f2812f5639e5ce1b196bef23580942b662b1592b41f246444a3e\"" Jan 29 11:51:14.531839 containerd[2021]: time="2025-01-29T11:51:14.531742721Z" level=info msg="CreateContainer within sandbox \"47586099fcb5f2812f5639e5ce1b196bef23580942b662b1592b41f246444a3e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:51:14.561061 kubelet[2863]: W0129 11:51:14.560839 2863 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.252:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.252:6443: connect: connection refused Jan 29 11:51:14.561061 kubelet[2863]: E0129 11:51:14.560953 2863 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.252:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.252:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:51:14.562399 containerd[2021]: time="2025-01-29T11:51:14.561607205Z" level=info msg="CreateContainer within sandbox 
\"f36f788cdfb749ce522fd6119e000ec2f645dc65b55d0cc8bef9ffe2427656f5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b63add6cc0be90282e3a21c5fab1f16eaaa72a60074d5e001235599201261ba4\"" Jan 29 11:51:14.563785 containerd[2021]: time="2025-01-29T11:51:14.563724869Z" level=info msg="StartContainer for \"b63add6cc0be90282e3a21c5fab1f16eaaa72a60074d5e001235599201261ba4\"" Jan 29 11:51:14.570799 containerd[2021]: time="2025-01-29T11:51:14.570729581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-252,Uid:9e4b26049607ae90577fe1fe21011557,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9f97330e8fbaa3fc001378ec045ef92f1e159d1b42b7ed32c4d0a4aef33a3e6\"" Jan 29 11:51:14.576554 containerd[2021]: time="2025-01-29T11:51:14.576435845Z" level=info msg="CreateContainer within sandbox \"b9f97330e8fbaa3fc001378ec045ef92f1e159d1b42b7ed32c4d0a4aef33a3e6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:51:14.596830 containerd[2021]: time="2025-01-29T11:51:14.596565917Z" level=info msg="CreateContainer within sandbox \"47586099fcb5f2812f5639e5ce1b196bef23580942b662b1592b41f246444a3e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"431d3c4b204427d5325e1c6cc9326652d66d10b68b6efe1cd43e7fb06df1c0f7\"" Jan 29 11:51:14.598226 containerd[2021]: time="2025-01-29T11:51:14.597438077Z" level=info msg="StartContainer for \"431d3c4b204427d5325e1c6cc9326652d66d10b68b6efe1cd43e7fb06df1c0f7\"" Jan 29 11:51:14.619863 containerd[2021]: time="2025-01-29T11:51:14.619687577Z" level=info msg="CreateContainer within sandbox \"b9f97330e8fbaa3fc001378ec045ef92f1e159d1b42b7ed32c4d0a4aef33a3e6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a54fefdcbab54cdad562d1e4d7c9f66f16b9717f93cb8fd997e402afb964c98e\"" Jan 29 11:51:14.622872 containerd[2021]: time="2025-01-29T11:51:14.622608209Z" level=info msg="StartContainer for \"a54fefdcbab54cdad562d1e4d7c9f66f16b9717f93cb8fd997e402afb964c98e\"" Jan 29 11:51:14.629629 kubelet[2863]: I0129 11:51:14.629588 2863 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-252" Jan 29 11:51:14.631169 kubelet[2863]: E0129 11:51:14.631069 2863 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.25.252:6443/api/v1/nodes\": dial tcp 172.31.25.252:6443: connect: connection refused" node="ip-172-31-25-252" Jan 29 11:51:14.653304 systemd[1]: Started cri-containerd-b63add6cc0be90282e3a21c5fab1f16eaaa72a60074d5e001235599201261ba4.scope - libcontainer container b63add6cc0be90282e3a21c5fab1f16eaaa72a60074d5e001235599201261ba4. Jan 29 11:51:14.683598 systemd[1]: Started cri-containerd-431d3c4b204427d5325e1c6cc9326652d66d10b68b6efe1cd43e7fb06df1c0f7.scope - libcontainer container 431d3c4b204427d5325e1c6cc9326652d66d10b68b6efe1cd43e7fb06df1c0f7. Jan 29 11:51:14.720470 systemd[1]: Started cri-containerd-a54fefdcbab54cdad562d1e4d7c9f66f16b9717f93cb8fd997e402afb964c98e.scope - libcontainer container a54fefdcbab54cdad562d1e4d7c9f66f16b9717f93cb8fd997e402afb964c98e. 
Jan 29 11:51:14.809007 containerd[2021]: time="2025-01-29T11:51:14.808709010Z" level=info msg="StartContainer for \"b63add6cc0be90282e3a21c5fab1f16eaaa72a60074d5e001235599201261ba4\" returns successfully" Jan 29 11:51:14.829788 containerd[2021]: time="2025-01-29T11:51:14.829675351Z" level=info msg="StartContainer for \"431d3c4b204427d5325e1c6cc9326652d66d10b68b6efe1cd43e7fb06df1c0f7\" returns successfully" Jan 29 11:51:14.888583 containerd[2021]: time="2025-01-29T11:51:14.888397075Z" level=info msg="StartContainer for \"a54fefdcbab54cdad562d1e4d7c9f66f16b9717f93cb8fd997e402afb964c98e\" returns successfully" Jan 29 11:51:15.076283 kubelet[2863]: E0129 11:51:15.075919 2863 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-252\" not found" node="ip-172-31-25-252" Jan 29 11:51:15.085288 kubelet[2863]: E0129 11:51:15.084780 2863 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-252\" not found" node="ip-172-31-25-252" Jan 29 11:51:15.093217 kubelet[2863]: E0129 11:51:15.092639 2863 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-252\" not found" node="ip-172-31-25-252" Jan 29 11:51:16.096469 kubelet[2863]: E0129 11:51:16.096354 2863 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-252\" not found" node="ip-172-31-25-252" Jan 29 11:51:16.099675 kubelet[2863]: E0129 11:51:16.099431 2863 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-252\" not found" node="ip-172-31-25-252" Jan 29 11:51:16.238135 kubelet[2863]: I0129 11:51:16.235170 2863 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-252" Jan 29 11:51:16.411291 update_engine[1997]: I20250129 11:51:16.411119 1997 update_attempter.cc:509] Updating boot flags... 
Jan 29 11:51:16.572191 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3150) Jan 29 11:51:17.031139 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3141) Jan 29 11:51:17.107189 kubelet[2863]: E0129 11:51:17.104029 2863 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-252\" not found" node="ip-172-31-25-252" Jan 29 11:51:17.577097 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3141) Jan 29 11:51:19.252339 kubelet[2863]: E0129 11:51:19.252271 2863 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-25-252\" not found" node="ip-172-31-25-252" Jan 29 11:51:19.360103 kubelet[2863]: I0129 11:51:19.357817 2863 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-25-252" Jan 29 11:51:19.412397 kubelet[2863]: I0129 11:51:19.412349 2863 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-252" Jan 29 11:51:19.429287 kubelet[2863]: E0129 11:51:19.428820 2863 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-25-252.181f278e39020679 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-252,UID:ip-172-31-25-252,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-252,},FirstTimestamp:2025-01-29 11:51:12.983574137 +0000 UTC m=+0.829148417,LastTimestamp:2025-01-29 11:51:12.983574137 +0000 UTC m=+0.829148417,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-252,}" Jan 29 11:51:19.439112 kubelet[2863]: E0129 11:51:19.439013 2863 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-25-252\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-25-252" Jan 29 11:51:19.439112 kubelet[2863]: I0129 11:51:19.439057 2863 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-252" Jan 29 11:51:19.446391 kubelet[2863]: E0129 11:51:19.446325 2863 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-252\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-25-252" Jan 29 11:51:19.446391 kubelet[2863]: I0129 11:51:19.446383 2863 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-252" Jan 29 11:51:19.452504 kubelet[2863]: E0129 11:51:19.452443 2863 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-252\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-25-252" Jan 29 11:51:19.980777 kubelet[2863]: I0129 11:51:19.980707 2863 apiserver.go:52] "Watching apiserver" Jan 29 11:51:20.018794 kubelet[2863]: I0129 11:51:20.018723 2863 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:51:20.570728 kubelet[2863]: I0129 11:51:20.570669 2863 kubelet.go:3200] "Creating a 
mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-252" Jan 29 11:51:21.639862 systemd[1]: Reloading requested from client PID 3405 ('systemctl') (unit session-9.scope)... Jan 29 11:51:21.639901 systemd[1]: Reloading... Jan 29 11:51:21.858189 zram_generator::config[3457]: No configuration found. Jan 29 11:51:22.075148 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:51:22.276851 systemd[1]: Reloading finished in 636 ms. Jan 29 11:51:22.354384 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:51:22.369219 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:51:22.369646 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:51:22.369729 systemd[1]: kubelet.service: Consumed 1.672s CPU time, 125.3M memory peak, 0B memory swap peak. Jan 29 11:51:22.377725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:51:22.693615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:51:22.713717 (kubelet)[3506]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:51:22.810307 kubelet[3506]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:51:22.810307 kubelet[3506]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 11:51:22.810798 kubelet[3506]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:51:22.810798 kubelet[3506]: I0129 11:51:22.810498 3506 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:51:22.822363 kubelet[3506]: I0129 11:51:22.822126 3506 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 11:51:22.822363 kubelet[3506]: I0129 11:51:22.822177 3506 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:51:22.822727 kubelet[3506]: I0129 11:51:22.822628 3506 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 11:51:22.828606 kubelet[3506]: I0129 11:51:22.828388 3506 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:51:22.839536 kubelet[3506]: I0129 11:51:22.839231 3506 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:51:22.848144 kubelet[3506]: E0129 11:51:22.847869 3506 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:51:22.848144 kubelet[3506]: I0129 11:51:22.848048 3506 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Jan 29 11:51:22.854464 kubelet[3506]: I0129 11:51:22.854263 3506 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:51:22.855550 kubelet[3506]: I0129 11:51:22.854675 3506 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:51:22.855550 kubelet[3506]: I0129 11:51:22.854719 3506 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-252","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:51:22.855550 kubelet[3506]: I0129 11:51:22.855004 3506 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:51:22.855550 kubelet[3506]: I0129 11:51:22.855024 3506 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 11:51:22.855879 kubelet[3506]: I0129 11:51:22.855146 3506 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:51:22.855879 kubelet[3506]: I0129 11:51:22.855373 3506 kubelet.go:446] "Attempting to sync node with API server" Jan 29 11:51:22.855879 kubelet[3506]: I0129 11:51:22.855399 3506 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:51:22.855879 kubelet[3506]: I0129 11:51:22.855432 3506 kubelet.go:352] "Adding apiserver pod source" Jan 29 11:51:22.855879 kubelet[3506]: I0129 11:51:22.855453 3506 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:51:22.862514 kubelet[3506]: I0129 11:51:22.860031 3506 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:51:22.862514 kubelet[3506]: I0129 11:51:22.860859 3506 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:51:22.863364 kubelet[3506]: I0129 11:51:22.863319 3506 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 11:51:22.864947 kubelet[3506]: I0129 11:51:22.864852 3506 server.go:1287] "Started kubelet" Jan 29 11:51:22.871846 kubelet[3506]: I0129 11:51:22.870497 3506 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:51:22.882153 kubelet[3506]: I0129 11:51:22.879409 3506 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:51:22.886998 kubelet[3506]: I0129 11:51:22.886947 3506 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 11:51:22.887659 kubelet[3506]: E0129 11:51:22.887618 3506 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-25-252\" not found" Jan 29 11:51:22.901121 kubelet[3506]: I0129 11:51:22.900287 3506 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:51:22.901121 kubelet[3506]: I0129 11:51:22.900555 3506 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:51:22.901814 kubelet[3506]: I0129 11:51:22.901729 3506 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:51:22.902400 kubelet[3506]: I0129 11:51:22.902371 3506 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:51:22.902598 kubelet[3506]: I0129 11:51:22.901868 3506 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:51:22.928322 kubelet[3506]: I0129 11:51:22.928215 3506 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:51:22.932285 kubelet[3506]: I0129 11:51:22.932237 3506 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:51:22.932489 kubelet[3506]: I0129 11:51:22.932469 3506 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 11:51:22.932623 kubelet[3506]: I0129 11:51:22.932601 3506 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 29 11:51:22.932723 kubelet[3506]: I0129 11:51:22.932705 3506 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 11:51:22.932944 kubelet[3506]: E0129 11:51:22.932900 3506 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:51:22.934103 kubelet[3506]: I0129 11:51:22.932931 3506 server.go:490] "Adding debug handlers to kubelet server" Jan 29 11:51:22.934103 kubelet[3506]: I0129 11:51:22.933178 3506 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:51:22.934103 kubelet[3506]: I0129 11:51:22.933321 3506 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:51:22.967138 kubelet[3506]: I0129 11:51:22.966420 3506 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:51:22.984338 kubelet[3506]: E0129 11:51:22.984286 3506 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:51:23.033285 kubelet[3506]: E0129 11:51:23.033233 3506 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:51:23.072038 kubelet[3506]: I0129 11:51:23.071323 3506 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 11:51:23.072038 kubelet[3506]: I0129 11:51:23.071358 3506 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 11:51:23.072038 kubelet[3506]: I0129 11:51:23.071396 3506 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:51:23.072038 kubelet[3506]: I0129 11:51:23.071667 3506 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:51:23.072038 kubelet[3506]: I0129 11:51:23.071687 3506 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:51:23.072038 kubelet[3506]: I0129 11:51:23.071721 3506 policy_none.go:49] "None policy: Start" Jan 29 11:51:23.072038 kubelet[3506]: I0129 11:51:23.071740 3506 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 11:51:23.072038 kubelet[3506]: I0129 11:51:23.071760 3506 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:51:23.072038 kubelet[3506]: I0129 11:51:23.071938 3506 state_mem.go:75] "Updated machine memory state" Jan 29 11:51:23.084143 kubelet[3506]: I0129 11:51:23.084067 3506 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:51:23.084422 kubelet[3506]: I0129 11:51:23.084388 3506 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:51:23.084602 kubelet[3506]: I0129 11:51:23.084418 3506 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:51:23.087364 kubelet[3506]: I0129 11:51:23.087025 3506 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:51:23.088425 kubelet[3506]: E0129 11:51:23.087822 3506 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 11:51:23.205204 kubelet[3506]: I0129 11:51:23.205153 3506 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-252" Jan 29 11:51:23.219821 kubelet[3506]: I0129 11:51:23.219183 3506 kubelet_node_status.go:125] "Node was previously registered" node="ip-172-31-25-252" Jan 29 11:51:23.219821 kubelet[3506]: I0129 11:51:23.219309 3506 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-25-252" Jan 29 11:51:23.238068 kubelet[3506]: I0129 11:51:23.237713 3506 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-252" Jan 29 11:51:23.239618 kubelet[3506]: I0129 11:51:23.239570 3506 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-252" Jan 29 11:51:23.242416 kubelet[3506]: I0129 11:51:23.242322 3506 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-252" Jan 29 11:51:23.254429 kubelet[3506]: E0129 11:51:23.254255 3506 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-25-252\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-25-252" Jan 29 11:51:23.302434 kubelet[3506]: I0129 11:51:23.302199 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ca1547dc0c2a69d9a3850ec6c26b12d-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-252\" (UID: \"7ca1547dc0c2a69d9a3850ec6c26b12d\") " pod="kube-system/kube-apiserver-ip-172-31-25-252" Jan 29 11:51:23.303442 kubelet[3506]: I0129 11:51:23.302911 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd3490071e4fdf80f44df2001815066f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-252\" (UID: \"fd3490071e4fdf80f44df2001815066f\") " pod="kube-system/kube-controller-manager-ip-172-31-25-252" Jan 29 11:51:23.303442 kubelet[3506]: I0129 11:51:23.303605 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9e4b26049607ae90577fe1fe21011557-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-252\" (UID: \"9e4b26049607ae90577fe1fe21011557\") " pod="kube-system/kube-scheduler-ip-172-31-25-252" Jan 29 11:51:23.304718 kubelet[3506]: I0129 11:51:23.304313 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ca1547dc0c2a69d9a3850ec6c26b12d-ca-certs\") pod \"kube-apiserver-ip-172-31-25-252\" (UID: \"7ca1547dc0c2a69d9a3850ec6c26b12d\") " pod="kube-system/kube-apiserver-ip-172-31-25-252" Jan 29 11:51:23.305301 kubelet[3506]: I0129 11:51:23.304528 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd3490071e4fdf80f44df2001815066f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-252\" (UID: \"fd3490071e4fdf80f44df2001815066f\") " pod="kube-system/kube-controller-manager-ip-172-31-25-252" Jan 29 11:51:23.306468 kubelet[3506]: I0129 11:51:23.305483 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/7ca1547dc0c2a69d9a3850ec6c26b12d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-252\" (UID: \"7ca1547dc0c2a69d9a3850ec6c26b12d\") " pod="kube-system/kube-apiserver-ip-172-31-25-252" Jan 29 11:51:23.307199 kubelet[3506]: I0129 11:51:23.306557 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd3490071e4fdf80f44df2001815066f-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-252\" (UID: \"fd3490071e4fdf80f44df2001815066f\") " pod="kube-system/kube-controller-manager-ip-172-31-25-252" Jan 29 11:51:23.308536 kubelet[3506]: I0129 11:51:23.307162 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd3490071e4fdf80f44df2001815066f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-252\" (UID: \"fd3490071e4fdf80f44df2001815066f\") " pod="kube-system/kube-controller-manager-ip-172-31-25-252" Jan 29 11:51:23.308536 kubelet[3506]: I0129 11:51:23.308479 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd3490071e4fdf80f44df2001815066f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-252\" (UID: \"fd3490071e4fdf80f44df2001815066f\") " pod="kube-system/kube-controller-manager-ip-172-31-25-252" Jan 29 11:51:23.857818 kubelet[3506]: I0129 11:51:23.857738 3506 apiserver.go:52] "Watching apiserver" Jan 29 11:51:23.901129 kubelet[3506]: I0129 11:51:23.901043 3506 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:51:24.020553 kubelet[3506]: I0129 11:51:24.020505 3506 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-252" Jan 29 11:51:24.109095 kubelet[3506]: E0129 11:51:24.108680 3506 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-252\" already exists" pod="kube-system/kube-apiserver-ip-172-31-25-252" Jan 29 11:51:24.277174 kubelet[3506]: I0129 11:51:24.277041 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-25-252" podStartSLOduration=4.277016677 podStartE2EDuration="4.277016677s" podCreationTimestamp="2025-01-29 11:51:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:51:24.214515565 +0000 UTC m=+1.492681316" watchObservedRunningTime="2025-01-29 11:51:24.277016677 +0000 UTC m=+1.555182428" Jan 29 11:51:24.331888 kubelet[3506]: I0129 11:51:24.331796 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-25-252" podStartSLOduration=1.3317690660000001 podStartE2EDuration="1.331769066s" podCreationTimestamp="2025-01-29 11:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:51:24.278931817 +0000 UTC m=+1.557097568" watchObservedRunningTime="2025-01-29 11:51:24.331769066 +0000 UTC m=+1.609934805" Jan 29 11:51:24.371739 kubelet[3506]: I0129 11:51:24.371543 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-25-252" podStartSLOduration=1.371524478 podStartE2EDuration="1.371524478s" podCreationTimestamp="2025-01-29 
11:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:51:24.333057026 +0000 UTC m=+1.611222777" watchObservedRunningTime="2025-01-29 11:51:24.371524478 +0000 UTC m=+1.649690217" Jan 29 11:51:28.134233 kubelet[3506]: I0129 11:51:28.134190 3506 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:51:28.138132 containerd[2021]: time="2025-01-29T11:51:28.137126501Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:51:28.138663 kubelet[3506]: I0129 11:51:28.137707 3506 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:51:28.711403 sudo[2364]: pam_unix(sudo:session): session closed for user root Jan 29 11:51:28.734891 sshd[2361]: pam_unix(sshd:session): session closed for user core Jan 29 11:51:28.747803 systemd[1]: sshd@8-172.31.25.252:22-139.178.89.65:52926.service: Deactivated successfully. Jan 29 11:51:28.756609 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:51:28.759613 systemd[1]: session-9.scope: Consumed 12.912s CPU time, 150.2M memory peak, 0B memory swap peak. Jan 29 11:51:28.770918 systemd-logind[1995]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:51:28.780889 systemd-logind[1995]: Removed session 9. Jan 29 11:51:28.792838 systemd[1]: Created slice kubepods-besteffort-poda5fa8ef9_3ecd_4b4c_aa57_8694c97768e9.slice - libcontainer container kubepods-besteffort-poda5fa8ef9_3ecd_4b4c_aa57_8694c97768e9.slice. Jan 29 11:51:28.850198 kubelet[3506]: I0129 11:51:28.850138 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a5fa8ef9-3ecd-4b4c-aa57-8694c97768e9-kube-proxy\") pod \"kube-proxy-z787m\" (UID: \"a5fa8ef9-3ecd-4b4c-aa57-8694c97768e9\") " pod="kube-system/kube-proxy-z787m" Jan 29 11:51:28.850745 kubelet[3506]: I0129 11:51:28.850538 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5fa8ef9-3ecd-4b4c-aa57-8694c97768e9-lib-modules\") pod \"kube-proxy-z787m\" (UID: \"a5fa8ef9-3ecd-4b4c-aa57-8694c97768e9\") " pod="kube-system/kube-proxy-z787m" Jan 29 11:51:28.850745 kubelet[3506]: I0129 11:51:28.850634 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsfkx\" (UniqueName: \"kubernetes.io/projected/a5fa8ef9-3ecd-4b4c-aa57-8694c97768e9-kube-api-access-nsfkx\") pod \"kube-proxy-z787m\" (UID: \"a5fa8ef9-3ecd-4b4c-aa57-8694c97768e9\") " pod="kube-system/kube-proxy-z787m" Jan 29 11:51:28.851030 kubelet[3506]: I0129 11:51:28.850941 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5fa8ef9-3ecd-4b4c-aa57-8694c97768e9-xtables-lock\") pod \"kube-proxy-z787m\" (UID: \"a5fa8ef9-3ecd-4b4c-aa57-8694c97768e9\") " pod="kube-system/kube-proxy-z787m" Jan 29 11:51:29.100459 systemd[1]: Created slice kubepods-besteffort-pod8a73c85d_07a4_49bf_9f76_7607bae85a05.slice - libcontainer container kubepods-besteffort-pod8a73c85d_07a4_49bf_9f76_7607bae85a05.slice. 
Jan 29 11:51:29.108281 containerd[2021]: time="2025-01-29T11:51:29.108191069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z787m,Uid:a5fa8ef9-3ecd-4b4c-aa57-8694c97768e9,Namespace:kube-system,Attempt:0,}" Jan 29 11:51:29.153202 kubelet[3506]: I0129 11:51:29.153134 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmglf\" (UniqueName: \"kubernetes.io/projected/8a73c85d-07a4-49bf-9f76-7607bae85a05-kube-api-access-tmglf\") pod \"tigera-operator-7d68577dc5-g6257\" (UID: \"8a73c85d-07a4-49bf-9f76-7607bae85a05\") " pod="tigera-operator/tigera-operator-7d68577dc5-g6257" Jan 29 11:51:29.153756 kubelet[3506]: I0129 11:51:29.153214 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8a73c85d-07a4-49bf-9f76-7607bae85a05-var-lib-calico\") pod \"tigera-operator-7d68577dc5-g6257\" (UID: \"8a73c85d-07a4-49bf-9f76-7607bae85a05\") " pod="tigera-operator/tigera-operator-7d68577dc5-g6257" Jan 29 11:51:29.159670 containerd[2021]: time="2025-01-29T11:51:29.159006702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:51:29.159670 containerd[2021]: time="2025-01-29T11:51:29.159127218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:51:29.159670 containerd[2021]: time="2025-01-29T11:51:29.159154530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:51:29.162024 containerd[2021]: time="2025-01-29T11:51:29.159313878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:51:29.201803 systemd[1]: run-containerd-runc-k8s.io-025b73b5fb6e1b98f99d2e6083c65267504aee6bfe3dceea72de9b3b3b3993fc-runc.GsqrNB.mount: Deactivated successfully. Jan 29 11:51:29.218679 systemd[1]: Started cri-containerd-025b73b5fb6e1b98f99d2e6083c65267504aee6bfe3dceea72de9b3b3b3993fc.scope - libcontainer container 025b73b5fb6e1b98f99d2e6083c65267504aee6bfe3dceea72de9b3b3b3993fc. 
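
Each "operationExecutor.VerifyControllerAttachedVolume started" entry above (kube-proxy's configmap, host paths and projected token; tigera-operator's kube-api-access and var-lib-calico) is the kubelet's volume reconciler taking one volume declared in a pod spec out of its desired state of world and confirming it is attached before mounting. A compressed, illustrative model with invented types; the real code lives in the kubelet's volumemanager:

package main

import "fmt"

// Invented minimal model of one desired-state-of-world entry.
type volumeToMount struct {
	uniqueName string // e.g. "kubernetes.io/configmap/<pod-uid>-kube-proxy"
	pod        string // e.g. "kube-system/kube-proxy-z787m"
}

// verifyAttached stands in for VerifyControllerAttachedVolume: host-path,
// configmap and projected volumes need no external attach, so they pass
// immediately, which is why these log lines appear in quick bursts.
func verifyAttached(v volumeToMount) bool {
	fmt.Printf("VerifyControllerAttachedVolume started for volume %q pod=%q\n", v.uniqueName, v.pod)
	return true
}

func reconcile(desired []volumeToMount) {
	for _, v := range desired {
		_ = verifyAttached(v) // next step in the real kubelet: MountVolume / SetUp
	}
}

func main() {
	reconcile([]volumeToMount{
		{"kubernetes.io/configmap/a5fa8ef9-...-kube-proxy", "kube-system/kube-proxy-z787m"},
		{"kubernetes.io/projected/8a73c85d-...-kube-api-access-tmglf", "tigera-operator/tigera-operator-7d68577dc5-g6257"},
	})
}
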
Jan 29 11:51:29.268779 containerd[2021]: time="2025-01-29T11:51:29.268659090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z787m,Uid:a5fa8ef9-3ecd-4b4c-aa57-8694c97768e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"025b73b5fb6e1b98f99d2e6083c65267504aee6bfe3dceea72de9b3b3b3993fc\"" Jan 29 11:51:29.280358 containerd[2021]: time="2025-01-29T11:51:29.280275030Z" level=info msg="CreateContainer within sandbox \"025b73b5fb6e1b98f99d2e6083c65267504aee6bfe3dceea72de9b3b3b3993fc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:51:29.312471 containerd[2021]: time="2025-01-29T11:51:29.312064866Z" level=info msg="CreateContainer within sandbox \"025b73b5fb6e1b98f99d2e6083c65267504aee6bfe3dceea72de9b3b3b3993fc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7b385b8c5cf8d4af74388be110b1a52d221db6f6c84c66ea917b93faf17b5053\"" Jan 29 11:51:29.320852 containerd[2021]: time="2025-01-29T11:51:29.319478118Z" level=info msg="StartContainer for \"7b385b8c5cf8d4af74388be110b1a52d221db6f6c84c66ea917b93faf17b5053\"" Jan 29 11:51:29.379542 systemd[1]: Started cri-containerd-7b385b8c5cf8d4af74388be110b1a52d221db6f6c84c66ea917b93faf17b5053.scope - libcontainer container 7b385b8c5cf8d4af74388be110b1a52d221db6f6c84c66ea917b93faf17b5053. Jan 29 11:51:29.409403 containerd[2021]: time="2025-01-29T11:51:29.408828511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-g6257,Uid:8a73c85d-07a4-49bf-9f76-7607bae85a05,Namespace:tigera-operator,Attempt:0,}" Jan 29 11:51:29.451196 containerd[2021]: time="2025-01-29T11:51:29.450882811Z" level=info msg="StartContainer for \"7b385b8c5cf8d4af74388be110b1a52d221db6f6c84c66ea917b93faf17b5053\" returns successfully" Jan 29 11:51:29.477819 containerd[2021]: time="2025-01-29T11:51:29.476537755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:51:29.477819 containerd[2021]: time="2025-01-29T11:51:29.476648179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:51:29.477819 containerd[2021]: time="2025-01-29T11:51:29.476688691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:51:29.477819 containerd[2021]: time="2025-01-29T11:51:29.476864575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:51:29.519928 systemd[1]: Started cri-containerd-d83321002ffd3f8d65b6e9f37155382a11c8a68bac9e33861c200608f3755527.scope - libcontainer container d83321002ffd3f8d65b6e9f37155382a11c8a68bac9e33861c200608f3755527. 
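
Between 11:51:29.108 and 11:51:29.450 the log shows one complete CRI lifecycle for kube-proxy: RunPodSandbox returns sandbox id 025b73b5..., CreateContainer within that sandbox returns container id 7b385b8c..., and StartContainer returns successfully, with systemd starting a cri-containerd-<id>.scope for each. A sketch of that call order against a hypothetical client (the real interface is CRI's RuntimeService; these method bodies are invented):

package main

import "fmt"

// criClient is a hypothetical stand-in for the CRI calls visible in the log:
// RunPodSandbox -> CreateContainer -> StartContainer.
type criClient struct{ nextID int }

func (c *criClient) id(prefix string) string {
	c.nextID++
	return fmt.Sprintf("%s-%04d", prefix, c.nextID)
}

func (c *criClient) RunPodSandbox(name string) string {
	sb := c.id("sandbox")
	fmt.Printf("RunPodSandbox for %s returns sandbox id %q\n", name, sb)
	return sb
}

func (c *criClient) CreateContainer(sandboxID, name string) string {
	ctr := c.id("container")
	fmt.Printf("CreateContainer within sandbox %q for %s returns container id %q\n", sandboxID, name, ctr)
	return ctr
}

func (c *criClient) StartContainer(ctrID string) {
	fmt.Printf("StartContainer for %q returns successfully\n", ctrID)
}

func main() {
	var c criClient
	sb := c.RunPodSandbox("kube-proxy-z787m")
	ctr := c.CreateContainer(sb, "kube-proxy")
	c.StartContainer(ctr)
}
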
Jan 29 11:51:29.611775 containerd[2021]: time="2025-01-29T11:51:29.611699660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-g6257,Uid:8a73c85d-07a4-49bf-9f76-7607bae85a05,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d83321002ffd3f8d65b6e9f37155382a11c8a68bac9e33861c200608f3755527\"" Jan 29 11:51:29.617737 containerd[2021]: time="2025-01-29T11:51:29.617669192Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 11:51:30.466094 kubelet[3506]: I0129 11:51:30.465348 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z787m" podStartSLOduration=2.465324692 podStartE2EDuration="2.465324692s" podCreationTimestamp="2025-01-29 11:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:51:30.064717626 +0000 UTC m=+7.342883389" watchObservedRunningTime="2025-01-29 11:51:30.465324692 +0000 UTC m=+7.743490455" Jan 29 11:51:31.654860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1630394441.mount: Deactivated successfully. Jan 29 11:51:32.286666 containerd[2021]: time="2025-01-29T11:51:32.286575405Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:51:32.288546 containerd[2021]: time="2025-01-29T11:51:32.288465177Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Jan 29 11:51:32.291131 containerd[2021]: time="2025-01-29T11:51:32.290963025Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:51:32.297763 containerd[2021]: time="2025-01-29T11:51:32.297679101Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:51:32.300105 containerd[2021]: time="2025-01-29T11:51:32.299869653Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.682133045s" Jan 29 11:51:32.300105 containerd[2021]: time="2025-01-29T11:51:32.299934429Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 29 11:51:32.306225 containerd[2021]: time="2025-01-29T11:51:32.306155457Z" level=info msg="CreateContainer within sandbox \"d83321002ffd3f8d65b6e9f37155382a11c8a68bac9e33861c200608f3755527\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 11:51:32.332610 containerd[2021]: time="2025-01-29T11:51:32.332519841Z" level=info msg="CreateContainer within sandbox \"d83321002ffd3f8d65b6e9f37155382a11c8a68bac9e33861c200608f3755527\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"236888f7b53f635e7f84665a87f1f8078d5cc6e6c5de200d3f0430de2e5fdca8\"" Jan 29 11:51:32.333688 containerd[2021]: time="2025-01-29T11:51:32.333531561Z" level=info msg="StartContainer for \"236888f7b53f635e7f84665a87f1f8078d5cc6e6c5de200d3f0430de2e5fdca8\"" Jan 
29 11:51:32.407414 systemd[1]: Started cri-containerd-236888f7b53f635e7f84665a87f1f8078d5cc6e6c5de200d3f0430de2e5fdca8.scope - libcontainer container 236888f7b53f635e7f84665a87f1f8078d5cc6e6c5de200d3f0430de2e5fdca8. Jan 29 11:51:32.456557 containerd[2021]: time="2025-01-29T11:51:32.456024442Z" level=info msg="StartContainer for \"236888f7b53f635e7f84665a87f1f8078d5cc6e6c5de200d3f0430de2e5fdca8\" returns successfully" Jan 29 11:51:33.099117 kubelet[3506]: I0129 11:51:33.098964 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-g6257" podStartSLOduration=1.4110806839999999 podStartE2EDuration="4.098939241s" podCreationTimestamp="2025-01-29 11:51:29 +0000 UTC" firstStartedPulling="2025-01-29 11:51:29.614066132 +0000 UTC m=+6.892231871" lastFinishedPulling="2025-01-29 11:51:32.301924677 +0000 UTC m=+9.580090428" observedRunningTime="2025-01-29 11:51:33.097747029 +0000 UTC m=+10.375913056" watchObservedRunningTime="2025-01-29 11:51:33.098939241 +0000 UTC m=+10.377105100" Jan 29 11:51:36.807123 systemd[1]: Created slice kubepods-besteffort-podbce51d3f_aa62_4fe2_a22f_597354f20001.slice - libcontainer container kubepods-besteffort-podbce51d3f_aa62_4fe2_a22f_597354f20001.slice. Jan 29 11:51:36.910132 kubelet[3506]: I0129 11:51:36.909563 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj8wg\" (UniqueName: \"kubernetes.io/projected/bce51d3f-aa62-4fe2-a22f-597354f20001-kube-api-access-xj8wg\") pod \"calico-typha-64b7bf85b4-mjxtw\" (UID: \"bce51d3f-aa62-4fe2-a22f-597354f20001\") " pod="calico-system/calico-typha-64b7bf85b4-mjxtw" Jan 29 11:51:36.910132 kubelet[3506]: I0129 11:51:36.909660 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bce51d3f-aa62-4fe2-a22f-597354f20001-tigera-ca-bundle\") pod \"calico-typha-64b7bf85b4-mjxtw\" (UID: \"bce51d3f-aa62-4fe2-a22f-597354f20001\") " pod="calico-system/calico-typha-64b7bf85b4-mjxtw" Jan 29 11:51:36.910132 kubelet[3506]: I0129 11:51:36.909730 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bce51d3f-aa62-4fe2-a22f-597354f20001-typha-certs\") pod \"calico-typha-64b7bf85b4-mjxtw\" (UID: \"bce51d3f-aa62-4fe2-a22f-597354f20001\") " pod="calico-system/calico-typha-64b7bf85b4-mjxtw" Jan 29 11:51:37.049885 systemd[1]: Created slice kubepods-besteffort-pod467871b2_ba2a_477a_986c_ec239f655fdb.slice - libcontainer container kubepods-besteffort-pod467871b2_ba2a_477a_986c_ec239f655fdb.slice. 
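
The pod_startup_latency_tracker entry above for tigera-operator is the only one in this section with a non-zero pull window, and the logged numbers are self-consistent: podStartSLOduration is podStartE2EDuration minus the image-pull window, 4.098939241s - (11:51:32.301924677 - 11:51:29.614066132) = 1.411080696s, agreeing with the logged 1.4110806839999999 to within float64 rounding. A check using the timestamps exactly as logged:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matching the kubelet's logged timestamps, e.g.
	// "2025-01-29 11:51:29.614066132 +0000 UTC".
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	firstStartedPulling := mustParse("2025-01-29 11:51:29.614066132 +0000 UTC")
	lastFinishedPulling := mustParse("2025-01-29 11:51:32.301924677 +0000 UTC")
	e2e := 4098939241 * time.Nanosecond // podStartE2EDuration from the log

	pullWindow := lastFinishedPulling.Sub(firstStartedPulling)
	fmt.Println(pullWindow)       // 2.687858545s, matching "in 2.682133045s" plus unpack time? No:
	fmt.Println(e2e - pullWindow) // 1.411080696s, the logged SLO duration up to float noise
}

(The containerd "Pulled image ... in 2.682133045s" figure is slightly smaller than the kubelet's 2.687858545s pull window because the kubelet's window also covers the time around the containerd pull call.)
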
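
The burst of driver-call.go / plugins.go errors that follows (from 11:51:37.239 onward) is a single mechanism repeating: the kubelet's FlexVolume prober execs each driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument "init" and expects a JSON status on stdout. Calico's nodeagent~uds/uds binary is not installed on this node ("executable file not found in $PATH"), so stdout is empty and decoding it fails with exactly "unexpected end of JSON input"; the probe reruns on every plugin rescan, which is why the same three-line group recurs dozens of times, and it is noisy but harmless as long as nothing mounts a FlexVolume. A reduced reproduction of that failure path; the driverStatus shape is an assumption modelled on the FlexVolume convention, not the kubelet's actual type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus approximates the JSON a FlexVolume driver prints for "init",
// e.g. {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities"`
}

func callDriver(path string, args ...string) (*driverStatus, error) {
	out, execErr := exec.Command(path, args...).Output()
	// Even when the exec itself fails, the prober still tries to decode
	// stdout, which is why the log shows both errors for one probe.
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output for command: %s, output: %q, error: %v (exec: %v)",
			args[0], string(out), err, execErr)
	}
	return &st, nil
}

func main() {
	// Path taken from the log; the binary does not exist on this node, so
	// out is empty and json.Unmarshal reports "unexpected end of JSON input".
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println(err)
}
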
Jan 29 11:51:37.110580 kubelet[3506]: I0129 11:51:37.110417 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/467871b2-ba2a-477a-986c-ec239f655fdb-flexvol-driver-host\") pod \"calico-node-zsl8s\" (UID: \"467871b2-ba2a-477a-986c-ec239f655fdb\") " pod="calico-system/calico-node-zsl8s" Jan 29 11:51:37.113011 kubelet[3506]: I0129 11:51:37.111962 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq9b2\" (UniqueName: \"kubernetes.io/projected/467871b2-ba2a-477a-986c-ec239f655fdb-kube-api-access-rq9b2\") pod \"calico-node-zsl8s\" (UID: \"467871b2-ba2a-477a-986c-ec239f655fdb\") " pod="calico-system/calico-node-zsl8s" Jan 29 11:51:37.113011 kubelet[3506]: I0129 11:51:37.112118 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/467871b2-ba2a-477a-986c-ec239f655fdb-lib-modules\") pod \"calico-node-zsl8s\" (UID: \"467871b2-ba2a-477a-986c-ec239f655fdb\") " pod="calico-system/calico-node-zsl8s" Jan 29 11:51:37.113011 kubelet[3506]: I0129 11:51:37.112167 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/467871b2-ba2a-477a-986c-ec239f655fdb-xtables-lock\") pod \"calico-node-zsl8s\" (UID: \"467871b2-ba2a-477a-986c-ec239f655fdb\") " pod="calico-system/calico-node-zsl8s" Jan 29 11:51:37.113011 kubelet[3506]: I0129 11:51:37.112285 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/467871b2-ba2a-477a-986c-ec239f655fdb-policysync\") pod \"calico-node-zsl8s\" (UID: \"467871b2-ba2a-477a-986c-ec239f655fdb\") " pod="calico-system/calico-node-zsl8s" Jan 29 11:51:37.113011 kubelet[3506]: I0129 11:51:37.112326 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/467871b2-ba2a-477a-986c-ec239f655fdb-var-lib-calico\") pod \"calico-node-zsl8s\" (UID: \"467871b2-ba2a-477a-986c-ec239f655fdb\") " pod="calico-system/calico-node-zsl8s" Jan 29 11:51:37.113465 kubelet[3506]: I0129 11:51:37.112368 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/467871b2-ba2a-477a-986c-ec239f655fdb-tigera-ca-bundle\") pod \"calico-node-zsl8s\" (UID: \"467871b2-ba2a-477a-986c-ec239f655fdb\") " pod="calico-system/calico-node-zsl8s" Jan 29 11:51:37.113465 kubelet[3506]: I0129 11:51:37.112413 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/467871b2-ba2a-477a-986c-ec239f655fdb-node-certs\") pod \"calico-node-zsl8s\" (UID: \"467871b2-ba2a-477a-986c-ec239f655fdb\") " pod="calico-system/calico-node-zsl8s" Jan 29 11:51:37.113465 kubelet[3506]: I0129 11:51:37.112451 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/467871b2-ba2a-477a-986c-ec239f655fdb-var-run-calico\") pod \"calico-node-zsl8s\" (UID: \"467871b2-ba2a-477a-986c-ec239f655fdb\") " pod="calico-system/calico-node-zsl8s" Jan 29 11:51:37.113465 kubelet[3506]: I0129 11:51:37.112488 3506 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/467871b2-ba2a-477a-986c-ec239f655fdb-cni-net-dir\") pod \"calico-node-zsl8s\" (UID: \"467871b2-ba2a-477a-986c-ec239f655fdb\") " pod="calico-system/calico-node-zsl8s" Jan 29 11:51:37.113465 kubelet[3506]: I0129 11:51:37.112536 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/467871b2-ba2a-477a-986c-ec239f655fdb-cni-log-dir\") pod \"calico-node-zsl8s\" (UID: \"467871b2-ba2a-477a-986c-ec239f655fdb\") " pod="calico-system/calico-node-zsl8s" Jan 29 11:51:37.113750 kubelet[3506]: I0129 11:51:37.112605 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/467871b2-ba2a-477a-986c-ec239f655fdb-cni-bin-dir\") pod \"calico-node-zsl8s\" (UID: \"467871b2-ba2a-477a-986c-ec239f655fdb\") " pod="calico-system/calico-node-zsl8s" Jan 29 11:51:37.116442 containerd[2021]: time="2025-01-29T11:51:37.116319193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64b7bf85b4-mjxtw,Uid:bce51d3f-aa62-4fe2-a22f-597354f20001,Namespace:calico-system,Attempt:0,}" Jan 29 11:51:37.188729 containerd[2021]: time="2025-01-29T11:51:37.187342334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:51:37.188729 containerd[2021]: time="2025-01-29T11:51:37.187462166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:51:37.188729 containerd[2021]: time="2025-01-29T11:51:37.187491362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:51:37.188729 containerd[2021]: time="2025-01-29T11:51:37.187659518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:51:37.241152 kubelet[3506]: E0129 11:51:37.239968 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.241152 kubelet[3506]: W0129 11:51:37.240012 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.241152 kubelet[3506]: E0129 11:51:37.240050 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:51:37.241152 kubelet[3506]: E0129 11:51:37.240480 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bmlgf" podUID="b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2" Jan 29 11:51:37.258737 kubelet[3506]: E0129 11:51:37.258580 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.258737 kubelet[3506]: W0129 11:51:37.258649 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.258737 kubelet[3506]: E0129 11:51:37.258688 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.264472 systemd[1]: Started cri-containerd-78bd543b3552b652522d72df2430bb893e4c5ed3d1892207a45614f98b3057c2.scope - libcontainer container 78bd543b3552b652522d72df2430bb893e4c5ed3d1892207a45614f98b3057c2. Jan 29 11:51:37.291574 kubelet[3506]: E0129 11:51:37.291219 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.291574 kubelet[3506]: W0129 11:51:37.291264 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.291574 kubelet[3506]: E0129 11:51:37.291329 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.297127 kubelet[3506]: E0129 11:51:37.296211 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.297127 kubelet[3506]: W0129 11:51:37.296576 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.298530 kubelet[3506]: E0129 11:51:37.297604 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.303851 kubelet[3506]: E0129 11:51:37.302936 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.303851 kubelet[3506]: W0129 11:51:37.302980 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.303851 kubelet[3506]: E0129 11:51:37.303014 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:51:37.311137 kubelet[3506]: E0129 11:51:37.307886 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.311137 kubelet[3506]: W0129 11:51:37.307939 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.311137 kubelet[3506]: E0129 11:51:37.307975 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.313332 kubelet[3506]: E0129 11:51:37.313276 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.313332 kubelet[3506]: W0129 11:51:37.313320 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.313579 kubelet[3506]: E0129 11:51:37.313356 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.315512 kubelet[3506]: E0129 11:51:37.315457 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.315512 kubelet[3506]: W0129 11:51:37.315500 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.315725 kubelet[3506]: E0129 11:51:37.315536 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.316222 kubelet[3506]: E0129 11:51:37.315983 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.316222 kubelet[3506]: W0129 11:51:37.316020 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.316222 kubelet[3506]: E0129 11:51:37.316057 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.318906 kubelet[3506]: E0129 11:51:37.318435 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.318906 kubelet[3506]: W0129 11:51:37.318477 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.318906 kubelet[3506]: E0129 11:51:37.318514 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:51:37.319373 kubelet[3506]: E0129 11:51:37.319036 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.319373 kubelet[3506]: W0129 11:51:37.319065 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.319373 kubelet[3506]: E0129 11:51:37.319147 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.319617 kubelet[3506]: E0129 11:51:37.319584 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.319617 kubelet[3506]: W0129 11:51:37.319611 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.319736 kubelet[3506]: E0129 11:51:37.319641 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.323251 kubelet[3506]: E0129 11:51:37.322266 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.323251 kubelet[3506]: W0129 11:51:37.322310 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.323251 kubelet[3506]: E0129 11:51:37.322350 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.323251 kubelet[3506]: E0129 11:51:37.322790 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.323251 kubelet[3506]: W0129 11:51:37.322817 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.323251 kubelet[3506]: E0129 11:51:37.322850 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.323715 kubelet[3506]: E0129 11:51:37.323337 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.323715 kubelet[3506]: W0129 11:51:37.323362 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.323715 kubelet[3506]: E0129 11:51:37.323406 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:51:37.324500 kubelet[3506]: E0129 11:51:37.324446 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.324500 kubelet[3506]: W0129 11:51:37.324488 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.326690 kubelet[3506]: E0129 11:51:37.324536 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.326690 kubelet[3506]: E0129 11:51:37.326233 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.326690 kubelet[3506]: W0129 11:51:37.326264 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.326690 kubelet[3506]: E0129 11:51:37.326299 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.327150 kubelet[3506]: E0129 11:51:37.326821 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.327150 kubelet[3506]: W0129 11:51:37.326848 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.327150 kubelet[3506]: E0129 11:51:37.326881 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.328997 kubelet[3506]: E0129 11:51:37.328643 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.328997 kubelet[3506]: W0129 11:51:37.328683 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.328997 kubelet[3506]: E0129 11:51:37.328826 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.330671 kubelet[3506]: E0129 11:51:37.330509 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.330671 kubelet[3506]: W0129 11:51:37.330547 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.330671 kubelet[3506]: E0129 11:51:37.330607 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:51:37.333267 kubelet[3506]: E0129 11:51:37.332380 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.333267 kubelet[3506]: W0129 11:51:37.332426 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.333267 kubelet[3506]: E0129 11:51:37.332465 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.338585 kubelet[3506]: E0129 11:51:37.338282 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.338585 kubelet[3506]: W0129 11:51:37.338328 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.338585 kubelet[3506]: E0129 11:51:37.338362 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.339810 kubelet[3506]: E0129 11:51:37.339373 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.339810 kubelet[3506]: W0129 11:51:37.339412 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.339810 kubelet[3506]: E0129 11:51:37.339449 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.343629 kubelet[3506]: E0129 11:51:37.343271 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.343629 kubelet[3506]: W0129 11:51:37.343312 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.343629 kubelet[3506]: E0129 11:51:37.343349 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:51:37.343629 kubelet[3506]: I0129 11:51:37.343401 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2-kubelet-dir\") pod \"csi-node-driver-bmlgf\" (UID: \"b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2\") " pod="calico-system/csi-node-driver-bmlgf" Jan 29 11:51:37.348126 kubelet[3506]: E0129 11:51:37.346217 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.348126 kubelet[3506]: W0129 11:51:37.346263 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.348126 kubelet[3506]: E0129 11:51:37.346299 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.348126 kubelet[3506]: I0129 11:51:37.346346 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2-socket-dir\") pod \"csi-node-driver-bmlgf\" (UID: \"b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2\") " pod="calico-system/csi-node-driver-bmlgf" Jan 29 11:51:37.348126 kubelet[3506]: E0129 11:51:37.347194 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.348126 kubelet[3506]: W0129 11:51:37.347229 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.348126 kubelet[3506]: E0129 11:51:37.347263 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.348126 kubelet[3506]: I0129 11:51:37.347307 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2-varrun\") pod \"csi-node-driver-bmlgf\" (UID: \"b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2\") " pod="calico-system/csi-node-driver-bmlgf" Jan 29 11:51:37.351279 kubelet[3506]: E0129 11:51:37.350875 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.351279 kubelet[3506]: W0129 11:51:37.350917 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.351279 kubelet[3506]: E0129 11:51:37.350956 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:51:37.351279 kubelet[3506]: I0129 11:51:37.351002 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l84gz\" (UniqueName: \"kubernetes.io/projected/b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2-kube-api-access-l84gz\") pod \"csi-node-driver-bmlgf\" (UID: \"b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2\") " pod="calico-system/csi-node-driver-bmlgf" Jan 29 11:51:37.353608 kubelet[3506]: E0129 11:51:37.353357 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.353608 kubelet[3506]: W0129 11:51:37.353401 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.353608 kubelet[3506]: E0129 11:51:37.353458 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.354403 kubelet[3506]: I0129 11:51:37.354132 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2-registration-dir\") pod \"csi-node-driver-bmlgf\" (UID: \"b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2\") " pod="calico-system/csi-node-driver-bmlgf" Jan 29 11:51:37.355347 kubelet[3506]: E0129 11:51:37.355266 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.355347 kubelet[3506]: W0129 11:51:37.355303 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.357232 kubelet[3506]: E0129 11:51:37.355622 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.357232 kubelet[3506]: E0129 11:51:37.357149 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.357232 kubelet[3506]: W0129 11:51:37.357185 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.358296 kubelet[3506]: E0129 11:51:37.357584 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.359832 kubelet[3506]: E0129 11:51:37.359473 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.359832 kubelet[3506]: W0129 11:51:37.359510 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.359832 kubelet[3506]: E0129 11:51:37.359575 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:51:37.362807 kubelet[3506]: E0129 11:51:37.362682 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.363131 kubelet[3506]: W0129 11:51:37.362944 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.363131 kubelet[3506]: E0129 11:51:37.363024 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.366261 kubelet[3506]: E0129 11:51:37.363832 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.366261 kubelet[3506]: W0129 11:51:37.363872 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.366261 kubelet[3506]: E0129 11:51:37.365973 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.369117 kubelet[3506]: E0129 11:51:37.367017 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.369117 kubelet[3506]: W0129 11:51:37.367054 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.369117 kubelet[3506]: E0129 11:51:37.367167 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.371756 kubelet[3506]: E0129 11:51:37.369947 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.371756 kubelet[3506]: W0129 11:51:37.369993 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.371756 kubelet[3506]: E0129 11:51:37.370030 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:51:37.375333 kubelet[3506]: E0129 11:51:37.373597 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:51:37.375333 kubelet[3506]: W0129 11:51:37.373635 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:51:37.375333 kubelet[3506]: E0129 11:51:37.373676 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 29 11:51:37.376664 kubelet[3506]: E0129 11:51:37.376621 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:51:37.377746 kubelet[3506]: W0129 11:51:37.377105 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:51:37.377746 kubelet[3506]: E0129 11:51:37.377164 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:51:37.390517 containerd[2021]: time="2025-01-29T11:51:37.390373395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zsl8s,Uid:467871b2-ba2a-477a-986c-ec239f655fdb,Namespace:calico-system,Attempt:0,}"
Jan 29 11:51:37.480200 containerd[2021]: time="2025-01-29T11:51:37.478827615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:51:37.488269 containerd[2021]: time="2025-01-29T11:51:37.486213999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:51:37.488269 containerd[2021]: time="2025-01-29T11:51:37.486267879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:51:37.488269 containerd[2021]: time="2025-01-29T11:51:37.486458403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:51:37.563466 systemd[1]: Started cri-containerd-821ca6093d9376a80e37b067396cce015c09f854126ed8907139c590aa5cabb6.scope - libcontainer container 821ca6093d9376a80e37b067396cce015c09f854126ed8907139c590aa5cabb6.
Jan 29 11:51:37.600780 kubelet[3506]: E0129 11:51:37.600690 3506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:51:37.601300 kubelet[3506]: W0129 11:51:37.601036 3506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:51:37.601585 kubelet[3506]: E0129 11:51:37.601480 3506 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:51:37.696558 containerd[2021]: time="2025-01-29T11:51:37.696249088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zsl8s,Uid:467871b2-ba2a-477a-986c-ec239f655fdb,Namespace:calico-system,Attempt:0,} returns sandbox id \"821ca6093d9376a80e37b067396cce015c09f854126ed8907139c590aa5cabb6\""
Jan 29 11:51:37.706538 containerd[2021]: time="2025-01-29T11:51:37.705221092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 29 11:51:37.790501 containerd[2021]: time="2025-01-29T11:51:37.790336805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64b7bf85b4-mjxtw,Uid:bce51d3f-aa62-4fe2-a22f-597354f20001,Namespace:calico-system,Attempt:0,} returns sandbox id \"78bd543b3552b652522d72df2430bb893e4c5ed3d1892207a45614f98b3057c2\""
Jan 29 11:51:38.934147 kubelet[3506]: E0129 11:51:38.933769 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bmlgf" podUID="b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2"
Jan 29 11:51:39.435337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3073484795.mount: Deactivated successfully.
Jan 29 11:51:39.669124 containerd[2021]: time="2025-01-29T11:51:39.666739962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:51:39.671736 containerd[2021]: time="2025-01-29T11:51:39.671584062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603"
Jan 29 11:51:39.672516 containerd[2021]: time="2025-01-29T11:51:39.672384750Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:51:39.685178 containerd[2021]: time="2025-01-29T11:51:39.684415830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:51:39.686670 containerd[2021]: time="2025-01-29T11:51:39.686505954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.98026143s"
Jan 29 11:51:39.686902 containerd[2021]: time="2025-01-29T11:51:39.686857902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Jan 29 11:51:39.690065 containerd[2021]: time="2025-01-29T11:51:39.689997522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 29 11:51:39.693341 containerd[2021]: time="2025-01-29T11:51:39.693261030Z" level=info msg="CreateContainer within sandbox \"821ca6093d9376a80e37b067396cce015c09f854126ed8907139c590aa5cabb6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 29 11:51:39.728128 containerd[2021]: time="2025-01-29T11:51:39.727542522Z" level=info msg="CreateContainer within sandbox \"821ca6093d9376a80e37b067396cce015c09f854126ed8907139c590aa5cabb6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"63b0a32e33007a71a93bf6056a3ca94a412655792367671d5891fc59e27e77f3\""
Jan 29 11:51:39.730688 containerd[2021]: time="2025-01-29T11:51:39.729701082Z" level=info msg="StartContainer for \"63b0a32e33007a71a93bf6056a3ca94a412655792367671d5891fc59e27e77f3\""
Jan 29 11:51:39.804507 systemd[1]: Started cri-containerd-63b0a32e33007a71a93bf6056a3ca94a412655792367671d5891fc59e27e77f3.scope - libcontainer container 63b0a32e33007a71a93bf6056a3ca94a412655792367671d5891fc59e27e77f3.
Jan 29 11:51:39.875991 containerd[2021]: time="2025-01-29T11:51:39.875897707Z" level=info msg="StartContainer for \"63b0a32e33007a71a93bf6056a3ca94a412655792367671d5891fc59e27e77f3\" returns successfully"
Jan 29 11:51:39.924528 systemd[1]: cri-containerd-63b0a32e33007a71a93bf6056a3ca94a412655792367671d5891fc59e27e77f3.scope: Deactivated successfully.
Jan 29 11:51:40.098172 containerd[2021]: time="2025-01-29T11:51:40.098014660Z" level=info msg="shim disconnected" id=63b0a32e33007a71a93bf6056a3ca94a412655792367671d5891fc59e27e77f3 namespace=k8s.io
Jan 29 11:51:40.098172 containerd[2021]: time="2025-01-29T11:51:40.098137168Z" level=warning msg="cleaning up after shim disconnected" id=63b0a32e33007a71a93bf6056a3ca94a412655792367671d5891fc59e27e77f3 namespace=k8s.io
Jan 29 11:51:40.098172 containerd[2021]: time="2025-01-29T11:51:40.098163616Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:51:40.363545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63b0a32e33007a71a93bf6056a3ca94a412655792367671d5891fc59e27e77f3-rootfs.mount: Deactivated successfully.
Jan 29 11:51:40.934937 kubelet[3506]: E0129 11:51:40.934667 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bmlgf" podUID="b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2"
Jan 29 11:51:42.085313 containerd[2021]: time="2025-01-29T11:51:42.084296622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:51:42.086562 containerd[2021]: time="2025-01-29T11:51:42.086382462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=27861516"
Jan 29 11:51:42.089144 containerd[2021]: time="2025-01-29T11:51:42.088999086Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:51:42.096361 containerd[2021]: time="2025-01-29T11:51:42.096222822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:51:42.098619 containerd[2021]: time="2025-01-29T11:51:42.098380206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.408314392s"
Jan 29 11:51:42.098619 containerd[2021]: time="2025-01-29T11:51:42.098451678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Jan 29 11:51:42.101483 containerd[2021]: time="2025-01-29T11:51:42.101156802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 29 11:51:42.141167 containerd[2021]: time="2025-01-29T11:51:42.141056742Z" level=info msg="CreateContainer within sandbox \"78bd543b3552b652522d72df2430bb893e4c5ed3d1892207a45614f98b3057c2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 29 11:51:42.176210 containerd[2021]: time="2025-01-29T11:51:42.175964814Z" level=info msg="CreateContainer within sandbox \"78bd543b3552b652522d72df2430bb893e4c5ed3d1892207a45614f98b3057c2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b8f316b269ffb497e11fe02ce81fceec658b34edb35a8550596e68b8d8bf1435\""
Jan 29 11:51:42.178241 containerd[2021]: time="2025-01-29T11:51:42.177145890Z" level=info msg="StartContainer for \"b8f316b269ffb497e11fe02ce81fceec658b34edb35a8550596e68b8d8bf1435\""
Jan 29 11:51:42.239453 systemd[1]: Started cri-containerd-b8f316b269ffb497e11fe02ce81fceec658b34edb35a8550596e68b8d8bf1435.scope - libcontainer container b8f316b269ffb497e11fe02ce81fceec658b34edb35a8550596e68b8d8bf1435.
Jan 29 11:51:42.317810 containerd[2021]: time="2025-01-29T11:51:42.317672251Z" level=info msg="StartContainer for \"b8f316b269ffb497e11fe02ce81fceec658b34edb35a8550596e68b8d8bf1435\" returns successfully"
Jan 29 11:51:42.935137 kubelet[3506]: E0129 11:51:42.934052 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bmlgf" podUID="b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2"
Jan 29 11:51:44.150214 kubelet[3506]: I0129 11:51:44.149948 3506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:51:44.934240 kubelet[3506]: E0129 11:51:44.934138 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bmlgf" podUID="b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2"
Jan 29 11:51:46.936111 kubelet[3506]: E0129 11:51:46.933926 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bmlgf" podUID="b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2"
Jan 29 11:51:47.558570 containerd[2021]: time="2025-01-29T11:51:47.558485221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:51:47.561626 containerd[2021]: time="2025-01-29T11:51:47.561545977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Jan 29 11:51:47.564245 containerd[2021]: time="2025-01-29T11:51:47.564134005Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:51:47.571427 containerd[2021]: time="2025-01-29T11:51:47.571314505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:51:47.573228 containerd[2021]: time="2025-01-29T11:51:47.573151417Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 5.471856627s"
Jan 29 11:51:47.573228 containerd[2021]: time="2025-01-29T11:51:47.573222505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Jan 29 11:51:47.579302 containerd[2021]: time="2025-01-29T11:51:47.578693365Z" level=info msg="CreateContainer within sandbox \"821ca6093d9376a80e37b067396cce015c09f854126ed8907139c590aa5cabb6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 29 11:51:47.611210 containerd[2021]: time="2025-01-29T11:51:47.611068573Z" level=info msg="CreateContainer within sandbox \"821ca6093d9376a80e37b067396cce015c09f854126ed8907139c590aa5cabb6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8aa5b1e80536b300f2238788f570201a4d3bcad28f1f070e239331944e0a5c83\""
Jan 29 11:51:47.613182 containerd[2021]: time="2025-01-29T11:51:47.612053905Z" level=info msg="StartContainer for \"8aa5b1e80536b300f2238788f570201a4d3bcad28f1f070e239331944e0a5c83\""
Jan 29 11:51:47.685911 systemd[1]: run-containerd-runc-k8s.io-8aa5b1e80536b300f2238788f570201a4d3bcad28f1f070e239331944e0a5c83-runc.EgLfXB.mount: Deactivated successfully.
Jan 29 11:51:47.698475 systemd[1]: Started cri-containerd-8aa5b1e80536b300f2238788f570201a4d3bcad28f1f070e239331944e0a5c83.scope - libcontainer container 8aa5b1e80536b300f2238788f570201a4d3bcad28f1f070e239331944e0a5c83.
Jan 29 11:51:47.757852 containerd[2021]: time="2025-01-29T11:51:47.757624274Z" level=info msg="StartContainer for \"8aa5b1e80536b300f2238788f570201a4d3bcad28f1f070e239331944e0a5c83\" returns successfully"
Jan 29 11:51:48.210990 kubelet[3506]: I0129 11:51:48.210871 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64b7bf85b4-mjxtw" podStartSLOduration=7.903847907 podStartE2EDuration="12.210844056s" podCreationTimestamp="2025-01-29 11:51:36 +0000 UTC" firstStartedPulling="2025-01-29 11:51:37.793409009 +0000 UTC m=+15.071574748" lastFinishedPulling="2025-01-29 11:51:42.100404834 +0000 UTC m=+19.378570897" observedRunningTime="2025-01-29 11:51:43.166657687 +0000 UTC m=+20.444823438" watchObservedRunningTime="2025-01-29 11:51:48.210844056 +0000 UTC m=+25.489009795"
Jan 29 11:51:48.678407 containerd[2021]: time="2025-01-29T11:51:48.678338295Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 11:51:48.685183 systemd[1]: cri-containerd-8aa5b1e80536b300f2238788f570201a4d3bcad28f1f070e239331944e0a5c83.scope: Deactivated successfully.
Jan 29 11:51:48.717267 kubelet[3506]: I0129 11:51:48.716967 3506 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Jan 29 11:51:48.739684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8aa5b1e80536b300f2238788f570201a4d3bcad28f1f070e239331944e0a5c83-rootfs.mount: Deactivated successfully.
Jan 29 11:51:48.811965 systemd[1]: Created slice kubepods-besteffort-pod9a019af1_803b_44eb_a929_7256065a6820.slice - libcontainer container kubepods-besteffort-pod9a019af1_803b_44eb_a929_7256065a6820.slice.
Jan 29 11:51:48.839683 systemd[1]: Created slice kubepods-burstable-pod0469310f_6c9c_4d38_9ef3_1ec8ed658901.slice - libcontainer container kubepods-burstable-pod0469310f_6c9c_4d38_9ef3_1ec8ed658901.slice.
Jan 29 11:51:48.852544 kubelet[3506]: W0129 11:51:48.852477 3506 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-25-252" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-252' and this object
Jan 29 11:51:48.852815 kubelet[3506]: E0129 11:51:48.852571 3506 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-25-252\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-25-252' and this object" logger="UnhandledError"
Jan 29 11:51:48.866531 systemd[1]: Created slice kubepods-besteffort-pod31697fa8_c2a5_4e89_a21e_74d67fa947f6.slice - libcontainer container kubepods-besteffort-pod31697fa8_c2a5_4e89_a21e_74d67fa947f6.slice.
Jan 29 11:51:48.882913 kubelet[3506]: I0129 11:51:48.882536 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0469310f-6c9c-4d38-9ef3-1ec8ed658901-config-volume\") pod \"coredns-668d6bf9bc-fksp9\" (UID: \"0469310f-6c9c-4d38-9ef3-1ec8ed658901\") " pod="kube-system/coredns-668d6bf9bc-fksp9"
Jan 29 11:51:48.882913 kubelet[3506]: I0129 11:51:48.882634 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9a019af1-803b-44eb-a929-7256065a6820-calico-apiserver-certs\") pod \"calico-apiserver-9c789bfd7-nfxg2\" (UID: \"9a019af1-803b-44eb-a929-7256065a6820\") " pod="calico-apiserver/calico-apiserver-9c789bfd7-nfxg2"
Jan 29 11:51:48.882913 kubelet[3506]: I0129 11:51:48.882683 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh9vl\" (UniqueName: \"kubernetes.io/projected/9a019af1-803b-44eb-a929-7256065a6820-kube-api-access-nh9vl\") pod \"calico-apiserver-9c789bfd7-nfxg2\" (UID: \"9a019af1-803b-44eb-a929-7256065a6820\") " pod="calico-apiserver/calico-apiserver-9c789bfd7-nfxg2"
Jan 29 11:51:48.882913 kubelet[3506]: I0129 11:51:48.882741 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pch2j\" (UniqueName: \"kubernetes.io/projected/0469310f-6c9c-4d38-9ef3-1ec8ed658901-kube-api-access-pch2j\") pod \"coredns-668d6bf9bc-fksp9\" (UID: \"0469310f-6c9c-4d38-9ef3-1ec8ed658901\") " pod="kube-system/coredns-668d6bf9bc-fksp9"
Jan 29 11:51:48.882913 kubelet[3506]: I0129 11:51:48.882782 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/31697fa8-c2a5-4e89-a21e-74d67fa947f6-calico-apiserver-certs\") pod \"calico-apiserver-9c789bfd7-xd4wr\" (UID: \"31697fa8-c2a5-4e89-a21e-74d67fa947f6\") " pod="calico-apiserver/calico-apiserver-9c789bfd7-xd4wr"
Jan 29 11:51:48.883431 kubelet[3506]: I0129 11:51:48.882823 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vnkg\" (UniqueName: \"kubernetes.io/projected/31697fa8-c2a5-4e89-a21e-74d67fa947f6-kube-api-access-7vnkg\") pod \"calico-apiserver-9c789bfd7-xd4wr\" (UID: \"31697fa8-c2a5-4e89-a21e-74d67fa947f6\") " pod="calico-apiserver/calico-apiserver-9c789bfd7-xd4wr"
Jan 29 11:51:48.917176 systemd[1]: Created slice kubepods-burstable-pod317d0f0f_daf3_4642_ac24_1b9a8ffb1530.slice - libcontainer container kubepods-burstable-pod317d0f0f_daf3_4642_ac24_1b9a8ffb1530.slice.
Jan 29 11:51:48.945176 systemd[1]: Created slice kubepods-besteffort-podcaa493ea_026d_43ba_9a30_a9a39462916f.slice - libcontainer container kubepods-besteffort-podcaa493ea_026d_43ba_9a30_a9a39462916f.slice.
Jan 29 11:51:48.975056 systemd[1]: Created slice kubepods-besteffort-podb30c1856_9cdc_4b5b_8c49_553cd3b8a9f2.slice - libcontainer container kubepods-besteffort-podb30c1856_9cdc_4b5b_8c49_553cd3b8a9f2.slice.
Jan 29 11:51:48.984629 kubelet[3506]: I0129 11:51:48.983168 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caa493ea-026d-43ba-9a30-a9a39462916f-tigera-ca-bundle\") pod \"calico-kube-controllers-855db7d7f9-phjqx\" (UID: \"caa493ea-026d-43ba-9a30-a9a39462916f\") " pod="calico-system/calico-kube-controllers-855db7d7f9-phjqx"
Jan 29 11:51:48.984629 kubelet[3506]: I0129 11:51:48.983253 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5q65\" (UniqueName: \"kubernetes.io/projected/317d0f0f-daf3-4642-ac24-1b9a8ffb1530-kube-api-access-p5q65\") pod \"coredns-668d6bf9bc-mqdbw\" (UID: \"317d0f0f-daf3-4642-ac24-1b9a8ffb1530\") " pod="kube-system/coredns-668d6bf9bc-mqdbw"
Jan 29 11:51:48.984629 kubelet[3506]: I0129 11:51:48.983385 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhk4r\" (UniqueName: \"kubernetes.io/projected/caa493ea-026d-43ba-9a30-a9a39462916f-kube-api-access-nhk4r\") pod \"calico-kube-controllers-855db7d7f9-phjqx\" (UID: \"caa493ea-026d-43ba-9a30-a9a39462916f\") " pod="calico-system/calico-kube-controllers-855db7d7f9-phjqx"
Jan 29 11:51:48.984629 kubelet[3506]: I0129 11:51:48.983439 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/317d0f0f-daf3-4642-ac24-1b9a8ffb1530-config-volume\") pod \"coredns-668d6bf9bc-mqdbw\" (UID: \"317d0f0f-daf3-4642-ac24-1b9a8ffb1530\") " pod="kube-system/coredns-668d6bf9bc-mqdbw"
Jan 29 11:51:48.986847 containerd[2021]: time="2025-01-29T11:51:48.986778556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bmlgf,Uid:b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2,Namespace:calico-system,Attempt:0,}"
Jan 29 11:51:49.135226 containerd[2021]: time="2025-01-29T11:51:49.132667525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c789bfd7-nfxg2,Uid:9a019af1-803b-44eb-a929-7256065a6820,Namespace:calico-apiserver,Attempt:0,}"
Jan 29 11:51:49.177516 containerd[2021]: time="2025-01-29T11:51:49.177443077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c789bfd7-xd4wr,Uid:31697fa8-c2a5-4e89-a21e-74d67fa947f6,Namespace:calico-apiserver,Attempt:0,}"
Jan 29 11:51:49.259140 containerd[2021]: time="2025-01-29T11:51:49.258728486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-855db7d7f9-phjqx,Uid:caa493ea-026d-43ba-9a30-a9a39462916f,Namespace:calico-system,Attempt:0,}"
Jan 29 11:51:49.987159 kubelet[3506]: E0129 11:51:49.986760 3506 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Jan 29 11:51:49.987159 kubelet[3506]: E0129 11:51:49.986913 3506 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0469310f-6c9c-4d38-9ef3-1ec8ed658901-config-volume podName:0469310f-6c9c-4d38-9ef3-1ec8ed658901 nodeName:}" failed. No retries permitted until 2025-01-29 11:51:50.486875121 +0000 UTC m=+27.765040872 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0469310f-6c9c-4d38-9ef3-1ec8ed658901-config-volume") pod "coredns-668d6bf9bc-fksp9" (UID: "0469310f-6c9c-4d38-9ef3-1ec8ed658901") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 11:51:50.030846 containerd[2021]: time="2025-01-29T11:51:50.030709297Z" level=info msg="shim disconnected" id=8aa5b1e80536b300f2238788f570201a4d3bcad28f1f070e239331944e0a5c83 namespace=k8s.io
Jan 29 11:51:50.034581 containerd[2021]: time="2025-01-29T11:51:50.030947653Z" level=warning msg="cleaning up after shim disconnected" id=8aa5b1e80536b300f2238788f570201a4d3bcad28f1f070e239331944e0a5c83 namespace=k8s.io
Jan 29 11:51:50.034581 containerd[2021]: time="2025-01-29T11:51:50.030972733Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:51:50.092714 kubelet[3506]: E0129 11:51:50.092149 3506 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Jan 29 11:51:50.092714 kubelet[3506]: E0129 11:51:50.092252 3506 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/317d0f0f-daf3-4642-ac24-1b9a8ffb1530-config-volume podName:317d0f0f-daf3-4642-ac24-1b9a8ffb1530 nodeName:}" failed. No retries permitted until 2025-01-29 11:51:50.592226422 +0000 UTC m=+27.870392161 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/317d0f0f-daf3-4642-ac24-1b9a8ffb1530-config-volume") pod "coredns-668d6bf9bc-mqdbw" (UID: "317d0f0f-daf3-4642-ac24-1b9a8ffb1530") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 11:51:50.121116 containerd[2021]: time="2025-01-29T11:51:50.120405602Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:51:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 11:51:50.219895 containerd[2021]: time="2025-01-29T11:51:50.219781634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 29 11:51:50.383470 containerd[2021]: time="2025-01-29T11:51:50.383402703Z" level=error msg="Failed to destroy network for sandbox \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.384331 containerd[2021]: time="2025-01-29T11:51:50.384272919Z" level=error msg="encountered an error cleaning up failed sandbox \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.385154 containerd[2021]: time="2025-01-29T11:51:50.384525015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c789bfd7-nfxg2,Uid:9a019af1-803b-44eb-a929-7256065a6820,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.385414 kubelet[3506]: E0129 11:51:50.384880 3506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.385414 kubelet[3506]: E0129 11:51:50.384975 3506 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9c789bfd7-nfxg2"
Jan 29 11:51:50.385414 kubelet[3506]: E0129 11:51:50.385011 3506 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9c789bfd7-nfxg2"
Jan 29 11:51:50.386658 kubelet[3506]: E0129 11:51:50.385699 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9c789bfd7-nfxg2_calico-apiserver(9a019af1-803b-44eb-a929-7256065a6820)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9c789bfd7-nfxg2_calico-apiserver(9a019af1-803b-44eb-a929-7256065a6820)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9c789bfd7-nfxg2" podUID="9a019af1-803b-44eb-a929-7256065a6820"
Jan 29 11:51:50.392664 containerd[2021]: time="2025-01-29T11:51:50.392544015Z" level=error msg="Failed to destroy network for sandbox \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.393421 containerd[2021]: time="2025-01-29T11:51:50.393276051Z" level=error msg="encountered an error cleaning up failed sandbox \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.393421 containerd[2021]: time="2025-01-29T11:51:50.393385011Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-855db7d7f9-phjqx,Uid:caa493ea-026d-43ba-9a30-a9a39462916f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.395571 kubelet[3506]: E0129 11:51:50.395230 3506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.395571 kubelet[3506]: E0129 11:51:50.395481 3506 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-855db7d7f9-phjqx"
Jan 29 11:51:50.396000 kubelet[3506]: E0129 11:51:50.395521 3506 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-855db7d7f9-phjqx"
Jan 29 11:51:50.396844 kubelet[3506]: E0129 11:51:50.396141 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-855db7d7f9-phjqx_calico-system(caa493ea-026d-43ba-9a30-a9a39462916f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-855db7d7f9-phjqx_calico-system(caa493ea-026d-43ba-9a30-a9a39462916f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-855db7d7f9-phjqx" podUID="caa493ea-026d-43ba-9a30-a9a39462916f"
Jan 29 11:51:50.419191 containerd[2021]: time="2025-01-29T11:51:50.419013339Z" level=error msg="Failed to destroy network for sandbox \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.419909 containerd[2021]: time="2025-01-29T11:51:50.419635311Z" level=error msg="encountered an error cleaning up failed sandbox \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.419909 containerd[2021]: time="2025-01-29T11:51:50.419730579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c789bfd7-xd4wr,Uid:31697fa8-c2a5-4e89-a21e-74d67fa947f6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.420232 kubelet[3506]: E0129 11:51:50.420156 3506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.420303 kubelet[3506]: E0129 11:51:50.420229 3506 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9c789bfd7-xd4wr"
Jan 29 11:51:50.420303 kubelet[3506]: E0129 11:51:50.420261 3506 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9c789bfd7-xd4wr"
Jan 29 11:51:50.420426 kubelet[3506]: E0129 11:51:50.420324 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9c789bfd7-xd4wr_calico-apiserver(31697fa8-c2a5-4e89-a21e-74d67fa947f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9c789bfd7-xd4wr_calico-apiserver(31697fa8-c2a5-4e89-a21e-74d67fa947f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9c789bfd7-xd4wr" podUID="31697fa8-c2a5-4e89-a21e-74d67fa947f6"
Jan 29 11:51:50.423841 containerd[2021]: time="2025-01-29T11:51:50.423769347Z" level=error msg="Failed to destroy network for sandbox \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.424489 containerd[2021]: time="2025-01-29T11:51:50.424420827Z" level=error msg="encountered an error cleaning up failed sandbox \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.424695 containerd[2021]: time="2025-01-29T11:51:50.424507779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bmlgf,Uid:b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.424951 kubelet[3506]: E0129 11:51:50.424883 3506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.425193 kubelet[3506]: E0129 11:51:50.424964 3506 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bmlgf"
Jan 29 11:51:50.425193 kubelet[3506]: E0129 11:51:50.424999 3506 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bmlgf"
Jan 29 11:51:50.425193 kubelet[3506]: E0129 11:51:50.425140 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bmlgf_calico-system(b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bmlgf_calico-system(b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bmlgf" podUID="b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2"
Jan 29 11:51:50.654722 containerd[2021]: time="2025-01-29T11:51:50.654547192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fksp9,Uid:0469310f-6c9c-4d38-9ef3-1ec8ed658901,Namespace:kube-system,Attempt:0,}"
Jan 29 11:51:50.734147 containerd[2021]: time="2025-01-29T11:51:50.732188681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mqdbw,Uid:317d0f0f-daf3-4642-ac24-1b9a8ffb1530,Namespace:kube-system,Attempt:0,}"
Jan 29 11:51:50.743949 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2-shm.mount: Deactivated successfully.
Jan 29 11:51:50.744292 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc-shm.mount: Deactivated successfully.
Jan 29 11:51:50.744523 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad-shm.mount: Deactivated successfully.
Jan 29 11:51:50.810628 containerd[2021]: time="2025-01-29T11:51:50.810514661Z" level=error msg="Failed to destroy network for sandbox \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.815650 containerd[2021]: time="2025-01-29T11:51:50.815560145Z" level=error msg="encountered an error cleaning up failed sandbox \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.817386 containerd[2021]: time="2025-01-29T11:51:50.815664041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fksp9,Uid:0469310f-6c9c-4d38-9ef3-1ec8ed658901,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.816737 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817-shm.mount: Deactivated successfully.
Jan 29 11:51:50.818705 kubelet[3506]: E0129 11:51:50.815978 3506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.818705 kubelet[3506]: E0129 11:51:50.816056 3506 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fksp9"
Jan 29 11:51:50.818705 kubelet[3506]: E0129 11:51:50.816114 3506 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fksp9"
Jan 29 11:51:50.820426 kubelet[3506]: E0129 11:51:50.816175 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fksp9_kube-system(0469310f-6c9c-4d38-9ef3-1ec8ed658901)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fksp9_kube-system(0469310f-6c9c-4d38-9ef3-1ec8ed658901)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fksp9" podUID="0469310f-6c9c-4d38-9ef3-1ec8ed658901"
Jan 29 11:51:50.887445 containerd[2021]: time="2025-01-29T11:51:50.887349210Z" level=error msg="Failed to destroy network for sandbox \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.890661 containerd[2021]: time="2025-01-29T11:51:50.890585694Z" level=error msg="encountered an error cleaning up failed sandbox \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.890852 containerd[2021]: time="2025-01-29T11:51:50.890700834Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mqdbw,Uid:317d0f0f-daf3-4642-ac24-1b9a8ffb1530,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.891170 kubelet[3506]: E0129 11:51:50.890981 3506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:51:50.891170 kubelet[3506]: E0129 11:51:50.891054 3506 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mqdbw"
Jan 29 11:51:50.893875 kubelet[3506]: E0129 11:51:50.891192 3506 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mqdbw"
Jan 29 11:51:50.893875 kubelet[3506]: E0129 11:51:50.891275 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mqdbw_kube-system(317d0f0f-daf3-4642-ac24-1b9a8ffb1530)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mqdbw_kube-system(317d0f0f-daf3-4642-ac24-1b9a8ffb1530)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mqdbw" podUID="317d0f0f-daf3-4642-ac24-1b9a8ffb1530"
Jan 29 11:51:50.892682 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998-shm.mount: Deactivated successfully.
Jan 29 11:51:51.214559 kubelet[3506]: I0129 11:51:51.213753 3506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817"
Jan 29 11:51:51.215232 containerd[2021]: time="2025-01-29T11:51:51.214999467Z" level=info msg="StopPodSandbox for \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\""
Jan 29 11:51:51.215745 containerd[2021]: time="2025-01-29T11:51:51.215345679Z" level=info msg="Ensure that sandbox b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817 in task-service has been cleanup successfully"
Jan 29 11:51:51.221841 kubelet[3506]: I0129 11:51:51.220356 3506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2"
Jan 29 11:51:51.225580 containerd[2021]: time="2025-01-29T11:51:51.225461007Z" level=info msg="StopPodSandbox for \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\""
Jan 29 11:51:51.228183 containerd[2021]: time="2025-01-29T11:51:51.227062623Z" level=info msg="Ensure that sandbox 41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2 in task-service has been cleanup successfully"
Jan 29 11:51:51.230270 kubelet[3506]: I0129 11:51:51.229961 3506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998"
Jan 29 11:51:51.235514 containerd[2021]: time="2025-01-29T11:51:51.235285851Z" level=info msg="StopPodSandbox for \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\""
Jan 29 11:51:51.238292 containerd[2021]: time="2025-01-29T11:51:51.237816063Z" level=info msg="Ensure that sandbox d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998 in task-service has been cleanup successfully"
Jan 29 11:51:51.240549 kubelet[3506]: I0129 11:51:51.239614 3506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9"
Jan 29 11:51:51.249136 containerd[2021]: time="2025-01-29T11:51:51.247557975Z" level=info msg="StopPodSandbox for \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\""
Jan 29 11:51:51.249672 containerd[2021]: time="2025-01-29T11:51:51.249620547Z" level=info msg="Ensure that sandbox 322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9 in task-service has been cleanup successfully"
Jan 29 11:51:51.255562 kubelet[3506]: I0129 11:51:51.255240 3506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc"
Jan 29 11:51:51.264716 containerd[2021]: time="2025-01-29T11:51:51.263967567Z" level=info msg="StopPodSandbox for \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\""
Jan 29 11:51:51.268305 containerd[2021]: time="2025-01-29T11:51:51.268227508Z" level=info msg="Ensure that sandbox 
b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc in task-service has been cleanup successfully" Jan 29 11:51:51.277167 kubelet[3506]: I0129 11:51:51.277117 3506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Jan 29 11:51:51.279368 containerd[2021]: time="2025-01-29T11:51:51.279298984Z" level=info msg="StopPodSandbox for \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\"" Jan 29 11:51:51.279667 containerd[2021]: time="2025-01-29T11:51:51.279610840Z" level=info msg="Ensure that sandbox 9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad in task-service has been cleanup successfully" Jan 29 11:51:51.404715 containerd[2021]: time="2025-01-29T11:51:51.404624476Z" level=error msg="StopPodSandbox for \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\" failed" error="failed to destroy network for sandbox \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:51:51.405502 kubelet[3506]: E0129 11:51:51.405261 3506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Jan 29 11:51:51.405502 kubelet[3506]: E0129 11:51:51.405382 3506 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817"} Jan 29 11:51:51.405909 kubelet[3506]: E0129 11:51:51.405785 3506 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0469310f-6c9c-4d38-9ef3-1ec8ed658901\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:51:51.406249 kubelet[3506]: E0129 11:51:51.405878 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0469310f-6c9c-4d38-9ef3-1ec8ed658901\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fksp9" podUID="0469310f-6c9c-4d38-9ef3-1ec8ed658901" Jan 29 11:51:51.421340 containerd[2021]: time="2025-01-29T11:51:51.421206976Z" level=error msg="StopPodSandbox for \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\" failed" error="failed to destroy network for sandbox \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:51:51.422232 kubelet[3506]: E0129 11:51:51.421849 3506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Jan 29 11:51:51.422232 kubelet[3506]: E0129 11:51:51.421922 3506 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998"} Jan 29 11:51:51.422232 kubelet[3506]: E0129 11:51:51.421978 3506 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"317d0f0f-daf3-4642-ac24-1b9a8ffb1530\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:51:51.422232 kubelet[3506]: E0129 11:51:51.422022 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"317d0f0f-daf3-4642-ac24-1b9a8ffb1530\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mqdbw" podUID="317d0f0f-daf3-4642-ac24-1b9a8ffb1530" Jan 29 11:51:51.445869 containerd[2021]: time="2025-01-29T11:51:51.444244300Z" level=error msg="StopPodSandbox for \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\" failed" error="failed to destroy network for sandbox \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:51:51.446066 kubelet[3506]: E0129 11:51:51.445364 3506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Jan 29 11:51:51.446066 kubelet[3506]: E0129 11:51:51.445465 3506 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2"} Jan 29 11:51:51.446066 kubelet[3506]: E0129 11:51:51.445571 3506 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"caa493ea-026d-43ba-9a30-a9a39462916f\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:51:51.446066 kubelet[3506]: E0129 11:51:51.445643 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"caa493ea-026d-43ba-9a30-a9a39462916f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-855db7d7f9-phjqx" podUID="caa493ea-026d-43ba-9a30-a9a39462916f" Jan 29 11:51:51.449800 containerd[2021]: time="2025-01-29T11:51:51.449694844Z" level=error msg="StopPodSandbox for \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\" failed" error="failed to destroy network for sandbox \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:51:51.450579 kubelet[3506]: E0129 11:51:51.450003 3506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Jan 29 11:51:51.450579 kubelet[3506]: E0129 11:51:51.450092 3506 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9"} Jan 29 11:51:51.450579 kubelet[3506]: E0129 11:51:51.450231 3506 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31697fa8-c2a5-4e89-a21e-74d67fa947f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:51:51.452092 kubelet[3506]: E0129 11:51:51.450274 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31697fa8-c2a5-4e89-a21e-74d67fa947f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9c789bfd7-xd4wr" podUID="31697fa8-c2a5-4e89-a21e-74d67fa947f6" Jan 29 11:51:51.462851 containerd[2021]: time="2025-01-29T11:51:51.462735004Z" 
level=error msg="StopPodSandbox for \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\" failed" error="failed to destroy network for sandbox \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:51:51.463356 kubelet[3506]: E0129 11:51:51.463295 3506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Jan 29 11:51:51.463623 kubelet[3506]: E0129 11:51:51.463572 3506 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc"} Jan 29 11:51:51.463846 kubelet[3506]: E0129 11:51:51.463791 3506 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a019af1-803b-44eb-a929-7256065a6820\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:51:51.464242 kubelet[3506]: E0129 11:51:51.464158 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a019af1-803b-44eb-a929-7256065a6820\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9c789bfd7-nfxg2" podUID="9a019af1-803b-44eb-a929-7256065a6820" Jan 29 11:51:51.468381 containerd[2021]: time="2025-01-29T11:51:51.468164248Z" level=error msg="StopPodSandbox for \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\" failed" error="failed to destroy network for sandbox \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:51:51.469853 kubelet[3506]: E0129 11:51:51.469497 3506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Jan 29 11:51:51.469853 kubelet[3506]: E0129 11:51:51.469573 3506 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad"} Jan 29 11:51:51.469853 kubelet[3506]: E0129 11:51:51.469630 3506 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:51:51.469853 kubelet[3506]: E0129 11:51:51.469677 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bmlgf" podUID="b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2" Jan 29 11:51:58.239997 kubelet[3506]: I0129 11:51:58.238535 3506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:51:58.731281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1405898660.mount: Deactivated successfully. Jan 29 11:51:58.816420 containerd[2021]: time="2025-01-29T11:51:58.816280861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:51:58.818550 containerd[2021]: time="2025-01-29T11:51:58.818427757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 29 11:51:58.821061 containerd[2021]: time="2025-01-29T11:51:58.820959589Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:51:58.827615 containerd[2021]: time="2025-01-29T11:51:58.827498137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:51:58.829219 containerd[2021]: time="2025-01-29T11:51:58.828934093Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 8.607908107s" Jan 29 11:51:58.829219 containerd[2021]: time="2025-01-29T11:51:58.829008829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 29 11:51:58.873030 containerd[2021]: time="2025-01-29T11:51:58.872709781Z" level=info msg="CreateContainer within sandbox \"821ca6093d9376a80e37b067396cce015c09f854126ed8907139c590aa5cabb6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:51:58.909919 containerd[2021]: time="2025-01-29T11:51:58.909839917Z" 
level=info msg="CreateContainer within sandbox \"821ca6093d9376a80e37b067396cce015c09f854126ed8907139c590aa5cabb6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9e0d3b870c70d3d7ba82a09b3e4706723caa996277d3f4f6ae45c324dcf63b69\"" Jan 29 11:51:58.912163 containerd[2021]: time="2025-01-29T11:51:58.912044389Z" level=info msg="StartContainer for \"9e0d3b870c70d3d7ba82a09b3e4706723caa996277d3f4f6ae45c324dcf63b69\"" Jan 29 11:51:58.970902 systemd[1]: Started cri-containerd-9e0d3b870c70d3d7ba82a09b3e4706723caa996277d3f4f6ae45c324dcf63b69.scope - libcontainer container 9e0d3b870c70d3d7ba82a09b3e4706723caa996277d3f4f6ae45c324dcf63b69. Jan 29 11:51:59.055138 containerd[2021]: time="2025-01-29T11:51:59.053894890Z" level=info msg="StartContainer for \"9e0d3b870c70d3d7ba82a09b3e4706723caa996277d3f4f6ae45c324dcf63b69\" returns successfully" Jan 29 11:51:59.197867 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 11:51:59.198050 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 11:52:01.534114 kernel: bpftool[4783]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:52:01.871158 systemd-networkd[1931]: vxlan.calico: Link UP Jan 29 11:52:01.871177 systemd-networkd[1931]: vxlan.calico: Gained carrier Jan 29 11:52:01.878179 (udev-worker)[4593]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:52:01.924132 (udev-worker)[4594]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:52:01.936167 containerd[2021]: time="2025-01-29T11:52:01.934670188Z" level=info msg="StopPodSandbox for \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\"" Jan 29 11:52:02.205826 kubelet[3506]: I0129 11:52:02.204903 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zsl8s" podStartSLOduration=5.078384173 podStartE2EDuration="26.204870434s" podCreationTimestamp="2025-01-29 11:51:36 +0000 UTC" firstStartedPulling="2025-01-29 11:51:37.70453 +0000 UTC m=+14.982695739" lastFinishedPulling="2025-01-29 11:51:58.831016261 +0000 UTC m=+36.109182000" observedRunningTime="2025-01-29 11:51:59.382928856 +0000 UTC m=+36.661094619" watchObservedRunningTime="2025-01-29 11:52:02.204870434 +0000 UTC m=+39.483036209" Jan 29 11:52:02.319567 containerd[2021]: 2025-01-29 11:52:02.204 [INFO][4837] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Jan 29 11:52:02.319567 containerd[2021]: 2025-01-29 11:52:02.205 [INFO][4837] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" iface="eth0" netns="/var/run/netns/cni-11aeb4f1-d349-ec09-baef-d0638bee1b7b" Jan 29 11:52:02.319567 containerd[2021]: 2025-01-29 11:52:02.209 [INFO][4837] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" iface="eth0" netns="/var/run/netns/cni-11aeb4f1-d349-ec09-baef-d0638bee1b7b" Jan 29 11:52:02.319567 containerd[2021]: 2025-01-29 11:52:02.211 [INFO][4837] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" iface="eth0" netns="/var/run/netns/cni-11aeb4f1-d349-ec09-baef-d0638bee1b7b" Jan 29 11:52:02.319567 containerd[2021]: 2025-01-29 11:52:02.211 [INFO][4837] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Jan 29 11:52:02.319567 containerd[2021]: 2025-01-29 11:52:02.211 [INFO][4837] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Jan 29 11:52:02.319567 containerd[2021]: 2025-01-29 11:52:02.286 [INFO][4846] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" HandleID="k8s-pod-network.b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0" Jan 29 11:52:02.319567 containerd[2021]: 2025-01-29 11:52:02.289 [INFO][4846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:02.319567 containerd[2021]: 2025-01-29 11:52:02.289 [INFO][4846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:52:02.319567 containerd[2021]: 2025-01-29 11:52:02.304 [WARNING][4846] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" HandleID="k8s-pod-network.b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0" Jan 29 11:52:02.319567 containerd[2021]: 2025-01-29 11:52:02.304 [INFO][4846] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" HandleID="k8s-pod-network.b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0" Jan 29 11:52:02.319567 containerd[2021]: 2025-01-29 11:52:02.307 [INFO][4846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:52:02.319567 containerd[2021]: 2025-01-29 11:52:02.313 [INFO][4837] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Jan 29 11:52:02.325794 containerd[2021]: time="2025-01-29T11:52:02.325715030Z" level=info msg="TearDown network for sandbox \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\" successfully" Jan 29 11:52:02.325794 containerd[2021]: time="2025-01-29T11:52:02.325782218Z" level=info msg="StopPodSandbox for \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\" returns successfully" Jan 29 11:52:02.327275 containerd[2021]: time="2025-01-29T11:52:02.327205406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c789bfd7-nfxg2,Uid:9a019af1-803b-44eb-a929-7256065a6820,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:52:02.328679 systemd[1]: run-netns-cni\x2d11aeb4f1\x2dd349\x2dec09\x2dbaef\x2dd0638bee1b7b.mount: Deactivated successfully. 
Jan 29 11:52:02.604502 systemd-networkd[1931]: cali50dd3384ae3: Link UP Jan 29 11:52:02.604966 systemd-networkd[1931]: cali50dd3384ae3: Gained carrier Jan 29 11:52:02.641220 containerd[2021]: 2025-01-29 11:52:02.470 [INFO][4875] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0 calico-apiserver-9c789bfd7- calico-apiserver 9a019af1-803b-44eb-a929-7256065a6820 776 0 2025-01-29 11:51:37 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9c789bfd7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-25-252 calico-apiserver-9c789bfd7-nfxg2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali50dd3384ae3 [] []}} ContainerID="72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863" Namespace="calico-apiserver" Pod="calico-apiserver-9c789bfd7-nfxg2" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-" Jan 29 11:52:02.641220 containerd[2021]: 2025-01-29 11:52:02.470 [INFO][4875] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863" Namespace="calico-apiserver" Pod="calico-apiserver-9c789bfd7-nfxg2" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0" Jan 29 11:52:02.641220 containerd[2021]: 2025-01-29 11:52:02.522 [INFO][4892] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863" HandleID="k8s-pod-network.72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0" Jan 29 11:52:02.641220 containerd[2021]: 2025-01-29 11:52:02.541 [INFO][4892] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863" HandleID="k8s-pod-network.72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000222ab0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-25-252", "pod":"calico-apiserver-9c789bfd7-nfxg2", "timestamp":"2025-01-29 11:52:02.522791847 +0000 UTC"}, Hostname:"ip-172-31-25-252", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:52:02.641220 containerd[2021]: 2025-01-29 11:52:02.541 [INFO][4892] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:02.641220 containerd[2021]: 2025-01-29 11:52:02.541 [INFO][4892] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 11:52:02.641220 containerd[2021]: 2025-01-29 11:52:02.541 [INFO][4892] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-252' Jan 29 11:52:02.641220 containerd[2021]: 2025-01-29 11:52:02.544 [INFO][4892] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863" host="ip-172-31-25-252" Jan 29 11:52:02.641220 containerd[2021]: 2025-01-29 11:52:02.551 [INFO][4892] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-252" Jan 29 11:52:02.641220 containerd[2021]: 2025-01-29 11:52:02.559 [INFO][4892] ipam/ipam.go 489: Trying affinity for 192.168.91.64/26 host="ip-172-31-25-252" Jan 29 11:52:02.641220 containerd[2021]: 2025-01-29 11:52:02.562 [INFO][4892] ipam/ipam.go 155: Attempting to load block cidr=192.168.91.64/26 host="ip-172-31-25-252" Jan 29 11:52:02.641220 containerd[2021]: 2025-01-29 11:52:02.566 [INFO][4892] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.64/26 host="ip-172-31-25-252" Jan 29 11:52:02.641220 containerd[2021]: 2025-01-29 11:52:02.567 [INFO][4892] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.64/26 handle="k8s-pod-network.72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863" host="ip-172-31-25-252" Jan 29 11:52:02.641220 containerd[2021]: 2025-01-29 11:52:02.569 [INFO][4892] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863 Jan 29 11:52:02.641220 containerd[2021]: 2025-01-29 11:52:02.578 [INFO][4892] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.91.64/26 handle="k8s-pod-network.72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863" host="ip-172-31-25-252" Jan 29 11:52:02.641220 containerd[2021]: 2025-01-29 11:52:02.587 [INFO][4892] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.91.65/26] block=192.168.91.64/26 handle="k8s-pod-network.72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863" host="ip-172-31-25-252" Jan 29 11:52:02.641220 containerd[2021]: 2025-01-29 11:52:02.587 [INFO][4892] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.65/26] handle="k8s-pod-network.72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863" host="ip-172-31-25-252" Jan 29 11:52:02.641220 containerd[2021]: 2025-01-29 11:52:02.588 [INFO][4892] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:52:02.641220 containerd[2021]: 2025-01-29 11:52:02.588 [INFO][4892] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.65/26] IPv6=[] ContainerID="72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863" HandleID="k8s-pod-network.72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0" Jan 29 11:52:02.643026 containerd[2021]: 2025-01-29 11:52:02.592 [INFO][4875] cni-plugin/k8s.go 386: Populated endpoint ContainerID="72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863" Namespace="calico-apiserver" Pod="calico-apiserver-9c789bfd7-nfxg2" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0", GenerateName:"calico-apiserver-9c789bfd7-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a019af1-803b-44eb-a929-7256065a6820", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c789bfd7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"", Pod:"calico-apiserver-9c789bfd7-nfxg2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali50dd3384ae3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:02.643026 containerd[2021]: 2025-01-29 11:52:02.592 [INFO][4875] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.91.65/32] ContainerID="72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863" Namespace="calico-apiserver" Pod="calico-apiserver-9c789bfd7-nfxg2" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0" Jan 29 11:52:02.643026 containerd[2021]: 2025-01-29 11:52:02.592 [INFO][4875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50dd3384ae3 ContainerID="72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863" Namespace="calico-apiserver" Pod="calico-apiserver-9c789bfd7-nfxg2" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0" Jan 29 11:52:02.643026 containerd[2021]: 2025-01-29 11:52:02.606 [INFO][4875] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863" Namespace="calico-apiserver" Pod="calico-apiserver-9c789bfd7-nfxg2" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0" Jan 29 11:52:02.643026 containerd[2021]: 2025-01-29 11:52:02.610 [INFO][4875] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint
ContainerID="72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863" Namespace="calico-apiserver" Pod="calico-apiserver-9c789bfd7-nfxg2" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0", GenerateName:"calico-apiserver-9c789bfd7-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a019af1-803b-44eb-a929-7256065a6820", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c789bfd7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863", Pod:"calico-apiserver-9c789bfd7-nfxg2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali50dd3384ae3", MAC:"f6:9d:44:d6:03:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:02.643026 containerd[2021]: 2025-01-29 11:52:02.631 [INFO][4875] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863" Namespace="calico-apiserver" Pod="calico-apiserver-9c789bfd7-nfxg2" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0" Jan 29 11:52:02.691156 containerd[2021]: time="2025-01-29T11:52:02.690685120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:52:02.691156 containerd[2021]: time="2025-01-29T11:52:02.690810460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:52:02.692734 containerd[2021]: time="2025-01-29T11:52:02.692192224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:52:02.692734 containerd[2021]: time="2025-01-29T11:52:02.692420212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:52:02.744429 systemd[1]: Started cri-containerd-72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863.scope - libcontainer container 72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863. 
Jan 29 11:52:02.814722 containerd[2021]: time="2025-01-29T11:52:02.814510121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c789bfd7-nfxg2,Uid:9a019af1-803b-44eb-a929-7256065a6820,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863\"" Jan 29 11:52:02.819270 containerd[2021]: time="2025-01-29T11:52:02.818872661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:52:02.938795 containerd[2021]: time="2025-01-29T11:52:02.937870181Z" level=info msg="StopPodSandbox for \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\"" Jan 29 11:52:02.945288 containerd[2021]: time="2025-01-29T11:52:02.941392493Z" level=info msg="StopPodSandbox for \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\"" Jan 29 11:52:03.259140 containerd[2021]: 2025-01-29 11:52:03.123 [INFO][4981] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Jan 29 11:52:03.259140 containerd[2021]: 2025-01-29 11:52:03.126 [INFO][4981] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" iface="eth0" netns="/var/run/netns/cni-25feb810-7883-0134-584c-b10d16ac2bd8" Jan 29 11:52:03.259140 containerd[2021]: 2025-01-29 11:52:03.126 [INFO][4981] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" iface="eth0" netns="/var/run/netns/cni-25feb810-7883-0134-584c-b10d16ac2bd8" Jan 29 11:52:03.259140 containerd[2021]: 2025-01-29 11:52:03.127 [INFO][4981] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" iface="eth0" netns="/var/run/netns/cni-25feb810-7883-0134-584c-b10d16ac2bd8" Jan 29 11:52:03.259140 containerd[2021]: 2025-01-29 11:52:03.127 [INFO][4981] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Jan 29 11:52:03.259140 containerd[2021]: 2025-01-29 11:52:03.127 [INFO][4981] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Jan 29 11:52:03.259140 containerd[2021]: 2025-01-29 11:52:03.186 [INFO][4994] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" HandleID="k8s-pod-network.d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0" Jan 29 11:52:03.259140 containerd[2021]: 2025-01-29 11:52:03.187 [INFO][4994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:03.259140 containerd[2021]: 2025-01-29 11:52:03.187 [INFO][4994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:52:03.259140 containerd[2021]: 2025-01-29 11:52:03.226 [WARNING][4994] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" HandleID="k8s-pod-network.d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0" Jan 29 11:52:03.259140 containerd[2021]: 2025-01-29 11:52:03.226 [INFO][4994] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" HandleID="k8s-pod-network.d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0" Jan 29 11:52:03.259140 containerd[2021]: 2025-01-29 11:52:03.237 [INFO][4994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:52:03.259140 containerd[2021]: 2025-01-29 11:52:03.252 [INFO][4981] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Jan 29 11:52:03.275224 containerd[2021]: time="2025-01-29T11:52:03.270449211Z" level=info msg="TearDown network for sandbox \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\" successfully" Jan 29 11:52:03.275224 containerd[2021]: time="2025-01-29T11:52:03.272451735Z" level=info msg="StopPodSandbox for \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\" returns successfully" Jan 29 11:52:03.276166 systemd[1]: run-netns-cni\x2d25feb810\x2d7883\x2d0134\x2d584c\x2db10d16ac2bd8.mount: Deactivated successfully. Jan 29 11:52:03.280054 containerd[2021]: time="2025-01-29T11:52:03.279963999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mqdbw,Uid:317d0f0f-daf3-4642-ac24-1b9a8ffb1530,Namespace:kube-system,Attempt:1,}" Jan 29 11:52:03.286823 containerd[2021]: 2025-01-29 11:52:03.115 [INFO][4980] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Jan 29 11:52:03.286823 containerd[2021]: 2025-01-29 11:52:03.116 [INFO][4980] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" iface="eth0" netns="/var/run/netns/cni-ffcd9cf2-f076-e093-ea3f-48e07d2caff5" Jan 29 11:52:03.286823 containerd[2021]: 2025-01-29 11:52:03.118 [INFO][4980] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" iface="eth0" netns="/var/run/netns/cni-ffcd9cf2-f076-e093-ea3f-48e07d2caff5" Jan 29 11:52:03.286823 containerd[2021]: 2025-01-29 11:52:03.121 [INFO][4980] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" iface="eth0" netns="/var/run/netns/cni-ffcd9cf2-f076-e093-ea3f-48e07d2caff5" Jan 29 11:52:03.286823 containerd[2021]: 2025-01-29 11:52:03.121 [INFO][4980] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Jan 29 11:52:03.286823 containerd[2021]: 2025-01-29 11:52:03.121 [INFO][4980] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Jan 29 11:52:03.286823 containerd[2021]: 2025-01-29 11:52:03.240 [INFO][4993] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" HandleID="k8s-pod-network.b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0" Jan 29 11:52:03.286823 containerd[2021]: 2025-01-29 11:52:03.241 [INFO][4993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:03.286823 containerd[2021]: 2025-01-29 11:52:03.241 [INFO][4993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:52:03.286823 containerd[2021]: 2025-01-29 11:52:03.257 [WARNING][4993] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" HandleID="k8s-pod-network.b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0" Jan 29 11:52:03.286823 containerd[2021]: 2025-01-29 11:52:03.257 [INFO][4993] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" HandleID="k8s-pod-network.b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0" Jan 29 11:52:03.286823 containerd[2021]: 2025-01-29 11:52:03.265 [INFO][4993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:52:03.286823 containerd[2021]: 2025-01-29 11:52:03.270 [INFO][4980] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Jan 29 11:52:03.289140 containerd[2021]: time="2025-01-29T11:52:03.288839475Z" level=info msg="TearDown network for sandbox \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\" successfully" Jan 29 11:52:03.289140 containerd[2021]: time="2025-01-29T11:52:03.288901983Z" level=info msg="StopPodSandbox for \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\" returns successfully" Jan 29 11:52:03.297146 containerd[2021]: time="2025-01-29T11:52:03.294995727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fksp9,Uid:0469310f-6c9c-4d38-9ef3-1ec8ed658901,Namespace:kube-system,Attempt:1,}" Jan 29 11:52:03.336419 systemd[1]: run-netns-cni\x2dffcd9cf2\x2df076\x2de093\x2dea3f\x2d48e07d2caff5.mount: Deactivated successfully. 
Jan 29 11:52:03.719558 systemd-networkd[1931]: calia503447e7e9: Link UP Jan 29 11:52:03.722670 systemd-networkd[1931]: calia503447e7e9: Gained carrier Jan 29 11:52:03.758295 containerd[2021]: 2025-01-29 11:52:03.547 [INFO][5007] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0 coredns-668d6bf9bc- kube-system 317d0f0f-daf3-4642-ac24-1b9a8ffb1530 785 0 2025-01-29 11:51:29 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-252 coredns-668d6bf9bc-mqdbw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia503447e7e9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667" Namespace="kube-system" Pod="coredns-668d6bf9bc-mqdbw" WorkloadEndpoint="ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-" Jan 29 11:52:03.758295 containerd[2021]: 2025-01-29 11:52:03.547 [INFO][5007] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667" Namespace="kube-system" Pod="coredns-668d6bf9bc-mqdbw" WorkloadEndpoint="ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0" Jan 29 11:52:03.758295 containerd[2021]: 2025-01-29 11:52:03.624 [INFO][5031] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667" HandleID="k8s-pod-network.0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0" Jan 29 11:52:03.758295 containerd[2021]: 2025-01-29 11:52:03.653 [INFO][5031] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667" HandleID="k8s-pod-network.0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d760), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-252", "pod":"coredns-668d6bf9bc-mqdbw", "timestamp":"2025-01-29 11:52:03.624912065 +0000 UTC"}, Hostname:"ip-172-31-25-252", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:52:03.758295 containerd[2021]: 2025-01-29 11:52:03.653 [INFO][5031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:03.758295 containerd[2021]: 2025-01-29 11:52:03.653 [INFO][5031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 11:52:03.758295 containerd[2021]: 2025-01-29 11:52:03.653 [INFO][5031] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-252' Jan 29 11:52:03.758295 containerd[2021]: 2025-01-29 11:52:03.658 [INFO][5031] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667" host="ip-172-31-25-252" Jan 29 11:52:03.758295 containerd[2021]: 2025-01-29 11:52:03.668 [INFO][5031] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-252" Jan 29 11:52:03.758295 containerd[2021]: 2025-01-29 11:52:03.677 [INFO][5031] ipam/ipam.go 489: Trying affinity for 192.168.91.64/26 host="ip-172-31-25-252" Jan 29 11:52:03.758295 containerd[2021]: 2025-01-29 11:52:03.681 [INFO][5031] ipam/ipam.go 155: Attempting to load block cidr=192.168.91.64/26 host="ip-172-31-25-252" Jan 29 11:52:03.758295 containerd[2021]: 2025-01-29 11:52:03.685 [INFO][5031] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.64/26 host="ip-172-31-25-252" Jan 29 11:52:03.758295 containerd[2021]: 2025-01-29 11:52:03.685 [INFO][5031] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.64/26 handle="k8s-pod-network.0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667" host="ip-172-31-25-252" Jan 29 11:52:03.758295 containerd[2021]: 2025-01-29 11:52:03.688 [INFO][5031] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667 Jan 29 11:52:03.758295 containerd[2021]: 2025-01-29 11:52:03.697 [INFO][5031] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.91.64/26 handle="k8s-pod-network.0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667" host="ip-172-31-25-252" Jan 29 11:52:03.758295 containerd[2021]: 2025-01-29 11:52:03.707 [INFO][5031] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.91.66/26] block=192.168.91.64/26 handle="k8s-pod-network.0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667" host="ip-172-31-25-252" Jan 29 11:52:03.758295 containerd[2021]: 2025-01-29 11:52:03.707 [INFO][5031] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.66/26] handle="k8s-pod-network.0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667" host="ip-172-31-25-252" Jan 29 11:52:03.758295 containerd[2021]: 2025-01-29 11:52:03.707 [INFO][5031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:52:03.758295 containerd[2021]: 2025-01-29 11:52:03.707 [INFO][5031] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.66/26] IPv6=[] ContainerID="0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667" HandleID="k8s-pod-network.0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0" Jan 29 11:52:03.762487 containerd[2021]: 2025-01-29 11:52:03.712 [INFO][5007] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667" Namespace="kube-system" Pod="coredns-668d6bf9bc-mqdbw" WorkloadEndpoint="ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"317d0f0f-daf3-4642-ac24-1b9a8ffb1530", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"", Pod:"coredns-668d6bf9bc-mqdbw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia503447e7e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:03.762487 containerd[2021]: 2025-01-29 11:52:03.712 [INFO][5007] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.91.66/32] ContainerID="0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667" Namespace="kube-system" Pod="coredns-668d6bf9bc-mqdbw" WorkloadEndpoint="ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0" Jan 29 11:52:03.762487 containerd[2021]: 2025-01-29 11:52:03.712 [INFO][5007] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia503447e7e9 ContainerID="0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667" Namespace="kube-system" Pod="coredns-668d6bf9bc-mqdbw" WorkloadEndpoint="ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0" Jan 29 11:52:03.762487 containerd[2021]: 2025-01-29 11:52:03.722 [INFO][5007] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667" Namespace="kube-system" Pod="coredns-668d6bf9bc-mqdbw" 
WorkloadEndpoint="ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0" Jan 29 11:52:03.762487 containerd[2021]: 2025-01-29 11:52:03.724 [INFO][5007] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667" Namespace="kube-system" Pod="coredns-668d6bf9bc-mqdbw" WorkloadEndpoint="ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"317d0f0f-daf3-4642-ac24-1b9a8ffb1530", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667", Pod:"coredns-668d6bf9bc-mqdbw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia503447e7e9", MAC:"26:bc:60:a5:38:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:03.762487 containerd[2021]: 2025-01-29 11:52:03.752 [INFO][5007] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667" Namespace="kube-system" Pod="coredns-668d6bf9bc-mqdbw" WorkloadEndpoint="ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0" Jan 29 11:52:03.793419 systemd-networkd[1931]: vxlan.calico: Gained IPv6LL Jan 29 11:52:03.836802 containerd[2021]: time="2025-01-29T11:52:03.836545434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:52:03.836802 containerd[2021]: time="2025-01-29T11:52:03.836714274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:52:03.836802 containerd[2021]: time="2025-01-29T11:52:03.836744862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:52:03.837657 containerd[2021]: time="2025-01-29T11:52:03.837511974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:52:03.856399 systemd-networkd[1931]: cali50dd3384ae3: Gained IPv6LL Jan 29 11:52:03.888453 systemd-networkd[1931]: calicec22f3c0d7: Link UP Jan 29 11:52:03.894197 systemd-networkd[1931]: calicec22f3c0d7: Gained carrier Jan 29 11:52:03.912493 systemd[1]: Started cri-containerd-0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667.scope - libcontainer container 0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667. Jan 29 11:52:03.938787 containerd[2021]: time="2025-01-29T11:52:03.937440918Z" level=info msg="StopPodSandbox for \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\"" Jan 29 11:52:03.941552 containerd[2021]: 2025-01-29 11:52:03.567 [INFO][5016] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0 coredns-668d6bf9bc- kube-system 0469310f-6c9c-4d38-9ef3-1ec8ed658901 784 0 2025-01-29 11:51:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-252 coredns-668d6bf9bc-fksp9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicec22f3c0d7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46" Namespace="kube-system" Pod="coredns-668d6bf9bc-fksp9" WorkloadEndpoint="ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-" Jan 29 11:52:03.941552 containerd[2021]: 2025-01-29 11:52:03.568 [INFO][5016] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46" Namespace="kube-system" Pod="coredns-668d6bf9bc-fksp9" WorkloadEndpoint="ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0" Jan 29 11:52:03.941552 containerd[2021]: 2025-01-29 11:52:03.648 [INFO][5035] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46" HandleID="k8s-pod-network.b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0" Jan 29 11:52:03.941552 containerd[2021]: 2025-01-29 11:52:03.671 [INFO][5035] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46" HandleID="k8s-pod-network.b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316c00), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-252", "pod":"coredns-668d6bf9bc-fksp9", "timestamp":"2025-01-29 11:52:03.648304409 +0000 UTC"}, Hostname:"ip-172-31-25-252", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:52:03.941552 containerd[2021]: 2025-01-29 11:52:03.671 [INFO][5035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:03.941552 containerd[2021]: 2025-01-29 11:52:03.707 [INFO][5035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
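[Annotation] By this point the first ADD has completed: the plugin populated the WorkloadEndpoint, attached calia503447e7e9, and handed 192.168.91.66/32 back to containerd. What travels back across that plugin boundary is the CNI result JSON. A rough sketch of its shape for this pod, trimmed to the fields the log actually confirms; the exact result Calico emits may carry more (MAC, gateway, routes), and the spec version string is an assumption:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed mirror of a CNI ADD result: only the interface and the
// address visible in the log above.
type cniInterface struct {
	Name    string `json:"name"`
	Sandbox string `json:"sandbox,omitempty"`
}

type cniIP struct {
	Address   string `json:"address"`
	Interface int    `json:"interface"`
}

type cniResult struct {
	CNIVersion string         `json:"cniVersion"`
	Interfaces []cniInterface `json:"interfaces"`
	IPs        []cniIP        `json:"ips"`
}

func main() {
	res := cniResult{
		CNIVersion: "0.4.0", // assumed; the log does not state the spec version
		Interfaces: []cniInterface{{Name: "eth0", Sandbox: "/var/run/netns/..."}},
		IPs:        []cniIP{{Address: "192.168.91.66/32", Interface: 0}},
	}
	out, _ := json.MarshalIndent(res, "", "  ")
	fmt.Println(string(out))
}
```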
Jan 29 11:52:03.941552 containerd[2021]: 2025-01-29 11:52:03.708 [INFO][5035] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-252' Jan 29 11:52:03.941552 containerd[2021]: 2025-01-29 11:52:03.761 [INFO][5035] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46" host="ip-172-31-25-252" Jan 29 11:52:03.941552 containerd[2021]: 2025-01-29 11:52:03.780 [INFO][5035] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-252" Jan 29 11:52:03.941552 containerd[2021]: 2025-01-29 11:52:03.798 [INFO][5035] ipam/ipam.go 489: Trying affinity for 192.168.91.64/26 host="ip-172-31-25-252" Jan 29 11:52:03.941552 containerd[2021]: 2025-01-29 11:52:03.813 [INFO][5035] ipam/ipam.go 155: Attempting to load block cidr=192.168.91.64/26 host="ip-172-31-25-252" Jan 29 11:52:03.941552 containerd[2021]: 2025-01-29 11:52:03.821 [INFO][5035] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.64/26 host="ip-172-31-25-252" Jan 29 11:52:03.941552 containerd[2021]: 2025-01-29 11:52:03.822 [INFO][5035] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.64/26 handle="k8s-pod-network.b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46" host="ip-172-31-25-252" Jan 29 11:52:03.941552 containerd[2021]: 2025-01-29 11:52:03.828 [INFO][5035] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46 Jan 29 11:52:03.941552 containerd[2021]: 2025-01-29 11:52:03.842 [INFO][5035] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.91.64/26 handle="k8s-pod-network.b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46" host="ip-172-31-25-252" Jan 29 11:52:03.941552 containerd[2021]: 2025-01-29 11:52:03.859 [INFO][5035] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.91.67/26] block=192.168.91.64/26 handle="k8s-pod-network.b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46" host="ip-172-31-25-252" Jan 29 11:52:03.941552 containerd[2021]: 2025-01-29 11:52:03.861 [INFO][5035] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.67/26] handle="k8s-pod-network.b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46" host="ip-172-31-25-252" Jan 29 11:52:03.941552 containerd[2021]: 2025-01-29 11:52:03.861 [INFO][5035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
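[Annotation] The timestamps make the lock serialization measurable: the second request ([5035], for coredns-668d6bf9bc-fksp9) logged "About to acquire host-wide IPAM lock" at 11:52:03.671 but "Acquired" only at 11:52:03.707, the same instant the first request ([5031]) logged its release. Parsing those two stamps gives the wait directly:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Millisecond-resolution timestamps copied from the [5035] lines above.
	const layout = "2006-01-02 15:04:05.000"
	asked, _ := time.Parse(layout, "2025-01-29 11:52:03.671")
	acquired, _ := time.Parse(layout, "2025-01-29 11:52:03.707")
	fmt.Println("waited on host-wide IPAM lock:", acquired.Sub(asked)) // 36ms
}
```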
Jan 29 11:52:03.941552 containerd[2021]: 2025-01-29 11:52:03.861 [INFO][5035] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.67/26] IPv6=[] ContainerID="b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46" HandleID="k8s-pod-network.b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0" Jan 29 11:52:03.942848 containerd[2021]: 2025-01-29 11:52:03.871 [INFO][5016] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46" Namespace="kube-system" Pod="coredns-668d6bf9bc-fksp9" WorkloadEndpoint="ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0469310f-6c9c-4d38-9ef3-1ec8ed658901", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"", Pod:"coredns-668d6bf9bc-fksp9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicec22f3c0d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:03.942848 containerd[2021]: 2025-01-29 11:52:03.871 [INFO][5016] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.91.67/32] ContainerID="b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46" Namespace="kube-system" Pod="coredns-668d6bf9bc-fksp9" WorkloadEndpoint="ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0" Jan 29 11:52:03.942848 containerd[2021]: 2025-01-29 11:52:03.871 [INFO][5016] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicec22f3c0d7 ContainerID="b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46" Namespace="kube-system" Pod="coredns-668d6bf9bc-fksp9" WorkloadEndpoint="ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0" Jan 29 11:52:03.942848 containerd[2021]: 2025-01-29 11:52:03.886 [INFO][5016] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46" Namespace="kube-system" Pod="coredns-668d6bf9bc-fksp9" 
WorkloadEndpoint="ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0" Jan 29 11:52:03.942848 containerd[2021]: 2025-01-29 11:52:03.887 [INFO][5016] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46" Namespace="kube-system" Pod="coredns-668d6bf9bc-fksp9" WorkloadEndpoint="ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0469310f-6c9c-4d38-9ef3-1ec8ed658901", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46", Pod:"coredns-668d6bf9bc-fksp9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicec22f3c0d7", MAC:"56:fb:8f:f7:8e:7c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:03.942848 containerd[2021]: 2025-01-29 11:52:03.924 [INFO][5016] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46" Namespace="kube-system" Pod="coredns-668d6bf9bc-fksp9" WorkloadEndpoint="ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0" Jan 29 11:52:04.069193 containerd[2021]: time="2025-01-29T11:52:04.067116423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:52:04.069193 containerd[2021]: time="2025-01-29T11:52:04.067260027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:52:04.069193 containerd[2021]: time="2025-01-29T11:52:04.067306539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:52:04.069193 containerd[2021]: time="2025-01-29T11:52:04.067518051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:52:04.158214 systemd[1]: Started cri-containerd-b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46.scope - libcontainer container b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46. Jan 29 11:52:04.174407 containerd[2021]: time="2025-01-29T11:52:04.174243808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mqdbw,Uid:317d0f0f-daf3-4642-ac24-1b9a8ffb1530,Namespace:kube-system,Attempt:1,} returns sandbox id \"0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667\"" Jan 29 11:52:04.192130 containerd[2021]: time="2025-01-29T11:52:04.190931116Z" level=info msg="CreateContainer within sandbox \"0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:52:04.237600 containerd[2021]: time="2025-01-29T11:52:04.237412540Z" level=info msg="CreateContainer within sandbox \"0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bce9ca56590c04d4ae407c810cfb11ec1958b0f4e08a1677a197338c65627dad\"" Jan 29 11:52:04.239570 containerd[2021]: time="2025-01-29T11:52:04.239037784Z" level=info msg="StartContainer for \"bce9ca56590c04d4ae407c810cfb11ec1958b0f4e08a1677a197338c65627dad\"" Jan 29 11:52:04.386151 systemd[1]: run-containerd-runc-k8s.io-bce9ca56590c04d4ae407c810cfb11ec1958b0f4e08a1677a197338c65627dad-runc.nPILNm.mount: Deactivated successfully. Jan 29 11:52:04.409412 systemd[1]: Started cri-containerd-bce9ca56590c04d4ae407c810cfb11ec1958b0f4e08a1677a197338c65627dad.scope - libcontainer container bce9ca56590c04d4ae407c810cfb11ec1958b0f4e08a1677a197338c65627dad. Jan 29 11:52:04.480527 containerd[2021]: time="2025-01-29T11:52:04.476776685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fksp9,Uid:0469310f-6c9c-4d38-9ef3-1ec8ed658901,Namespace:kube-system,Attempt:1,} returns sandbox id \"b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46\"" Jan 29 11:52:04.493379 containerd[2021]: time="2025-01-29T11:52:04.493292789Z" level=info msg="CreateContainer within sandbox \"b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:52:04.502360 containerd[2021]: 2025-01-29 11:52:04.258 [INFO][5118] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Jan 29 11:52:04.502360 containerd[2021]: 2025-01-29 11:52:04.258 [INFO][5118] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" iface="eth0" netns="/var/run/netns/cni-57f24549-a7f2-b4cf-2d48-1376f47e05b4" Jan 29 11:52:04.502360 containerd[2021]: 2025-01-29 11:52:04.259 [INFO][5118] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" iface="eth0" netns="/var/run/netns/cni-57f24549-a7f2-b4cf-2d48-1376f47e05b4" Jan 29 11:52:04.502360 containerd[2021]: 2025-01-29 11:52:04.259 [INFO][5118] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" iface="eth0" netns="/var/run/netns/cni-57f24549-a7f2-b4cf-2d48-1376f47e05b4" Jan 29 11:52:04.502360 containerd[2021]: 2025-01-29 11:52:04.260 [INFO][5118] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Jan 29 11:52:04.502360 containerd[2021]: 2025-01-29 11:52:04.260 [INFO][5118] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Jan 29 11:52:04.502360 containerd[2021]: 2025-01-29 11:52:04.420 [INFO][5165] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" HandleID="k8s-pod-network.41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Workload="ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0" Jan 29 11:52:04.502360 containerd[2021]: 2025-01-29 11:52:04.422 [INFO][5165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:04.502360 containerd[2021]: 2025-01-29 11:52:04.424 [INFO][5165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:52:04.502360 containerd[2021]: 2025-01-29 11:52:04.465 [WARNING][5165] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" HandleID="k8s-pod-network.41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Workload="ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0" Jan 29 11:52:04.502360 containerd[2021]: 2025-01-29 11:52:04.465 [INFO][5165] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" HandleID="k8s-pod-network.41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Workload="ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0" Jan 29 11:52:04.502360 containerd[2021]: 2025-01-29 11:52:04.476 [INFO][5165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:52:04.502360 containerd[2021]: 2025-01-29 11:52:04.489 [INFO][5118] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Jan 29 11:52:04.503242 containerd[2021]: time="2025-01-29T11:52:04.502564517Z" level=info msg="TearDown network for sandbox \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\" successfully" Jan 29 11:52:04.509150 containerd[2021]: time="2025-01-29T11:52:04.507168893Z" level=info msg="StopPodSandbox for \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\" returns successfully" Jan 29 11:52:04.511204 containerd[2021]: time="2025-01-29T11:52:04.510458237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-855db7d7f9-phjqx,Uid:caa493ea-026d-43ba-9a30-a9a39462916f,Namespace:calico-system,Attempt:1,}" Jan 29 11:52:04.514356 systemd[1]: run-netns-cni\x2d57f24549\x2da7f2\x2db4cf\x2d2d48\x2d1376f47e05b4.mount: Deactivated successfully. 
Jan 29 11:52:04.577605 containerd[2021]: time="2025-01-29T11:52:04.577482282Z" level=info msg="CreateContainer within sandbox \"b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c98e5c5bc828f89ad633da4f43c4860e5ca30c47adafabc5d5f31f043352e7ef\"" Jan 29 11:52:04.581849 containerd[2021]: time="2025-01-29T11:52:04.581770422Z" level=info msg="StartContainer for \"c98e5c5bc828f89ad633da4f43c4860e5ca30c47adafabc5d5f31f043352e7ef\"" Jan 29 11:52:04.666919 containerd[2021]: time="2025-01-29T11:52:04.666451902Z" level=info msg="StartContainer for \"bce9ca56590c04d4ae407c810cfb11ec1958b0f4e08a1677a197338c65627dad\" returns successfully" Jan 29 11:52:04.761672 systemd[1]: Started cri-containerd-c98e5c5bc828f89ad633da4f43c4860e5ca30c47adafabc5d5f31f043352e7ef.scope - libcontainer container c98e5c5bc828f89ad633da4f43c4860e5ca30c47adafabc5d5f31f043352e7ef. Jan 29 11:52:04.801387 systemd[1]: Started sshd@9-172.31.25.252:22-139.178.89.65:56468.service - OpenSSH per-connection server daemon (139.178.89.65:56468). Jan 29 11:52:05.010255 containerd[2021]: time="2025-01-29T11:52:05.009915880Z" level=info msg="StartContainer for \"c98e5c5bc828f89ad633da4f43c4860e5ca30c47adafabc5d5f31f043352e7ef\" returns successfully" Jan 29 11:52:05.077497 sshd[5246]: Accepted publickey for core from 139.178.89.65 port 56468 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:52:05.085279 sshd[5246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:05.112611 systemd-logind[1995]: New session 10 of user core. Jan 29 11:52:05.122508 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:52:05.425096 systemd-networkd[1931]: caliad0fb93d22e: Link UP Jan 29 11:52:05.429371 systemd-networkd[1931]: caliad0fb93d22e: Gained carrier Jan 29 11:52:05.547047 containerd[2021]: 2025-01-29 11:52:04.848 [INFO][5203] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0 calico-kube-controllers-855db7d7f9- calico-system caa493ea-026d-43ba-9a30-a9a39462916f 798 0 2025-01-29 11:51:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:855db7d7f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-25-252 calico-kube-controllers-855db7d7f9-phjqx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliad0fb93d22e [] []}} ContainerID="568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7" Namespace="calico-system" Pod="calico-kube-controllers-855db7d7f9-phjqx" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-" Jan 29 11:52:05.547047 containerd[2021]: 2025-01-29 11:52:04.849 [INFO][5203] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7" Namespace="calico-system" Pod="calico-kube-controllers-855db7d7f9-phjqx" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0" Jan 29 11:52:05.547047 containerd[2021]: 2025-01-29 11:52:05.038 [INFO][5252] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7" HandleID="k8s-pod-network.568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7" Workload="ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0" Jan 29 11:52:05.547047 containerd[2021]: 2025-01-29 11:52:05.206 [INFO][5252] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7" HandleID="k8s-pod-network.568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7" Workload="ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000102810), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-252", "pod":"calico-kube-controllers-855db7d7f9-phjqx", "timestamp":"2025-01-29 11:52:05.038512036 +0000 UTC"}, Hostname:"ip-172-31-25-252", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:52:05.547047 containerd[2021]: 2025-01-29 11:52:05.206 [INFO][5252] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:05.547047 containerd[2021]: 2025-01-29 11:52:05.206 [INFO][5252] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:52:05.547047 containerd[2021]: 2025-01-29 11:52:05.207 [INFO][5252] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-252' Jan 29 11:52:05.547047 containerd[2021]: 2025-01-29 11:52:05.219 [INFO][5252] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7" host="ip-172-31-25-252" Jan 29 11:52:05.547047 containerd[2021]: 2025-01-29 11:52:05.241 [INFO][5252] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-252" Jan 29 11:52:05.547047 containerd[2021]: 2025-01-29 11:52:05.277 [INFO][5252] ipam/ipam.go 489: Trying affinity for 192.168.91.64/26 host="ip-172-31-25-252" Jan 29 11:52:05.547047 containerd[2021]: 2025-01-29 11:52:05.291 [INFO][5252] ipam/ipam.go 155: Attempting to load block cidr=192.168.91.64/26 host="ip-172-31-25-252" Jan 29 11:52:05.547047 containerd[2021]: 2025-01-29 11:52:05.306 [INFO][5252] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.64/26 host="ip-172-31-25-252" Jan 29 11:52:05.547047 containerd[2021]: 2025-01-29 11:52:05.308 [INFO][5252] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.64/26 handle="k8s-pod-network.568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7" host="ip-172-31-25-252" Jan 29 11:52:05.547047 containerd[2021]: 2025-01-29 11:52:05.317 [INFO][5252] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7 Jan 29 11:52:05.547047 containerd[2021]: 2025-01-29 11:52:05.340 [INFO][5252] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.91.64/26 handle="k8s-pod-network.568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7" host="ip-172-31-25-252" Jan 29 11:52:05.547047 containerd[2021]: 2025-01-29 11:52:05.376 [INFO][5252] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.91.68/26] block=192.168.91.64/26 handle="k8s-pod-network.568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7" host="ip-172-31-25-252" Jan 29 11:52:05.547047 containerd[2021]: 
2025-01-29 11:52:05.378 [INFO][5252] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.68/26] handle="k8s-pod-network.568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7" host="ip-172-31-25-252" Jan 29 11:52:05.547047 containerd[2021]: 2025-01-29 11:52:05.378 [INFO][5252] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:52:05.547047 containerd[2021]: 2025-01-29 11:52:05.378 [INFO][5252] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.68/26] IPv6=[] ContainerID="568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7" HandleID="k8s-pod-network.568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7" Workload="ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0" Jan 29 11:52:05.548424 containerd[2021]: 2025-01-29 11:52:05.400 [INFO][5203] cni-plugin/k8s.go 386: Populated endpoint ContainerID="568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7" Namespace="calico-system" Pod="calico-kube-controllers-855db7d7f9-phjqx" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0", GenerateName:"calico-kube-controllers-855db7d7f9-", Namespace:"calico-system", SelfLink:"", UID:"caa493ea-026d-43ba-9a30-a9a39462916f", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"855db7d7f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"", Pod:"calico-kube-controllers-855db7d7f9-phjqx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad0fb93d22e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:05.548424 containerd[2021]: 2025-01-29 11:52:05.402 [INFO][5203] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.91.68/32] ContainerID="568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7" Namespace="calico-system" Pod="calico-kube-controllers-855db7d7f9-phjqx" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0" Jan 29 11:52:05.548424 containerd[2021]: 2025-01-29 11:52:05.403 [INFO][5203] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad0fb93d22e ContainerID="568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7" Namespace="calico-system" Pod="calico-kube-controllers-855db7d7f9-phjqx" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0" Jan 29 11:52:05.548424 containerd[2021]: 2025-01-29 11:52:05.433 [INFO][5203] cni-plugin/dataplane_linux.go 
508: Disabling IPv4 forwarding ContainerID="568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7" Namespace="calico-system" Pod="calico-kube-controllers-855db7d7f9-phjqx" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0" Jan 29 11:52:05.548424 containerd[2021]: 2025-01-29 11:52:05.434 [INFO][5203] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7" Namespace="calico-system" Pod="calico-kube-controllers-855db7d7f9-phjqx" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0", GenerateName:"calico-kube-controllers-855db7d7f9-", Namespace:"calico-system", SelfLink:"", UID:"caa493ea-026d-43ba-9a30-a9a39462916f", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"855db7d7f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7", Pod:"calico-kube-controllers-855db7d7f9-phjqx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad0fb93d22e", MAC:"6a:54:e8:19:6e:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:05.548424 containerd[2021]: 2025-01-29 11:52:05.516 [INFO][5203] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7" Namespace="calico-system" Pod="calico-kube-controllers-855db7d7f9-phjqx" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0" Jan 29 11:52:05.616388 kubelet[3506]: I0129 11:52:05.616064 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mqdbw" podStartSLOduration=36.616036927 podStartE2EDuration="36.616036927s" podCreationTimestamp="2025-01-29 11:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:52:05.604651039 +0000 UTC m=+42.882816862" watchObservedRunningTime="2025-01-29 11:52:05.616036927 +0000 UTC m=+42.894202690" Jan 29 11:52:05.684752 kubelet[3506]: I0129 11:52:05.684516 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fksp9" podStartSLOduration=36.684134791 podStartE2EDuration="36.684134791s" podCreationTimestamp="2025-01-29 11:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:52:05.681674995 +0000 UTC m=+42.959840746" watchObservedRunningTime="2025-01-29 11:52:05.684134791 +0000 UTC m=+42.962300554" Jan 29 11:52:05.693529 sshd[5246]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:05.707111 systemd[1]: sshd@9-172.31.25.252:22-139.178.89.65:56468.service: Deactivated successfully. Jan 29 11:52:05.715800 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:52:05.732059 systemd-logind[1995]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:52:05.738808 systemd-logind[1995]: Removed session 10. Jan 29 11:52:05.750977 containerd[2021]: time="2025-01-29T11:52:05.747923299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:52:05.750977 containerd[2021]: time="2025-01-29T11:52:05.748027435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:52:05.750977 containerd[2021]: time="2025-01-29T11:52:05.748065787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:52:05.750977 containerd[2021]: time="2025-01-29T11:52:05.748260655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:52:05.779810 systemd-networkd[1931]: calia503447e7e9: Gained IPv6LL Jan 29 11:52:05.780397 systemd-networkd[1931]: calicec22f3c0d7: Gained IPv6LL Jan 29 11:52:05.883585 systemd[1]: Started cri-containerd-568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7.scope - libcontainer container 568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7. Jan 29 11:52:05.937480 containerd[2021]: time="2025-01-29T11:52:05.936663212Z" level=info msg="StopPodSandbox for \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\"" Jan 29 11:52:06.099136 containerd[2021]: time="2025-01-29T11:52:06.099001721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-855db7d7f9-phjqx,Uid:caa493ea-026d-43ba-9a30-a9a39462916f,Namespace:calico-system,Attempt:1,} returns sandbox id \"568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7\"" Jan 29 11:52:06.244383 containerd[2021]: 2025-01-29 11:52:06.135 [INFO][5362] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Jan 29 11:52:06.244383 containerd[2021]: 2025-01-29 11:52:06.135 [INFO][5362] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" iface="eth0" netns="/var/run/netns/cni-430961df-c6d9-19cf-0961-3ce780b8e63c" Jan 29 11:52:06.244383 containerd[2021]: 2025-01-29 11:52:06.135 [INFO][5362] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" iface="eth0" netns="/var/run/netns/cni-430961df-c6d9-19cf-0961-3ce780b8e63c" Jan 29 11:52:06.244383 containerd[2021]: 2025-01-29 11:52:06.136 [INFO][5362] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" iface="eth0" netns="/var/run/netns/cni-430961df-c6d9-19cf-0961-3ce780b8e63c" Jan 29 11:52:06.244383 containerd[2021]: 2025-01-29 11:52:06.136 [INFO][5362] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Jan 29 11:52:06.244383 containerd[2021]: 2025-01-29 11:52:06.136 [INFO][5362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Jan 29 11:52:06.244383 containerd[2021]: 2025-01-29 11:52:06.201 [INFO][5375] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" HandleID="k8s-pod-network.322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0" Jan 29 11:52:06.244383 containerd[2021]: 2025-01-29 11:52:06.201 [INFO][5375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:06.244383 containerd[2021]: 2025-01-29 11:52:06.201 [INFO][5375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:52:06.244383 containerd[2021]: 2025-01-29 11:52:06.220 [WARNING][5375] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" HandleID="k8s-pod-network.322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0" Jan 29 11:52:06.244383 containerd[2021]: 2025-01-29 11:52:06.220 [INFO][5375] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" HandleID="k8s-pod-network.322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0" Jan 29 11:52:06.244383 containerd[2021]: 2025-01-29 11:52:06.228 [INFO][5375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:52:06.244383 containerd[2021]: 2025-01-29 11:52:06.237 [INFO][5362] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Jan 29 11:52:06.254152 containerd[2021]: time="2025-01-29T11:52:06.249643062Z" level=info msg="TearDown network for sandbox \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\" successfully" Jan 29 11:52:06.254152 containerd[2021]: time="2025-01-29T11:52:06.249698094Z" level=info msg="StopPodSandbox for \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\" returns successfully" Jan 29 11:52:06.254152 containerd[2021]: time="2025-01-29T11:52:06.253651902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c789bfd7-xd4wr,Uid:31697fa8-c2a5-4e89-a21e-74d67fa947f6,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:52:06.256589 systemd[1]: run-netns-cni\x2d430961df\x2dc6d9\x2d19cf\x2d0961\x2d3ce780b8e63c.mount: Deactivated successfully. 
Jan 29 11:52:06.695396 systemd-networkd[1931]: cali63ca9f272e9: Link UP Jan 29 11:52:06.697596 systemd-networkd[1931]: cali63ca9f272e9: Gained carrier Jan 29 11:52:06.763971 containerd[2021]: 2025-01-29 11:52:06.470 [INFO][5382] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0 calico-apiserver-9c789bfd7- calico-apiserver 31697fa8-c2a5-4e89-a21e-74d67fa947f6 856 0 2025-01-29 11:51:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9c789bfd7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-25-252 calico-apiserver-9c789bfd7-xd4wr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali63ca9f272e9 [] []}} ContainerID="015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751" Namespace="calico-apiserver" Pod="calico-apiserver-9c789bfd7-xd4wr" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-" Jan 29 11:52:06.763971 containerd[2021]: 2025-01-29 11:52:06.471 [INFO][5382] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751" Namespace="calico-apiserver" Pod="calico-apiserver-9c789bfd7-xd4wr" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0" Jan 29 11:52:06.763971 containerd[2021]: 2025-01-29 11:52:06.583 [INFO][5394] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751" HandleID="k8s-pod-network.015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0" Jan 29 11:52:06.763971 containerd[2021]: 2025-01-29 11:52:06.607 [INFO][5394] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751" HandleID="k8s-pod-network.015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028c9c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-25-252", "pod":"calico-apiserver-9c789bfd7-xd4wr", "timestamp":"2025-01-29 11:52:06.58333196 +0000 UTC"}, Hostname:"ip-172-31-25-252", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:52:06.763971 containerd[2021]: 2025-01-29 11:52:06.607 [INFO][5394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:06.763971 containerd[2021]: 2025-01-29 11:52:06.607 [INFO][5394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
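[Annotation] The assignArgs=ipam.AutoAssignArgs{…} dumps are Go's %#v rendering of the request struct, which is why pointers print as (*string)(0x…) and maps carry their full type names. A stripped-down mirror with only the fields visible in these dumps (the log shows the real type also has pool filters, MaxBlocksPerHost, and host-reservation fields) reproduces the style:

```go
package main

import "fmt"

// AutoAssignArgs mirrors only the fields visible in the dumps above,
// populated with the values logged for the apiserver pod's request.
type AutoAssignArgs struct {
	Num4        int
	Num6        int
	HandleID    *string
	Attrs       map[string]string
	Hostname    string
	IntendedUse string
}

func main() {
	handle := "k8s-pod-network.015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751"
	args := AutoAssignArgs{
		Num4:     1,
		HandleID: &handle,
		Attrs: map[string]string{
			"namespace": "calico-apiserver",
			"node":      "ip-172-31-25-252",
			"pod":       "calico-apiserver-9c789bfd7-xd4wr",
		},
		Hostname:    "ip-172-31-25-252",
		IntendedUse: "Workload",
	}
	fmt.Printf("assignArgs=%#v\n", args) // same %#v style as the lines above
}
```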
Jan 29 11:52:06.763971 containerd[2021]: 2025-01-29 11:52:06.607 [INFO][5394] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-252' Jan 29 11:52:06.763971 containerd[2021]: 2025-01-29 11:52:06.611 [INFO][5394] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751" host="ip-172-31-25-252" Jan 29 11:52:06.763971 containerd[2021]: 2025-01-29 11:52:06.622 [INFO][5394] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-252" Jan 29 11:52:06.763971 containerd[2021]: 2025-01-29 11:52:06.636 [INFO][5394] ipam/ipam.go 489: Trying affinity for 192.168.91.64/26 host="ip-172-31-25-252" Jan 29 11:52:06.763971 containerd[2021]: 2025-01-29 11:52:06.642 [INFO][5394] ipam/ipam.go 155: Attempting to load block cidr=192.168.91.64/26 host="ip-172-31-25-252" Jan 29 11:52:06.763971 containerd[2021]: 2025-01-29 11:52:06.649 [INFO][5394] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.64/26 host="ip-172-31-25-252" Jan 29 11:52:06.763971 containerd[2021]: 2025-01-29 11:52:06.649 [INFO][5394] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.64/26 handle="k8s-pod-network.015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751" host="ip-172-31-25-252" Jan 29 11:52:06.763971 containerd[2021]: 2025-01-29 11:52:06.656 [INFO][5394] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751 Jan 29 11:52:06.763971 containerd[2021]: 2025-01-29 11:52:06.665 [INFO][5394] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.91.64/26 handle="k8s-pod-network.015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751" host="ip-172-31-25-252" Jan 29 11:52:06.763971 containerd[2021]: 2025-01-29 11:52:06.681 [INFO][5394] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.91.69/26] block=192.168.91.64/26 handle="k8s-pod-network.015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751" host="ip-172-31-25-252" Jan 29 11:52:06.763971 containerd[2021]: 2025-01-29 11:52:06.681 [INFO][5394] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.69/26] handle="k8s-pod-network.015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751" host="ip-172-31-25-252" Jan 29 11:52:06.763971 containerd[2021]: 2025-01-29 11:52:06.681 [INFO][5394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
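[Annotation] All four assignments in this section (.66, .67, .68, .69) come out of the same block because block membership is pure bit masking: clear the six host bits below the /26 boundary and every workload address folds back to 192.168.91.64/26, the block this node holds an affinity for:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Workload addresses claimed over the course of this section.
	for _, s := range []string{"192.168.91.66", "192.168.91.67", "192.168.91.68", "192.168.91.69"} {
		addr := netip.MustParseAddr(s)
		block, _ := addr.Prefix(26)          // zero the six host bits
		fmt.Println(s, "is in block", block) // all print 192.168.91.64/26
	}
}
```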
Jan 29 11:52:06.763971 containerd[2021]: 2025-01-29 11:52:06.682 [INFO][5394] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.69/26] IPv6=[] ContainerID="015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751" HandleID="k8s-pod-network.015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0" Jan 29 11:52:06.766930 containerd[2021]: 2025-01-29 11:52:06.686 [INFO][5382] cni-plugin/k8s.go 386: Populated endpoint ContainerID="015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751" Namespace="calico-apiserver" Pod="calico-apiserver-9c789bfd7-xd4wr" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0", GenerateName:"calico-apiserver-9c789bfd7-", Namespace:"calico-apiserver", SelfLink:"", UID:"31697fa8-c2a5-4e89-a21e-74d67fa947f6", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c789bfd7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"", Pod:"calico-apiserver-9c789bfd7-xd4wr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63ca9f272e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:06.766930 containerd[2021]: 2025-01-29 11:52:06.686 [INFO][5382] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.91.69/32] ContainerID="015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751" Namespace="calico-apiserver" Pod="calico-apiserver-9c789bfd7-xd4wr" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0" Jan 29 11:52:06.766930 containerd[2021]: 2025-01-29 11:52:06.687 [INFO][5382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63ca9f272e9 ContainerID="015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751" Namespace="calico-apiserver" Pod="calico-apiserver-9c789bfd7-xd4wr" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0" Jan 29 11:52:06.766930 containerd[2021]: 2025-01-29 11:52:06.699 [INFO][5382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751" Namespace="calico-apiserver" Pod="calico-apiserver-9c789bfd7-xd4wr" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0" Jan 29 11:52:06.766930 containerd[2021]: 2025-01-29 11:52:06.701 [INFO][5382] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751" Namespace="calico-apiserver" Pod="calico-apiserver-9c789bfd7-xd4wr" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0", GenerateName:"calico-apiserver-9c789bfd7-", Namespace:"calico-apiserver", SelfLink:"", UID:"31697fa8-c2a5-4e89-a21e-74d67fa947f6", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c789bfd7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751", Pod:"calico-apiserver-9c789bfd7-xd4wr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63ca9f272e9", MAC:"d2:f5:d9:c6:f7:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:06.766930 containerd[2021]: 2025-01-29 11:52:06.751 [INFO][5382] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751" Namespace="calico-apiserver" Pod="calico-apiserver-9c789bfd7-xd4wr" WorkloadEndpoint="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0" Jan 29 11:52:06.920934 containerd[2021]: time="2025-01-29T11:52:06.920621493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:52:06.920934 containerd[2021]: time="2025-01-29T11:52:06.920734257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:52:06.920934 containerd[2021]: time="2025-01-29T11:52:06.920772789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:52:06.924097 containerd[2021]: time="2025-01-29T11:52:06.920965017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:52:06.938716 containerd[2021]: time="2025-01-29T11:52:06.938638557Z" level=info msg="StopPodSandbox for \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\"" Jan 29 11:52:07.050529 systemd[1]: Started cri-containerd-015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751.scope - libcontainer container 015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751. 
Jan 29 11:52:07.121956 systemd-networkd[1931]: caliad0fb93d22e: Gained IPv6LL Jan 29 11:52:07.262645 containerd[2021]: time="2025-01-29T11:52:07.262583695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c789bfd7-xd4wr,Uid:31697fa8-c2a5-4e89-a21e-74d67fa947f6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751\"" Jan 29 11:52:07.371656 containerd[2021]: 2025-01-29 11:52:07.226 [INFO][5449] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Jan 29 11:52:07.371656 containerd[2021]: 2025-01-29 11:52:07.226 [INFO][5449] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" iface="eth0" netns="/var/run/netns/cni-028e29c7-4181-62a3-f159-765476a41804" Jan 29 11:52:07.371656 containerd[2021]: 2025-01-29 11:52:07.227 [INFO][5449] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" iface="eth0" netns="/var/run/netns/cni-028e29c7-4181-62a3-f159-765476a41804" Jan 29 11:52:07.371656 containerd[2021]: 2025-01-29 11:52:07.229 [INFO][5449] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" iface="eth0" netns="/var/run/netns/cni-028e29c7-4181-62a3-f159-765476a41804" Jan 29 11:52:07.371656 containerd[2021]: 2025-01-29 11:52:07.229 [INFO][5449] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Jan 29 11:52:07.371656 containerd[2021]: 2025-01-29 11:52:07.229 [INFO][5449] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Jan 29 11:52:07.371656 containerd[2021]: 2025-01-29 11:52:07.323 [INFO][5469] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" HandleID="k8s-pod-network.9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Workload="ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0" Jan 29 11:52:07.371656 containerd[2021]: 2025-01-29 11:52:07.324 [INFO][5469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:07.371656 containerd[2021]: 2025-01-29 11:52:07.324 [INFO][5469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:52:07.371656 containerd[2021]: 2025-01-29 11:52:07.351 [WARNING][5469] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" HandleID="k8s-pod-network.9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Workload="ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0" Jan 29 11:52:07.371656 containerd[2021]: 2025-01-29 11:52:07.351 [INFO][5469] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" HandleID="k8s-pod-network.9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Workload="ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0" Jan 29 11:52:07.371656 containerd[2021]: 2025-01-29 11:52:07.360 [INFO][5469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:52:07.371656 containerd[2021]: 2025-01-29 11:52:07.364 [INFO][5449] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Jan 29 11:52:07.375737 containerd[2021]: time="2025-01-29T11:52:07.372272551Z" level=info msg="TearDown network for sandbox \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\" successfully" Jan 29 11:52:07.375737 containerd[2021]: time="2025-01-29T11:52:07.372316735Z" level=info msg="StopPodSandbox for \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\" returns successfully" Jan 29 11:52:07.375737 containerd[2021]: time="2025-01-29T11:52:07.373922551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bmlgf,Uid:b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2,Namespace:calico-system,Attempt:1,}" Jan 29 11:52:07.380708 systemd[1]: run-netns-cni\x2d028e29c7\x2d4181\x2d62a3\x2df159\x2d765476a41804.mount: Deactivated successfully. Jan 29 11:52:07.791567 systemd-networkd[1931]: cali0e8c62eb8e8: Link UP Jan 29 11:52:07.794158 systemd-networkd[1931]: cali0e8c62eb8e8: Gained carrier Jan 29 11:52:07.846345 containerd[2021]: 2025-01-29 11:52:07.558 [INFO][5482] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0 csi-node-driver- calico-system b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2 866 0 2025-01-29 11:51:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-25-252 csi-node-driver-bmlgf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0e8c62eb8e8 [] []}} ContainerID="486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d" Namespace="calico-system" Pod="csi-node-driver-bmlgf" WorkloadEndpoint="ip--172--31--25--252-k8s-csi--node--driver--bmlgf-" Jan 29 11:52:07.846345 containerd[2021]: 2025-01-29 11:52:07.558 [INFO][5482] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d" Namespace="calico-system" Pod="csi-node-driver-bmlgf" WorkloadEndpoint="ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0" Jan 29 11:52:07.846345 containerd[2021]: 2025-01-29 11:52:07.671 [INFO][5497] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d" HandleID="k8s-pod-network.486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d" Workload="ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0" Jan 29 11:52:07.846345 containerd[2021]: 2025-01-29 11:52:07.699 [INFO][5497] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d" HandleID="k8s-pod-network.486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d" Workload="ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003fe280), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-252", "pod":"csi-node-driver-bmlgf", "timestamp":"2025-01-29 11:52:07.671538201 +0000 UTC"}, Hostname:"ip-172-31-25-252", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:52:07.846345 containerd[2021]: 2025-01-29 11:52:07.700 [INFO][5497] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:07.846345 containerd[2021]: 2025-01-29 11:52:07.701 [INFO][5497] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:52:07.846345 containerd[2021]: 2025-01-29 11:52:07.701 [INFO][5497] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-252' Jan 29 11:52:07.846345 containerd[2021]: 2025-01-29 11:52:07.707 [INFO][5497] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d" host="ip-172-31-25-252" Jan 29 11:52:07.846345 containerd[2021]: 2025-01-29 11:52:07.717 [INFO][5497] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-252" Jan 29 11:52:07.846345 containerd[2021]: 2025-01-29 11:52:07.729 [INFO][5497] ipam/ipam.go 489: Trying affinity for 192.168.91.64/26 host="ip-172-31-25-252" Jan 29 11:52:07.846345 containerd[2021]: 2025-01-29 11:52:07.736 [INFO][5497] ipam/ipam.go 155: Attempting to load block cidr=192.168.91.64/26 host="ip-172-31-25-252" Jan 29 11:52:07.846345 containerd[2021]: 2025-01-29 11:52:07.741 [INFO][5497] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.91.64/26 host="ip-172-31-25-252" Jan 29 11:52:07.846345 containerd[2021]: 2025-01-29 11:52:07.741 [INFO][5497] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.91.64/26 handle="k8s-pod-network.486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d" host="ip-172-31-25-252" Jan 29 11:52:07.846345 containerd[2021]: 2025-01-29 11:52:07.745 [INFO][5497] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d Jan 29 11:52:07.846345 containerd[2021]: 2025-01-29 11:52:07.757 [INFO][5497] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.91.64/26 handle="k8s-pod-network.486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d" host="ip-172-31-25-252" Jan 29 11:52:07.846345 containerd[2021]: 2025-01-29 11:52:07.776 [INFO][5497] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.91.70/26] block=192.168.91.64/26 handle="k8s-pod-network.486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d" host="ip-172-31-25-252" Jan 29 11:52:07.846345 containerd[2021]: 2025-01-29 11:52:07.778 [INFO][5497] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.91.70/26] handle="k8s-pod-network.486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d" host="ip-172-31-25-252" Jan 29 11:52:07.846345 containerd[2021]: 2025-01-29 11:52:07.778 [INFO][5497] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:52:07.846345 containerd[2021]: 2025-01-29 11:52:07.778 [INFO][5497] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.70/26] IPv6=[] ContainerID="486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d" HandleID="k8s-pod-network.486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d" Workload="ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0" Jan 29 11:52:07.850627 containerd[2021]: 2025-01-29 11:52:07.784 [INFO][5482] cni-plugin/k8s.go 386: Populated endpoint ContainerID="486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d" Namespace="calico-system" Pod="csi-node-driver-bmlgf" WorkloadEndpoint="ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"", Pod:"csi-node-driver-bmlgf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0e8c62eb8e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:07.850627 containerd[2021]: 2025-01-29 11:52:07.784 [INFO][5482] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.91.70/32] ContainerID="486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d" Namespace="calico-system" Pod="csi-node-driver-bmlgf" WorkloadEndpoint="ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0" Jan 29 11:52:07.850627 containerd[2021]: 2025-01-29 11:52:07.784 [INFO][5482] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e8c62eb8e8 ContainerID="486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d" Namespace="calico-system" Pod="csi-node-driver-bmlgf" WorkloadEndpoint="ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0" Jan 29 11:52:07.850627 containerd[2021]: 2025-01-29 11:52:07.795 [INFO][5482] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d" Namespace="calico-system" Pod="csi-node-driver-bmlgf" WorkloadEndpoint="ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0" Jan 29 11:52:07.850627 containerd[2021]: 2025-01-29 11:52:07.796 [INFO][5482] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d" Namespace="calico-system" 
Pod="csi-node-driver-bmlgf" WorkloadEndpoint="ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d", Pod:"csi-node-driver-bmlgf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0e8c62eb8e8", MAC:"a6:40:b9:ab:b6:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:07.850627 containerd[2021]: 2025-01-29 11:52:07.829 [INFO][5482] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d" Namespace="calico-system" Pod="csi-node-driver-bmlgf" WorkloadEndpoint="ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0" Jan 29 11:52:07.959103 containerd[2021]: time="2025-01-29T11:52:07.957136594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:52:07.959103 containerd[2021]: time="2025-01-29T11:52:07.957242686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:52:07.959103 containerd[2021]: time="2025-01-29T11:52:07.957298198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:52:07.962224 containerd[2021]: time="2025-01-29T11:52:07.961517698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:52:08.069915 systemd[1]: Started cri-containerd-486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d.scope - libcontainer container 486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d. 
Jan 29 11:52:08.088195 containerd[2021]: time="2025-01-29T11:52:08.088124911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:08.090796 containerd[2021]: time="2025-01-29T11:52:08.090698599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 29 11:52:08.093140 containerd[2021]: time="2025-01-29T11:52:08.092743387Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:08.116039 containerd[2021]: time="2025-01-29T11:52:08.115948243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:08.121720 containerd[2021]: time="2025-01-29T11:52:08.121632931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 5.302687682s" Jan 29 11:52:08.121720 containerd[2021]: time="2025-01-29T11:52:08.121707799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 11:52:08.137621 containerd[2021]: time="2025-01-29T11:52:08.131185315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 11:52:08.137621 containerd[2021]: time="2025-01-29T11:52:08.132484615Z" level=info msg="CreateContainer within sandbox \"72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:52:08.181328 containerd[2021]: time="2025-01-29T11:52:08.180813368Z" level=info msg="CreateContainer within sandbox \"72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cd6f83f15c4e68b1a50aedd0569accac7a60b2e4322f7afbdcd10df5c583323f\"" Jan 29 11:52:08.183372 containerd[2021]: time="2025-01-29T11:52:08.183316124Z" level=info msg="StartContainer for \"cd6f83f15c4e68b1a50aedd0569accac7a60b2e4322f7afbdcd10df5c583323f\"" Jan 29 11:52:08.213370 containerd[2021]: time="2025-01-29T11:52:08.212100308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bmlgf,Uid:b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2,Namespace:calico-system,Attempt:1,} returns sandbox id \"486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d\"" Jan 29 11:52:08.276324 systemd[1]: Started cri-containerd-cd6f83f15c4e68b1a50aedd0569accac7a60b2e4322f7afbdcd10df5c583323f.scope - libcontainer container cd6f83f15c4e68b1a50aedd0569accac7a60b2e4322f7afbdcd10df5c583323f. 
Jan 29 11:52:08.389428 containerd[2021]: time="2025-01-29T11:52:08.389259957Z" level=info msg="StartContainer for \"cd6f83f15c4e68b1a50aedd0569accac7a60b2e4322f7afbdcd10df5c583323f\" returns successfully" Jan 29 11:52:08.632169 kubelet[3506]: I0129 11:52:08.629848 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9c789bfd7-nfxg2" podStartSLOduration=26.317765836 podStartE2EDuration="31.62982337s" podCreationTimestamp="2025-01-29 11:51:37 +0000 UTC" firstStartedPulling="2025-01-29 11:52:02.817708745 +0000 UTC m=+40.095874484" lastFinishedPulling="2025-01-29 11:52:08.129766291 +0000 UTC m=+45.407932018" observedRunningTime="2025-01-29 11:52:08.628964206 +0000 UTC m=+45.907129969" watchObservedRunningTime="2025-01-29 11:52:08.62982337 +0000 UTC m=+45.907989109" Jan 29 11:52:08.656510 systemd-networkd[1931]: cali63ca9f272e9: Gained IPv6LL Jan 29 11:52:09.232376 systemd-networkd[1931]: cali0e8c62eb8e8: Gained IPv6LL Jan 29 11:52:09.610850 kubelet[3506]: I0129 11:52:09.610200 3506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:52:10.693364 containerd[2021]: time="2025-01-29T11:52:10.693281520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:10.697026 containerd[2021]: time="2025-01-29T11:52:10.696946800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 29 11:52:10.698124 containerd[2021]: time="2025-01-29T11:52:10.697898220Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:10.704509 containerd[2021]: time="2025-01-29T11:52:10.704418444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:10.708947 containerd[2021]: time="2025-01-29T11:52:10.708849060Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.577583093s" Jan 29 11:52:10.708947 containerd[2021]: time="2025-01-29T11:52:10.708933588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 29 11:52:10.717223 containerd[2021]: time="2025-01-29T11:52:10.713752944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:52:10.765997 containerd[2021]: time="2025-01-29T11:52:10.763047084Z" level=info msg="CreateContainer within sandbox \"568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 11:52:10.797930 systemd[1]: Started sshd@10-172.31.25.252:22-139.178.89.65:56484.service - OpenSSH per-connection server daemon (139.178.89.65:56484). 
Jan 29 11:52:10.816546 containerd[2021]: time="2025-01-29T11:52:10.816474229Z" level=info msg="CreateContainer within sandbox \"568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"118241fe5de45ddac680fd5700511031ebd2eb0749e3d833eaadc04085be708a\"" Jan 29 11:52:10.819414 containerd[2021]: time="2025-01-29T11:52:10.818423953Z" level=info msg="StartContainer for \"118241fe5de45ddac680fd5700511031ebd2eb0749e3d833eaadc04085be708a\"" Jan 29 11:52:10.926115 systemd[1]: Started cri-containerd-118241fe5de45ddac680fd5700511031ebd2eb0749e3d833eaadc04085be708a.scope - libcontainer container 118241fe5de45ddac680fd5700511031ebd2eb0749e3d833eaadc04085be708a. Jan 29 11:52:11.039114 sshd[5607]: Accepted publickey for core from 139.178.89.65 port 56484 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:52:11.047402 sshd[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:11.078723 systemd-logind[1995]: New session 11 of user core. Jan 29 11:52:11.086444 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:52:11.124858 containerd[2021]: time="2025-01-29T11:52:11.124421506Z" level=info msg="StartContainer for \"118241fe5de45ddac680fd5700511031ebd2eb0749e3d833eaadc04085be708a\" returns successfully" Jan 29 11:52:11.130529 containerd[2021]: time="2025-01-29T11:52:11.128542942Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:11.130529 containerd[2021]: time="2025-01-29T11:52:11.128755030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 11:52:11.142018 containerd[2021]: time="2025-01-29T11:52:11.141826798Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 428.001638ms" Jan 29 11:52:11.142018 containerd[2021]: time="2025-01-29T11:52:11.141946498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 11:52:11.146397 containerd[2021]: time="2025-01-29T11:52:11.146228710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:52:11.153472 containerd[2021]: time="2025-01-29T11:52:11.152534374Z" level=info msg="CreateContainer within sandbox \"015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:52:11.182456 containerd[2021]: time="2025-01-29T11:52:11.182373454Z" level=info msg="CreateContainer within sandbox \"015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ee873691e1cf728ca058e54296fec19d156f52f40fd87138f559ae8d945a8d56\"" Jan 29 11:52:11.185135 containerd[2021]: time="2025-01-29T11:52:11.184926946Z" level=info msg="StartContainer for \"ee873691e1cf728ca058e54296fec19d156f52f40fd87138f559ae8d945a8d56\"" Jan 29 11:52:11.316921 systemd[1]: Started 
cri-containerd-ee873691e1cf728ca058e54296fec19d156f52f40fd87138f559ae8d945a8d56.scope - libcontainer container ee873691e1cf728ca058e54296fec19d156f52f40fd87138f559ae8d945a8d56. Jan 29 11:52:11.481417 sshd[5607]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:11.488614 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:52:11.490914 systemd[1]: sshd@10-172.31.25.252:22-139.178.89.65:56484.service: Deactivated successfully. Jan 29 11:52:11.503546 systemd-logind[1995]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:52:11.508994 systemd-logind[1995]: Removed session 11. Jan 29 11:52:11.603130 containerd[2021]: time="2025-01-29T11:52:11.600774204Z" level=info msg="StartContainer for \"ee873691e1cf728ca058e54296fec19d156f52f40fd87138f559ae8d945a8d56\" returns successfully" Jan 29 11:52:11.708125 kubelet[3506]: I0129 11:52:11.705490 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-855db7d7f9-phjqx" podStartSLOduration=30.10447569 podStartE2EDuration="34.705465877s" podCreationTimestamp="2025-01-29 11:51:37 +0000 UTC" firstStartedPulling="2025-01-29 11:52:06.111304313 +0000 UTC m=+43.389470052" lastFinishedPulling="2025-01-29 11:52:10.712294416 +0000 UTC m=+47.990460239" observedRunningTime="2025-01-29 11:52:11.672855793 +0000 UTC m=+48.951021544" watchObservedRunningTime="2025-01-29 11:52:11.705465877 +0000 UTC m=+48.983631616" Jan 29 11:52:11.844521 kubelet[3506]: I0129 11:52:11.844403 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9c789bfd7-xd4wr" podStartSLOduration=30.965915015 podStartE2EDuration="34.844377446s" podCreationTimestamp="2025-01-29 11:51:37 +0000 UTC" firstStartedPulling="2025-01-29 11:52:07.265954267 +0000 UTC m=+44.544119994" lastFinishedPulling="2025-01-29 11:52:11.144416686 +0000 UTC m=+48.422582425" observedRunningTime="2025-01-29 11:52:11.710401861 +0000 UTC m=+48.988568128" watchObservedRunningTime="2025-01-29 11:52:11.844377446 +0000 UTC m=+49.122543185" Jan 29 11:52:11.986161 ntpd[1987]: Listen normally on 8 vxlan.calico 192.168.91.64:123 Jan 29 11:52:11.990554 ntpd[1987]: 29 Jan 11:52:11 ntpd[1987]: Listen normally on 8 vxlan.calico 192.168.91.64:123 Jan 29 11:52:11.990554 ntpd[1987]: 29 Jan 11:52:11 ntpd[1987]: Listen normally on 9 vxlan.calico [fe80::6448:eff:fe95:d21d%4]:123 Jan 29 11:52:11.990554 ntpd[1987]: 29 Jan 11:52:11 ntpd[1987]: Listen normally on 10 cali50dd3384ae3 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 29 11:52:11.990554 ntpd[1987]: 29 Jan 11:52:11 ntpd[1987]: Listen normally on 11 calia503447e7e9 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 29 11:52:11.990554 ntpd[1987]: 29 Jan 11:52:11 ntpd[1987]: Listen normally on 12 calicec22f3c0d7 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 29 11:52:11.990554 ntpd[1987]: 29 Jan 11:52:11 ntpd[1987]: Listen normally on 13 caliad0fb93d22e [fe80::ecee:eeff:feee:eeee%10]:123 Jan 29 11:52:11.990554 ntpd[1987]: 29 Jan 11:52:11 ntpd[1987]: Listen normally on 14 cali63ca9f272e9 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 29 11:52:11.990554 ntpd[1987]: 29 Jan 11:52:11 ntpd[1987]: Listen normally on 15 cali0e8c62eb8e8 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 29 11:52:11.986296 ntpd[1987]: Listen normally on 9 vxlan.calico [fe80::6448:eff:fe95:d21d%4]:123 Jan 29 11:52:11.986380 ntpd[1987]: Listen normally on 10 cali50dd3384ae3 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 29 11:52:11.986447 ntpd[1987]: Listen normally on 11 calia503447e7e9 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 
29 11:52:11.986516 ntpd[1987]: Listen normally on 12 calicec22f3c0d7 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 29 11:52:11.986582 ntpd[1987]: Listen normally on 13 caliad0fb93d22e [fe80::ecee:eeff:feee:eeee%10]:123 Jan 29 11:52:11.986650 ntpd[1987]: Listen normally on 14 cali63ca9f272e9 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 29 11:52:11.986747 ntpd[1987]: Listen normally on 15 cali0e8c62eb8e8 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 29 11:52:12.896135 containerd[2021]: time="2025-01-29T11:52:12.895454379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:12.898887 containerd[2021]: time="2025-01-29T11:52:12.898823619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 29 11:52:12.902140 containerd[2021]: time="2025-01-29T11:52:12.901517439Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:12.906500 containerd[2021]: time="2025-01-29T11:52:12.906403299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:12.908841 containerd[2021]: time="2025-01-29T11:52:12.907735515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.761442737s" Jan 29 11:52:12.908841 containerd[2021]: time="2025-01-29T11:52:12.907813431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 29 11:52:12.913751 containerd[2021]: time="2025-01-29T11:52:12.913684815Z" level=info msg="CreateContainer within sandbox \"486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 11:52:12.949198 containerd[2021]: time="2025-01-29T11:52:12.949117611Z" level=info msg="CreateContainer within sandbox \"486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"475638d52d3808d22994dec7a9911b4ece4cdf88984d95ceb573ad2c73fe11f8\"" Jan 29 11:52:12.950331 containerd[2021]: time="2025-01-29T11:52:12.950274867Z" level=info msg="StartContainer for \"475638d52d3808d22994dec7a9911b4ece4cdf88984d95ceb573ad2c73fe11f8\"" Jan 29 11:52:13.029478 systemd[1]: Started cri-containerd-475638d52d3808d22994dec7a9911b4ece4cdf88984d95ceb573ad2c73fe11f8.scope - libcontainer container 475638d52d3808d22994dec7a9911b4ece4cdf88984d95ceb573ad2c73fe11f8. 
Jan 29 11:52:13.161723 containerd[2021]: time="2025-01-29T11:52:13.161543352Z" level=info msg="StartContainer for \"475638d52d3808d22994dec7a9911b4ece4cdf88984d95ceb573ad2c73fe11f8\" returns successfully" Jan 29 11:52:13.164624 containerd[2021]: time="2025-01-29T11:52:13.164551404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 11:52:14.957203 containerd[2021]: time="2025-01-29T11:52:14.956694281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:14.964838 containerd[2021]: time="2025-01-29T11:52:14.961180625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 29 11:52:14.965530 containerd[2021]: time="2025-01-29T11:52:14.965474057Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:14.978361 containerd[2021]: time="2025-01-29T11:52:14.978301385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:14.981543 containerd[2021]: time="2025-01-29T11:52:14.981459413Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.816827417s" Jan 29 11:52:14.981825 containerd[2021]: time="2025-01-29T11:52:14.981784361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 29 11:52:14.989587 containerd[2021]: time="2025-01-29T11:52:14.989531537Z" level=info msg="CreateContainer within sandbox \"486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 11:52:15.020164 containerd[2021]: time="2025-01-29T11:52:15.020055901Z" level=info msg="CreateContainer within sandbox \"486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b203df6fb04cb0db441830b5188b6211e1d532fcef7278da1f7f7cb4d55aa270\"" Jan 29 11:52:15.021687 containerd[2021]: time="2025-01-29T11:52:15.021639733Z" level=info msg="StartContainer for \"b203df6fb04cb0db441830b5188b6211e1d532fcef7278da1f7f7cb4d55aa270\"" Jan 29 11:52:15.109422 systemd[1]: Started cri-containerd-b203df6fb04cb0db441830b5188b6211e1d532fcef7278da1f7f7cb4d55aa270.scope - libcontainer container b203df6fb04cb0db441830b5188b6211e1d532fcef7278da1f7f7cb4d55aa270. 
Jan 29 11:52:15.263840 containerd[2021]: time="2025-01-29T11:52:15.263646099Z" level=info msg="StartContainer for \"b203df6fb04cb0db441830b5188b6211e1d532fcef7278da1f7f7cb4d55aa270\" returns successfully" Jan 29 11:52:16.135468 kubelet[3506]: I0129 11:52:16.135342 3506 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 11:52:16.136219 kubelet[3506]: I0129 11:52:16.136185 3506 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 11:52:16.539952 systemd[1]: Started sshd@11-172.31.25.252:22-139.178.89.65:55984.service - OpenSSH per-connection server daemon (139.178.89.65:55984). Jan 29 11:52:16.732180 sshd[5812]: Accepted publickey for core from 139.178.89.65 port 55984 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:52:16.736749 sshd[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:16.748806 systemd-logind[1995]: New session 12 of user core. Jan 29 11:52:16.757341 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:52:17.017166 sshd[5812]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:17.022919 systemd[1]: sshd@11-172.31.25.252:22-139.178.89.65:55984.service: Deactivated successfully. Jan 29 11:52:17.029490 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:52:17.035452 systemd-logind[1995]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:52:17.039717 systemd-logind[1995]: Removed session 12. Jan 29 11:52:22.068297 systemd[1]: Started sshd@12-172.31.25.252:22-139.178.89.65:52288.service - OpenSSH per-connection server daemon (139.178.89.65:52288). Jan 29 11:52:22.249572 sshd[5828]: Accepted publickey for core from 139.178.89.65 port 52288 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:52:22.252389 sshd[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:22.261645 systemd-logind[1995]: New session 13 of user core. Jan 29 11:52:22.269415 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:52:22.532350 sshd[5828]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:22.540295 systemd[1]: sshd@12-172.31.25.252:22-139.178.89.65:52288.service: Deactivated successfully. Jan 29 11:52:22.545483 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:52:22.547364 systemd-logind[1995]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:52:22.550770 systemd-logind[1995]: Removed session 13. Jan 29 11:52:22.574648 systemd[1]: Started sshd@13-172.31.25.252:22-139.178.89.65:52298.service - OpenSSH per-connection server daemon (139.178.89.65:52298). Jan 29 11:52:22.766714 sshd[5848]: Accepted publickey for core from 139.178.89.65 port 52298 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:52:22.770509 sshd[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:22.782557 systemd-logind[1995]: New session 14 of user core. Jan 29 11:52:22.789466 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 29 11:52:22.948155 containerd[2021]: time="2025-01-29T11:52:22.948046093Z" level=info msg="StopPodSandbox for \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\"" Jan 29 11:52:23.202469 containerd[2021]: 2025-01-29 11:52:23.088 [WARNING][5869] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0469310f-6c9c-4d38-9ef3-1ec8ed658901", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46", Pod:"coredns-668d6bf9bc-fksp9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicec22f3c0d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:23.202469 containerd[2021]: 2025-01-29 11:52:23.089 [INFO][5869] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Jan 29 11:52:23.202469 containerd[2021]: 2025-01-29 11:52:23.089 [INFO][5869] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" iface="eth0" netns="" Jan 29 11:52:23.202469 containerd[2021]: 2025-01-29 11:52:23.089 [INFO][5869] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Jan 29 11:52:23.202469 containerd[2021]: 2025-01-29 11:52:23.089 [INFO][5869] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Jan 29 11:52:23.202469 containerd[2021]: 2025-01-29 11:52:23.171 [INFO][5876] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" HandleID="k8s-pod-network.b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0" Jan 29 11:52:23.202469 containerd[2021]: 2025-01-29 11:52:23.171 [INFO][5876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:23.202469 containerd[2021]: 2025-01-29 11:52:23.172 [INFO][5876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:52:23.202469 containerd[2021]: 2025-01-29 11:52:23.189 [WARNING][5876] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" HandleID="k8s-pod-network.b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0" Jan 29 11:52:23.202469 containerd[2021]: 2025-01-29 11:52:23.189 [INFO][5876] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" HandleID="k8s-pod-network.b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0" Jan 29 11:52:23.202469 containerd[2021]: 2025-01-29 11:52:23.194 [INFO][5876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:52:23.202469 containerd[2021]: 2025-01-29 11:52:23.198 [INFO][5869] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Jan 29 11:52:23.202469 containerd[2021]: time="2025-01-29T11:52:23.202167346Z" level=info msg="TearDown network for sandbox \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\" successfully" Jan 29 11:52:23.202469 containerd[2021]: time="2025-01-29T11:52:23.202204102Z" level=info msg="StopPodSandbox for \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\" returns successfully" Jan 29 11:52:23.206966 containerd[2021]: time="2025-01-29T11:52:23.205165150Z" level=info msg="RemovePodSandbox for \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\"" Jan 29 11:52:23.206966 containerd[2021]: time="2025-01-29T11:52:23.205262494Z" level=info msg="Forcibly stopping sandbox \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\"" Jan 29 11:52:23.215385 sshd[5848]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:23.231407 systemd[1]: sshd@13-172.31.25.252:22-139.178.89.65:52298.service: Deactivated successfully. Jan 29 11:52:23.240652 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:52:23.248488 systemd-logind[1995]: Session 14 logged out. Waiting for processes to exit. 
Jan 29 11:52:23.272781 systemd[1]: Started sshd@14-172.31.25.252:22-139.178.89.65:52302.service - OpenSSH per-connection server daemon (139.178.89.65:52302). Jan 29 11:52:23.276667 systemd-logind[1995]: Removed session 14. Jan 29 11:52:23.408786 containerd[2021]: 2025-01-29 11:52:23.344 [WARNING][5894] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0469310f-6c9c-4d38-9ef3-1ec8ed658901", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"b4f9e1d34882c6c3be4a8066cff724e45749fb51c0ffaadf3b0add41f4662f46", Pod:"coredns-668d6bf9bc-fksp9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicec22f3c0d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:23.408786 containerd[2021]: 2025-01-29 11:52:23.345 [INFO][5894] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Jan 29 11:52:23.408786 containerd[2021]: 2025-01-29 11:52:23.345 [INFO][5894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" iface="eth0" netns="" Jan 29 11:52:23.408786 containerd[2021]: 2025-01-29 11:52:23.345 [INFO][5894] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Jan 29 11:52:23.408786 containerd[2021]: 2025-01-29 11:52:23.345 [INFO][5894] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Jan 29 11:52:23.408786 containerd[2021]: 2025-01-29 11:52:23.384 [INFO][5905] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" HandleID="k8s-pod-network.b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0" Jan 29 11:52:23.408786 containerd[2021]: 2025-01-29 11:52:23.384 [INFO][5905] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:23.408786 containerd[2021]: 2025-01-29 11:52:23.384 [INFO][5905] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:52:23.408786 containerd[2021]: 2025-01-29 11:52:23.397 [WARNING][5905] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" HandleID="k8s-pod-network.b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0" Jan 29 11:52:23.408786 containerd[2021]: 2025-01-29 11:52:23.397 [INFO][5905] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" HandleID="k8s-pod-network.b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--fksp9-eth0" Jan 29 11:52:23.408786 containerd[2021]: 2025-01-29 11:52:23.403 [INFO][5905] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:52:23.408786 containerd[2021]: 2025-01-29 11:52:23.405 [INFO][5894] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817" Jan 29 11:52:23.409771 containerd[2021]: time="2025-01-29T11:52:23.408830111Z" level=info msg="TearDown network for sandbox \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\" successfully" Jan 29 11:52:23.416846 containerd[2021]: time="2025-01-29T11:52:23.416756135Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:52:23.416995 containerd[2021]: time="2025-01-29T11:52:23.416889851Z" level=info msg="RemovePodSandbox \"b01b0d8eafd4c83deaec2fd8ff6513dee8b2b98ce9c1d5c6d821c3f8a090e817\" returns successfully" Jan 29 11:52:23.417881 containerd[2021]: time="2025-01-29T11:52:23.417827147Z" level=info msg="StopPodSandbox for \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\"" Jan 29 11:52:23.481280 sshd[5901]: Accepted publickey for core from 139.178.89.65 port 52302 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:52:23.485748 sshd[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:23.501681 systemd-logind[1995]: New session 15 of user core. Jan 29 11:52:23.509406 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:52:23.579291 containerd[2021]: 2025-01-29 11:52:23.493 [WARNING][5924] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0", GenerateName:"calico-apiserver-9c789bfd7-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a019af1-803b-44eb-a929-7256065a6820", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c789bfd7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863", Pod:"calico-apiserver-9c789bfd7-nfxg2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali50dd3384ae3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:23.579291 containerd[2021]: 2025-01-29 11:52:23.493 [INFO][5924] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Jan 29 11:52:23.579291 containerd[2021]: 2025-01-29 11:52:23.493 [INFO][5924] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" iface="eth0" netns="" Jan 29 11:52:23.579291 containerd[2021]: 2025-01-29 11:52:23.493 [INFO][5924] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Jan 29 11:52:23.579291 containerd[2021]: 2025-01-29 11:52:23.493 [INFO][5924] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Jan 29 11:52:23.579291 containerd[2021]: 2025-01-29 11:52:23.549 [INFO][5930] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" HandleID="k8s-pod-network.b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0" Jan 29 11:52:23.579291 containerd[2021]: 2025-01-29 11:52:23.549 [INFO][5930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:23.579291 containerd[2021]: 2025-01-29 11:52:23.549 [INFO][5930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:52:23.579291 containerd[2021]: 2025-01-29 11:52:23.568 [WARNING][5930] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" HandleID="k8s-pod-network.b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0" Jan 29 11:52:23.579291 containerd[2021]: 2025-01-29 11:52:23.568 [INFO][5930] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" HandleID="k8s-pod-network.b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0" Jan 29 11:52:23.579291 containerd[2021]: 2025-01-29 11:52:23.572 [INFO][5930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:52:23.579291 containerd[2021]: 2025-01-29 11:52:23.576 [INFO][5924] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Jan 29 11:52:23.580559 containerd[2021]: time="2025-01-29T11:52:23.579332400Z" level=info msg="TearDown network for sandbox \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\" successfully" Jan 29 11:52:23.580559 containerd[2021]: time="2025-01-29T11:52:23.579370488Z" level=info msg="StopPodSandbox for \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\" returns successfully" Jan 29 11:52:23.581892 containerd[2021]: time="2025-01-29T11:52:23.581404464Z" level=info msg="RemovePodSandbox for \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\"" Jan 29 11:52:23.581892 containerd[2021]: time="2025-01-29T11:52:23.581460120Z" level=info msg="Forcibly stopping sandbox \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\"" Jan 29 11:52:23.775707 containerd[2021]: 2025-01-29 11:52:23.674 [WARNING][5949] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0", GenerateName:"calico-apiserver-9c789bfd7-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a019af1-803b-44eb-a929-7256065a6820", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c789bfd7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"72b3c3e1400356dfc2c7c9165a7018d7631f011413009f7db8e00b3d40062863", Pod:"calico-apiserver-9c789bfd7-nfxg2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali50dd3384ae3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:23.775707 containerd[2021]: 2025-01-29 11:52:23.676 [INFO][5949] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Jan 29 11:52:23.775707 containerd[2021]: 2025-01-29 11:52:23.676 [INFO][5949] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" iface="eth0" netns="" Jan 29 11:52:23.775707 containerd[2021]: 2025-01-29 11:52:23.676 [INFO][5949] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Jan 29 11:52:23.775707 containerd[2021]: 2025-01-29 11:52:23.676 [INFO][5949] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Jan 29 11:52:23.775707 containerd[2021]: 2025-01-29 11:52:23.744 [INFO][5963] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" HandleID="k8s-pod-network.b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0" Jan 29 11:52:23.775707 containerd[2021]: 2025-01-29 11:52:23.746 [INFO][5963] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:23.775707 containerd[2021]: 2025-01-29 11:52:23.748 [INFO][5963] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:52:23.775707 containerd[2021]: 2025-01-29 11:52:23.763 [WARNING][5963] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" HandleID="k8s-pod-network.b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0" Jan 29 11:52:23.775707 containerd[2021]: 2025-01-29 11:52:23.763 [INFO][5963] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" HandleID="k8s-pod-network.b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--nfxg2-eth0" Jan 29 11:52:23.775707 containerd[2021]: 2025-01-29 11:52:23.767 [INFO][5963] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:52:23.775707 containerd[2021]: 2025-01-29 11:52:23.772 [INFO][5949] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc" Jan 29 11:52:23.777444 containerd[2021]: time="2025-01-29T11:52:23.775779589Z" level=info msg="TearDown network for sandbox \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\" successfully" Jan 29 11:52:23.783955 containerd[2021]: time="2025-01-29T11:52:23.783657457Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:52:23.783955 containerd[2021]: time="2025-01-29T11:52:23.783782233Z" level=info msg="RemovePodSandbox \"b778ccce710104c9ae2297ce6e10fa271d4d11a38076bb2c59ea5e860dafcecc\" returns successfully" Jan 29 11:52:23.786936 containerd[2021]: time="2025-01-29T11:52:23.786759433Z" level=info msg="StopPodSandbox for \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\"" Jan 29 11:52:23.819031 sshd[5901]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:23.827418 systemd[1]: sshd@14-172.31.25.252:22-139.178.89.65:52302.service: Deactivated successfully. Jan 29 11:52:23.832627 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:52:23.836428 systemd-logind[1995]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:52:23.839587 systemd-logind[1995]: Removed session 15. Jan 29 11:52:23.925580 containerd[2021]: 2025-01-29 11:52:23.869 [WARNING][5981] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0", GenerateName:"calico-apiserver-9c789bfd7-", Namespace:"calico-apiserver", SelfLink:"", UID:"31697fa8-c2a5-4e89-a21e-74d67fa947f6", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c789bfd7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751", Pod:"calico-apiserver-9c789bfd7-xd4wr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63ca9f272e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:23.925580 containerd[2021]: 2025-01-29 11:52:23.870 [INFO][5981] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Jan 29 11:52:23.925580 containerd[2021]: 2025-01-29 11:52:23.870 [INFO][5981] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" iface="eth0" netns="" Jan 29 11:52:23.925580 containerd[2021]: 2025-01-29 11:52:23.870 [INFO][5981] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Jan 29 11:52:23.925580 containerd[2021]: 2025-01-29 11:52:23.870 [INFO][5981] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Jan 29 11:52:23.925580 containerd[2021]: 2025-01-29 11:52:23.905 [INFO][5989] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" HandleID="k8s-pod-network.322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0" Jan 29 11:52:23.925580 containerd[2021]: 2025-01-29 11:52:23.905 [INFO][5989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:23.925580 containerd[2021]: 2025-01-29 11:52:23.906 [INFO][5989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:52:23.925580 containerd[2021]: 2025-01-29 11:52:23.917 [WARNING][5989] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" HandleID="k8s-pod-network.322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0" Jan 29 11:52:23.925580 containerd[2021]: 2025-01-29 11:52:23.917 [INFO][5989] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" HandleID="k8s-pod-network.322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0" Jan 29 11:52:23.925580 containerd[2021]: 2025-01-29 11:52:23.920 [INFO][5989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:52:23.925580 containerd[2021]: 2025-01-29 11:52:23.922 [INFO][5981] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Jan 29 11:52:23.926736 containerd[2021]: time="2025-01-29T11:52:23.926201486Z" level=info msg="TearDown network for sandbox \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\" successfully" Jan 29 11:52:23.926736 containerd[2021]: time="2025-01-29T11:52:23.926245226Z" level=info msg="StopPodSandbox for \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\" returns successfully" Jan 29 11:52:23.926998 containerd[2021]: time="2025-01-29T11:52:23.926943314Z" level=info msg="RemovePodSandbox for \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\"" Jan 29 11:52:23.927060 containerd[2021]: time="2025-01-29T11:52:23.927002630Z" level=info msg="Forcibly stopping sandbox \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\"" Jan 29 11:52:24.056924 containerd[2021]: 2025-01-29 11:52:23.990 [WARNING][6007] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0", GenerateName:"calico-apiserver-9c789bfd7-", Namespace:"calico-apiserver", SelfLink:"", UID:"31697fa8-c2a5-4e89-a21e-74d67fa947f6", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c789bfd7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"015d100ea5347c46f63eaa6aa733ccf1b9ab3964326b5719685d4dab0832c751", Pod:"calico-apiserver-9c789bfd7-xd4wr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63ca9f272e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:24.056924 containerd[2021]: 2025-01-29 11:52:23.991 [INFO][6007] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Jan 29 11:52:24.056924 containerd[2021]: 2025-01-29 11:52:23.991 [INFO][6007] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" iface="eth0" netns="" Jan 29 11:52:24.056924 containerd[2021]: 2025-01-29 11:52:23.991 [INFO][6007] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Jan 29 11:52:24.056924 containerd[2021]: 2025-01-29 11:52:23.991 [INFO][6007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Jan 29 11:52:24.056924 containerd[2021]: 2025-01-29 11:52:24.031 [INFO][6014] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" HandleID="k8s-pod-network.322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0" Jan 29 11:52:24.056924 containerd[2021]: 2025-01-29 11:52:24.031 [INFO][6014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:24.056924 containerd[2021]: 2025-01-29 11:52:24.031 [INFO][6014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:52:24.056924 containerd[2021]: 2025-01-29 11:52:24.045 [WARNING][6014] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" HandleID="k8s-pod-network.322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0" Jan 29 11:52:24.056924 containerd[2021]: 2025-01-29 11:52:24.046 [INFO][6014] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" HandleID="k8s-pod-network.322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Workload="ip--172--31--25--252-k8s-calico--apiserver--9c789bfd7--xd4wr-eth0" Jan 29 11:52:24.056924 containerd[2021]: 2025-01-29 11:52:24.048 [INFO][6014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:52:24.056924 containerd[2021]: 2025-01-29 11:52:24.052 [INFO][6007] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9" Jan 29 11:52:24.060689 containerd[2021]: time="2025-01-29T11:52:24.057039238Z" level=info msg="TearDown network for sandbox \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\" successfully" Jan 29 11:52:24.082486 containerd[2021]: time="2025-01-29T11:52:24.082196806Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:52:24.082486 containerd[2021]: time="2025-01-29T11:52:24.082304782Z" level=info msg="RemovePodSandbox \"322c396f1f523888df5cf564182504aa61ea94574562fbd11e1e1bd922cbc4e9\" returns successfully" Jan 29 11:52:24.083569 containerd[2021]: time="2025-01-29T11:52:24.083133994Z" level=info msg="StopPodSandbox for \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\"" Jan 29 11:52:24.222068 containerd[2021]: 2025-01-29 11:52:24.160 [WARNING][6032] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"317d0f0f-daf3-4642-ac24-1b9a8ffb1530", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667", Pod:"coredns-668d6bf9bc-mqdbw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia503447e7e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:24.222068 containerd[2021]: 2025-01-29 11:52:24.160 [INFO][6032] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Jan 29 11:52:24.222068 containerd[2021]: 2025-01-29 11:52:24.160 [INFO][6032] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" iface="eth0" netns="" Jan 29 11:52:24.222068 containerd[2021]: 2025-01-29 11:52:24.160 [INFO][6032] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Jan 29 11:52:24.222068 containerd[2021]: 2025-01-29 11:52:24.160 [INFO][6032] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Jan 29 11:52:24.222068 containerd[2021]: 2025-01-29 11:52:24.199 [INFO][6038] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" HandleID="k8s-pod-network.d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0" Jan 29 11:52:24.222068 containerd[2021]: 2025-01-29 11:52:24.199 [INFO][6038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:24.222068 containerd[2021]: 2025-01-29 11:52:24.199 [INFO][6038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:52:24.222068 containerd[2021]: 2025-01-29 11:52:24.213 [WARNING][6038] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" HandleID="k8s-pod-network.d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0" Jan 29 11:52:24.222068 containerd[2021]: 2025-01-29 11:52:24.213 [INFO][6038] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" HandleID="k8s-pod-network.d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0" Jan 29 11:52:24.222068 containerd[2021]: 2025-01-29 11:52:24.216 [INFO][6038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:52:24.222068 containerd[2021]: 2025-01-29 11:52:24.219 [INFO][6032] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Jan 29 11:52:24.223257 containerd[2021]: time="2025-01-29T11:52:24.223023923Z" level=info msg="TearDown network for sandbox \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\" successfully" Jan 29 11:52:24.223257 containerd[2021]: time="2025-01-29T11:52:24.223107527Z" level=info msg="StopPodSandbox for \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\" returns successfully" Jan 29 11:52:24.224382 containerd[2021]: time="2025-01-29T11:52:24.224327411Z" level=info msg="RemovePodSandbox for \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\"" Jan 29 11:52:24.224382 containerd[2021]: time="2025-01-29T11:52:24.224384591Z" level=info msg="Forcibly stopping sandbox \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\"" Jan 29 11:52:24.346332 containerd[2021]: 2025-01-29 11:52:24.289 [WARNING][6056] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"317d0f0f-daf3-4642-ac24-1b9a8ffb1530", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"0cd7d3be908c4c94fa0c5c31981a6fa0cd7d4aa8d2badd6c1fb9e182cce4f667", Pod:"coredns-668d6bf9bc-mqdbw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia503447e7e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:24.346332 containerd[2021]: 2025-01-29 11:52:24.289 [INFO][6056] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Jan 29 11:52:24.346332 containerd[2021]: 2025-01-29 11:52:24.289 [INFO][6056] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" iface="eth0" netns="" Jan 29 11:52:24.346332 containerd[2021]: 2025-01-29 11:52:24.289 [INFO][6056] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Jan 29 11:52:24.346332 containerd[2021]: 2025-01-29 11:52:24.289 [INFO][6056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Jan 29 11:52:24.346332 containerd[2021]: 2025-01-29 11:52:24.324 [INFO][6062] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" HandleID="k8s-pod-network.d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0" Jan 29 11:52:24.346332 containerd[2021]: 2025-01-29 11:52:24.324 [INFO][6062] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:24.346332 containerd[2021]: 2025-01-29 11:52:24.325 [INFO][6062] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:52:24.346332 containerd[2021]: 2025-01-29 11:52:24.338 [WARNING][6062] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" HandleID="k8s-pod-network.d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0" Jan 29 11:52:24.346332 containerd[2021]: 2025-01-29 11:52:24.339 [INFO][6062] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" HandleID="k8s-pod-network.d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Workload="ip--172--31--25--252-k8s-coredns--668d6bf9bc--mqdbw-eth0" Jan 29 11:52:24.346332 containerd[2021]: 2025-01-29 11:52:24.341 [INFO][6062] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:52:24.346332 containerd[2021]: 2025-01-29 11:52:24.343 [INFO][6056] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998" Jan 29 11:52:24.346332 containerd[2021]: time="2025-01-29T11:52:24.346249404Z" level=info msg="TearDown network for sandbox \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\" successfully" Jan 29 11:52:24.352810 containerd[2021]: time="2025-01-29T11:52:24.352683636Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:52:24.352810 containerd[2021]: time="2025-01-29T11:52:24.352781340Z" level=info msg="RemovePodSandbox \"d4d112b8f75cf8db4418d2d6ed67941805eb29730ee8e869219dc6f1b21aa998\" returns successfully" Jan 29 11:52:24.354032 containerd[2021]: time="2025-01-29T11:52:24.353656968Z" level=info msg="StopPodSandbox for \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\"" Jan 29 11:52:24.486627 containerd[2021]: 2025-01-29 11:52:24.421 [WARNING][6080] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d", Pod:"csi-node-driver-bmlgf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0e8c62eb8e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:24.486627 containerd[2021]: 2025-01-29 11:52:24.421 [INFO][6080] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Jan 29 11:52:24.486627 containerd[2021]: 2025-01-29 11:52:24.421 [INFO][6080] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" iface="eth0" netns="" Jan 29 11:52:24.486627 containerd[2021]: 2025-01-29 11:52:24.421 [INFO][6080] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Jan 29 11:52:24.486627 containerd[2021]: 2025-01-29 11:52:24.422 [INFO][6080] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Jan 29 11:52:24.486627 containerd[2021]: 2025-01-29 11:52:24.464 [INFO][6086] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" HandleID="k8s-pod-network.9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Workload="ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0" Jan 29 11:52:24.486627 containerd[2021]: 2025-01-29 11:52:24.464 [INFO][6086] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:24.486627 containerd[2021]: 2025-01-29 11:52:24.464 [INFO][6086] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:52:24.486627 containerd[2021]: 2025-01-29 11:52:24.478 [WARNING][6086] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" HandleID="k8s-pod-network.9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Workload="ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0" Jan 29 11:52:24.486627 containerd[2021]: 2025-01-29 11:52:24.479 [INFO][6086] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" HandleID="k8s-pod-network.9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Workload="ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0" Jan 29 11:52:24.486627 containerd[2021]: 2025-01-29 11:52:24.481 [INFO][6086] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:52:24.486627 containerd[2021]: 2025-01-29 11:52:24.484 [INFO][6080] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Jan 29 11:52:24.488123 containerd[2021]: time="2025-01-29T11:52:24.486733476Z" level=info msg="TearDown network for sandbox \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\" successfully" Jan 29 11:52:24.488123 containerd[2021]: time="2025-01-29T11:52:24.486820981Z" level=info msg="StopPodSandbox for \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\" returns successfully" Jan 29 11:52:24.488123 containerd[2021]: time="2025-01-29T11:52:24.487685893Z" level=info msg="RemovePodSandbox for \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\"" Jan 29 11:52:24.488123 containerd[2021]: time="2025-01-29T11:52:24.487730749Z" level=info msg="Forcibly stopping sandbox \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\"" Jan 29 11:52:24.625711 containerd[2021]: 2025-01-29 11:52:24.554 [WARNING][6104] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b30c1856-9cdc-4b5b-8c49-553cd3b8a9f2", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"486103f9884f6af6d06d38e4d63573b381584c3f17a98e5a3f6ddb54894dc87d", Pod:"csi-node-driver-bmlgf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0e8c62eb8e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:24.625711 containerd[2021]: 2025-01-29 11:52:24.554 [INFO][6104] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Jan 29 11:52:24.625711 containerd[2021]: 2025-01-29 11:52:24.554 [INFO][6104] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" iface="eth0" netns="" Jan 29 11:52:24.625711 containerd[2021]: 2025-01-29 11:52:24.554 [INFO][6104] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Jan 29 11:52:24.625711 containerd[2021]: 2025-01-29 11:52:24.554 [INFO][6104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Jan 29 11:52:24.625711 containerd[2021]: 2025-01-29 11:52:24.597 [INFO][6111] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" HandleID="k8s-pod-network.9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Workload="ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0" Jan 29 11:52:24.625711 containerd[2021]: 2025-01-29 11:52:24.598 [INFO][6111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:24.625711 containerd[2021]: 2025-01-29 11:52:24.598 [INFO][6111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:52:24.625711 containerd[2021]: 2025-01-29 11:52:24.617 [WARNING][6111] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" HandleID="k8s-pod-network.9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Workload="ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0" Jan 29 11:52:24.625711 containerd[2021]: 2025-01-29 11:52:24.617 [INFO][6111] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" HandleID="k8s-pod-network.9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Workload="ip--172--31--25--252-k8s-csi--node--driver--bmlgf-eth0" Jan 29 11:52:24.625711 containerd[2021]: 2025-01-29 11:52:24.620 [INFO][6111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:52:24.625711 containerd[2021]: 2025-01-29 11:52:24.623 [INFO][6104] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad" Jan 29 11:52:24.625711 containerd[2021]: time="2025-01-29T11:52:24.625689973Z" level=info msg="TearDown network for sandbox \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\" successfully" Jan 29 11:52:24.632353 containerd[2021]: time="2025-01-29T11:52:24.632222245Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:52:24.632522 containerd[2021]: time="2025-01-29T11:52:24.632408197Z" level=info msg="RemovePodSandbox \"9316312d4e63aca2ed5db109f62e9b81714ed81b19608247894ac1a0fa7ad0ad\" returns successfully" Jan 29 11:52:24.633737 containerd[2021]: time="2025-01-29T11:52:24.633624169Z" level=info msg="StopPodSandbox for \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\"" Jan 29 11:52:24.796973 containerd[2021]: 2025-01-29 11:52:24.722 [WARNING][6129] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0", GenerateName:"calico-kube-controllers-855db7d7f9-", Namespace:"calico-system", SelfLink:"", UID:"caa493ea-026d-43ba-9a30-a9a39462916f", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"855db7d7f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7", Pod:"calico-kube-controllers-855db7d7f9-phjqx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad0fb93d22e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:24.796973 containerd[2021]: 2025-01-29 11:52:24.722 [INFO][6129] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Jan 29 11:52:24.796973 containerd[2021]: 2025-01-29 11:52:24.722 [INFO][6129] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" iface="eth0" netns="" Jan 29 11:52:24.796973 containerd[2021]: 2025-01-29 11:52:24.722 [INFO][6129] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Jan 29 11:52:24.796973 containerd[2021]: 2025-01-29 11:52:24.722 [INFO][6129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Jan 29 11:52:24.796973 containerd[2021]: 2025-01-29 11:52:24.777 [INFO][6136] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" HandleID="k8s-pod-network.41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Workload="ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0" Jan 29 11:52:24.796973 containerd[2021]: 2025-01-29 11:52:24.777 [INFO][6136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:24.796973 containerd[2021]: 2025-01-29 11:52:24.777 [INFO][6136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:52:24.796973 containerd[2021]: 2025-01-29 11:52:24.789 [WARNING][6136] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" HandleID="k8s-pod-network.41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Workload="ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0" Jan 29 11:52:24.796973 containerd[2021]: 2025-01-29 11:52:24.789 [INFO][6136] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" HandleID="k8s-pod-network.41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Workload="ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0" Jan 29 11:52:24.796973 containerd[2021]: 2025-01-29 11:52:24.791 [INFO][6136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:52:24.796973 containerd[2021]: 2025-01-29 11:52:24.794 [INFO][6129] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Jan 29 11:52:24.798928 containerd[2021]: time="2025-01-29T11:52:24.796976774Z" level=info msg="TearDown network for sandbox \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\" successfully" Jan 29 11:52:24.798928 containerd[2021]: time="2025-01-29T11:52:24.797015198Z" level=info msg="StopPodSandbox for \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\" returns successfully" Jan 29 11:52:24.798928 containerd[2021]: time="2025-01-29T11:52:24.798001814Z" level=info msg="RemovePodSandbox for \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\"" Jan 29 11:52:24.798928 containerd[2021]: time="2025-01-29T11:52:24.798053030Z" level=info msg="Forcibly stopping sandbox \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\"" Jan 29 11:52:24.930692 containerd[2021]: 2025-01-29 11:52:24.869 [WARNING][6154] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0", GenerateName:"calico-kube-controllers-855db7d7f9-", Namespace:"calico-system", SelfLink:"", UID:"caa493ea-026d-43ba-9a30-a9a39462916f", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 51, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"855db7d7f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-252", ContainerID:"568bb03f5724d6359153c5525de7ba7f8d808226abe8999171f49e2eb8f999e7", Pod:"calico-kube-controllers-855db7d7f9-phjqx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad0fb93d22e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:52:24.930692 containerd[2021]: 2025-01-29 11:52:24.870 [INFO][6154] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Jan 29 11:52:24.930692 containerd[2021]: 2025-01-29 11:52:24.870 [INFO][6154] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" iface="eth0" netns="" Jan 29 11:52:24.930692 containerd[2021]: 2025-01-29 11:52:24.870 [INFO][6154] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Jan 29 11:52:24.930692 containerd[2021]: 2025-01-29 11:52:24.870 [INFO][6154] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Jan 29 11:52:24.930692 containerd[2021]: 2025-01-29 11:52:24.908 [INFO][6160] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" HandleID="k8s-pod-network.41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Workload="ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0" Jan 29 11:52:24.930692 containerd[2021]: 2025-01-29 11:52:24.909 [INFO][6160] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:52:24.930692 containerd[2021]: 2025-01-29 11:52:24.909 [INFO][6160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:52:24.930692 containerd[2021]: 2025-01-29 11:52:24.922 [WARNING][6160] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" HandleID="k8s-pod-network.41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Workload="ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0" Jan 29 11:52:24.930692 containerd[2021]: 2025-01-29 11:52:24.922 [INFO][6160] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" HandleID="k8s-pod-network.41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Workload="ip--172--31--25--252-k8s-calico--kube--controllers--855db7d7f9--phjqx-eth0" Jan 29 11:52:24.930692 containerd[2021]: 2025-01-29 11:52:24.924 [INFO][6160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:52:24.930692 containerd[2021]: 2025-01-29 11:52:24.927 [INFO][6154] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2" Jan 29 11:52:24.931983 containerd[2021]: time="2025-01-29T11:52:24.931161735Z" level=info msg="TearDown network for sandbox \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\" successfully" Jan 29 11:52:24.940130 containerd[2021]: time="2025-01-29T11:52:24.939996375Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:52:24.940450 containerd[2021]: time="2025-01-29T11:52:24.940310631Z" level=info msg="RemovePodSandbox \"41da048d448bb7fcbe86a64849c9b38c09f332920768dc880584a2ee7e3346d2\" returns successfully" Jan 29 11:52:28.865661 systemd[1]: Started sshd@15-172.31.25.252:22-139.178.89.65:52310.service - OpenSSH per-connection server daemon (139.178.89.65:52310). Jan 29 11:52:29.052794 sshd[6173]: Accepted publickey for core from 139.178.89.65 port 52310 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:52:29.059504 sshd[6173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:29.073288 systemd-logind[1995]: New session 16 of user core. Jan 29 11:52:29.084440 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:52:29.344455 sshd[6173]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:29.351772 systemd[1]: sshd@15-172.31.25.252:22-139.178.89.65:52310.service: Deactivated successfully. Jan 29 11:52:29.356863 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:52:29.358776 systemd-logind[1995]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:52:29.360537 systemd-logind[1995]: Removed session 16. 
Jan 29 11:52:30.472307 kubelet[3506]: I0129 11:52:30.472198 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bmlgf" podStartSLOduration=46.708123817 podStartE2EDuration="53.472176174s" podCreationTimestamp="2025-01-29 11:51:37 +0000 UTC" firstStartedPulling="2025-01-29 11:52:08.220463036 +0000 UTC m=+45.498628787" lastFinishedPulling="2025-01-29 11:52:14.984515405 +0000 UTC m=+52.262681144" observedRunningTime="2025-01-29 11:52:15.704554205 +0000 UTC m=+52.982719968" watchObservedRunningTime="2025-01-29 11:52:30.472176174 +0000 UTC m=+67.750341937" Jan 29 11:52:34.141523 kubelet[3506]: I0129 11:52:34.141388 3506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:52:34.385660 systemd[1]: Started sshd@16-172.31.25.252:22-139.178.89.65:54046.service - OpenSSH per-connection server daemon (139.178.89.65:54046). Jan 29 11:52:34.576827 sshd[6213]: Accepted publickey for core from 139.178.89.65 port 54046 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:52:34.579545 sshd[6213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:34.590454 systemd-logind[1995]: New session 17 of user core. Jan 29 11:52:34.596407 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:52:34.931461 sshd[6213]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:34.939051 systemd[1]: sshd@16-172.31.25.252:22-139.178.89.65:54046.service: Deactivated successfully. Jan 29 11:52:34.945503 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:52:34.947961 systemd-logind[1995]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:52:34.952907 systemd-logind[1995]: Removed session 17. Jan 29 11:52:39.970043 systemd[1]: Started sshd@17-172.31.25.252:22-139.178.89.65:54058.service - OpenSSH per-connection server daemon (139.178.89.65:54058). Jan 29 11:52:40.162822 sshd[6228]: Accepted publickey for core from 139.178.89.65 port 54058 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:52:40.166358 sshd[6228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:40.175562 systemd-logind[1995]: New session 18 of user core. Jan 29 11:52:40.184492 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 11:52:40.467595 sshd[6228]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:40.476916 systemd[1]: sshd@17-172.31.25.252:22-139.178.89.65:54058.service: Deactivated successfully. Jan 29 11:52:40.481250 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:52:40.485502 systemd-logind[1995]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:52:40.489475 systemd-logind[1995]: Removed session 18. Jan 29 11:52:41.681889 systemd[1]: run-containerd-runc-k8s.io-118241fe5de45ddac680fd5700511031ebd2eb0749e3d833eaadc04085be708a-runc.Ko9xXa.mount: Deactivated successfully. Jan 29 11:52:45.510510 systemd[1]: Started sshd@18-172.31.25.252:22-139.178.89.65:35118.service - OpenSSH per-connection server daemon (139.178.89.65:35118). Jan 29 11:52:45.695219 sshd[6266]: Accepted publickey for core from 139.178.89.65 port 35118 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:52:45.697939 sshd[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:45.708232 systemd-logind[1995]: New session 19 of user core. 
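The kubelet pod_startup_latency_tracker entry above for csi-node-driver-bmlgf carries enough timestamps to check its own arithmetic: the end-to-end duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A self-contained Go check, assuming that relationship holds; the logged values agree with it to within nanosecond-level rounding:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(layout, s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	// Timestamps copied from the kubelet line above.
	created := mustParse(layout, "2025-01-29 11:51:37 +0000 UTC")
	firstPull := mustParse(layout, "2025-01-29 11:52:08.220463036 +0000 UTC")
	lastPull := mustParse(layout, "2025-01-29 11:52:14.984515405 +0000 UTC")
	watchObserved := mustParse(layout, "2025-01-29 11:52:30.472176174 +0000 UTC")

	e2e := watchObserved.Sub(created) // podStartE2EDuration: 53.472176174s
	pull := lastPull.Sub(firstPull)   // time spent pulling images: ~6.764s
	slo := e2e - pull                 // ~= podStartSLOduration: 46.708...s

	fmt.Println("e2e:", e2e, "pull:", pull, "slo:", slo)
}
```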
Jan 29 11:52:45.714414 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:52:45.968293 sshd[6266]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:45.976576 systemd-logind[1995]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:52:45.977886 systemd[1]: sshd@18-172.31.25.252:22-139.178.89.65:35118.service: Deactivated successfully. Jan 29 11:52:45.982728 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:52:45.985893 systemd-logind[1995]: Removed session 19. Jan 29 11:52:46.008637 systemd[1]: Started sshd@19-172.31.25.252:22-139.178.89.65:35122.service - OpenSSH per-connection server daemon (139.178.89.65:35122). Jan 29 11:52:46.190841 sshd[6279]: Accepted publickey for core from 139.178.89.65 port 35122 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:52:46.193546 sshd[6279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:46.201297 systemd-logind[1995]: New session 20 of user core. Jan 29 11:52:46.211378 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 11:52:46.705290 sshd[6279]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:46.712692 systemd[1]: sshd@19-172.31.25.252:22-139.178.89.65:35122.service: Deactivated successfully. Jan 29 11:52:46.717809 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:52:46.719704 systemd-logind[1995]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:52:46.722053 systemd-logind[1995]: Removed session 20. Jan 29 11:52:46.743626 systemd[1]: Started sshd@20-172.31.25.252:22-139.178.89.65:35136.service - OpenSSH per-connection server daemon (139.178.89.65:35136). Jan 29 11:52:46.928609 sshd[6290]: Accepted publickey for core from 139.178.89.65 port 35136 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:52:46.931356 sshd[6290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:46.942062 systemd-logind[1995]: New session 21 of user core. Jan 29 11:52:46.949364 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 11:52:48.032705 sshd[6290]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:48.041835 systemd[1]: sshd@20-172.31.25.252:22-139.178.89.65:35136.service: Deactivated successfully. Jan 29 11:52:48.049542 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:52:48.055957 systemd-logind[1995]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:52:48.081924 systemd[1]: Started sshd@21-172.31.25.252:22-139.178.89.65:35144.service - OpenSSH per-connection server daemon (139.178.89.65:35144). Jan 29 11:52:48.085198 systemd-logind[1995]: Removed session 21. Jan 29 11:52:48.284161 sshd[6307]: Accepted publickey for core from 139.178.89.65 port 35144 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:52:48.287529 sshd[6307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:48.297032 systemd-logind[1995]: New session 22 of user core. Jan 29 11:52:48.304433 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 11:52:48.819743 sshd[6307]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:48.827330 systemd-logind[1995]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:52:48.827535 systemd[1]: sshd@21-172.31.25.252:22-139.178.89.65:35144.service: Deactivated successfully. 
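Sessions 15 through 21 above repeat one fixed lifecycle: sshd accepts a public key, pam_unix opens the session, systemd-logind registers it, systemd starts a session-N.scope, and the same actors tear it down in reverse. Because the open and close lines share an sshd PID, session durations can be recovered from the journal text alone. A small illustrative Go parser, assuming this journal's timestamp prefix and the year 2025 (which the timestamps omit):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	// Timestamp prefix used throughout this journal: "Jan 29 11:52:46.931356".
	stampRe = regexp.MustCompile(`^([A-Z][a-z]{2} \d+ \d{2}:\d{2}:\d{2}\.\d+) `)
	openRe  = regexp.MustCompile(`sshd\[(\d+)\]: pam_unix\(sshd:session\): session opened`)
	closeRe = regexp.MustCompile(`sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed`)
)

func main() {
	opened := map[string]time.Time{} // sshd PID -> session open time
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		m := stampRe.FindStringSubmatch(line)
		if m == nil {
			continue
		}
		// The journal omits the year; 2025 is taken from the content above.
		ts, err := time.Parse("2006 Jan 2 15:04:05.999999", "2025 "+m[1])
		if err != nil {
			continue
		}
		if om := openRe.FindStringSubmatch(line); om != nil {
			opened[om[1]] = ts
		} else if cm := closeRe.FindStringSubmatch(line); cm != nil {
			if start, ok := opened[cm[1]]; ok {
				fmt.Printf("sshd[%s] session lasted %v\n", cm[1], ts.Sub(start))
				delete(opened, cm[1])
			}
		}
	}
}
```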
Jan 29 11:52:48.833413 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:52:48.838366 systemd-logind[1995]: Removed session 22. Jan 29 11:52:48.860726 systemd[1]: Started sshd@22-172.31.25.252:22-139.178.89.65:35156.service - OpenSSH per-connection server daemon (139.178.89.65:35156). Jan 29 11:52:49.044495 sshd[6321]: Accepted publickey for core from 139.178.89.65 port 35156 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:52:49.047278 sshd[6321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:49.056766 systemd-logind[1995]: New session 23 of user core. Jan 29 11:52:49.066502 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 11:52:49.316440 sshd[6321]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:49.321731 systemd-logind[1995]: Session 23 logged out. Waiting for processes to exit. Jan 29 11:52:49.323412 systemd[1]: sshd@22-172.31.25.252:22-139.178.89.65:35156.service: Deactivated successfully. Jan 29 11:52:49.326631 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 11:52:49.331230 systemd-logind[1995]: Removed session 23. Jan 29 11:52:54.360585 systemd[1]: Started sshd@23-172.31.25.252:22-139.178.89.65:59558.service - OpenSSH per-connection server daemon (139.178.89.65:59558). Jan 29 11:52:54.541618 sshd[6335]: Accepted publickey for core from 139.178.89.65 port 59558 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:52:54.544462 sshd[6335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:54.553487 systemd-logind[1995]: New session 24 of user core. Jan 29 11:52:54.559359 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 11:52:54.805179 sshd[6335]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:54.811828 systemd[1]: sshd@23-172.31.25.252:22-139.178.89.65:59558.service: Deactivated successfully. Jan 29 11:52:54.816904 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 11:52:54.818751 systemd-logind[1995]: Session 24 logged out. Waiting for processes to exit. Jan 29 11:52:54.821832 systemd-logind[1995]: Removed session 24. Jan 29 11:52:59.860693 systemd[1]: Started sshd@24-172.31.25.252:22-139.178.89.65:59572.service - OpenSSH per-connection server daemon (139.178.89.65:59572). Jan 29 11:53:00.062997 sshd[6354]: Accepted publickey for core from 139.178.89.65 port 59572 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA Jan 29 11:53:00.066762 sshd[6354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:53:00.082210 systemd-logind[1995]: New session 25 of user core. Jan 29 11:53:00.089779 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 11:53:00.359747 sshd[6354]: pam_unix(sshd:session): session closed for user core Jan 29 11:53:00.369564 systemd[1]: sshd@24-172.31.25.252:22-139.178.89.65:59572.service: Deactivated successfully. Jan 29 11:53:00.377686 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 11:53:00.385748 systemd-logind[1995]: Session 25 logged out. Waiting for processes to exit. Jan 29 11:53:00.390590 systemd-logind[1995]: Removed session 25. Jan 29 11:53:05.391604 systemd[1]: Started sshd@25-172.31.25.252:22-139.178.89.65:56222.service - OpenSSH per-connection server daemon (139.178.89.65:56222). 
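Each inbound connection above also gets its own templated unit, whose instance name encodes a sequence number plus the local and remote endpoints, e.g. sshd@25-172.31.25.252:22-139.178.89.65:56222.service. A convenience parser for taking those names back apart; IPv4-only, matching the instances in this journal, and the naming layout is inferred from the log rather than from a documented systemd API:

```go
package main

import (
	"fmt"
	"strings"
)

// parseSSHDUnit splits a per-connection unit name like
// "sshd@25-172.31.25.252:22-139.178.89.65:56222.service" into its sequence
// number, local endpoint, and peer endpoint. A reading aid for this journal,
// not a systemd interface.
func parseSSHDUnit(unit string) (seq, local, peer string, err error) {
	inst := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
	parts := strings.SplitN(inst, "-", 3)
	if len(parts) != 3 {
		return "", "", "", fmt.Errorf("unexpected instance name: %q", inst)
	}
	return parts[0], parts[1], parts[2], nil
}

func main() {
	seq, local, peer, err := parseSSHDUnit("sshd@25-172.31.25.252:22-139.178.89.65:56222.service")
	if err != nil {
		panic(err)
	}
	fmt.Println(seq, local, peer) // 25 172.31.25.252:22 139.178.89.65:56222
}
```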
Jan 29 11:53:05.571289 sshd[6406]: Accepted publickey for core from 139.178.89.65 port 56222 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA
Jan 29 11:53:05.573726 sshd[6406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:53:05.586838 systemd-logind[1995]: New session 26 of user core.
Jan 29 11:53:05.593373 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 11:53:05.882421 sshd[6406]: pam_unix(sshd:session): session closed for user core
Jan 29 11:53:05.893802 systemd[1]: sshd@25-172.31.25.252:22-139.178.89.65:56222.service: Deactivated successfully.
Jan 29 11:53:05.901630 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 11:53:05.904586 systemd-logind[1995]: Session 26 logged out. Waiting for processes to exit.
Jan 29 11:53:05.907356 systemd-logind[1995]: Removed session 26.
Jan 29 11:53:10.923981 systemd[1]: Started sshd@26-172.31.25.252:22-139.178.89.65:56236.service - OpenSSH per-connection server daemon (139.178.89.65:56236).
Jan 29 11:53:11.115237 sshd[6418]: Accepted publickey for core from 139.178.89.65 port 56236 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA
Jan 29 11:53:11.118866 sshd[6418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:53:11.127589 systemd-logind[1995]: New session 27 of user core.
Jan 29 11:53:11.133367 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 29 11:53:11.385506 sshd[6418]: pam_unix(sshd:session): session closed for user core
Jan 29 11:53:11.393718 systemd[1]: sshd@26-172.31.25.252:22-139.178.89.65:56236.service: Deactivated successfully.
Jan 29 11:53:11.398676 systemd[1]: session-27.scope: Deactivated successfully.
Jan 29 11:53:11.401637 systemd-logind[1995]: Session 27 logged out. Waiting for processes to exit.
Jan 29 11:53:11.404729 systemd-logind[1995]: Removed session 27.
Jan 29 11:53:16.433674 systemd[1]: Started sshd@27-172.31.25.252:22-139.178.89.65:49522.service - OpenSSH per-connection server daemon (139.178.89.65:49522).
Jan 29 11:53:16.617753 sshd[6451]: Accepted publickey for core from 139.178.89.65 port 49522 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA
Jan 29 11:53:16.620745 sshd[6451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:53:16.629475 systemd-logind[1995]: New session 28 of user core.
Jan 29 11:53:16.640380 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 29 11:53:16.885862 sshd[6451]: pam_unix(sshd:session): session closed for user core
Jan 29 11:53:16.892700 systemd[1]: sshd@27-172.31.25.252:22-139.178.89.65:49522.service: Deactivated successfully.
Jan 29 11:53:16.896939 systemd[1]: session-28.scope: Deactivated successfully.
Jan 29 11:53:16.899393 systemd-logind[1995]: Session 28 logged out. Waiting for processes to exit.
Jan 29 11:53:16.902771 systemd-logind[1995]: Removed session 28.
Jan 29 11:53:21.928646 systemd[1]: Started sshd@28-172.31.25.252:22-139.178.89.65:53596.service - OpenSSH per-connection server daemon (139.178.89.65:53596).
Jan 29 11:53:22.100867 sshd[6464]: Accepted publickey for core from 139.178.89.65 port 53596 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA
Jan 29 11:53:22.103817 sshd[6464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:53:22.113495 systemd-logind[1995]: New session 29 of user core.
Jan 29 11:53:22.124458 systemd[1]: Started session-29.scope - Session 29 of User core.
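The entries above are a steady cycle of short-lived publickey logins for user core from 139.178.89.65 (sessions 19 through 29, each open for well under a second to a few seconds). A minimal stdlib-Python sketch that pairs systemd-logind's "New session" / "Removed session" entries to compute per-session lifetimes; it assumes the syslog-style timestamp prefix used throughout this log (no year is recorded, so only durations are meaningful):

import re
from datetime import datetime

# Timestamp prefix as it appears in this log, e.g. "Jan 29 11:53:05.586838".
TS_FORMAT = "%b %d %H:%M:%S.%f"
NEW = re.compile(r"^(\w{3} +\d+ [\d:.]+) .*New session (\d+) of user")
REMOVED = re.compile(r"^(\w{3} +\d+ [\d:.]+) .*Removed session (\d+)\.")

def session_lifetimes(lines):
    """Map session number -> timedelta between logind open and close entries."""
    opened, lifetimes = {}, {}
    for line in lines:
        if m := NEW.match(line):
            opened[m.group(2)] = datetime.strptime(m.group(1), TS_FORMAT)
        elif (m := REMOVED.match(line)) and m.group(2) in opened:
            lifetimes[m.group(2)] = (
                datetime.strptime(m.group(1), TS_FORMAT) - opened.pop(m.group(2))
            )
    return lifetimes

sample = [
    "Jan 29 11:53:05.586838 systemd-logind[1995]: New session 26 of user core.",
    "Jan 29 11:53:05.907356 systemd-logind[1995]: Removed session 26.",
]
print(session_lifetimes(sample))  # {'26': datetime.timedelta(microseconds=320518)}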
Jan 29 11:53:22.211658 systemd[1]: Started sshd@29-172.31.25.252:22-186.247.238.86:46333.service - OpenSSH per-connection server daemon (186.247.238.86:46333).
Jan 29 11:53:22.384999 sshd[6464]: pam_unix(sshd:session): session closed for user core
Jan 29 11:53:22.392392 systemd[1]: sshd@28-172.31.25.252:22-139.178.89.65:53596.service: Deactivated successfully.
Jan 29 11:53:22.397209 systemd[1]: session-29.scope: Deactivated successfully.
Jan 29 11:53:22.401177 systemd-logind[1995]: Session 29 logged out. Waiting for processes to exit.
Jan 29 11:53:22.403933 systemd-logind[1995]: Removed session 29.
Jan 29 11:53:24.729550 sshd[6468]: Invalid user Admin from 186.247.238.86 port 46333
Jan 29 11:53:25.171902 sshd[6487]: pam_faillock(sshd:auth): User unknown
Jan 29 11:53:25.175001 sshd[6468]: Postponed keyboard-interactive for invalid user Admin from 186.247.238.86 port 46333 ssh2 [preauth]
Jan 29 11:53:25.800989 sshd[6487]: pam_unix(sshd:auth): check pass; user unknown
Jan 29 11:53:25.801054 sshd[6487]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=186.247.238.86
Jan 29 11:53:25.802256 sshd[6487]: pam_faillock(sshd:auth): User unknown
Jan 29 11:53:27.425659 systemd[1]: Started sshd@30-172.31.25.252:22-139.178.89.65:53612.service - OpenSSH per-connection server daemon (139.178.89.65:53612).
Jan 29 11:53:27.609149 sshd[6489]: Accepted publickey for core from 139.178.89.65 port 53612 ssh2: RSA SHA256:wE0gCvjCVP+d0GmGS1pTTaOfqypOaWtJonSHKa9qiOA
Jan 29 11:53:27.612353 sshd[6489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:53:27.621200 systemd-logind[1995]: New session 30 of user core.
Jan 29 11:53:27.628409 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 29 11:53:27.880393 sshd[6489]: pam_unix(sshd:session): session closed for user core
Jan 29 11:53:27.892605 systemd[1]: sshd@30-172.31.25.252:22-139.178.89.65:53612.service: Deactivated successfully.
Jan 29 11:53:27.903027 systemd[1]: session-30.scope: Deactivated successfully.
Jan 29 11:53:27.908136 systemd-logind[1995]: Session 30 logged out. Waiting for processes to exit.
Jan 29 11:53:27.913268 systemd-logind[1995]: Removed session 30.
Jan 29 11:53:28.355311 sshd[6468]: PAM: Permission denied for illegal user Admin from 186.247.238.86
Jan 29 11:53:28.357858 sshd[6468]: Failed keyboard-interactive/pam for invalid user Admin from 186.247.238.86 port 46333 ssh2
Jan 29 11:53:28.946753 sshd[6468]: Connection closed by invalid user Admin 186.247.238.86 port 46333 [preauth]
Jan 29 11:53:28.958002 systemd[1]: sshd@29-172.31.25.252:22-186.247.238.86:46333.service: Deactivated successfully.
Jan 29 11:53:41.675501 systemd[1]: run-containerd-runc-k8s.io-118241fe5de45ddac680fd5700511031ebd2eb0749e3d833eaadc04085be708a-runc.RohVeF.mount: Deactivated successfully.
Jan 29 11:53:41.917672 systemd[1]: cri-containerd-b63add6cc0be90282e3a21c5fab1f16eaaa72a60074d5e001235599201261ba4.scope: Deactivated successfully.
Jan 29 11:53:41.920351 systemd[1]: cri-containerd-b63add6cc0be90282e3a21c5fab1f16eaaa72a60074d5e001235599201261ba4.scope: Consumed 5.069s CPU time, 20.0M memory peak, 0B memory swap peak.
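In the middle of that cycle, 186.247.238.86 makes a single invalid-user probe: keyboard-interactive authentication for "Admin" is postponed at preauth, fails through pam_unix and pam_faillock, and the connection closes before any session is created. A hedged stdlib-Python sketch that tallies such attempts per source address; the regexes are keyed to the exact sshd/PAM message formats above, and reading the lines from journalctl output is assumed:

import re
from collections import Counter

# Message formats as they appear in this log.
INVALID_USER = re.compile(r"Invalid user (\S+) from (\d{1,3}(?:\.\d{1,3}){3}) port \d+")
AUTH_FAILURE = re.compile(r"pam_unix\(sshd:auth\): authentication failure;.*rhost=(\d{1,3}(?:\.\d{1,3}){3})")

def tally_probes(lines):
    """Count invalid-user and PAM authentication failures per source IP."""
    hits = Counter()
    for line in lines:
        if m := INVALID_USER.search(line):
            hits[m.group(2)] += 1
        elif m := AUTH_FAILURE.search(line):
            hits[m.group(1)] += 1
    return hits

sample = [
    "Jan 29 11:53:24.729550 sshd[6468]: Invalid user Admin from 186.247.238.86 port 46333",
    "Jan 29 11:53:25.801054 sshd[6487]: pam_unix(sshd:auth): authentication failure; "
    "logname= uid=0 euid=0 tty=ssh ruser= rhost=186.247.238.86",
]
print(tally_probes(sample))  # Counter({'186.247.238.86': 2})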
Jan 29 11:53:41.965861 containerd[2021]: time="2025-01-29T11:53:41.965425157Z" level=info msg="shim disconnected" id=b63add6cc0be90282e3a21c5fab1f16eaaa72a60074d5e001235599201261ba4 namespace=k8s.io
Jan 29 11:53:41.965861 containerd[2021]: time="2025-01-29T11:53:41.965673377Z" level=warning msg="cleaning up after shim disconnected" id=b63add6cc0be90282e3a21c5fab1f16eaaa72a60074d5e001235599201261ba4 namespace=k8s.io
Jan 29 11:53:41.965861 containerd[2021]: time="2025-01-29T11:53:41.965700941Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:53:41.966966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b63add6cc0be90282e3a21c5fab1f16eaaa72a60074d5e001235599201261ba4-rootfs.mount: Deactivated successfully.
Jan 29 11:53:42.149696 systemd[1]: cri-containerd-236888f7b53f635e7f84665a87f1f8078d5cc6e6c5de200d3f0430de2e5fdca8.scope: Deactivated successfully.
Jan 29 11:53:42.150591 systemd[1]: cri-containerd-236888f7b53f635e7f84665a87f1f8078d5cc6e6c5de200d3f0430de2e5fdca8.scope: Consumed 7.003s CPU time.
Jan 29 11:53:42.191053 containerd[2021]: time="2025-01-29T11:53:42.190750274Z" level=info msg="shim disconnected" id=236888f7b53f635e7f84665a87f1f8078d5cc6e6c5de200d3f0430de2e5fdca8 namespace=k8s.io
Jan 29 11:53:42.191053 containerd[2021]: time="2025-01-29T11:53:42.190914422Z" level=warning msg="cleaning up after shim disconnected" id=236888f7b53f635e7f84665a87f1f8078d5cc6e6c5de200d3f0430de2e5fdca8 namespace=k8s.io
Jan 29 11:53:42.191053 containerd[2021]: time="2025-01-29T11:53:42.190957706Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:53:42.201138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-236888f7b53f635e7f84665a87f1f8078d5cc6e6c5de200d3f0430de2e5fdca8-rootfs.mount: Deactivated successfully.
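The "shim disconnected" triples above are containerd's v2 runtime noticing that the tasks for b63add6c… and 236888f7… have exited (kube-controller-manager and tigera-operator, per the restarts logged below); systemd then tears down each cri-containerd-<id>.scope and rootfs mount. A small illustrative sketch (stdlib Python, regex keyed to the message format above) that extracts which container IDs lost their shims:

import re

# containerd logs the 64-hex container id and namespace when a shim goes away.
SHIM_GONE = re.compile(r'msg="shim disconnected" id=([0-9a-f]{64}) namespace=(\S+)')

def dead_shims(lines):
    """Yield (container_id, namespace) for each shim-disconnected entry."""
    for line in lines:
        if m := SHIM_GONE.search(line):
            yield m.group(1), m.group(2)

sample = ('Jan 29 11:53:41.965861 containerd[2021]: time="2025-01-29T11:53:41.965425157Z" '
          'level=info msg="shim disconnected" '
          'id=b63add6cc0be90282e3a21c5fab1f16eaaa72a60074d5e001235599201261ba4 namespace=k8s.io')
for cid, ns in dead_shims([sample]):
    print(ns, cid[:12])  # k8s.io b63add6cc0be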
Jan 29 11:53:43.003017 kubelet[3506]: I0129 11:53:43.002946 3506 scope.go:117] "RemoveContainer" containerID="b63add6cc0be90282e3a21c5fab1f16eaaa72a60074d5e001235599201261ba4"
Jan 29 11:53:43.007793 containerd[2021]: time="2025-01-29T11:53:43.007714707Z" level=info msg="CreateContainer within sandbox \"f36f788cdfb749ce522fd6119e000ec2f645dc65b55d0cc8bef9ffe2427656f5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 29 11:53:43.008717 kubelet[3506]: I0129 11:53:43.008679 3506 scope.go:117] "RemoveContainer" containerID="236888f7b53f635e7f84665a87f1f8078d5cc6e6c5de200d3f0430de2e5fdca8"
Jan 29 11:53:43.012758 containerd[2021]: time="2025-01-29T11:53:43.012199719Z" level=info msg="CreateContainer within sandbox \"d83321002ffd3f8d65b6e9f37155382a11c8a68bac9e33861c200608f3755527\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 29 11:53:43.044451 containerd[2021]: time="2025-01-29T11:53:43.044379927Z" level=info msg="CreateContainer within sandbox \"d83321002ffd3f8d65b6e9f37155382a11c8a68bac9e33861c200608f3755527\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"df3c5ca80251e2229335c60e0559c6c7aa3836829493de57bf8bdcc4286d6480\""
Jan 29 11:53:43.048527 containerd[2021]: time="2025-01-29T11:53:43.046335603Z" level=info msg="StartContainer for \"df3c5ca80251e2229335c60e0559c6c7aa3836829493de57bf8bdcc4286d6480\""
Jan 29 11:53:43.053973 containerd[2021]: time="2025-01-29T11:53:43.053823387Z" level=info msg="CreateContainer within sandbox \"f36f788cdfb749ce522fd6119e000ec2f645dc65b55d0cc8bef9ffe2427656f5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"17aaa09fdec3177e0580b75d0295d7aa9948fd7af7ec8ddbf0730e02026a9e9d\""
Jan 29 11:53:43.055815 containerd[2021]: time="2025-01-29T11:53:43.055759263Z" level=info msg="StartContainer for \"17aaa09fdec3177e0580b75d0295d7aa9948fd7af7ec8ddbf0730e02026a9e9d\""
Jan 29 11:53:43.142980 systemd[1]: Started cri-containerd-df3c5ca80251e2229335c60e0559c6c7aa3836829493de57bf8bdcc4286d6480.scope - libcontainer container df3c5ca80251e2229335c60e0559c6c7aa3836829493de57bf8bdcc4286d6480.
Jan 29 11:53:43.168530 systemd[1]: Started cri-containerd-17aaa09fdec3177e0580b75d0295d7aa9948fd7af7ec8ddbf0730e02026a9e9d.scope - libcontainer container 17aaa09fdec3177e0580b75d0295d7aa9948fd7af7ec8ddbf0730e02026a9e9d.
Jan 29 11:53:43.226917 containerd[2021]: time="2025-01-29T11:53:43.225371836Z" level=info msg="StartContainer for \"df3c5ca80251e2229335c60e0559c6c7aa3836829493de57bf8bdcc4286d6480\" returns successfully"
Jan 29 11:53:43.260939 containerd[2021]: time="2025-01-29T11:53:43.260687308Z" level=info msg="StartContainer for \"17aaa09fdec3177e0580b75d0295d7aa9948fd7af7ec8ddbf0730e02026a9e9d\" returns successfully"
Jan 29 11:53:45.896431 kubelet[3506]: E0129 11:53:45.896331 3506 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-252?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 29 11:53:46.544285 systemd[1]: cri-containerd-a54fefdcbab54cdad562d1e4d7c9f66f16b9717f93cb8fd997e402afb964c98e.scope: Deactivated successfully.
Jan 29 11:53:46.544766 systemd[1]: cri-containerd-a54fefdcbab54cdad562d1e4d7c9f66f16b9717f93cb8fd997e402afb964c98e.scope: Consumed 4.293s CPU time, 16.3M memory peak, 0B memory swap peak.
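Here the kubelet logs "RemoveContainer" for both dead IDs and has containerd recreate kube-controller-manager and tigera-operator inside their existing sandboxes; Attempt:1 in &ContainerMetadata{…} marks each as a first restart (the original start would be Attempt:0). A hedged stdlib-Python sketch that pulls the (name, attempt) pairs out of such CreateContainer entries:

import re

# containerd logs each restart as CreateContainer with an incremented
# Attempt counter inside &ContainerMetadata{...}.
META = re.compile(r"&ContainerMetadata\{Name:([\w-]+),Attempt:(\d+),\}")

def restart_attempts(lines):
    """Return {container_name: attempt}; Attempt > 0 marks a restart.
    The dict assignment also dedupes the two lines containerd emits per
    CreateContainer call (the request and the 'returns container id' line)."""
    attempts = {}
    for line in lines:
        for name, attempt in META.findall(line):
            attempts[name] = int(attempt)
    return attempts

# Sandbox id abbreviated here; the real entry carries the full 64-hex id.
sample = ('containerd[2021]: msg="CreateContainer within sandbox \\"f36f788c...\\" '
          'for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"')
print(restart_attempts([sample]))  # {'kube-controller-manager': 1}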
Jan 29 11:53:46.589916 containerd[2021]: time="2025-01-29T11:53:46.589548092Z" level=info msg="shim disconnected" id=a54fefdcbab54cdad562d1e4d7c9f66f16b9717f93cb8fd997e402afb964c98e namespace=k8s.io
Jan 29 11:53:46.589916 containerd[2021]: time="2025-01-29T11:53:46.589635164Z" level=warning msg="cleaning up after shim disconnected" id=a54fefdcbab54cdad562d1e4d7c9f66f16b9717f93cb8fd997e402afb964c98e namespace=k8s.io
Jan 29 11:53:46.589916 containerd[2021]: time="2025-01-29T11:53:46.589657316Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:53:46.601635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a54fefdcbab54cdad562d1e4d7c9f66f16b9717f93cb8fd997e402afb964c98e-rootfs.mount: Deactivated successfully.
Jan 29 11:53:47.037156 kubelet[3506]: I0129 11:53:47.037055 3506 scope.go:117] "RemoveContainer" containerID="a54fefdcbab54cdad562d1e4d7c9f66f16b9717f93cb8fd997e402afb964c98e"
Jan 29 11:53:47.041112 containerd[2021]: time="2025-01-29T11:53:47.040973179Z" level=info msg="CreateContainer within sandbox \"b9f97330e8fbaa3fc001378ec045ef92f1e159d1b42b7ed32c4d0a4aef33a3e6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 29 11:53:47.067373 containerd[2021]: time="2025-01-29T11:53:47.067236343Z" level=info msg="CreateContainer within sandbox \"b9f97330e8fbaa3fc001378ec045ef92f1e159d1b42b7ed32c4d0a4aef33a3e6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d8f6ecf195155bbd621f54283dd52fe3062012d4fe606fe61bd7e02b309557ec\""
Jan 29 11:53:47.069413 containerd[2021]: time="2025-01-29T11:53:47.068251591Z" level=info msg="StartContainer for \"d8f6ecf195155bbd621f54283dd52fe3062012d4fe606fe61bd7e02b309557ec\""
Jan 29 11:53:47.136397 systemd[1]: Started cri-containerd-d8f6ecf195155bbd621f54283dd52fe3062012d4fe606fe61bd7e02b309557ec.scope - libcontainer container d8f6ecf195155bbd621f54283dd52fe3062012d4fe606fe61bd7e02b309557ec.
Jan 29 11:53:47.205894 containerd[2021]: time="2025-01-29T11:53:47.205528135Z" level=info msg="StartContainer for \"d8f6ecf195155bbd621f54283dd52fe3062012d4fe606fe61bd7e02b309557ec\" returns successfully"
Jan 29 11:53:55.897775 kubelet[3506]: E0129 11:53:55.897613 3506 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.252:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-252?timeout=10s\": context deadline exceeded"
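The bracketing "Failed to update lease" errors are the kubelet failing to PUT its node Lease (kube-node-lease/ip-172-31-25-252) to the API server at 172.31.25.252:6443 within the 10s client timeout: first canceled client-side while awaiting headers, then by context deadline, consistent with control-plane components on this node being mid-restart. A minimal sketch (stdlib Python, keyed to the kubelet error format above) distinguishing the two failure modes:

import re

# Node name and timeout as embedded in the lease URL in the error string.
NODE = re.compile(r"leases/([\w.-]+)\?timeout=(\d+)s")

def classify_lease_failure(line):
    """Return (node, timeout_s, mode) for a 'Failed to update lease' entry."""
    if "Failed to update lease" not in line:
        return None
    m = NODE.search(line)
    node, timeout = (m.group(1), int(m.group(2))) if m else (None, None)
    if "Client.Timeout exceeded" in line:
        mode = "client timeout awaiting response headers"
    elif "context deadline exceeded" in line:
        mode = "request context deadline exceeded"
    else:
        mode = "other"
    return node, timeout, mode

sample = ('kubelet[3506]: E0129 11:53:55.897613 3506 controller.go:195] '
          '"Failed to update lease" err="Put \\"https://172.31.25.252:6443'
          '/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases'
          '/ip-172-31-25-252?timeout=10s\\": context deadline exceeded"')
print(classify_lease_failure(sample))
# ('ip-172-31-25-252', 10, 'request context deadline exceeded')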