Jan 17 12:01:11.186940 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 17 12:01:11.186985 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 17 10:42:25 -00 2025 Jan 17 12:01:11.187012 kernel: KASLR disabled due to lack of seed Jan 17 12:01:11.187028 kernel: efi: EFI v2.7 by EDK II Jan 17 12:01:11.187064 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18 Jan 17 12:01:11.187084 kernel: ACPI: Early table checksum verification disabled Jan 17 12:01:11.187102 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 17 12:01:11.187118 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 17 12:01:11.187134 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 17 12:01:11.187149 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jan 17 12:01:11.187172 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 17 12:01:11.187188 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 17 12:01:11.187204 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 17 12:01:11.187219 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 17 12:01:11.187238 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 17 12:01:11.187259 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 17 12:01:11.187276 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 17 12:01:11.187349 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 17 12:01:11.187373 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 17 12:01:11.187390 kernel: printk: bootconsole [uart0] enabled Jan 17 12:01:11.187406 kernel: NUMA: Failed to initialise from firmware Jan 17 12:01:11.187423 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 17 12:01:11.187439 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jan 17 12:01:11.187455 kernel: Zone ranges: Jan 17 12:01:11.187472 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 17 12:01:11.187488 kernel: DMA32 empty Jan 17 12:01:11.187511 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 17 12:01:11.187528 kernel: Movable zone start for each node Jan 17 12:01:11.187544 kernel: Early memory node ranges Jan 17 12:01:11.187560 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 17 12:01:11.187577 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 17 12:01:11.187593 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 17 12:01:11.187609 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 17 12:01:11.187626 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 17 12:01:11.187642 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 17 12:01:11.187658 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 17 12:01:11.187675 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 17 12:01:11.187691 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jan 17 12:01:11.187712 kernel: On 
node 0, zone Normal: 8192 pages in unavailable ranges Jan 17 12:01:11.187729 kernel: psci: probing for conduit method from ACPI. Jan 17 12:01:11.187754 kernel: psci: PSCIv1.0 detected in firmware. Jan 17 12:01:11.187771 kernel: psci: Using standard PSCI v0.2 function IDs Jan 17 12:01:11.187789 kernel: psci: Trusted OS migration not required Jan 17 12:01:11.187811 kernel: psci: SMC Calling Convention v1.1 Jan 17 12:01:11.187828 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 17 12:01:11.187845 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 17 12:01:11.187863 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 17 12:01:11.187880 kernel: Detected PIPT I-cache on CPU0 Jan 17 12:01:11.187897 kernel: CPU features: detected: GIC system register CPU interface Jan 17 12:01:11.187914 kernel: CPU features: detected: Spectre-v2 Jan 17 12:01:11.187931 kernel: CPU features: detected: Spectre-v3a Jan 17 12:01:11.187948 kernel: CPU features: detected: Spectre-BHB Jan 17 12:01:11.187966 kernel: CPU features: detected: ARM erratum 1742098 Jan 17 12:01:11.187983 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 17 12:01:11.188004 kernel: alternatives: applying boot alternatives Jan 17 12:01:11.188026 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3 Jan 17 12:01:11.188044 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:01:11.188062 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 12:01:11.188079 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:01:11.188096 kernel: Fallback order for Node 0: 0 Jan 17 12:01:11.188113 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jan 17 12:01:11.188131 kernel: Policy zone: Normal Jan 17 12:01:11.188148 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:01:11.188165 kernel: software IO TLB: area num 2. Jan 17 12:01:11.188183 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jan 17 12:01:11.188207 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved) Jan 17 12:01:11.188224 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 12:01:11.188242 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:01:11.188260 kernel: rcu: RCU event tracing is enabled. Jan 17 12:01:11.188278 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 12:01:11.188328 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:01:11.188351 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:01:11.188368 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 17 12:01:11.188386 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 12:01:11.188403 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 17 12:01:11.188420 kernel: GICv3: 96 SPIs implemented Jan 17 12:01:11.188444 kernel: GICv3: 0 Extended SPIs implemented Jan 17 12:01:11.188461 kernel: Root IRQ handler: gic_handle_irq Jan 17 12:01:11.188478 kernel: GICv3: GICv3 features: 16 PPIs Jan 17 12:01:11.188495 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 17 12:01:11.188512 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 17 12:01:11.188529 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Jan 17 12:01:11.188547 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Jan 17 12:01:11.188564 kernel: GICv3: using LPI property table @0x00000004000d0000 Jan 17 12:01:11.188581 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 17 12:01:11.188598 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Jan 17 12:01:11.188615 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 12:01:11.188632 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 17 12:01:11.188655 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 17 12:01:11.188672 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 17 12:01:11.188689 kernel: Console: colour dummy device 80x25 Jan 17 12:01:11.188707 kernel: printk: console [tty1] enabled Jan 17 12:01:11.188725 kernel: ACPI: Core revision 20230628 Jan 17 12:01:11.188743 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 17 12:01:11.188761 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:01:11.188779 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:01:11.188796 kernel: landlock: Up and running. Jan 17 12:01:11.188818 kernel: SELinux: Initializing. Jan 17 12:01:11.188837 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:01:11.188854 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:01:11.188872 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:01:11.188890 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:01:11.188908 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:01:11.188926 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:01:11.188943 kernel: Platform MSI: ITS@0x10080000 domain created Jan 17 12:01:11.188960 kernel: PCI/MSI: ITS@0x10080000 domain created Jan 17 12:01:11.188982 kernel: Remapping and enabling EFI services. Jan 17 12:01:11.189000 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:01:11.189018 kernel: Detected PIPT I-cache on CPU1 Jan 17 12:01:11.189035 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 17 12:01:11.189052 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Jan 17 12:01:11.189070 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 17 12:01:11.189087 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 12:01:11.189105 kernel: SMP: Total of 2 processors activated. 
Jan 17 12:01:11.189122 kernel: CPU features: detected: 32-bit EL0 Support Jan 17 12:01:11.189145 kernel: CPU features: detected: 32-bit EL1 Support Jan 17 12:01:11.189163 kernel: CPU features: detected: CRC32 instructions Jan 17 12:01:11.189181 kernel: CPU: All CPU(s) started at EL1 Jan 17 12:01:11.189214 kernel: alternatives: applying system-wide alternatives Jan 17 12:01:11.189239 kernel: devtmpfs: initialized Jan 17 12:01:11.189258 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:01:11.189277 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 12:01:11.189316 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:01:11.189339 kernel: SMBIOS 3.0.0 present. Jan 17 12:01:11.189358 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 17 12:01:11.189383 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:01:11.189402 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 17 12:01:11.189421 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 17 12:01:11.189439 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 17 12:01:11.189458 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:01:11.189476 kernel: audit: type=2000 audit(0.288:1): state=initialized audit_enabled=0 res=1 Jan 17 12:01:11.189495 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:01:11.189519 kernel: cpuidle: using governor menu Jan 17 12:01:11.189537 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 17 12:01:11.189555 kernel: ASID allocator initialised with 65536 entries Jan 17 12:01:11.189574 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:01:11.189592 kernel: Serial: AMBA PL011 UART driver Jan 17 12:01:11.189610 kernel: Modules: 17520 pages in range for non-PLT usage Jan 17 12:01:11.189629 kernel: Modules: 509040 pages in range for PLT usage Jan 17 12:01:11.189647 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:01:11.189666 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:01:11.189690 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 17 12:01:11.189708 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 17 12:01:11.189727 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:01:11.189745 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:01:11.189763 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 17 12:01:11.189781 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 17 12:01:11.189799 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:01:11.189817 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:01:11.189835 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:01:11.189858 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:01:11.189876 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:01:11.189894 kernel: ACPI: Interpreter enabled Jan 17 12:01:11.189912 kernel: ACPI: Using GIC for interrupt routing Jan 17 12:01:11.189930 kernel: ACPI: MCFG table detected, 1 entries Jan 17 12:01:11.189948 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jan 17 12:01:11.190271 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:01:11.190520 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Jan 17 12:01:11.190746 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 17 12:01:11.190970 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jan 17 12:01:11.191230 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jan 17 12:01:11.191259 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 17 12:01:11.191279 kernel: acpiphp: Slot [1] registered Jan 17 12:01:11.191326 kernel: acpiphp: Slot [2] registered Jan 17 12:01:11.191347 kernel: acpiphp: Slot [3] registered Jan 17 12:01:11.191366 kernel: acpiphp: Slot [4] registered Jan 17 12:01:11.191395 kernel: acpiphp: Slot [5] registered Jan 17 12:01:11.191414 kernel: acpiphp: Slot [6] registered Jan 17 12:01:11.191432 kernel: acpiphp: Slot [7] registered Jan 17 12:01:11.191451 kernel: acpiphp: Slot [8] registered Jan 17 12:01:11.191469 kernel: acpiphp: Slot [9] registered Jan 17 12:01:11.191488 kernel: acpiphp: Slot [10] registered Jan 17 12:01:11.191507 kernel: acpiphp: Slot [11] registered Jan 17 12:01:11.191526 kernel: acpiphp: Slot [12] registered Jan 17 12:01:11.191545 kernel: acpiphp: Slot [13] registered Jan 17 12:01:11.191564 kernel: acpiphp: Slot [14] registered Jan 17 12:01:11.191589 kernel: acpiphp: Slot [15] registered Jan 17 12:01:11.191608 kernel: acpiphp: Slot [16] registered Jan 17 12:01:11.191626 kernel: acpiphp: Slot [17] registered Jan 17 12:01:11.191645 kernel: acpiphp: Slot [18] registered Jan 17 12:01:11.191665 kernel: acpiphp: Slot [19] registered Jan 17 12:01:11.191684 kernel: acpiphp: Slot [20] registered Jan 17 12:01:11.191703 kernel: acpiphp: Slot [21] registered Jan 17 12:01:11.191721 kernel: acpiphp: Slot [22] registered Jan 17 12:01:11.191739 kernel: acpiphp: Slot [23] registered Jan 17 12:01:11.191763 kernel: acpiphp: Slot [24] registered Jan 17 12:01:11.191782 kernel: acpiphp: Slot [25] registered Jan 17 12:01:11.191800 kernel: acpiphp: Slot [26] registered Jan 17 12:01:11.191818 kernel: acpiphp: Slot [27] registered Jan 17 12:01:11.191836 kernel: acpiphp: Slot [28] registered Jan 17 12:01:11.191854 kernel: acpiphp: Slot [29] registered Jan 17 12:01:11.191872 kernel: acpiphp: Slot [30] registered Jan 17 12:01:11.191890 kernel: acpiphp: Slot [31] registered Jan 17 12:01:11.191909 kernel: PCI host bridge to bus 0000:00 Jan 17 12:01:11.192171 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 17 12:01:11.194500 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 17 12:01:11.194719 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 17 12:01:11.194909 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jan 17 12:01:11.195170 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jan 17 12:01:11.195437 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jan 17 12:01:11.195653 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jan 17 12:01:11.195889 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 17 12:01:11.196101 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jan 17 12:01:11.197549 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 17 12:01:11.197829 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 17 12:01:11.198042 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jan 17 12:01:11.198248 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Jan 17 12:01:11.198496 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Jan 17 12:01:11.198707 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 17 12:01:11.198926 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jan 17 12:01:11.199170 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jan 17 12:01:11.203029 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jan 17 12:01:11.203289 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jan 17 12:01:11.203532 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jan 17 12:01:11.203758 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 17 12:01:11.203976 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 17 12:01:11.204159 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 17 12:01:11.204185 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 17 12:01:11.204205 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 17 12:01:11.204224 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 17 12:01:11.204243 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 17 12:01:11.204261 kernel: iommu: Default domain type: Translated Jan 17 12:01:11.204279 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 17 12:01:11.204326 kernel: efivars: Registered efivars operations Jan 17 12:01:11.204347 kernel: vgaarb: loaded Jan 17 12:01:11.204366 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 17 12:01:11.204384 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:01:11.204403 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:01:11.204421 kernel: pnp: PnP ACPI init Jan 17 12:01:11.204634 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 17 12:01:11.204662 kernel: pnp: PnP ACPI: found 1 devices Jan 17 12:01:11.204693 kernel: NET: Registered PF_INET protocol family Jan 17 12:01:11.204714 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 12:01:11.204733 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 12:01:11.204752 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:01:11.204770 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:01:11.204789 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 12:01:11.204808 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 12:01:11.204826 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:01:11.204844 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:01:11.204868 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:01:11.204886 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:01:11.204905 kernel: kvm [1]: HYP mode not available Jan 17 12:01:11.204923 kernel: Initialise system trusted keyrings Jan 17 12:01:11.204941 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 12:01:11.204960 kernel: Key type asymmetric registered Jan 17 12:01:11.204978 kernel: Asymmetric key parser 'x509' registered Jan 17 12:01:11.204996 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 17 12:01:11.205015 kernel: io scheduler mq-deadline registered Jan 17 
12:01:11.205038 kernel: io scheduler kyber registered Jan 17 12:01:11.205056 kernel: io scheduler bfq registered Jan 17 12:01:11.205267 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 17 12:01:11.206397 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 17 12:01:11.206438 kernel: ACPI: button: Power Button [PWRB] Jan 17 12:01:11.206458 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 17 12:01:11.206477 kernel: ACPI: button: Sleep Button [SLPB] Jan 17 12:01:11.206497 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:01:11.206528 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 17 12:01:11.206804 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 17 12:01:11.206834 kernel: printk: console [ttyS0] disabled Jan 17 12:01:11.206853 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 17 12:01:11.206873 kernel: printk: console [ttyS0] enabled Jan 17 12:01:11.206892 kernel: printk: bootconsole [uart0] disabled Jan 17 12:01:11.206911 kernel: thunder_xcv, ver 1.0 Jan 17 12:01:11.206930 kernel: thunder_bgx, ver 1.0 Jan 17 12:01:11.206949 kernel: nicpf, ver 1.0 Jan 17 12:01:11.206975 kernel: nicvf, ver 1.0 Jan 17 12:01:11.207239 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 17 12:01:11.210556 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-17T12:01:10 UTC (1737115270) Jan 17 12:01:11.210599 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 12:01:11.210619 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jan 17 12:01:11.210638 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 17 12:01:11.210657 kernel: watchdog: Hard watchdog permanently disabled Jan 17 12:01:11.210675 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:01:11.210704 kernel: Segment Routing with IPv6 Jan 17 12:01:11.210723 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:01:11.210741 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:01:11.210759 kernel: Key type dns_resolver registered Jan 17 12:01:11.210778 kernel: registered taskstats version 1 Jan 17 12:01:11.210796 kernel: Loading compiled-in X.509 certificates Jan 17 12:01:11.210815 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e5b890cba32c3e1c766d9a9b821ee4d2154ffee7' Jan 17 12:01:11.210833 kernel: Key type .fscrypt registered Jan 17 12:01:11.210851 kernel: Key type fscrypt-provisioning registered Jan 17 12:01:11.210873 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 12:01:11.210892 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:01:11.210910 kernel: ima: No architecture policies found Jan 17 12:01:11.210929 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 17 12:01:11.210948 kernel: clk: Disabling unused clocks Jan 17 12:01:11.210967 kernel: Freeing unused kernel memory: 39360K Jan 17 12:01:11.210985 kernel: Run /init as init process Jan 17 12:01:11.211004 kernel: with arguments: Jan 17 12:01:11.211022 kernel: /init Jan 17 12:01:11.211040 kernel: with environment: Jan 17 12:01:11.211083 kernel: HOME=/ Jan 17 12:01:11.211103 kernel: TERM=linux Jan 17 12:01:11.211122 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:01:11.211145 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:01:11.211169 systemd[1]: Detected virtualization amazon. Jan 17 12:01:11.211189 systemd[1]: Detected architecture arm64. Jan 17 12:01:11.211209 systemd[1]: Running in initrd. Jan 17 12:01:11.211233 systemd[1]: No hostname configured, using default hostname. Jan 17 12:01:11.211253 systemd[1]: Hostname set to . Jan 17 12:01:11.211274 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:01:11.213206 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:01:11.213253 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:01:11.213274 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:01:11.213341 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:01:11.213367 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:01:11.213400 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:01:11.213421 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:01:11.213446 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:01:11.213467 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:01:11.213487 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:01:11.213508 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:01:11.213531 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:01:11.213556 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:01:11.213577 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:01:11.213597 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:01:11.213617 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:01:11.213638 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:01:11.213659 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:01:11.213679 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:01:11.213700 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 17 12:01:11.213721 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:01:11.213746 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:01:11.213767 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:01:11.213787 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:01:11.213807 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:01:11.213828 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:01:11.213848 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:01:11.213868 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:01:11.213888 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:01:11.213913 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:01:11.213935 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:01:11.214004 systemd-journald[250]: Collecting audit messages is disabled. Jan 17 12:01:11.214051 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:01:11.214077 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:01:11.214098 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:01:11.214119 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:01:11.214139 kernel: Bridge firewalling registered Jan 17 12:01:11.214165 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:01:11.214185 systemd-journald[250]: Journal started Jan 17 12:01:11.214223 systemd-journald[250]: Runtime Journal (/run/log/journal/ec2ce154237271e8d3b2cfc296d9f7bf) is 8.0M, max 75.3M, 67.3M free. Jan 17 12:01:11.166829 systemd-modules-load[251]: Inserted module 'overlay' Jan 17 12:01:11.206445 systemd-modules-load[251]: Inserted module 'br_netfilter' Jan 17 12:01:11.235184 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:01:11.225093 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:01:11.238573 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:01:11.254609 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:01:11.263061 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:01:11.264453 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:01:11.282853 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:01:11.316444 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:01:11.326176 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:01:11.337681 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:01:11.345336 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:01:11.360608 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:01:11.367591 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 17 12:01:11.392382 dracut-cmdline[287]: dracut-dracut-053 Jan 17 12:01:11.397982 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3 Jan 17 12:01:11.459080 systemd-resolved[288]: Positive Trust Anchors: Jan 17 12:01:11.461062 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:01:11.464434 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:01:11.551334 kernel: SCSI subsystem initialized Jan 17 12:01:11.558326 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:01:11.571327 kernel: iscsi: registered transport (tcp) Jan 17 12:01:11.593454 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:01:11.593541 kernel: QLogic iSCSI HBA Driver Jan 17 12:01:11.684430 kernel: random: crng init done Jan 17 12:01:11.684755 systemd-resolved[288]: Defaulting to hostname 'linux'. Jan 17 12:01:11.688385 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:01:11.690572 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:01:11.713601 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:01:11.724619 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:01:11.767898 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:01:11.768023 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:01:11.768052 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:01:11.834344 kernel: raid6: neonx8 gen() 6720 MB/s Jan 17 12:01:11.851336 kernel: raid6: neonx4 gen() 6521 MB/s Jan 17 12:01:11.868332 kernel: raid6: neonx2 gen() 5427 MB/s Jan 17 12:01:11.885330 kernel: raid6: neonx1 gen() 3947 MB/s Jan 17 12:01:11.902330 kernel: raid6: int64x8 gen() 3829 MB/s Jan 17 12:01:11.919330 kernel: raid6: int64x4 gen() 3708 MB/s Jan 17 12:01:11.936332 kernel: raid6: int64x2 gen() 3619 MB/s Jan 17 12:01:11.954096 kernel: raid6: int64x1 gen() 2767 MB/s Jan 17 12:01:11.954136 kernel: raid6: using algorithm neonx8 gen() 6720 MB/s Jan 17 12:01:11.972059 kernel: raid6: .... 
xor() 4879 MB/s, rmw enabled Jan 17 12:01:11.972110 kernel: raid6: using neon recovery algorithm Jan 17 12:01:11.980429 kernel: xor: measuring software checksum speed Jan 17 12:01:11.980494 kernel: 8regs : 10978 MB/sec Jan 17 12:01:11.981494 kernel: 32regs : 11927 MB/sec Jan 17 12:01:11.982652 kernel: arm64_neon : 9516 MB/sec Jan 17 12:01:11.982684 kernel: xor: using function: 32regs (11927 MB/sec) Jan 17 12:01:12.066343 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:01:12.085855 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:01:12.095643 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:01:12.141686 systemd-udevd[470]: Using default interface naming scheme 'v255'. Jan 17 12:01:12.150676 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:01:12.170683 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:01:12.196653 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation Jan 17 12:01:12.253064 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:01:12.266576 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:01:12.376730 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:01:12.388466 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:01:12.437220 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:01:12.442119 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:01:12.444475 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:01:12.446708 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:01:12.461146 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:01:12.501667 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:01:12.557434 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 17 12:01:12.557505 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 17 12:01:12.586030 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 17 12:01:12.588447 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 17 12:01:12.588710 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:59:9f:8d:cc:05 Jan 17 12:01:12.585918 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:01:12.586140 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:01:12.589657 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:01:12.591787 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:01:12.592064 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:01:12.594463 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:01:12.599747 (udev-worker)[541]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:01:12.612203 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 17 12:01:12.639419 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 17 12:01:12.639503 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 17 12:01:12.648560 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 17 12:01:12.652355 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:01:12.662394 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:01:12.662430 kernel: GPT:9289727 != 16777215 Jan 17 12:01:12.662456 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:01:12.662481 kernel: GPT:9289727 != 16777215 Jan 17 12:01:12.662505 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:01:12.662530 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 17 12:01:12.667648 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:01:12.703104 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:01:12.741342 kernel: BTRFS: device fsid 8c8354db-e4b6-4022-87e4-d06cc74d2d9f devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (527) Jan 17 12:01:12.778324 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (525) Jan 17 12:01:12.809375 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 17 12:01:12.882229 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 17 12:01:12.910357 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 17 12:01:12.924088 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 17 12:01:12.929960 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 17 12:01:12.945534 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:01:12.959178 disk-uuid[658]: Primary Header is updated. Jan 17 12:01:12.959178 disk-uuid[658]: Secondary Entries is updated. Jan 17 12:01:12.959178 disk-uuid[658]: Secondary Header is updated. Jan 17 12:01:12.970353 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 17 12:01:12.978358 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 17 12:01:12.989330 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 17 12:01:13.989320 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 17 12:01:13.991224 disk-uuid[659]: The operation has completed successfully. Jan 17 12:01:14.182689 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:01:14.184551 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:01:14.233624 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:01:14.243414 sh[1003]: Success Jan 17 12:01:14.271352 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 17 12:01:14.398412 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:01:14.407508 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:01:14.410064 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 17 12:01:14.462389 kernel: BTRFS info (device dm-0): first mount of filesystem 8c8354db-e4b6-4022-87e4-d06cc74d2d9f Jan 17 12:01:14.462454 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:01:14.464141 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:01:14.465415 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:01:14.466442 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:01:14.493325 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 17 12:01:14.508480 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:01:14.512364 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:01:14.523582 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:01:14.529583 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:01:14.570673 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:01:14.570759 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:01:14.571977 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 17 12:01:14.579350 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 17 12:01:14.595824 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:01:14.600707 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:01:14.613853 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:01:14.628639 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:01:14.725807 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:01:14.763752 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:01:14.799776 ignition[1128]: Ignition 2.19.0 Jan 17 12:01:14.799806 ignition[1128]: Stage: fetch-offline Jan 17 12:01:14.801449 ignition[1128]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:01:14.801474 ignition[1128]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:01:14.801933 ignition[1128]: Ignition finished successfully Jan 17 12:01:14.823635 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:01:14.824723 systemd-networkd[1201]: lo: Link UP Jan 17 12:01:14.824730 systemd-networkd[1201]: lo: Gained carrier Jan 17 12:01:14.828167 systemd-networkd[1201]: Enumeration completed Jan 17 12:01:14.828883 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:01:14.828889 systemd-networkd[1201]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:01:14.830546 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:01:14.833100 systemd[1]: Reached target network.target - Network. Jan 17 12:01:14.833988 systemd-networkd[1201]: eth0: Link UP Jan 17 12:01:14.834062 systemd-networkd[1201]: eth0: Gained carrier Jan 17 12:01:14.834511 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 17 12:01:14.861711 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 12:01:14.871446 systemd-networkd[1201]: eth0: DHCPv4 address 172.31.18.94/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 17 12:01:14.895546 ignition[1205]: Ignition 2.19.0 Jan 17 12:01:14.898442 ignition[1205]: Stage: fetch Jan 17 12:01:14.900232 ignition[1205]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:01:14.900274 ignition[1205]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:01:14.901927 ignition[1205]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:01:14.913562 ignition[1205]: PUT result: OK Jan 17 12:01:14.916669 ignition[1205]: parsed url from cmdline: "" Jan 17 12:01:14.916685 ignition[1205]: no config URL provided Jan 17 12:01:14.916702 ignition[1205]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:01:14.916727 ignition[1205]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:01:14.916759 ignition[1205]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:01:14.931804 unknown[1205]: fetched base config from "system" Jan 17 12:01:14.918472 ignition[1205]: PUT result: OK Jan 17 12:01:14.931821 unknown[1205]: fetched base config from "system" Jan 17 12:01:14.918584 ignition[1205]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 17 12:01:14.931834 unknown[1205]: fetched user config from "aws" Jan 17 12:01:14.921926 ignition[1205]: GET result: OK Jan 17 12:01:14.938529 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 12:01:14.922914 ignition[1205]: parsing config with SHA512: ee4ddf60ec6be73635c96c2c5f31ddf03a617002861e87b46012fca6627450c181831840b9311fc04779891cf2e6febe78b3adf2735598750090a3f722e7c11b Jan 17 12:01:14.932574 ignition[1205]: fetch: fetch complete Jan 17 12:01:14.932586 ignition[1205]: fetch: fetch passed Jan 17 12:01:14.932662 ignition[1205]: Ignition finished successfully Jan 17 12:01:14.960492 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:01:14.986203 ignition[1212]: Ignition 2.19.0 Jan 17 12:01:14.986726 ignition[1212]: Stage: kargs Jan 17 12:01:14.987436 ignition[1212]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:01:14.987488 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:01:14.987653 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:01:14.991609 ignition[1212]: PUT result: OK Jan 17 12:01:14.999511 ignition[1212]: kargs: kargs passed Jan 17 12:01:14.999663 ignition[1212]: Ignition finished successfully Jan 17 12:01:15.003956 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:01:15.010574 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:01:15.048655 ignition[1219]: Ignition 2.19.0 Jan 17 12:01:15.048686 ignition[1219]: Stage: disks Jan 17 12:01:15.051727 ignition[1219]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:01:15.051776 ignition[1219]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:01:15.052103 ignition[1219]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:01:15.055570 ignition[1219]: PUT result: OK Jan 17 12:01:15.061476 ignition[1219]: disks: disks passed Jan 17 12:01:15.061647 ignition[1219]: Ignition finished successfully Jan 17 12:01:15.067551 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:01:15.073697 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jan 17 12:01:15.076466 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:01:15.078826 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:01:15.080669 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:01:15.082518 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:01:15.101680 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:01:15.146626 systemd-fsck[1228]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 12:01:15.150981 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:01:15.162162 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:01:15.261331 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 5d516319-3144-49e6-9760-d0f29faba535 r/w with ordered data mode. Quota mode: none. Jan 17 12:01:15.263404 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:01:15.265569 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:01:15.282574 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:01:15.288494 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:01:15.292124 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:01:15.292222 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:01:15.292270 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:01:15.312337 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1247) Jan 17 12:01:15.316009 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:01:15.316057 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:01:15.316084 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 17 12:01:15.326152 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:01:15.334029 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 17 12:01:15.337638 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:01:15.344020 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:01:15.480721 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:01:15.489800 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:01:15.498546 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:01:15.507540 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:01:15.653421 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:01:15.663526 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:01:15.677656 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:01:15.695585 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:01:15.698503 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:01:15.737649 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 17 12:01:15.744152 ignition[1362]: INFO : Ignition 2.19.0 Jan 17 12:01:15.744152 ignition[1362]: INFO : Stage: mount Jan 17 12:01:15.747400 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:01:15.747400 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:01:15.751452 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:01:15.754454 ignition[1362]: INFO : PUT result: OK Jan 17 12:01:15.759155 ignition[1362]: INFO : mount: mount passed Jan 17 12:01:15.762341 ignition[1362]: INFO : Ignition finished successfully Jan 17 12:01:15.761895 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:01:15.774557 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:01:15.804653 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:01:15.836317 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1373) Jan 17 12:01:15.841254 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:01:15.841343 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:01:15.841373 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 17 12:01:15.846315 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 17 12:01:15.850694 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:01:15.883749 ignition[1389]: INFO : Ignition 2.19.0 Jan 17 12:01:15.883749 ignition[1389]: INFO : Stage: files Jan 17 12:01:15.886945 ignition[1389]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:01:15.886945 ignition[1389]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:01:15.886945 ignition[1389]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:01:15.893826 ignition[1389]: INFO : PUT result: OK Jan 17 12:01:15.898787 ignition[1389]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:01:15.902496 ignition[1389]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:01:15.902496 ignition[1389]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:01:15.912109 ignition[1389]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:01:15.914795 ignition[1389]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:01:15.917701 unknown[1389]: wrote ssh authorized keys file for user: core Jan 17 12:01:15.919904 ignition[1389]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:01:15.923704 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 17 12:01:15.927333 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 17 12:01:15.963463 systemd-networkd[1201]: eth0: Gained IPv6LL Jan 17 12:01:16.023938 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 12:01:16.400450 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 17 12:01:16.404203 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 
12:01:16.407383 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:01:16.407383 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:01:16.413854 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:01:16.417057 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:01:16.420241 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:01:16.420241 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:01:16.428897 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:01:16.428897 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:01:16.428897 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:01:16.428897 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 17 12:01:16.428897 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 17 12:01:16.428897 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 17 12:01:16.428897 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Jan 17 12:01:16.931882 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 12:01:17.269987 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 17 12:01:17.269987 ignition[1389]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 12:01:17.276800 ignition[1389]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:01:17.276800 ignition[1389]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:01:17.276800 ignition[1389]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 12:01:17.276800 ignition[1389]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:01:17.276800 ignition[1389]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:01:17.276800 ignition[1389]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:01:17.276800 ignition[1389]: INFO : files: createResultFile: createFiles: 
op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:01:17.276800 ignition[1389]: INFO : files: files passed Jan 17 12:01:17.276800 ignition[1389]: INFO : Ignition finished successfully Jan 17 12:01:17.287373 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:01:17.314823 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:01:17.324797 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:01:17.333628 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:01:17.333839 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:01:17.365203 initrd-setup-root-after-ignition[1419]: grep: Jan 17 12:01:17.367430 initrd-setup-root-after-ignition[1423]: grep: Jan 17 12:01:17.367430 initrd-setup-root-after-ignition[1419]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:01:17.367430 initrd-setup-root-after-ignition[1419]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:01:17.375202 initrd-setup-root-after-ignition[1423]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:01:17.379590 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:01:17.386856 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:01:17.398649 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:01:17.454186 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:01:17.454679 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:01:17.458995 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:01:17.461092 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:01:17.468905 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:01:17.476728 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:01:17.505424 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:01:17.519554 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:01:17.543540 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:01:17.546751 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:01:17.550311 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:01:17.555497 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:01:17.555764 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:01:17.562656 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:01:17.567330 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:01:17.569177 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:01:17.571485 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:01:17.578951 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:01:17.581565 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:01:17.583726 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
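The Ignition "files" stage above fetches the kubernetes sysext image over HTTPS ("GET ... attempt #1", "GET result: OK") and writes it under /sysroot before recording the result file. A minimal sketch of that fetch-with-retries pattern, assuming a simple linear backoff and no checksum verification (Ignition's real implementation differs; only the URL and destination path are taken from the log):

import time
import urllib.request
from pathlib import Path

def fetch_with_retries(url: str, dest: Path, attempts: int = 5, backoff: float = 2.0) -> None:
    # Mirror the "[started] writing file" / "GET ... attempt #N" sequence above.
    dest.parent.mkdir(parents=True, exist_ok=True)
    for attempt in range(1, attempts + 1):
        try:
            print(f"GET {url}: attempt #{attempt}")
            with urllib.request.urlopen(url, timeout=30) as resp, open(dest, "wb") as out:
                out.write(resp.read())
            print(f'[finished] writing file "{dest}"')
            return
        except OSError as err:
            print(f"GET result: {err}")
            time.sleep(backoff * attempt)
    raise RuntimeError(f"giving up on {url} after {attempts} attempts")

fetch_with_retries(
    "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw",
    Path("/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"),
)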
Jan 17 12:01:17.591784 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:01:17.594393 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:01:17.597478 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:01:17.599805 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:01:17.600034 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:01:17.608405 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:01:17.610725 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:01:17.614595 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:01:17.614805 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:01:17.623407 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:01:17.623634 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:01:17.629458 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:01:17.629700 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:01:17.632174 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:01:17.632404 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:01:17.645742 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:01:17.649843 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:01:17.650228 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:01:17.672755 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:01:17.677237 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:01:17.679880 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:01:17.685038 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:01:17.685344 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:01:17.704945 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:01:17.705277 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:01:17.718850 ignition[1443]: INFO : Ignition 2.19.0 Jan 17 12:01:17.721264 ignition[1443]: INFO : Stage: umount Jan 17 12:01:17.723392 ignition[1443]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:01:17.725414 ignition[1443]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:01:17.725414 ignition[1443]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:01:17.730712 ignition[1443]: INFO : PUT result: OK Jan 17 12:01:17.738996 ignition[1443]: INFO : umount: umount passed Jan 17 12:01:17.742815 ignition[1443]: INFO : Ignition finished successfully Jan 17 12:01:17.745528 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:01:17.745907 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:01:17.753531 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:01:17.754249 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:01:17.754356 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:01:17.759658 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jan 17 12:01:17.759764 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:01:17.765068 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:01:17.765384 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:01:17.767775 systemd[1]: Stopped target network.target - Network. Jan 17 12:01:17.770598 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:01:17.770694 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:01:17.772875 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:01:17.775574 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:01:17.782710 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:01:17.785752 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:01:17.787435 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:01:17.789199 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:01:17.789281 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:01:17.791885 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:01:17.791959 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:01:17.792129 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:01:17.792212 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:01:17.792724 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:01:17.792798 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:01:17.793920 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:01:17.812625 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:01:17.815178 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:01:17.815376 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:01:17.823080 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:01:17.823214 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:01:17.824565 systemd-networkd[1201]: eth0: DHCPv6 lease lost Jan 17 12:01:17.828566 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:01:17.831007 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:01:17.834030 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:01:17.834233 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:01:17.840210 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:01:17.840331 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:01:17.863981 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:01:17.869176 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:01:17.869321 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:01:17.881996 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:01:17.882100 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:01:17.884213 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:01:17.884326 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 17 12:01:17.886338 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:01:17.886415 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:01:17.888842 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:01:17.916884 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:01:17.917269 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:01:17.931418 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:01:17.931913 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:01:17.939194 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:01:17.939349 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:01:17.943427 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:01:17.943504 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:01:17.950946 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:01:17.951059 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:01:17.953384 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:01:17.953471 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:01:17.963354 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:01:17.963451 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:01:17.975591 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:01:17.979785 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:01:17.979902 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:01:17.982675 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:01:17.982768 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:01:18.020038 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:01:18.020479 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:01:18.026824 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:01:18.038635 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:01:18.064247 systemd[1]: Switching root. Jan 17 12:01:18.102272 systemd-journald[250]: Journal stopped Jan 17 12:01:20.216605 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). Jan 17 12:01:20.216746 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:01:20.216790 kernel: SELinux: policy capability open_perms=1 Jan 17 12:01:20.216821 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:01:20.216852 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:01:20.216884 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:01:20.216916 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:01:20.216947 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:01:20.216979 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:01:20.217014 kernel: audit: type=1403 audit(1737115278.615:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:01:20.217053 systemd[1]: Successfully loaded SELinux policy in 48.408ms. 
Jan 17 12:01:20.217099 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.096ms. Jan 17 12:01:20.217134 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:01:20.217167 systemd[1]: Detected virtualization amazon. Jan 17 12:01:20.217199 systemd[1]: Detected architecture arm64. Jan 17 12:01:20.217231 systemd[1]: Detected first boot. Jan 17 12:01:20.217265 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:01:20.217343 zram_generator::config[1485]: No configuration found. Jan 17 12:01:20.217388 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:01:20.217423 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 12:01:20.217456 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 12:01:20.217489 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:01:20.217522 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:01:20.217556 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:01:20.217586 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:01:20.217629 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:01:20.217667 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:01:20.217701 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:01:20.217733 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:01:20.217763 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:01:20.217797 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:01:20.217829 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:01:20.217860 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:01:20.217890 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:01:20.217925 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:01:20.217959 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:01:20.217990 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:01:20.218022 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:01:20.218054 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 12:01:20.218086 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:01:20.218116 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:01:20.218147 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:01:20.218182 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:01:20.218214 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
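"Initializing machine ID from VM UUID" above means the transient machine ID is seeded from the hypervisor-provided UUID on first boot rather than generated randomly. A small sketch of the idea, assuming the UUID is read from the DMI product UUID at /sys/class/dmi/id/product_uuid (an assumption about the source; the 32-lowercase-hex format of /etc/machine-id is standard):

from pathlib import Path

def machine_id_from_dmi(path: str = "/sys/class/dmi/id/product_uuid") -> str:
    # /etc/machine-id holds 32 lowercase hex characters with no dashes.
    raw = Path(path).read_text().strip()
    mid = raw.replace("-", "").lower()
    if len(mid) != 32 or any(c not in "0123456789abcdef" for c in mid):
        raise ValueError(f"unexpected UUID format: {raw!r}")
    return mid

print(machine_id_from_dmi())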
Jan 17 12:01:20.218244 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:01:20.218275 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:01:20.218913 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:01:20.218961 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:01:20.218996 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:01:20.219051 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:01:20.219087 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:01:20.219130 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:01:20.219165 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:01:20.219199 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:01:20.219233 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:01:20.219268 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:01:20.219356 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:01:20.219392 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:01:20.219426 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:01:20.219458 systemd[1]: Reached target machines.target - Containers. Jan 17 12:01:20.219495 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:01:20.219528 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:01:20.219558 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:01:20.219589 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:01:20.219619 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:01:20.219649 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:01:20.219679 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:01:20.219712 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:01:20.219748 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:01:20.219779 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:01:20.219812 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:01:20.219844 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:01:20.219874 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:01:20.219904 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 12:01:20.219935 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:01:20.219965 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:01:20.219996 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:01:20.220031 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 17 12:01:20.220062 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:01:20.220093 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:01:20.220125 systemd[1]: Stopped verity-setup.service. Jan 17 12:01:20.220155 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:01:20.220185 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:01:20.220215 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:01:20.220245 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:01:20.220285 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:01:20.220360 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:01:20.220393 kernel: loop: module loaded Jan 17 12:01:20.220426 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:01:20.220456 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:01:20.220487 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:01:20.220522 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:01:20.220554 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:01:20.220587 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:01:20.220617 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:01:20.224350 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:01:20.224421 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:01:20.224457 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:01:20.224488 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:01:20.224527 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:01:20.224558 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:01:20.224594 kernel: fuse: init (API version 7.39) Jan 17 12:01:20.224628 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:01:20.224661 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:01:20.224694 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:01:20.224728 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:01:20.224761 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:01:20.224842 systemd-journald[1574]: Collecting audit messages is disabled. Jan 17 12:01:20.224908 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:01:20.224938 systemd-journald[1574]: Journal started Jan 17 12:01:20.224986 systemd-journald[1574]: Runtime Journal (/run/log/journal/ec2ce154237271e8d3b2cfc296d9f7bf) is 8.0M, max 75.3M, 67.3M free. Jan 17 12:01:20.229266 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:01:19.597929 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:01:19.623709 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 17 12:01:19.624502 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jan 17 12:01:20.237589 kernel: ACPI: bus type drm_connector registered Jan 17 12:01:20.237678 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:01:20.251195 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:01:20.251394 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:01:20.268350 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:01:20.284099 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:01:20.284219 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:01:20.289506 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:01:20.292338 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:01:20.292663 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:01:20.295496 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:01:20.295828 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:01:20.301387 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:01:20.304364 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:01:20.357448 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:01:20.381760 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:01:20.396188 kernel: loop0: detected capacity change from 0 to 189592 Jan 17 12:01:20.390960 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:01:20.393507 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:01:20.408691 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:01:20.427012 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:01:20.444619 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:01:20.455633 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:01:20.462366 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:01:20.479814 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:01:20.487634 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:01:20.490634 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:01:20.516523 systemd-journald[1574]: Time spent on flushing to /var/log/journal/ec2ce154237271e8d3b2cfc296d9f7bf is 56.527ms for 916 entries. Jan 17 12:01:20.516523 systemd-journald[1574]: System Journal (/var/log/journal/ec2ce154237271e8d3b2cfc296d9f7bf) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:01:20.591947 systemd-journald[1574]: Received client request to flush runtime journal. Jan 17 12:01:20.592206 kernel: loop1: detected capacity change from 0 to 52536 Jan 17 12:01:20.516735 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:01:20.530695 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Jan 17 12:01:20.597465 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:01:20.603765 udevadm[1628]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 12:01:20.616762 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:01:20.630631 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:01:20.693342 kernel: loop2: detected capacity change from 0 to 114328 Jan 17 12:01:20.707535 systemd-tmpfiles[1633]: ACLs are not supported, ignoring. Jan 17 12:01:20.707567 systemd-tmpfiles[1633]: ACLs are not supported, ignoring. Jan 17 12:01:20.725523 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:01:20.758205 kernel: loop3: detected capacity change from 0 to 114432 Jan 17 12:01:20.820685 kernel: loop4: detected capacity change from 0 to 189592 Jan 17 12:01:20.862965 kernel: loop5: detected capacity change from 0 to 52536 Jan 17 12:01:20.879358 kernel: loop6: detected capacity change from 0 to 114328 Jan 17 12:01:20.912789 kernel: loop7: detected capacity change from 0 to 114432 Jan 17 12:01:20.932067 (sd-merge)[1641]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 17 12:01:20.933070 (sd-merge)[1641]: Merged extensions into '/usr'. Jan 17 12:01:20.958256 systemd[1]: Reloading requested from client PID 1596 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:01:20.958365 systemd[1]: Reloading... Jan 17 12:01:21.176570 zram_generator::config[1670]: No configuration found. Jan 17 12:01:21.225348 ldconfig[1593]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:01:21.474186 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:01:21.596250 systemd[1]: Reloading finished in 636 ms. Jan 17 12:01:21.639356 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:01:21.644434 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:01:21.647223 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:01:21.665621 systemd[1]: Starting ensure-sysext.service... Jan 17 12:01:21.671639 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:01:21.683608 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:01:21.693520 systemd[1]: Reloading requested from client PID 1720 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:01:21.693558 systemd[1]: Reloading... Jan 17 12:01:21.755561 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:01:21.756243 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:01:21.758167 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:01:21.758724 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. Jan 17 12:01:21.758880 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. 
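The "(sd-merge)" lines above show systemd-sysext discovering the containerd-flatcar, docker-flatcar, kubernetes and oem-ami extension images and overlaying them onto /usr and /opt, which is what triggers the service reload that follows. As a rough illustration of the discovery step, assuming the documented systemd-sysext search paths and a simple *.raw glob (the output format is invented):

from pathlib import Path

SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

for d in map(Path, SEARCH_DIRS):
    if d.is_dir():
        for image in sorted(d.glob("*.raw")):
            target = f" -> {image.resolve()}" if image.is_symlink() else ""
            print(f"sysext image: {image}{target}")

This lines up with the kubernetes.raw symlink Ignition wrote under /etc/extensions earlier in the log, pointing at /opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw.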
Jan 17 12:01:21.764382 systemd-udevd[1722]: Using default interface naming scheme 'v255'. Jan 17 12:01:21.764384 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:01:21.764402 systemd-tmpfiles[1721]: Skipping /boot Jan 17 12:01:21.797053 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:01:21.798231 systemd-tmpfiles[1721]: Skipping /boot Jan 17 12:01:21.889354 zram_generator::config[1748]: No configuration found. Jan 17 12:01:22.033986 (udev-worker)[1753]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:01:22.270334 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1786) Jan 17 12:01:22.296926 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:01:22.447237 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 12:01:22.447800 systemd[1]: Reloading finished in 753 ms. Jan 17 12:01:22.477734 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:01:22.481263 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:01:22.542478 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:01:22.562848 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:01:22.570856 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:01:22.580860 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:01:22.589807 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:01:22.618119 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:01:22.626168 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:01:22.692624 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:01:22.705116 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:01:22.714637 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:01:22.721709 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:01:22.724165 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:01:22.732649 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:01:22.739460 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:01:22.755120 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 17 12:01:22.765148 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:01:22.786167 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:01:22.793128 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:01:22.802849 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 17 12:01:22.811470 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:01:22.815278 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:01:22.816856 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:01:22.820550 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:01:22.821418 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:01:22.849074 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:01:22.850553 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:01:22.860575 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:01:22.875597 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:01:22.882905 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:01:22.889579 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:01:22.891616 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:01:22.891716 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:01:22.891798 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:01:22.896617 systemd[1]: Finished ensure-sysext.service. Jan 17 12:01:22.931289 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:01:22.948682 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:01:22.953378 lvm[1939]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:01:22.952926 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:01:22.953241 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:01:22.965446 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:01:22.983187 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:01:22.984680 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:01:22.988796 augenrules[1959]: No rules Jan 17 12:01:22.988784 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:01:22.989431 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:01:22.993551 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:01:23.001267 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:01:23.002509 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:01:23.028041 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:01:23.032679 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:01:23.046630 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:01:23.049267 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 17 12:01:23.052215 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:01:23.081183 lvm[1971]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:01:23.138384 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:01:23.187924 systemd-networkd[1917]: lo: Link UP Jan 17 12:01:23.187945 systemd-networkd[1917]: lo: Gained carrier Jan 17 12:01:23.189346 systemd-resolved[1918]: Positive Trust Anchors: Jan 17 12:01:23.189798 systemd-resolved[1918]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:01:23.189878 systemd-resolved[1918]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:01:23.190971 systemd-networkd[1917]: Enumeration completed Jan 17 12:01:23.191401 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:01:23.193600 systemd-networkd[1917]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:01:23.193607 systemd-networkd[1917]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:01:23.197429 systemd-networkd[1917]: eth0: Link UP Jan 17 12:01:23.197781 systemd-networkd[1917]: eth0: Gained carrier Jan 17 12:01:23.197821 systemd-networkd[1917]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:01:23.201675 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:01:23.206515 systemd-resolved[1918]: Defaulting to hostname 'linux'. Jan 17 12:01:23.210453 systemd-networkd[1917]: eth0: DHCPv4 address 172.31.18.94/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 17 12:01:23.213119 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:01:23.216323 systemd[1]: Reached target network.target - Network. Jan 17 12:01:23.218413 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:01:23.221526 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:01:23.223839 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:01:23.228803 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:01:23.231638 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:01:23.233796 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:01:23.236047 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:01:23.238322 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:01:23.238380 systemd[1]: Reached target paths.target - Path Units. 
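The DHCPv4 lease above (172.31.18.94/20 with gateway 172.31.16.1, acquired from 172.31.16.1) can be sanity-checked with the standard ipaddress module; a small worked check using only values from the log:

import ipaddress

iface = ipaddress.ip_interface("172.31.18.94/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)             # 172.31.16.0/20
print(gateway in iface.network)  # True: the gateway is on-link for eth0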
Jan 17 12:01:23.239998 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:01:23.242372 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:01:23.247286 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:01:23.260853 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:01:23.263953 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:01:23.266237 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:01:23.268134 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:01:23.269890 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:01:23.269944 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:01:23.272195 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:01:23.279684 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 12:01:23.290643 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:01:23.298559 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:01:23.311638 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:01:23.315505 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:01:23.336875 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:01:23.344636 systemd[1]: Started ntpd.service - Network Time Service. Jan 17 12:01:23.353465 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:01:23.361561 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 17 12:01:23.369111 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:01:23.377398 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:01:23.390657 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:01:23.405354 jq[1987]: false Jan 17 12:01:23.393568 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:01:23.395703 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:01:23.400660 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:01:23.406881 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:01:23.417168 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:01:23.420405 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:01:23.435048 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:01:23.437525 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:01:23.454427 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 17 12:01:23.490947 dbus-daemon[1986]: [system] SELinux support is enabled Jan 17 12:01:23.505430 dbus-daemon[1986]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1917 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 12:01:23.518888 (ntainerd)[2012]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:01:23.527981 jq[2001]: true Jan 17 12:01:23.521984 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:01:23.538344 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:01:23.543202 dbus-daemon[1986]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 12:01:23.538419 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:01:23.540837 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:01:23.540874 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:01:23.564844 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 12:01:23.579496 extend-filesystems[1988]: Found loop4 Jan 17 12:01:23.579496 extend-filesystems[1988]: Found loop5 Jan 17 12:01:23.579496 extend-filesystems[1988]: Found loop6 Jan 17 12:01:23.579496 extend-filesystems[1988]: Found loop7 Jan 17 12:01:23.579496 extend-filesystems[1988]: Found nvme0n1 Jan 17 12:01:23.579496 extend-filesystems[1988]: Found nvme0n1p1 Jan 17 12:01:23.579496 extend-filesystems[1988]: Found nvme0n1p2 Jan 17 12:01:23.579496 extend-filesystems[1988]: Found nvme0n1p3 Jan 17 12:01:23.579496 extend-filesystems[1988]: Found usr Jan 17 12:01:23.579496 extend-filesystems[1988]: Found nvme0n1p4 Jan 17 12:01:23.579496 extend-filesystems[1988]: Found nvme0n1p6 Jan 17 12:01:23.579496 extend-filesystems[1988]: Found nvme0n1p7 Jan 17 12:01:23.579496 extend-filesystems[1988]: Found nvme0n1p9 Jan 17 12:01:23.579496 extend-filesystems[1988]: Checking size of /dev/nvme0n1p9 Jan 17 12:01:23.578527 ntpd[1992]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:43 UTC 2025 (1): Starting Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:43 UTC 2025 (1): Starting Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: ---------------------------------------------------- Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: ntp-4 is maintained by Network Time Foundation, Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: corporation. 
Support and training for ntp-4 are Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: available at https://www.nwtime.org/support Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: ---------------------------------------------------- Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: proto: precision = 0.096 usec (-23) Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: basedate set to 2025-01-05 Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: gps base set to 2025-01-05 (week 2348) Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: Listen normally on 3 eth0 172.31.18.94:123 Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: Listen normally on 4 lo [::1]:123 Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: bind(21) AF_INET6 fe80::459:9fff:fe8d:cc05%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: unable to create socket on eth0 (5) for fe80::459:9fff:fe8d:cc05%2#123 Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: failed to init interface for address fe80::459:9fff:fe8d:cc05%2 Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: Listening on routing socket on fd #21 for interface updates Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:01:23.686093 ntpd[1992]: 17 Jan 12:01:23 ntpd[1992]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:01:23.596891 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:01:23.699593 tar[2004]: linux-arm64/helm Jan 17 12:01:23.700008 update_engine[1999]: I20250117 12:01:23.664677 1999 main.cc:92] Flatcar Update Engine starting Jan 17 12:01:23.700008 update_engine[1999]: I20250117 12:01:23.691480 1999 update_check_scheduler.cc:74] Next update check in 3m18s Jan 17 12:01:23.578571 ntpd[1992]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 12:01:23.598409 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:01:23.702962 jq[2019]: true Jan 17 12:01:23.578591 ntpd[1992]: ---------------------------------------------------- Jan 17 12:01:23.688796 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:01:23.711629 extend-filesystems[1988]: Resized partition /dev/nvme0n1p9 Jan 17 12:01:23.578611 ntpd[1992]: ntp-4 is maintained by Network Time Foundation, Jan 17 12:01:23.694941 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:01:23.578629 ntpd[1992]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 12:01:23.578647 ntpd[1992]: corporation. 
Support and training for ntp-4 are Jan 17 12:01:23.578665 ntpd[1992]: available at https://www.nwtime.org/support Jan 17 12:01:23.578684 ntpd[1992]: ---------------------------------------------------- Jan 17 12:01:23.589110 ntpd[1992]: proto: precision = 0.096 usec (-23) Jan 17 12:01:23.605743 ntpd[1992]: basedate set to 2025-01-05 Jan 17 12:01:23.605785 ntpd[1992]: gps base set to 2025-01-05 (week 2348) Jan 17 12:01:23.617520 ntpd[1992]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 12:01:23.617618 ntpd[1992]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 12:01:23.620535 ntpd[1992]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 12:01:23.725511 extend-filesystems[2037]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:01:23.620607 ntpd[1992]: Listen normally on 3 eth0 172.31.18.94:123 Jan 17 12:01:23.620676 ntpd[1992]: Listen normally on 4 lo [::1]:123 Jan 17 12:01:23.620754 ntpd[1992]: bind(21) AF_INET6 fe80::459:9fff:fe8d:cc05%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 12:01:23.620793 ntpd[1992]: unable to create socket on eth0 (5) for fe80::459:9fff:fe8d:cc05%2#123 Jan 17 12:01:23.620820 ntpd[1992]: failed to init interface for address fe80::459:9fff:fe8d:cc05%2 Jan 17 12:01:23.620876 ntpd[1992]: Listening on routing socket on fd #21 for interface updates Jan 17 12:01:23.646369 ntpd[1992]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:01:23.646426 ntpd[1992]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:01:23.739366 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 17 12:01:23.782514 coreos-metadata[1985]: Jan 17 12:01:23.782 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 12:01:23.786476 coreos-metadata[1985]: Jan 17 12:01:23.786 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 17 12:01:23.787098 coreos-metadata[1985]: Jan 17 12:01:23.786 INFO Fetch successful Jan 17 12:01:23.787098 coreos-metadata[1985]: Jan 17 12:01:23.786 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 17 12:01:23.787791 coreos-metadata[1985]: Jan 17 12:01:23.787 INFO Fetch successful Jan 17 12:01:23.790615 coreos-metadata[1985]: Jan 17 12:01:23.787 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 17 12:01:23.791268 coreos-metadata[1985]: Jan 17 12:01:23.791 INFO Fetch successful Jan 17 12:01:23.791638 coreos-metadata[1985]: Jan 17 12:01:23.791 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 17 12:01:23.792195 coreos-metadata[1985]: Jan 17 12:01:23.791 INFO Fetch successful Jan 17 12:01:23.792608 coreos-metadata[1985]: Jan 17 12:01:23.792 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 17 12:01:23.793233 coreos-metadata[1985]: Jan 17 12:01:23.792 INFO Fetch failed with 404: resource not found Jan 17 12:01:23.793893 coreos-metadata[1985]: Jan 17 12:01:23.793 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 17 12:01:23.795349 coreos-metadata[1985]: Jan 17 12:01:23.794 INFO Fetch successful Jan 17 12:01:23.795635 coreos-metadata[1985]: Jan 17 12:01:23.795 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 17 12:01:23.797521 coreos-metadata[1985]: Jan 17 12:01:23.797 INFO Fetch successful Jan 17 12:01:23.797819 coreos-metadata[1985]: Jan 17 12:01:23.797 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: 
Attempt #1 Jan 17 12:01:23.799635 coreos-metadata[1985]: Jan 17 12:01:23.797 INFO Fetch successful Jan 17 12:01:23.799635 coreos-metadata[1985]: Jan 17 12:01:23.799 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 17 12:01:23.802174 coreos-metadata[1985]: Jan 17 12:01:23.801 INFO Fetch successful Jan 17 12:01:23.802174 coreos-metadata[1985]: Jan 17 12:01:23.802 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 17 12:01:23.805335 coreos-metadata[1985]: Jan 17 12:01:23.803 INFO Fetch successful Jan 17 12:01:23.868399 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 17 12:01:23.874623 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 17 12:01:23.908370 extend-filesystems[2037]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 17 12:01:23.908370 extend-filesystems[2037]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:01:23.908370 extend-filesystems[2037]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 17 12:01:23.927186 extend-filesystems[1988]: Resized filesystem in /dev/nvme0n1p9 Jan 17 12:01:23.934483 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:01:23.937456 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:01:23.960967 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:01:23.963910 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:01:23.969336 bash[2068]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:01:23.971384 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:01:23.999907 systemd[1]: Starting sshkeys.service... Jan 17 12:01:24.036118 dbus-daemon[1986]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 12:01:24.036965 dbus-daemon[1986]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=2023 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 12:01:24.038345 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 12:01:24.051414 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 12:01:24.051697 systemd-logind[1998]: Watching system buttons on /dev/input/event0 (Power Button) Jan 17 12:01:24.051730 systemd-logind[1998]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 17 12:01:24.054256 systemd-logind[1998]: New seat seat0. Jan 17 12:01:24.071983 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 12:01:24.077052 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 12:01:24.079441 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:01:24.164865 polkitd[2077]: Started polkitd version 121 Jan 17 12:01:24.203065 polkitd[2077]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 12:01:24.203200 polkitd[2077]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 12:01:24.211087 polkitd[2077]: Finished loading, compiling and executing 2 rules Jan 17 12:01:24.213011 dbus-daemon[1986]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 12:01:24.214676 systemd[1]: Started polkit.service - Authorization Manager. 
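The coreos-metadata fetches above follow the IMDSv2 pattern: a PUT for a session token, then a GET of each metadata path under the 2021-01-03 API date with the token header. A minimal sketch of that flow (the token TTL and the selection of paths are illustrative assumptions, not the agent's actual code):

import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds: int = 21600) -> str:
    # PUT http://169.254.169.254/latest/api/token, as in the log above.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    # GET http://169.254.169.254/2021-01-03/meta-data/<path> with the token header.
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

token = imds_token()
for path in ("instance-id", "instance-type", "local-ipv4", "hostname"):
    print(path, "=", imds_get(path, token))

Separately, the ext4 resize recorded just above grows /dev/nvme0n1p9 from 553472 to 1489915 4 KiB blocks, i.e. from roughly 2.1 GiB to roughly 5.7 GiB.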
Jan 17 12:01:24.221598 polkitd[2077]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 12:01:24.236681 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1753) Jan 17 12:01:24.289219 systemd-resolved[1918]: System hostname changed to 'ip-172-31-18-94'. Jan 17 12:01:24.289222 systemd-hostnamed[2023]: Hostname set to <ip-172-31-18-94> (transient) Jan 17 12:01:24.295634 coreos-metadata[2076]: Jan 17 12:01:24.292 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 12:01:24.295634 coreos-metadata[2076]: Jan 17 12:01:24.293 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 17 12:01:24.295634 coreos-metadata[2076]: Jan 17 12:01:24.295 INFO Fetch successful Jan 17 12:01:24.295634 coreos-metadata[2076]: Jan 17 12:01:24.295 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 17 12:01:24.296473 coreos-metadata[2076]: Jan 17 12:01:24.296 INFO Fetch successful Jan 17 12:01:24.302128 locksmithd[2034]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:01:24.303157 unknown[2076]: wrote ssh authorized keys file for user: core Jan 17 12:01:24.356817 update-ssh-keys[2133]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:01:24.360498 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 12:01:24.370330 systemd[1]: Finished sshkeys.service. Jan 17 12:01:24.424164 containerd[2012]: time="2025-01-17T12:01:24.423975237Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:01:24.539476 systemd-networkd[1917]: eth0: Gained IPv6LL Jan 17 12:01:24.572496 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:01:24.583918 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:01:24.594859 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 17 12:01:24.634806 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:01:24.641803 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:01:24.682818 containerd[2012]: time="2025-01-17T12:01:24.680531710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:01:24.690407 containerd[2012]: time="2025-01-17T12:01:24.689403022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:01:24.690407 containerd[2012]: time="2025-01-17T12:01:24.689470690Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:01:24.690407 containerd[2012]: time="2025-01-17T12:01:24.689507074Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:01:24.690407 containerd[2012]: time="2025-01-17T12:01:24.689807674Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:01:24.690407 containerd[2012]: time="2025-01-17T12:01:24.689846218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..."
type=io.containerd.snapshotter.v1 Jan 17 12:01:24.690407 containerd[2012]: time="2025-01-17T12:01:24.689960170Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:01:24.690407 containerd[2012]: time="2025-01-17T12:01:24.689988598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:01:24.690407 containerd[2012]: time="2025-01-17T12:01:24.690263974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:01:24.693541 containerd[2012]: time="2025-01-17T12:01:24.690351310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:01:24.693541 containerd[2012]: time="2025-01-17T12:01:24.690870502Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:01:24.693541 containerd[2012]: time="2025-01-17T12:01:24.690898234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:01:24.693541 containerd[2012]: time="2025-01-17T12:01:24.691104070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:01:24.697174 containerd[2012]: time="2025-01-17T12:01:24.697117414Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:01:24.701347 containerd[2012]: time="2025-01-17T12:01:24.697604290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:01:24.701347 containerd[2012]: time="2025-01-17T12:01:24.697649674Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:01:24.701347 containerd[2012]: time="2025-01-17T12:01:24.697850338Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:01:24.701347 containerd[2012]: time="2025-01-17T12:01:24.697952986Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:01:24.722770 containerd[2012]: time="2025-01-17T12:01:24.721930378Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:01:24.722770 containerd[2012]: time="2025-01-17T12:01:24.722185222Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:01:24.722770 containerd[2012]: time="2025-01-17T12:01:24.722223142Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:01:24.722770 containerd[2012]: time="2025-01-17T12:01:24.722336470Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:01:24.722770 containerd[2012]: time="2025-01-17T12:01:24.722377018Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 17 12:01:24.729610 containerd[2012]: time="2025-01-17T12:01:24.723194446Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:01:24.736567 containerd[2012]: time="2025-01-17T12:01:24.736474846Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:01:24.738199 containerd[2012]: time="2025-01-17T12:01:24.737515042Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:01:24.738199 containerd[2012]: time="2025-01-17T12:01:24.737592346Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:01:24.738199 containerd[2012]: time="2025-01-17T12:01:24.737631058Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:01:24.738199 containerd[2012]: time="2025-01-17T12:01:24.737706574Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:01:24.738199 containerd[2012]: time="2025-01-17T12:01:24.737767258Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:01:24.738199 containerd[2012]: time="2025-01-17T12:01:24.737805010Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:01:24.738199 containerd[2012]: time="2025-01-17T12:01:24.737862094Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:01:24.738199 containerd[2012]: time="2025-01-17T12:01:24.737898058Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:01:24.738199 containerd[2012]: time="2025-01-17T12:01:24.737952358Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:01:24.738199 containerd[2012]: time="2025-01-17T12:01:24.737998006Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:01:24.738199 containerd[2012]: time="2025-01-17T12:01:24.738056518Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:01:24.738199 containerd[2012]: time="2025-01-17T12:01:24.738123562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:01:24.738199 containerd[2012]: time="2025-01-17T12:01:24.738160342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:01:24.742822 containerd[2012]: time="2025-01-17T12:01:24.738806278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:01:24.742822 containerd[2012]: time="2025-01-17T12:01:24.742126738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:01:24.742822 containerd[2012]: time="2025-01-17T12:01:24.742203358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:01:24.742822 containerd[2012]: time="2025-01-17T12:01:24.742248586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jan 17 12:01:24.744683 containerd[2012]: time="2025-01-17T12:01:24.744612022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:01:24.744862 containerd[2012]: time="2025-01-17T12:01:24.744833410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:01:24.745007 containerd[2012]: time="2025-01-17T12:01:24.744978430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:01:24.745236 containerd[2012]: time="2025-01-17T12:01:24.745207990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:01:24.745385 containerd[2012]: time="2025-01-17T12:01:24.745357594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:01:24.749349 containerd[2012]: time="2025-01-17T12:01:24.745618690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:01:24.749349 containerd[2012]: time="2025-01-17T12:01:24.745666978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:01:24.749349 containerd[2012]: time="2025-01-17T12:01:24.745733038Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:01:24.749349 containerd[2012]: time="2025-01-17T12:01:24.745783954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:01:24.749349 containerd[2012]: time="2025-01-17T12:01:24.745815082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:01:24.749349 containerd[2012]: time="2025-01-17T12:01:24.745871386Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:01:24.749349 containerd[2012]: time="2025-01-17T12:01:24.747750766Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:01:24.749349 containerd[2012]: time="2025-01-17T12:01:24.747817462Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:01:24.749349 containerd[2012]: time="2025-01-17T12:01:24.747848242Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:01:24.749349 containerd[2012]: time="2025-01-17T12:01:24.747878698Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:01:24.749349 containerd[2012]: time="2025-01-17T12:01:24.747903826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:01:24.749349 containerd[2012]: time="2025-01-17T12:01:24.747936670Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:01:24.749349 containerd[2012]: time="2025-01-17T12:01:24.747961390Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:01:24.749349 containerd[2012]: time="2025-01-17T12:01:24.747989194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 12:01:24.750075 containerd[2012]: time="2025-01-17T12:01:24.748562866Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:01:24.750075 containerd[2012]: time="2025-01-17T12:01:24.748674142Z" level=info msg="Connect containerd service" Jan 17 12:01:24.760164 containerd[2012]: time="2025-01-17T12:01:24.759484834Z" level=info msg="using legacy CRI server" Jan 17 12:01:24.760164 containerd[2012]: time="2025-01-17T12:01:24.759542566Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:01:24.760164 containerd[2012]: time="2025-01-17T12:01:24.759712090Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:01:24.764591 containerd[2012]: time="2025-01-17T12:01:24.760883782Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:01:24.764591 
containerd[2012]: time="2025-01-17T12:01:24.761756854Z" level=info msg="Start subscribing containerd event" Jan 17 12:01:24.764591 containerd[2012]: time="2025-01-17T12:01:24.761821750Z" level=info msg="Start recovering state" Jan 17 12:01:24.764591 containerd[2012]: time="2025-01-17T12:01:24.761932714Z" level=info msg="Start event monitor" Jan 17 12:01:24.764591 containerd[2012]: time="2025-01-17T12:01:24.761956042Z" level=info msg="Start snapshots syncer" Jan 17 12:01:24.764591 containerd[2012]: time="2025-01-17T12:01:24.761977150Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:01:24.764591 containerd[2012]: time="2025-01-17T12:01:24.762002050Z" level=info msg="Start streaming server" Jan 17 12:01:24.769364 containerd[2012]: time="2025-01-17T12:01:24.765587158Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:01:24.769364 containerd[2012]: time="2025-01-17T12:01:24.765717682Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:01:24.769364 containerd[2012]: time="2025-01-17T12:01:24.767743990Z" level=info msg="containerd successfully booted in 0.346817s" Jan 17 12:01:24.765939 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:01:24.784334 sshd_keygen[2006]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:01:24.825756 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:01:24.840414 amazon-ssm-agent[2185]: Initializing new seelog logger Jan 17 12:01:24.841159 amazon-ssm-agent[2185]: New Seelog Logger Creation Complete Jan 17 12:01:24.841411 amazon-ssm-agent[2185]: 2025/01/17 12:01:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:24.841513 amazon-ssm-agent[2185]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:24.842315 amazon-ssm-agent[2185]: 2025/01/17 12:01:24 processing appconfig overrides Jan 17 12:01:24.843238 amazon-ssm-agent[2185]: 2025/01/17 12:01:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:24.843238 amazon-ssm-agent[2185]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:24.843238 amazon-ssm-agent[2185]: 2025/01/17 12:01:24 processing appconfig overrides Jan 17 12:01:24.844088 amazon-ssm-agent[2185]: 2025-01-17 12:01:24 INFO Proxy environment variables: Jan 17 12:01:24.844712 amazon-ssm-agent[2185]: 2025/01/17 12:01:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:24.844712 amazon-ssm-agent[2185]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:24.844921 amazon-ssm-agent[2185]: 2025/01/17 12:01:24 processing appconfig overrides Jan 17 12:01:24.851272 amazon-ssm-agent[2185]: 2025/01/17 12:01:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:24.852401 amazon-ssm-agent[2185]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:24.852634 amazon-ssm-agent[2185]: 2025/01/17 12:01:24 processing appconfig overrides Jan 17 12:01:24.897188 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:01:24.912960 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:01:24.917646 systemd[1]: Started sshd@0-172.31.18.94:22-139.178.68.195:51960.service - OpenSSH per-connection server daemon (139.178.68.195:51960). 
Jan 17 12:01:24.949386 amazon-ssm-agent[2185]: 2025-01-17 12:01:24 INFO https_proxy: Jan 17 12:01:24.974180 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:01:24.974663 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:01:24.988889 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:01:25.046896 amazon-ssm-agent[2185]: 2025-01-17 12:01:24 INFO http_proxy: Jan 17 12:01:25.060569 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:01:25.072918 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:01:25.089568 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:01:25.092053 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:01:25.146681 amazon-ssm-agent[2185]: 2025-01-17 12:01:24 INFO no_proxy: Jan 17 12:01:25.185072 sshd[2217]: Accepted publickey for core from 139.178.68.195 port 51960 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:25.189722 sshd[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:25.219836 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:01:25.232197 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:01:25.244992 amazon-ssm-agent[2185]: 2025-01-17 12:01:24 INFO Checking if agent identity type OnPrem can be assumed Jan 17 12:01:25.247345 systemd-logind[1998]: New session 1 of user core. Jan 17 12:01:25.284912 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:01:25.304009 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:01:25.324650 (systemd)[2230]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:01:25.345633 amazon-ssm-agent[2185]: 2025-01-17 12:01:24 INFO Checking if agent identity type EC2 can be assumed Jan 17 12:01:25.445377 amazon-ssm-agent[2185]: 2025-01-17 12:01:25 INFO Agent will take identity from EC2 Jan 17 12:01:25.544381 amazon-ssm-agent[2185]: 2025-01-17 12:01:25 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:01:25.594344 amazon-ssm-agent[2185]: 2025-01-17 12:01:25 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:01:25.594344 amazon-ssm-agent[2185]: 2025-01-17 12:01:25 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:01:25.594344 amazon-ssm-agent[2185]: 2025-01-17 12:01:25 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 17 12:01:25.594344 amazon-ssm-agent[2185]: 2025-01-17 12:01:25 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 17 12:01:25.594344 amazon-ssm-agent[2185]: 2025-01-17 12:01:25 INFO [amazon-ssm-agent] Starting Core Agent Jan 17 12:01:25.594344 amazon-ssm-agent[2185]: 2025-01-17 12:01:25 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 17 12:01:25.594344 amazon-ssm-agent[2185]: 2025-01-17 12:01:25 INFO [Registrar] Starting registrar module Jan 17 12:01:25.594344 amazon-ssm-agent[2185]: 2025-01-17 12:01:25 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 17 12:01:25.594344 amazon-ssm-agent[2185]: 2025-01-17 12:01:25 INFO [EC2Identity] EC2 registration was successful. 
Jan 17 12:01:25.594344 amazon-ssm-agent[2185]: 2025-01-17 12:01:25 INFO [CredentialRefresher] credentialRefresher has started Jan 17 12:01:25.594344 amazon-ssm-agent[2185]: 2025-01-17 12:01:25 INFO [CredentialRefresher] Starting credentials refresher loop Jan 17 12:01:25.594344 amazon-ssm-agent[2185]: 2025-01-17 12:01:25 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 17 12:01:25.644337 amazon-ssm-agent[2185]: 2025-01-17 12:01:25 INFO [CredentialRefresher] Next credential rotation will be in 31.0249896658 minutes Jan 17 12:01:25.653061 systemd[2230]: Queued start job for default target default.target. Jan 17 12:01:25.661411 systemd[2230]: Created slice app.slice - User Application Slice. Jan 17 12:01:25.661475 systemd[2230]: Reached target paths.target - Paths. Jan 17 12:01:25.661508 systemd[2230]: Reached target timers.target - Timers. Jan 17 12:01:25.666791 systemd[2230]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:01:25.703743 systemd[2230]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:01:25.703983 systemd[2230]: Reached target sockets.target - Sockets. Jan 17 12:01:25.704017 systemd[2230]: Reached target basic.target - Basic System. Jan 17 12:01:25.704117 systemd[2230]: Reached target default.target - Main User Target. Jan 17 12:01:25.704433 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:01:25.706807 systemd[2230]: Startup finished in 359ms. Jan 17 12:01:25.714742 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:01:25.735494 tar[2004]: linux-arm64/LICENSE Jan 17 12:01:25.737653 tar[2004]: linux-arm64/README.md Jan 17 12:01:25.754001 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:01:25.884801 systemd[1]: Started sshd@1-172.31.18.94:22-139.178.68.195:35018.service - OpenSSH per-connection server daemon (139.178.68.195:35018). Jan 17 12:01:26.085330 sshd[2244]: Accepted publickey for core from 139.178.68.195 port 35018 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:26.087242 sshd[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:26.094887 systemd-logind[1998]: New session 2 of user core. Jan 17 12:01:26.106587 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:01:26.240605 sshd[2244]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:26.248174 systemd[1]: sshd@1-172.31.18.94:22-139.178.68.195:35018.service: Deactivated successfully. Jan 17 12:01:26.248372 systemd-logind[1998]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:01:26.252225 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:01:26.256836 systemd-logind[1998]: Removed session 2. Jan 17 12:01:26.277890 systemd[1]: Started sshd@2-172.31.18.94:22-139.178.68.195:35028.service - OpenSSH per-connection server daemon (139.178.68.195:35028). Jan 17 12:01:26.448074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:01:26.451822 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:01:26.455545 systemd[1]: Startup finished in 1.153s (kernel) + 7.818s (initrd) + 7.886s (userspace) = 16.859s. 
Jan 17 12:01:26.463001 (kubelet)[2258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:01:26.464723 sshd[2251]: Accepted publickey for core from 139.178.68.195 port 35028 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:26.468750 sshd[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:26.485753 systemd-logind[1998]: New session 3 of user core. Jan 17 12:01:26.498598 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:01:26.579254 ntpd[1992]: Listen normally on 6 eth0 [fe80::459:9fff:fe8d:cc05%2]:123 Jan 17 12:01:26.580282 ntpd[1992]: 17 Jan 12:01:26 ntpd[1992]: Listen normally on 6 eth0 [fe80::459:9fff:fe8d:cc05%2]:123 Jan 17 12:01:26.630334 amazon-ssm-agent[2185]: 2025-01-17 12:01:26 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 17 12:01:26.631174 sshd[2251]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:26.639931 systemd[1]: sshd@2-172.31.18.94:22-139.178.68.195:35028.service: Deactivated successfully. Jan 17 12:01:26.645498 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:01:26.650402 systemd-logind[1998]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:01:26.656224 systemd-logind[1998]: Removed session 3. Jan 17 12:01:26.731668 amazon-ssm-agent[2185]: 2025-01-17 12:01:26 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2266) started Jan 17 12:01:26.833569 amazon-ssm-agent[2185]: 2025-01-17 12:01:26 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 17 12:01:27.453670 kubelet[2258]: E0117 12:01:27.453576 2258 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:01:27.458096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:01:27.458891 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:01:27.459421 systemd[1]: kubelet.service: Consumed 1.259s CPU time. Jan 17 12:01:30.097123 systemd-resolved[1918]: Clock change detected. Flushing caches. Jan 17 12:01:36.184080 systemd[1]: Started sshd@3-172.31.18.94:22-139.178.68.195:55066.service - OpenSSH per-connection server daemon (139.178.68.195:55066). Jan 17 12:01:36.365029 sshd[2287]: Accepted publickey for core from 139.178.68.195 port 55066 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:36.367697 sshd[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:36.377351 systemd-logind[1998]: New session 4 of user core. Jan 17 12:01:36.382406 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:01:36.507490 sshd[2287]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:36.513730 systemd[1]: sshd@3-172.31.18.94:22-139.178.68.195:55066.service: Deactivated successfully. Jan 17 12:01:36.517012 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:01:36.519426 systemd-logind[1998]: Session 4 logged out. Waiting for processes to exit. 
Jan 17 12:01:36.522063 systemd-logind[1998]: Removed session 4. Jan 17 12:01:36.542638 systemd[1]: Started sshd@4-172.31.18.94:22-139.178.68.195:55074.service - OpenSSH per-connection server daemon (139.178.68.195:55074). Jan 17 12:01:36.717718 sshd[2294]: Accepted publickey for core from 139.178.68.195 port 55074 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:36.720328 sshd[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:36.727704 systemd-logind[1998]: New session 5 of user core. Jan 17 12:01:36.737366 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:01:36.854856 sshd[2294]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:36.861291 systemd[1]: sshd@4-172.31.18.94:22-139.178.68.195:55074.service: Deactivated successfully. Jan 17 12:01:36.866799 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:01:36.868188 systemd-logind[1998]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:01:36.869818 systemd-logind[1998]: Removed session 5. Jan 17 12:01:36.892632 systemd[1]: Started sshd@5-172.31.18.94:22-139.178.68.195:55088.service - OpenSSH per-connection server daemon (139.178.68.195:55088). Jan 17 12:01:37.013653 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:01:37.022530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:01:37.071891 sshd[2301]: Accepted publickey for core from 139.178.68.195 port 55088 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:37.075239 sshd[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:37.088039 systemd-logind[1998]: New session 6 of user core. Jan 17 12:01:37.098473 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:01:37.229425 sshd[2301]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:37.236552 systemd[1]: sshd@5-172.31.18.94:22-139.178.68.195:55088.service: Deactivated successfully. Jan 17 12:01:37.243194 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:01:37.250142 systemd-logind[1998]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:01:37.270764 systemd[1]: Started sshd@6-172.31.18.94:22-139.178.68.195:55102.service - OpenSSH per-connection server daemon (139.178.68.195:55102). Jan 17 12:01:37.277640 systemd-logind[1998]: Removed session 6. Jan 17 12:01:37.350397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:01:37.361956 (kubelet)[2318]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:01:37.443283 kubelet[2318]: E0117 12:01:37.443191 2318 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:01:37.451590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:01:37.451935 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 12:01:37.458929 sshd[2311]: Accepted publickey for core from 139.178.68.195 port 55102 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:37.461672 sshd[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:37.469983 systemd-logind[1998]: New session 7 of user core. Jan 17 12:01:37.477393 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:01:37.593802 sudo[2326]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:01:37.594516 sudo[2326]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:01:37.613161 sudo[2326]: pam_unix(sudo:session): session closed for user root Jan 17 12:01:37.636845 sshd[2311]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:37.644358 systemd[1]: sshd@6-172.31.18.94:22-139.178.68.195:55102.service: Deactivated successfully. Jan 17 12:01:37.648865 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:01:37.650614 systemd-logind[1998]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:01:37.652343 systemd-logind[1998]: Removed session 7. Jan 17 12:01:37.689870 systemd[1]: Started sshd@7-172.31.18.94:22-139.178.68.195:55118.service - OpenSSH per-connection server daemon (139.178.68.195:55118). Jan 17 12:01:37.856926 sshd[2331]: Accepted publickey for core from 139.178.68.195 port 55118 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:37.860095 sshd[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:37.869448 systemd-logind[1998]: New session 8 of user core. Jan 17 12:01:37.873368 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:01:37.976691 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:01:37.977879 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:01:37.984348 sudo[2335]: pam_unix(sudo:session): session closed for user root Jan 17 12:01:37.994788 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:01:37.995512 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:01:38.025595 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:01:38.029815 auditctl[2338]: No rules Jan 17 12:01:38.031391 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:01:38.031824 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:01:38.035588 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:01:38.091321 augenrules[2356]: No rules Jan 17 12:01:38.092871 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:01:38.095644 sudo[2334]: pam_unix(sudo:session): session closed for user root Jan 17 12:01:38.119435 sshd[2331]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:38.125581 systemd[1]: sshd@7-172.31.18.94:22-139.178.68.195:55118.service: Deactivated successfully. Jan 17 12:01:38.126045 systemd-logind[1998]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:01:38.129160 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:01:38.133595 systemd-logind[1998]: Removed session 8. 
Jan 17 12:01:38.159602 systemd[1]: Started sshd@8-172.31.18.94:22-139.178.68.195:55132.service - OpenSSH per-connection server daemon (139.178.68.195:55132). Jan 17 12:01:38.329475 sshd[2364]: Accepted publickey for core from 139.178.68.195 port 55132 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:38.332222 sshd[2364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:38.340439 systemd-logind[1998]: New session 9 of user core. Jan 17 12:01:38.349444 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:01:38.453175 sudo[2367]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:01:38.453826 sudo[2367]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:01:38.891584 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:01:38.892697 (dockerd)[2382]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:01:39.251509 dockerd[2382]: time="2025-01-17T12:01:39.251330217Z" level=info msg="Starting up" Jan 17 12:01:39.392316 dockerd[2382]: time="2025-01-17T12:01:39.391871098Z" level=info msg="Loading containers: start." Jan 17 12:01:39.549372 kernel: Initializing XFRM netlink socket Jan 17 12:01:39.585011 (udev-worker)[2404]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:01:39.665269 systemd-networkd[1917]: docker0: Link UP Jan 17 12:01:39.692516 dockerd[2382]: time="2025-01-17T12:01:39.692369051Z" level=info msg="Loading containers: done." Jan 17 12:01:39.714875 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2608439749-merged.mount: Deactivated successfully. Jan 17 12:01:39.721608 dockerd[2382]: time="2025-01-17T12:01:39.721529231Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:01:39.721822 dockerd[2382]: time="2025-01-17T12:01:39.721709351Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:01:39.721956 dockerd[2382]: time="2025-01-17T12:01:39.721915187Z" level=info msg="Daemon has completed initialization" Jan 17 12:01:39.784589 dockerd[2382]: time="2025-01-17T12:01:39.784414008Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:01:39.785712 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:01:40.881835 containerd[2012]: time="2025-01-17T12:01:40.881642569Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 17 12:01:41.517678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2720466507.mount: Deactivated successfully. 
Jan 17 12:01:43.061152 containerd[2012]: time="2025-01-17T12:01:43.060197112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:43.063740 containerd[2012]: time="2025-01-17T12:01:43.063682104Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618070" Jan 17 12:01:43.066317 containerd[2012]: time="2025-01-17T12:01:43.066254784Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:43.071483 containerd[2012]: time="2025-01-17T12:01:43.071430504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:43.074144 containerd[2012]: time="2025-01-17T12:01:43.073806384Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 2.192086271s" Jan 17 12:01:43.074144 containerd[2012]: time="2025-01-17T12:01:43.073865388Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\"" Jan 17 12:01:43.075025 containerd[2012]: time="2025-01-17T12:01:43.074981328Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 17 12:01:45.112164 containerd[2012]: time="2025-01-17T12:01:45.111784382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:45.114013 containerd[2012]: time="2025-01-17T12:01:45.113904926Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469467" Jan 17 12:01:45.115754 containerd[2012]: time="2025-01-17T12:01:45.115666394Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:45.121566 containerd[2012]: time="2025-01-17T12:01:45.121461134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:45.124184 containerd[2012]: time="2025-01-17T12:01:45.123963194Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 2.048757094s" Jan 17 12:01:45.124184 containerd[2012]: time="2025-01-17T12:01:45.124021922Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\"" Jan 17 12:01:45.125056 
containerd[2012]: time="2025-01-17T12:01:45.125016434Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 17 12:01:46.379859 containerd[2012]: time="2025-01-17T12:01:46.379778080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:46.381575 containerd[2012]: time="2025-01-17T12:01:46.381489916Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024217" Jan 17 12:01:46.382472 containerd[2012]: time="2025-01-17T12:01:46.382416160Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:46.388925 containerd[2012]: time="2025-01-17T12:01:46.388842304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:46.391392 containerd[2012]: time="2025-01-17T12:01:46.391217932Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 1.265956302s" Jan 17 12:01:46.391392 containerd[2012]: time="2025-01-17T12:01:46.391272856Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\"" Jan 17 12:01:46.393211 containerd[2012]: time="2025-01-17T12:01:46.393163252Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 17 12:01:47.514022 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:01:47.523590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:01:47.811436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056723.mount: Deactivated successfully. Jan 17 12:01:47.875421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:01:47.886656 (kubelet)[2599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:01:47.972245 kubelet[2599]: E0117 12:01:47.971532 2599 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:01:47.974617 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:01:47.974920 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 12:01:48.449012 containerd[2012]: time="2025-01-17T12:01:48.448943131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:48.450734 containerd[2012]: time="2025-01-17T12:01:48.450664051Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772117" Jan 17 12:01:48.451922 containerd[2012]: time="2025-01-17T12:01:48.451848091Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:48.455596 containerd[2012]: time="2025-01-17T12:01:48.455528323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:48.457558 containerd[2012]: time="2025-01-17T12:01:48.457347967Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 2.064123311s" Jan 17 12:01:48.457558 containerd[2012]: time="2025-01-17T12:01:48.457406875Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\"" Jan 17 12:01:48.458508 containerd[2012]: time="2025-01-17T12:01:48.458305615Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:01:48.991144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2418587158.mount: Deactivated successfully. 
Jan 17 12:01:50.080585 containerd[2012]: time="2025-01-17T12:01:50.080495023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:50.082785 containerd[2012]: time="2025-01-17T12:01:50.082724863Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 17 12:01:50.083943 containerd[2012]: time="2025-01-17T12:01:50.083869771Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:50.089662 containerd[2012]: time="2025-01-17T12:01:50.089561707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:50.092150 containerd[2012]: time="2025-01-17T12:01:50.091941763Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.633579928s" Jan 17 12:01:50.092150 containerd[2012]: time="2025-01-17T12:01:50.091998091Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 17 12:01:50.093688 containerd[2012]: time="2025-01-17T12:01:50.093555319Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 12:01:50.631812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1029261329.mount: Deactivated successfully. 
Jan 17 12:01:50.639744 containerd[2012]: time="2025-01-17T12:01:50.639487558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:50.640647 containerd[2012]: time="2025-01-17T12:01:50.640587298Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 17 12:01:50.641762 containerd[2012]: time="2025-01-17T12:01:50.641670358Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:50.647254 containerd[2012]: time="2025-01-17T12:01:50.647148394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:50.649187 containerd[2012]: time="2025-01-17T12:01:50.648831766Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 554.970027ms" Jan 17 12:01:50.649187 containerd[2012]: time="2025-01-17T12:01:50.648888646Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 17 12:01:50.651074 containerd[2012]: time="2025-01-17T12:01:50.650790622Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 17 12:01:51.184447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1975509915.mount: Deactivated successfully. Jan 17 12:01:53.840095 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 17 12:01:53.861894 containerd[2012]: time="2025-01-17T12:01:53.861833582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:53.864795 containerd[2012]: time="2025-01-17T12:01:53.864738446Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Jan 17 12:01:53.866295 containerd[2012]: time="2025-01-17T12:01:53.866221622Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:53.873067 containerd[2012]: time="2025-01-17T12:01:53.872985842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:53.875562 containerd[2012]: time="2025-01-17T12:01:53.875513078Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.224668864s" Jan 17 12:01:53.878397 containerd[2012]: time="2025-01-17T12:01:53.875680442Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 17 12:01:58.013595 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 12:01:58.022561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:01:58.342546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:01:58.346020 (kubelet)[2741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:01:58.421012 kubelet[2741]: E0117 12:01:58.420926 2741 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:01:58.424853 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:01:58.425984 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:02:00.451649 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:00.460647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:00.522026 systemd[1]: Reloading requested from client PID 2755 ('systemctl') (unit session-9.scope)... Jan 17 12:02:00.522069 systemd[1]: Reloading... Jan 17 12:02:00.780236 zram_generator::config[2801]: No configuration found. Jan 17 12:02:00.985330 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:02:01.153894 systemd[1]: Reloading finished in 631 ms. Jan 17 12:02:01.252854 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:02:01.253054 systemd[1]: kubelet.service: Failed with result 'signal'. 
Jan 17 12:02:01.254241 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:01.261688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:01.547384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:01.569017 (kubelet)[2858]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:02:01.639050 kubelet[2858]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:02:01.640427 kubelet[2858]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:02:01.640839 kubelet[2858]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:02:01.642173 kubelet[2858]: I0117 12:02:01.641095 2858 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:02:02.886188 kubelet[2858]: I0117 12:02:02.886082 2858 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 17 12:02:02.886188 kubelet[2858]: I0117 12:02:02.886168 2858 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:02:02.886811 kubelet[2858]: I0117 12:02:02.886575 2858 server.go:929] "Client rotation is on, will bootstrap in background" Jan 17 12:02:02.923597 kubelet[2858]: E0117 12:02:02.923510 2858 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.94:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.94:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:02:02.930135 kubelet[2858]: I0117 12:02:02.929509 2858 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:02:02.946053 kubelet[2858]: E0117 12:02:02.946004 2858 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 12:02:02.946313 kubelet[2858]: I0117 12:02:02.946289 2858 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 12:02:02.953014 kubelet[2858]: I0117 12:02:02.952976 2858 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:02:02.954131 kubelet[2858]: I0117 12:02:02.953418 2858 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 17 12:02:02.954131 kubelet[2858]: I0117 12:02:02.953695 2858 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:02:02.954131 kubelet[2858]: I0117 12:02:02.953739 2858 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-94","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 12:02:02.954131 kubelet[2858]: I0117 12:02:02.954024 2858 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:02:02.954457 kubelet[2858]: I0117 12:02:02.954043 2858 container_manager_linux.go:300] "Creating device plugin manager" Jan 17 12:02:02.954672 kubelet[2858]: I0117 12:02:02.954650 2858 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:02:02.958474 kubelet[2858]: I0117 12:02:02.958441 2858 kubelet.go:408] "Attempting to sync node with API server" Jan 17 12:02:02.958644 kubelet[2858]: I0117 12:02:02.958624 2858 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:02:02.958785 kubelet[2858]: I0117 12:02:02.958767 2858 kubelet.go:314] "Adding apiserver pod source" Jan 17 12:02:02.958894 kubelet[2858]: I0117 12:02:02.958875 2858 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:02:02.966041 kubelet[2858]: W0117 12:02:02.965946 2858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-94&limit=500&resourceVersion=0": dial tcp 172.31.18.94:6443: connect: connection refused Jan 17 12:02:02.966347 kubelet[2858]: E0117 12:02:02.966057 2858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.18.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-94&limit=500&resourceVersion=0\": dial tcp 172.31.18.94:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:02:02.967183 kubelet[2858]: W0117 12:02:02.967078 2858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.94:6443: connect: connection refused Jan 17 12:02:02.968232 kubelet[2858]: E0117 12:02:02.967196 2858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.94:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:02:02.968232 kubelet[2858]: I0117 12:02:02.967345 2858 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:02:02.971549 kubelet[2858]: I0117 12:02:02.971488 2858 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:02:02.972794 kubelet[2858]: W0117 12:02:02.972753 2858 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 12:02:02.974384 kubelet[2858]: I0117 12:02:02.974307 2858 server.go:1269] "Started kubelet" Jan 17 12:02:02.975907 kubelet[2858]: I0117 12:02:02.975834 2858 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:02:02.978419 kubelet[2858]: I0117 12:02:02.978362 2858 server.go:460] "Adding debug handlers to kubelet server" Jan 17 12:02:02.981951 kubelet[2858]: I0117 12:02:02.981231 2858 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:02:02.981951 kubelet[2858]: I0117 12:02:02.981692 2858 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:02:02.983636 kubelet[2858]: I0117 12:02:02.983579 2858 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:02:02.985430 kubelet[2858]: E0117 12:02:02.982696 2858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.94:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.94:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-94.181b792ebfc4f173 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-94,UID:ip-172-31-18-94,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-94,},FirstTimestamp:2025-01-17 12:02:02.974269811 +0000 UTC m=+1.398319940,LastTimestamp:2025-01-17 12:02:02.974269811 +0000 UTC m=+1.398319940,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-94,}" Jan 17 12:02:02.988575 kubelet[2858]: I0117 12:02:02.987650 2858 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 12:02:02.991962 kubelet[2858]: I0117 12:02:02.991909 2858 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 
17 12:02:02.993246 kubelet[2858]: E0117 12:02:02.992480 2858 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-94\" not found" Jan 17 12:02:02.993246 kubelet[2858]: I0117 12:02:02.992976 2858 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 12:02:02.993246 kubelet[2858]: I0117 12:02:02.993075 2858 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:02:02.994904 kubelet[2858]: W0117 12:02:02.994830 2858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.94:6443: connect: connection refused Jan 17 12:02:02.995178 kubelet[2858]: E0117 12:02:02.995145 2858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.94:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:02:02.995451 kubelet[2858]: E0117 12:02:02.995393 2858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-94?timeout=10s\": dial tcp 172.31.18.94:6443: connect: connection refused" interval="200ms" Jan 17 12:02:02.995907 kubelet[2858]: I0117 12:02:02.995875 2858 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:02:02.996230 kubelet[2858]: I0117 12:02:02.996198 2858 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:02:02.996937 kubelet[2858]: E0117 12:02:02.996903 2858 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:02:02.999291 kubelet[2858]: I0117 12:02:02.999256 2858 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:02:03.025657 kubelet[2858]: I0117 12:02:03.025564 2858 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:02:03.032704 kubelet[2858]: I0117 12:02:03.032644 2858 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:02:03.032704 kubelet[2858]: I0117 12:02:03.032696 2858 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:02:03.032893 kubelet[2858]: I0117 12:02:03.032730 2858 kubelet.go:2321] "Starting kubelet main sync loop" Jan 17 12:02:03.032893 kubelet[2858]: E0117 12:02:03.032802 2858 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:02:03.044011 kubelet[2858]: W0117 12:02:03.043587 2858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.94:6443: connect: connection refused Jan 17 12:02:03.044011 kubelet[2858]: E0117 12:02:03.043686 2858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.94:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:02:03.045517 kubelet[2858]: I0117 12:02:03.045177 2858 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:02:03.045517 kubelet[2858]: I0117 12:02:03.045204 2858 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:02:03.045517 kubelet[2858]: I0117 12:02:03.045234 2858 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:02:03.048755 kubelet[2858]: I0117 12:02:03.048704 2858 policy_none.go:49] "None policy: Start" Jan 17 12:02:03.050801 kubelet[2858]: I0117 12:02:03.050327 2858 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:02:03.050801 kubelet[2858]: I0117 12:02:03.050370 2858 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:02:03.061879 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:02:03.079232 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:02:03.086328 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 12:02:03.093659 kubelet[2858]: E0117 12:02:03.093595 2858 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-94\" not found" Jan 17 12:02:03.101044 kubelet[2858]: I0117 12:02:03.100729 2858 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:02:03.101044 kubelet[2858]: I0117 12:02:03.101027 2858 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 12:02:03.101350 kubelet[2858]: I0117 12:02:03.101047 2858 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:02:03.102822 kubelet[2858]: I0117 12:02:03.101745 2858 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:02:03.105244 kubelet[2858]: E0117 12:02:03.105156 2858 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-94\" not found" Jan 17 12:02:03.151598 systemd[1]: Created slice kubepods-burstable-pod267d53862f2bcc5e04792d90fa6139c9.slice - libcontainer container kubepods-burstable-pod267d53862f2bcc5e04792d90fa6139c9.slice. 
Jan 17 12:02:03.171948 systemd[1]: Created slice kubepods-burstable-podf7c707d8de481cfeaf9f5555f4cc5ba2.slice - libcontainer container kubepods-burstable-podf7c707d8de481cfeaf9f5555f4cc5ba2.slice. Jan 17 12:02:03.182323 systemd[1]: Created slice kubepods-burstable-pode7838a7bde62acdbc823f2ad321b8508.slice - libcontainer container kubepods-burstable-pode7838a7bde62acdbc823f2ad321b8508.slice. Jan 17 12:02:03.194503 kubelet[2858]: I0117 12:02:03.194449 2858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/267d53862f2bcc5e04792d90fa6139c9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-94\" (UID: \"267d53862f2bcc5e04792d90fa6139c9\") " pod="kube-system/kube-apiserver-ip-172-31-18-94" Jan 17 12:02:03.194885 kubelet[2858]: I0117 12:02:03.194512 2858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f7c707d8de481cfeaf9f5555f4cc5ba2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-94\" (UID: \"f7c707d8de481cfeaf9f5555f4cc5ba2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-94" Jan 17 12:02:03.194885 kubelet[2858]: I0117 12:02:03.194554 2858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7c707d8de481cfeaf9f5555f4cc5ba2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-94\" (UID: \"f7c707d8de481cfeaf9f5555f4cc5ba2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-94" Jan 17 12:02:03.194885 kubelet[2858]: I0117 12:02:03.194591 2858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7838a7bde62acdbc823f2ad321b8508-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-94\" (UID: \"e7838a7bde62acdbc823f2ad321b8508\") " pod="kube-system/kube-scheduler-ip-172-31-18-94" Jan 17 12:02:03.194885 kubelet[2858]: I0117 12:02:03.194628 2858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/267d53862f2bcc5e04792d90fa6139c9-ca-certs\") pod \"kube-apiserver-ip-172-31-18-94\" (UID: \"267d53862f2bcc5e04792d90fa6139c9\") " pod="kube-system/kube-apiserver-ip-172-31-18-94" Jan 17 12:02:03.194885 kubelet[2858]: I0117 12:02:03.194664 2858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/267d53862f2bcc5e04792d90fa6139c9-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-94\" (UID: \"267d53862f2bcc5e04792d90fa6139c9\") " pod="kube-system/kube-apiserver-ip-172-31-18-94" Jan 17 12:02:03.195238 kubelet[2858]: I0117 12:02:03.194717 2858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7c707d8de481cfeaf9f5555f4cc5ba2-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-94\" (UID: \"f7c707d8de481cfeaf9f5555f4cc5ba2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-94" Jan 17 12:02:03.195238 kubelet[2858]: I0117 12:02:03.194753 2858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c707d8de481cfeaf9f5555f4cc5ba2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-94\" (UID: 
\"f7c707d8de481cfeaf9f5555f4cc5ba2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-94" Jan 17 12:02:03.195238 kubelet[2858]: I0117 12:02:03.194814 2858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f7c707d8de481cfeaf9f5555f4cc5ba2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-94\" (UID: \"f7c707d8de481cfeaf9f5555f4cc5ba2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-94" Jan 17 12:02:03.197272 kubelet[2858]: E0117 12:02:03.197204 2858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-94?timeout=10s\": dial tcp 172.31.18.94:6443: connect: connection refused" interval="400ms" Jan 17 12:02:03.203606 kubelet[2858]: I0117 12:02:03.203553 2858 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-94" Jan 17 12:02:03.204368 kubelet[2858]: E0117 12:02:03.204300 2858 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.94:6443/api/v1/nodes\": dial tcp 172.31.18.94:6443: connect: connection refused" node="ip-172-31-18-94" Jan 17 12:02:03.407804 kubelet[2858]: I0117 12:02:03.407521 2858 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-94" Jan 17 12:02:03.408919 kubelet[2858]: E0117 12:02:03.408694 2858 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.94:6443/api/v1/nodes\": dial tcp 172.31.18.94:6443: connect: connection refused" node="ip-172-31-18-94" Jan 17 12:02:03.467833 containerd[2012]: time="2025-01-17T12:02:03.467470617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-94,Uid:267d53862f2bcc5e04792d90fa6139c9,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:03.479055 containerd[2012]: time="2025-01-17T12:02:03.478989225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-94,Uid:f7c707d8de481cfeaf9f5555f4cc5ba2,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:03.488669 containerd[2012]: time="2025-01-17T12:02:03.488293077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-94,Uid:e7838a7bde62acdbc823f2ad321b8508,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:03.597729 kubelet[2858]: E0117 12:02:03.597673 2858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-94?timeout=10s\": dial tcp 172.31.18.94:6443: connect: connection refused" interval="800ms" Jan 17 12:02:03.810918 kubelet[2858]: I0117 12:02:03.810860 2858 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-94" Jan 17 12:02:03.811480 kubelet[2858]: E0117 12:02:03.811435 2858 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.94:6443/api/v1/nodes\": dial tcp 172.31.18.94:6443: connect: connection refused" node="ip-172-31-18-94" Jan 17 12:02:04.064212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1816680550.mount: Deactivated successfully. 
Jan 17 12:02:04.071149 containerd[2012]: time="2025-01-17T12:02:04.070345508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:02:04.074088 containerd[2012]: time="2025-01-17T12:02:04.073965272Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 17 12:02:04.075364 containerd[2012]: time="2025-01-17T12:02:04.075277724Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:02:04.077262 containerd[2012]: time="2025-01-17T12:02:04.077191040Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:02:04.078407 containerd[2012]: time="2025-01-17T12:02:04.078220352Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:02:04.079547 containerd[2012]: time="2025-01-17T12:02:04.079313396Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:02:04.080629 containerd[2012]: time="2025-01-17T12:02:04.080530580Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:02:04.083928 containerd[2012]: time="2025-01-17T12:02:04.083872676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:02:04.090093 containerd[2012]: time="2025-01-17T12:02:04.089399324Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 621.815847ms" Jan 17 12:02:04.102830 containerd[2012]: time="2025-01-17T12:02:04.102745088Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 623.645991ms" Jan 17 12:02:04.124663 containerd[2012]: time="2025-01-17T12:02:04.124573269Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 636.165412ms" Jan 17 12:02:04.252911 kubelet[2858]: W0117 12:02:04.252799 2858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.94:6443: connect: connection refused Jan 17 
12:02:04.253650 kubelet[2858]: E0117 12:02:04.252914 2858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.94:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:02:04.264249 containerd[2012]: time="2025-01-17T12:02:04.263553657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:04.264249 containerd[2012]: time="2025-01-17T12:02:04.263684769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:04.264249 containerd[2012]: time="2025-01-17T12:02:04.263722065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:04.269429 containerd[2012]: time="2025-01-17T12:02:04.264262053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:04.271144 containerd[2012]: time="2025-01-17T12:02:04.270193833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:04.271144 containerd[2012]: time="2025-01-17T12:02:04.270356709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:04.271144 containerd[2012]: time="2025-01-17T12:02:04.270397545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:04.271144 containerd[2012]: time="2025-01-17T12:02:04.270590421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:04.274064 containerd[2012]: time="2025-01-17T12:02:04.273464793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:04.274437 containerd[2012]: time="2025-01-17T12:02:04.274014693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:04.276348 containerd[2012]: time="2025-01-17T12:02:04.276068925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:04.277278 containerd[2012]: time="2025-01-17T12:02:04.276897717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:04.287839 kubelet[2858]: W0117 12:02:04.287732 2858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-94&limit=500&resourceVersion=0": dial tcp 172.31.18.94:6443: connect: connection refused Jan 17 12:02:04.288124 kubelet[2858]: E0117 12:02:04.287854 2858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-94&limit=500&resourceVersion=0\": dial tcp 172.31.18.94:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:02:04.322439 systemd[1]: Started cri-containerd-7010885b41d9bee0404b04ca845d426f5f61ab57a6a4543601da33d8022c5662.scope - libcontainer container 7010885b41d9bee0404b04ca845d426f5f61ab57a6a4543601da33d8022c5662. Jan 17 12:02:04.336404 systemd[1]: Started cri-containerd-286ffce2187eee814e3290433da7fdca8dc297168a1360f64f1a0838b044f541.scope - libcontainer container 286ffce2187eee814e3290433da7fdca8dc297168a1360f64f1a0838b044f541. Jan 17 12:02:04.342242 systemd[1]: Started cri-containerd-cb889563250a9cc5560490bb1c66763f04d4f843bc082d18792964ba3111dffe.scope - libcontainer container cb889563250a9cc5560490bb1c66763f04d4f843bc082d18792964ba3111dffe. Jan 17 12:02:04.390612 kubelet[2858]: W0117 12:02:04.390520 2858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.94:6443: connect: connection refused Jan 17 12:02:04.390612 kubelet[2858]: E0117 12:02:04.390622 2858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.94:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:02:04.399677 kubelet[2858]: E0117 12:02:04.399484 2858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-94?timeout=10s\": dial tcp 172.31.18.94:6443: connect: connection refused" interval="1.6s" Jan 17 12:02:04.423855 kubelet[2858]: W0117 12:02:04.423772 2858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.94:6443: connect: connection refused Jan 17 12:02:04.423855 kubelet[2858]: E0117 12:02:04.423851 2858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.94:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:02:04.428850 containerd[2012]: time="2025-01-17T12:02:04.428759062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-94,Uid:267d53862f2bcc5e04792d90fa6139c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7010885b41d9bee0404b04ca845d426f5f61ab57a6a4543601da33d8022c5662\"" Jan 17 12:02:04.437698 containerd[2012]: 
time="2025-01-17T12:02:04.437643046Z" level=info msg="CreateContainer within sandbox \"7010885b41d9bee0404b04ca845d426f5f61ab57a6a4543601da33d8022c5662\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:02:04.477050 containerd[2012]: time="2025-01-17T12:02:04.476984494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-94,Uid:e7838a7bde62acdbc823f2ad321b8508,Namespace:kube-system,Attempt:0,} returns sandbox id \"286ffce2187eee814e3290433da7fdca8dc297168a1360f64f1a0838b044f541\"" Jan 17 12:02:04.482338 containerd[2012]: time="2025-01-17T12:02:04.482274262Z" level=info msg="CreateContainer within sandbox \"7010885b41d9bee0404b04ca845d426f5f61ab57a6a4543601da33d8022c5662\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"74a606fb3dcf9e7200efe9024c130c3d43d8cc5ecfbebe22504e82a5f13c9cb6\"" Jan 17 12:02:04.484068 containerd[2012]: time="2025-01-17T12:02:04.483963574Z" level=info msg="CreateContainer within sandbox \"286ffce2187eee814e3290433da7fdca8dc297168a1360f64f1a0838b044f541\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:02:04.485010 containerd[2012]: time="2025-01-17T12:02:04.484493722Z" level=info msg="StartContainer for \"74a606fb3dcf9e7200efe9024c130c3d43d8cc5ecfbebe22504e82a5f13c9cb6\"" Jan 17 12:02:04.503215 containerd[2012]: time="2025-01-17T12:02:04.503151370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-94,Uid:f7c707d8de481cfeaf9f5555f4cc5ba2,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb889563250a9cc5560490bb1c66763f04d4f843bc082d18792964ba3111dffe\"" Jan 17 12:02:04.510913 containerd[2012]: time="2025-01-17T12:02:04.510859378Z" level=info msg="CreateContainer within sandbox \"cb889563250a9cc5560490bb1c66763f04d4f843bc082d18792964ba3111dffe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:02:04.511195 containerd[2012]: time="2025-01-17T12:02:04.510931798Z" level=info msg="CreateContainer within sandbox \"286ffce2187eee814e3290433da7fdca8dc297168a1360f64f1a0838b044f541\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ca1ca3d9b7177ed58c935a214c23e9d9d13506684b12f53a1f9b82b645aab145\"" Jan 17 12:02:04.512096 containerd[2012]: time="2025-01-17T12:02:04.512046598Z" level=info msg="StartContainer for \"ca1ca3d9b7177ed58c935a214c23e9d9d13506684b12f53a1f9b82b645aab145\"" Jan 17 12:02:04.533206 containerd[2012]: time="2025-01-17T12:02:04.533085191Z" level=info msg="CreateContainer within sandbox \"cb889563250a9cc5560490bb1c66763f04d4f843bc082d18792964ba3111dffe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f4767e520b7fd6a78ccbf6b83b63a6be16e14350cee6f44f7adeeb3eeb467167\"" Jan 17 12:02:04.534486 containerd[2012]: time="2025-01-17T12:02:04.534430967Z" level=info msg="StartContainer for \"f4767e520b7fd6a78ccbf6b83b63a6be16e14350cee6f44f7adeeb3eeb467167\"" Jan 17 12:02:04.547429 systemd[1]: Started cri-containerd-74a606fb3dcf9e7200efe9024c130c3d43d8cc5ecfbebe22504e82a5f13c9cb6.scope - libcontainer container 74a606fb3dcf9e7200efe9024c130c3d43d8cc5ecfbebe22504e82a5f13c9cb6. Jan 17 12:02:04.606509 systemd[1]: Started cri-containerd-ca1ca3d9b7177ed58c935a214c23e9d9d13506684b12f53a1f9b82b645aab145.scope - libcontainer container ca1ca3d9b7177ed58c935a214c23e9d9d13506684b12f53a1f9b82b645aab145. 
Jan 17 12:02:04.618291 kubelet[2858]: I0117 12:02:04.616880 2858 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-94" Jan 17 12:02:04.618291 kubelet[2858]: E0117 12:02:04.617368 2858 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.94:6443/api/v1/nodes\": dial tcp 172.31.18.94:6443: connect: connection refused" node="ip-172-31-18-94" Jan 17 12:02:04.629667 systemd[1]: Started cri-containerd-f4767e520b7fd6a78ccbf6b83b63a6be16e14350cee6f44f7adeeb3eeb467167.scope - libcontainer container f4767e520b7fd6a78ccbf6b83b63a6be16e14350cee6f44f7adeeb3eeb467167. Jan 17 12:02:04.676133 containerd[2012]: time="2025-01-17T12:02:04.675904859Z" level=info msg="StartContainer for \"74a606fb3dcf9e7200efe9024c130c3d43d8cc5ecfbebe22504e82a5f13c9cb6\" returns successfully" Jan 17 12:02:04.729037 containerd[2012]: time="2025-01-17T12:02:04.728957304Z" level=info msg="StartContainer for \"ca1ca3d9b7177ed58c935a214c23e9d9d13506684b12f53a1f9b82b645aab145\" returns successfully" Jan 17 12:02:04.784263 containerd[2012]: time="2025-01-17T12:02:04.783843324Z" level=info msg="StartContainer for \"f4767e520b7fd6a78ccbf6b83b63a6be16e14350cee6f44f7adeeb3eeb467167\" returns successfully" Jan 17 12:02:06.221185 kubelet[2858]: I0117 12:02:06.219818 2858 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-94" Jan 17 12:02:08.117251 update_engine[1999]: I20250117 12:02:08.117152 1999 update_attempter.cc:509] Updating boot flags... Jan 17 12:02:08.259255 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3144) Jan 17 12:02:08.724066 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3143) Jan 17 12:02:09.273533 kubelet[2858]: E0117 12:02:09.273474 2858 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-94\" not found" node="ip-172-31-18-94" Jan 17 12:02:09.406225 kubelet[2858]: I0117 12:02:09.406142 2858 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-18-94" Jan 17 12:02:09.972211 kubelet[2858]: I0117 12:02:09.971837 2858 apiserver.go:52] "Watching apiserver" Jan 17 12:02:09.994193 kubelet[2858]: I0117 12:02:09.993899 2858 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 12:02:11.536060 systemd[1]: Reloading requested from client PID 3314 ('systemctl') (unit session-9.scope)... Jan 17 12:02:11.536558 systemd[1]: Reloading... Jan 17 12:02:11.792238 zram_generator::config[3351]: No configuration found. Jan 17 12:02:12.117720 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:02:12.348755 systemd[1]: Reloading finished in 811 ms. Jan 17 12:02:12.429873 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:12.445362 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:02:12.446066 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:12.446170 systemd[1]: kubelet.service: Consumed 2.117s CPU time, 117.0M memory peak, 0B memory swap peak. Jan 17 12:02:12.455702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:12.786164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:02:12.806563 (kubelet)[3414]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:02:12.925213 kubelet[3414]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:02:12.925213 kubelet[3414]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:02:12.925213 kubelet[3414]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:02:12.925213 kubelet[3414]: I0117 12:02:12.925050 3414 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:02:12.936749 kubelet[3414]: I0117 12:02:12.936694 3414 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 17 12:02:12.936749 kubelet[3414]: I0117 12:02:12.936742 3414 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:02:12.937332 kubelet[3414]: I0117 12:02:12.937236 3414 server.go:929] "Client rotation is on, will bootstrap in background" Jan 17 12:02:12.941674 kubelet[3414]: I0117 12:02:12.941375 3414 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:02:12.949474 kubelet[3414]: I0117 12:02:12.949423 3414 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:02:12.961755 kubelet[3414]: E0117 12:02:12.960079 3414 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 12:02:12.961887 kubelet[3414]: I0117 12:02:12.961772 3414 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 12:02:12.967690 kubelet[3414]: I0117 12:02:12.967581 3414 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:02:12.967888 kubelet[3414]: I0117 12:02:12.967851 3414 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 17 12:02:12.968281 kubelet[3414]: I0117 12:02:12.968218 3414 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:02:12.969145 kubelet[3414]: I0117 12:02:12.968278 3414 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-94","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 12:02:12.969145 kubelet[3414]: I0117 12:02:12.968599 3414 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:02:12.969145 kubelet[3414]: I0117 12:02:12.968620 3414 container_manager_linux.go:300] "Creating device plugin manager" Jan 17 12:02:12.969145 kubelet[3414]: I0117 12:02:12.968683 3414 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:02:12.969145 kubelet[3414]: I0117 12:02:12.968881 3414 kubelet.go:408] "Attempting to sync node with API server" Jan 17 12:02:12.970052 kubelet[3414]: I0117 12:02:12.968911 3414 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:02:12.970052 kubelet[3414]: I0117 12:02:12.970040 3414 kubelet.go:314] "Adding apiserver pod source" Jan 17 12:02:12.970211 kubelet[3414]: I0117 12:02:12.970066 3414 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:02:12.977497 kubelet[3414]: I0117 12:02:12.977442 3414 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:02:12.978879 kubelet[3414]: I0117 12:02:12.978271 3414 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:02:12.979030 kubelet[3414]: I0117 12:02:12.978961 3414 server.go:1269] "Started kubelet" Jan 17 12:02:12.992503 kubelet[3414]: I0117 12:02:12.992069 3414 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:02:13.003341 kubelet[3414]: I0117 
12:02:13.003273 3414 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:02:13.004932 kubelet[3414]: I0117 12:02:13.004879 3414 server.go:460] "Adding debug handlers to kubelet server" Jan 17 12:02:13.006615 kubelet[3414]: I0117 12:02:13.006529 3414 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:02:13.006948 kubelet[3414]: I0117 12:02:13.006877 3414 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:02:13.009341 kubelet[3414]: I0117 12:02:13.009293 3414 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 12:02:13.018134 kubelet[3414]: I0117 12:02:13.017717 3414 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 17 12:02:13.028570 kubelet[3414]: E0117 12:02:13.023493 3414 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-94\" not found" Jan 17 12:02:13.028570 kubelet[3414]: I0117 12:02:13.024515 3414 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 12:02:13.028570 kubelet[3414]: I0117 12:02:13.024828 3414 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:02:13.050960 kubelet[3414]: I0117 12:02:13.050728 3414 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:02:13.056962 kubelet[3414]: I0117 12:02:13.056913 3414 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:02:13.084582 kubelet[3414]: I0117 12:02:13.084515 3414 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:02:13.093971 kubelet[3414]: I0117 12:02:13.092642 3414 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:02:13.095226 kubelet[3414]: I0117 12:02:13.094307 3414 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:02:13.095226 kubelet[3414]: I0117 12:02:13.094378 3414 kubelet.go:2321] "Starting kubelet main sync loop" Jan 17 12:02:13.095226 kubelet[3414]: E0117 12:02:13.094460 3414 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:02:13.108246 kubelet[3414]: E0117 12:02:13.107719 3414 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:02:13.108490 kubelet[3414]: I0117 12:02:13.092887 3414 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:02:13.195272 kubelet[3414]: E0117 12:02:13.195226 3414 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:02:13.224345 kubelet[3414]: I0117 12:02:13.224311 3414 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:02:13.225339 kubelet[3414]: I0117 12:02:13.224677 3414 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:02:13.225339 kubelet[3414]: I0117 12:02:13.224763 3414 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:02:13.225339 kubelet[3414]: I0117 12:02:13.225149 3414 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:02:13.225339 kubelet[3414]: I0117 12:02:13.225172 3414 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:02:13.225339 kubelet[3414]: I0117 12:02:13.225210 3414 policy_none.go:49] "None policy: Start" Jan 17 12:02:13.227958 kubelet[3414]: I0117 12:02:13.227378 3414 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:02:13.227958 kubelet[3414]: I0117 12:02:13.227424 3414 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:02:13.227958 kubelet[3414]: I0117 12:02:13.227707 3414 state_mem.go:75] "Updated machine memory state" Jan 17 12:02:13.238555 kubelet[3414]: I0117 12:02:13.238519 3414 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:02:13.243782 kubelet[3414]: I0117 12:02:13.243748 3414 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 12:02:13.244813 kubelet[3414]: I0117 12:02:13.244019 3414 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:02:13.244813 kubelet[3414]: I0117 12:02:13.244595 3414 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:02:13.359338 kubelet[3414]: I0117 12:02:13.359184 3414 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-94" Jan 17 12:02:13.371427 kubelet[3414]: I0117 12:02:13.371361 3414 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-18-94" Jan 17 12:02:13.371579 kubelet[3414]: I0117 12:02:13.371492 3414 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-18-94" Jan 17 12:02:13.410336 kubelet[3414]: E0117 12:02:13.410295 3414 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-18-94\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-18-94" Jan 17 12:02:13.426770 kubelet[3414]: I0117 12:02:13.426704 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7c707d8de481cfeaf9f5555f4cc5ba2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-94\" (UID: \"f7c707d8de481cfeaf9f5555f4cc5ba2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-94" Jan 17 12:02:13.426770 kubelet[3414]: I0117 12:02:13.426764 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c707d8de481cfeaf9f5555f4cc5ba2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-94\" (UID: \"f7c707d8de481cfeaf9f5555f4cc5ba2\") " 
pod="kube-system/kube-controller-manager-ip-172-31-18-94" Jan 17 12:02:13.426978 kubelet[3414]: I0117 12:02:13.426804 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7c707d8de481cfeaf9f5555f4cc5ba2-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-94\" (UID: \"f7c707d8de481cfeaf9f5555f4cc5ba2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-94" Jan 17 12:02:13.426978 kubelet[3414]: I0117 12:02:13.426839 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f7c707d8de481cfeaf9f5555f4cc5ba2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-94\" (UID: \"f7c707d8de481cfeaf9f5555f4cc5ba2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-94" Jan 17 12:02:13.426978 kubelet[3414]: I0117 12:02:13.426883 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f7c707d8de481cfeaf9f5555f4cc5ba2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-94\" (UID: \"f7c707d8de481cfeaf9f5555f4cc5ba2\") " pod="kube-system/kube-controller-manager-ip-172-31-18-94" Jan 17 12:02:13.426978 kubelet[3414]: I0117 12:02:13.426921 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7838a7bde62acdbc823f2ad321b8508-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-94\" (UID: \"e7838a7bde62acdbc823f2ad321b8508\") " pod="kube-system/kube-scheduler-ip-172-31-18-94" Jan 17 12:02:13.427227 kubelet[3414]: I0117 12:02:13.426978 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/267d53862f2bcc5e04792d90fa6139c9-ca-certs\") pod \"kube-apiserver-ip-172-31-18-94\" (UID: \"267d53862f2bcc5e04792d90fa6139c9\") " pod="kube-system/kube-apiserver-ip-172-31-18-94" Jan 17 12:02:13.427227 kubelet[3414]: I0117 12:02:13.427029 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/267d53862f2bcc5e04792d90fa6139c9-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-94\" (UID: \"267d53862f2bcc5e04792d90fa6139c9\") " pod="kube-system/kube-apiserver-ip-172-31-18-94" Jan 17 12:02:13.427227 kubelet[3414]: I0117 12:02:13.427070 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/267d53862f2bcc5e04792d90fa6139c9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-94\" (UID: \"267d53862f2bcc5e04792d90fa6139c9\") " pod="kube-system/kube-apiserver-ip-172-31-18-94" Jan 17 12:02:13.974548 kubelet[3414]: I0117 12:02:13.974431 3414 apiserver.go:52] "Watching apiserver" Jan 17 12:02:14.026596 kubelet[3414]: I0117 12:02:14.025433 3414 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 12:02:14.365709 kubelet[3414]: I0117 12:02:14.365075 3414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-94" podStartSLOduration=5.365051983 podStartE2EDuration="5.365051983s" podCreationTimestamp="2025-01-17 12:02:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:14.308015659 +0000 UTC m=+1.491464204" watchObservedRunningTime="2025-01-17 12:02:14.365051983 +0000 UTC m=+1.548500516" Jan 17 12:02:14.391356 kubelet[3414]: I0117 12:02:14.390935 3414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-94" podStartSLOduration=1.390912656 podStartE2EDuration="1.390912656s" podCreationTimestamp="2025-01-17 12:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:14.389585228 +0000 UTC m=+1.573033761" watchObservedRunningTime="2025-01-17 12:02:14.390912656 +0000 UTC m=+1.574361189" Jan 17 12:02:14.391356 kubelet[3414]: I0117 12:02:14.391184 3414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-94" podStartSLOduration=1.391174028 podStartE2EDuration="1.391174028s" podCreationTimestamp="2025-01-17 12:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:14.365901019 +0000 UTC m=+1.549349552" watchObservedRunningTime="2025-01-17 12:02:14.391174028 +0000 UTC m=+1.574622549" Jan 17 12:02:16.143203 kubelet[3414]: I0117 12:02:16.143150 3414 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:02:16.147050 containerd[2012]: time="2025-01-17T12:02:16.145792172Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:02:16.147696 kubelet[3414]: I0117 12:02:16.146168 3414 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:02:17.152224 kubelet[3414]: I0117 12:02:17.151787 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/be3399e2-e8c7-4c2b-82f4-5358c568cc8c-kube-proxy\") pod \"kube-proxy-2x5t6\" (UID: \"be3399e2-e8c7-4c2b-82f4-5358c568cc8c\") " pod="kube-system/kube-proxy-2x5t6" Jan 17 12:02:17.153327 kubelet[3414]: I0117 12:02:17.152428 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be3399e2-e8c7-4c2b-82f4-5358c568cc8c-xtables-lock\") pod \"kube-proxy-2x5t6\" (UID: \"be3399e2-e8c7-4c2b-82f4-5358c568cc8c\") " pod="kube-system/kube-proxy-2x5t6" Jan 17 12:02:17.153327 kubelet[3414]: I0117 12:02:17.152475 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be3399e2-e8c7-4c2b-82f4-5358c568cc8c-lib-modules\") pod \"kube-proxy-2x5t6\" (UID: \"be3399e2-e8c7-4c2b-82f4-5358c568cc8c\") " pod="kube-system/kube-proxy-2x5t6" Jan 17 12:02:17.153327 kubelet[3414]: I0117 12:02:17.152527 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzwts\" (UniqueName: \"kubernetes.io/projected/be3399e2-e8c7-4c2b-82f4-5358c568cc8c-kube-api-access-mzwts\") pod \"kube-proxy-2x5t6\" (UID: \"be3399e2-e8c7-4c2b-82f4-5358c568cc8c\") " pod="kube-system/kube-proxy-2x5t6" Jan 17 12:02:17.156622 systemd[1]: Created slice kubepods-besteffort-podbe3399e2_e8c7_4c2b_82f4_5358c568cc8c.slice - libcontainer container 
kubepods-besteffort-podbe3399e2_e8c7_4c2b_82f4_5358c568cc8c.slice. Jan 17 12:02:17.194152 kubelet[3414]: W0117 12:02:17.193631 3414 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-18-94" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-94' and this object Jan 17 12:02:17.194152 kubelet[3414]: E0117 12:02:17.193721 3414 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-18-94\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-18-94' and this object" logger="UnhandledError" Jan 17 12:02:17.197135 kubelet[3414]: W0117 12:02:17.196219 3414 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-18-94" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-94' and this object Jan 17 12:02:17.197488 kubelet[3414]: E0117 12:02:17.197390 3414 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-18-94\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-18-94' and this object" logger="UnhandledError" Jan 17 12:02:17.545080 systemd[1]: Created slice kubepods-besteffort-podc54ba6ce_0054_4e87_83cc_2e0d40e85fc5.slice - libcontainer container kubepods-besteffort-podc54ba6ce_0054_4e87_83cc_2e0d40e85fc5.slice. Jan 17 12:02:17.555165 kubelet[3414]: I0117 12:02:17.554890 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c54ba6ce-0054-4e87-83cc-2e0d40e85fc5-var-lib-calico\") pod \"tigera-operator-76c4976dd7-t6kcs\" (UID: \"c54ba6ce-0054-4e87-83cc-2e0d40e85fc5\") " pod="tigera-operator/tigera-operator-76c4976dd7-t6kcs" Jan 17 12:02:17.555165 kubelet[3414]: I0117 12:02:17.555002 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r75r9\" (UniqueName: \"kubernetes.io/projected/c54ba6ce-0054-4e87-83cc-2e0d40e85fc5-kube-api-access-r75r9\") pod \"tigera-operator-76c4976dd7-t6kcs\" (UID: \"c54ba6ce-0054-4e87-83cc-2e0d40e85fc5\") " pod="tigera-operator/tigera-operator-76c4976dd7-t6kcs" Jan 17 12:02:17.855466 containerd[2012]: time="2025-01-17T12:02:17.854667193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-t6kcs,Uid:c54ba6ce-0054-4e87-83cc-2e0d40e85fc5,Namespace:tigera-operator,Attempt:0,}" Jan 17 12:02:17.922665 containerd[2012]: time="2025-01-17T12:02:17.921452605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:17.922665 containerd[2012]: time="2025-01-17T12:02:17.922319425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:17.922665 containerd[2012]: time="2025-01-17T12:02:17.922400257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:17.929233 containerd[2012]: time="2025-01-17T12:02:17.927405469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:18.040485 systemd[1]: Started cri-containerd-f6749621ef7239ae9236316e3ae087c047fc086bebae012e73f98cc52b5b0a5d.scope - libcontainer container f6749621ef7239ae9236316e3ae087c047fc086bebae012e73f98cc52b5b0a5d. Jan 17 12:02:18.169277 containerd[2012]: time="2025-01-17T12:02:18.168232294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-t6kcs,Uid:c54ba6ce-0054-4e87-83cc-2e0d40e85fc5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f6749621ef7239ae9236316e3ae087c047fc086bebae012e73f98cc52b5b0a5d\"" Jan 17 12:02:18.172590 containerd[2012]: time="2025-01-17T12:02:18.172022758Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 17 12:02:18.255536 kubelet[3414]: E0117 12:02:18.255399 3414 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:02:18.266140 kubelet[3414]: E0117 12:02:18.258926 3414 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/be3399e2-e8c7-4c2b-82f4-5358c568cc8c-kube-proxy podName:be3399e2-e8c7-4c2b-82f4-5358c568cc8c nodeName:}" failed. No retries permitted until 2025-01-17 12:02:18.758887315 +0000 UTC m=+5.942335824 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/be3399e2-e8c7-4c2b-82f4-5358c568cc8c-kube-proxy") pod "kube-proxy-2x5t6" (UID: "be3399e2-e8c7-4c2b-82f4-5358c568cc8c") : failed to sync configmap cache: timed out waiting for the condition Jan 17 12:02:18.865669 sudo[2367]: pam_unix(sudo:session): session closed for user root Jan 17 12:02:18.890443 sshd[2364]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:18.899265 systemd[1]: sshd@8-172.31.18.94:22-139.178.68.195:55132.service: Deactivated successfully. Jan 17 12:02:18.902646 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:02:18.903024 systemd[1]: session-9.scope: Consumed 9.732s CPU time, 151.6M memory peak, 0B memory swap peak. Jan 17 12:02:18.904663 systemd-logind[1998]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:02:18.907641 systemd-logind[1998]: Removed session 9. Jan 17 12:02:18.971547 containerd[2012]: time="2025-01-17T12:02:18.971397350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2x5t6,Uid:be3399e2-e8c7-4c2b-82f4-5358c568cc8c,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:19.008734 containerd[2012]: time="2025-01-17T12:02:19.008373442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:19.008734 containerd[2012]: time="2025-01-17T12:02:19.008460178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:19.008734 containerd[2012]: time="2025-01-17T12:02:19.008485390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:19.008734 containerd[2012]: time="2025-01-17T12:02:19.008668618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:19.048442 systemd[1]: Started cri-containerd-925850e90f8531effd43fd214cfef9603954b36a7e3d1761b2c8674da1313453.scope - libcontainer container 925850e90f8531effd43fd214cfef9603954b36a7e3d1761b2c8674da1313453. Jan 17 12:02:19.090835 containerd[2012]: time="2025-01-17T12:02:19.090777611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2x5t6,Uid:be3399e2-e8c7-4c2b-82f4-5358c568cc8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"925850e90f8531effd43fd214cfef9603954b36a7e3d1761b2c8674da1313453\"" Jan 17 12:02:19.100069 containerd[2012]: time="2025-01-17T12:02:19.099981179Z" level=info msg="CreateContainer within sandbox \"925850e90f8531effd43fd214cfef9603954b36a7e3d1761b2c8674da1313453\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:02:19.123681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount135319588.mount: Deactivated successfully. Jan 17 12:02:19.126971 containerd[2012]: time="2025-01-17T12:02:19.126067463Z" level=info msg="CreateContainer within sandbox \"925850e90f8531effd43fd214cfef9603954b36a7e3d1761b2c8674da1313453\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"572e80770535bbc44e5f031d5520d038f4a8652fdb3d7b1e652c96003b629b13\"" Jan 17 12:02:19.128478 containerd[2012]: time="2025-01-17T12:02:19.128225975Z" level=info msg="StartContainer for \"572e80770535bbc44e5f031d5520d038f4a8652fdb3d7b1e652c96003b629b13\"" Jan 17 12:02:19.177207 systemd[1]: Started cri-containerd-572e80770535bbc44e5f031d5520d038f4a8652fdb3d7b1e652c96003b629b13.scope - libcontainer container 572e80770535bbc44e5f031d5520d038f4a8652fdb3d7b1e652c96003b629b13. Jan 17 12:02:19.251016 containerd[2012]: time="2025-01-17T12:02:19.250932012Z" level=info msg="StartContainer for \"572e80770535bbc44e5f031d5520d038f4a8652fdb3d7b1e652c96003b629b13\" returns successfully" Jan 17 12:02:22.912132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1485081939.mount: Deactivated successfully. 
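The MountVolume.SetUp failure for the kube-proxy ConfigMap above is gated by a simple retry deadline: the kubelet records the failure time and refuses another attempt until that time plus durationBeforeRetry (500ms here) has elapsed, which is where the "No retries permitted until 2025-01-17 12:02:18.758887315" figure comes from. A minimal sketch of that gate in Python, using the timestamps from this log at microsecond precision; the growth of the delay on repeated failures is an assumption about the backoff policy, not something shown in this excerpt.

    from datetime import datetime, timedelta, timezone

    # Failure time and retry delay taken from the log entry above.
    last_error_time = datetime(2025, 1, 17, 12, 2, 18, 258887, tzinfo=timezone.utc)
    duration_before_retry = timedelta(milliseconds=500)

    # The kubelet simply refuses retries until this deadline has passed.
    next_allowed = last_error_time + duration_before_retry
    print(next_allowed.isoformat())  # 2025-01-17T12:02:18.758887+00:00

    def may_retry(now: datetime) -> bool:
        return now >= next_allowed

    # Assumed, not shown in the log: on a further failure the delay would grow
    # (exponential backoff) before the next deadline is computed.
    duration_before_retry = duration_before_retry * 2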
Jan 17 12:02:23.124302 kubelet[3414]: I0117 12:02:23.123550 3414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2x5t6" podStartSLOduration=7.123523719 podStartE2EDuration="7.123523719s" podCreationTimestamp="2025-01-17 12:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:20.217300056 +0000 UTC m=+7.400748625" watchObservedRunningTime="2025-01-17 12:02:23.123523719 +0000 UTC m=+10.306972264" Jan 17 12:02:23.579857 containerd[2012]: time="2025-01-17T12:02:23.579775229Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:23.581698 containerd[2012]: time="2025-01-17T12:02:23.581599037Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125948" Jan 17 12:02:23.582434 containerd[2012]: time="2025-01-17T12:02:23.582346649Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:23.586822 containerd[2012]: time="2025-01-17T12:02:23.586735385Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:23.588714 containerd[2012]: time="2025-01-17T12:02:23.588546089Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 5.416457691s" Jan 17 12:02:23.588942 containerd[2012]: time="2025-01-17T12:02:23.588608429Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 17 12:02:23.593077 containerd[2012]: time="2025-01-17T12:02:23.592841765Z" level=info msg="CreateContainer within sandbox \"f6749621ef7239ae9236316e3ae087c047fc086bebae012e73f98cc52b5b0a5d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 12:02:23.620488 containerd[2012]: time="2025-01-17T12:02:23.620413217Z" level=info msg="CreateContainer within sandbox \"f6749621ef7239ae9236316e3ae087c047fc086bebae012e73f98cc52b5b0a5d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"03846f11f127a2a20f2238ccb810e703573434b8322a56ed042470072ffb7e65\"" Jan 17 12:02:23.622488 containerd[2012]: time="2025-01-17T12:02:23.621373361Z" level=info msg="StartContainer for \"03846f11f127a2a20f2238ccb810e703573434b8322a56ed042470072ffb7e65\"" Jan 17 12:02:23.672437 systemd[1]: Started cri-containerd-03846f11f127a2a20f2238ccb810e703573434b8322a56ed042470072ffb7e65.scope - libcontainer container 03846f11f127a2a20f2238ccb810e703573434b8322a56ed042470072ffb7e65. 
Jan 17 12:02:23.716077 containerd[2012]: time="2025-01-17T12:02:23.715930254Z" level=info msg="StartContainer for \"03846f11f127a2a20f2238ccb810e703573434b8322a56ed042470072ffb7e65\" returns successfully" Jan 17 12:02:29.079291 kubelet[3414]: I0117 12:02:29.079159 3414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-t6kcs" podStartSLOduration=6.660312245 podStartE2EDuration="12.079095548s" podCreationTimestamp="2025-01-17 12:02:17 +0000 UTC" firstStartedPulling="2025-01-17 12:02:18.171447862 +0000 UTC m=+5.354896383" lastFinishedPulling="2025-01-17 12:02:23.590231177 +0000 UTC m=+10.773679686" observedRunningTime="2025-01-17 12:02:24.2281513 +0000 UTC m=+11.411599857" watchObservedRunningTime="2025-01-17 12:02:29.079095548 +0000 UTC m=+16.262544081" Jan 17 12:02:29.097973 systemd[1]: Created slice kubepods-besteffort-pod5b81bbcf_5d04_46fd_b144_0bcb7c79e164.slice - libcontainer container kubepods-besteffort-pod5b81bbcf_5d04_46fd_b144_0bcb7c79e164.slice. Jan 17 12:02:29.126437 kubelet[3414]: I0117 12:02:29.126370 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b81bbcf-5d04-46fd-b144-0bcb7c79e164-tigera-ca-bundle\") pod \"calico-typha-84677f87bd-7lz8b\" (UID: \"5b81bbcf-5d04-46fd-b144-0bcb7c79e164\") " pod="calico-system/calico-typha-84677f87bd-7lz8b" Jan 17 12:02:29.126641 kubelet[3414]: I0117 12:02:29.126454 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5b81bbcf-5d04-46fd-b144-0bcb7c79e164-typha-certs\") pod \"calico-typha-84677f87bd-7lz8b\" (UID: \"5b81bbcf-5d04-46fd-b144-0bcb7c79e164\") " pod="calico-system/calico-typha-84677f87bd-7lz8b" Jan 17 12:02:29.227179 kubelet[3414]: I0117 12:02:29.227097 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7b7g\" (UniqueName: \"kubernetes.io/projected/5b81bbcf-5d04-46fd-b144-0bcb7c79e164-kube-api-access-z7b7g\") pod \"calico-typha-84677f87bd-7lz8b\" (UID: \"5b81bbcf-5d04-46fd-b144-0bcb7c79e164\") " pod="calico-system/calico-typha-84677f87bd-7lz8b" Jan 17 12:02:29.302902 systemd[1]: Created slice kubepods-besteffort-pod1e033f8f_29ae_4b6f_bde0_0458f4589e6b.slice - libcontainer container kubepods-besteffort-pod1e033f8f_29ae_4b6f_bde0_0458f4589e6b.slice. 
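The pod_startup_latency_tracker entries above report two figures per pod: podStartE2EDuration, measured from podCreationTimestamp to the time the pod was observed running, and podStartSLOduration, which additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). For kube-proxy-2x5t6 no pull was recorded (the pull timestamps are zero values), so both figures are 7.123523719s; for tigera-operator-76c4976dd7-t6kcs the roughly 5.42s pull is subtracted. The arithmetic can be checked against the log as sketched below; the exclusion rule is inferred from these numbers rather than quoted from kubelet source, and the monotonic m=+ offsets are used for the pull window.

    from datetime import datetime, timezone

    def ts(s: str) -> datetime:
        # Wall-clock timestamps as printed by the kubelet (UTC, microsecond precision).
        return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

    # kube-proxy-2x5t6: no image pull recorded, so SLO duration == end-to-end duration.
    created  = ts("2025-01-17 12:02:16.000000")
    observed = ts("2025-01-17 12:02:23.123523")     # watchObservedRunningTime
    print((observed - created).total_seconds())      # 7.123523 -> log: 7.123523719s

    # tigera-operator-76c4976dd7-t6kcs: the pull window is excluded from the SLO figure.
    e2e  = 12.079095548                  # podStartE2EDuration
    pull = 10.773679686 - 5.354896383    # lastFinishedPulling - firstStartedPulling (m=+ offsets)
    print(round(e2e - pull, 9))          # 6.660312245 -> log: podStartSLOduration=6.660312245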
Jan 17 12:02:29.416776 containerd[2012]: time="2025-01-17T12:02:29.415757590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84677f87bd-7lz8b,Uid:5b81bbcf-5d04-46fd-b144-0bcb7c79e164,Namespace:calico-system,Attempt:0,}" Jan 17 12:02:29.429764 kubelet[3414]: I0117 12:02:29.429010 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-policysync\") pod \"calico-node-4m9hc\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " pod="calico-system/calico-node-4m9hc" Jan 17 12:02:29.429764 kubelet[3414]: I0117 12:02:29.429085 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-flexvol-driver-host\") pod \"calico-node-4m9hc\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " pod="calico-system/calico-node-4m9hc" Jan 17 12:02:29.429764 kubelet[3414]: I0117 12:02:29.429298 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-cni-bin-dir\") pod \"calico-node-4m9hc\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " pod="calico-system/calico-node-4m9hc" Jan 17 12:02:29.429764 kubelet[3414]: I0117 12:02:29.429342 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-cni-net-dir\") pod \"calico-node-4m9hc\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " pod="calico-system/calico-node-4m9hc" Jan 17 12:02:29.429764 kubelet[3414]: I0117 12:02:29.429431 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-var-run-calico\") pod \"calico-node-4m9hc\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " pod="calico-system/calico-node-4m9hc" Jan 17 12:02:29.431443 kubelet[3414]: I0117 12:02:29.429519 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-tigera-ca-bundle\") pod \"calico-node-4m9hc\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " pod="calico-system/calico-node-4m9hc" Jan 17 12:02:29.431443 kubelet[3414]: I0117 12:02:29.429600 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-var-lib-calico\") pod \"calico-node-4m9hc\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " pod="calico-system/calico-node-4m9hc" Jan 17 12:02:29.431443 kubelet[3414]: I0117 12:02:29.429728 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-lib-modules\") pod \"calico-node-4m9hc\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " pod="calico-system/calico-node-4m9hc" Jan 17 12:02:29.431443 kubelet[3414]: I0117 12:02:29.429782 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x69d7\" (UniqueName: 
\"kubernetes.io/projected/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-kube-api-access-x69d7\") pod \"calico-node-4m9hc\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " pod="calico-system/calico-node-4m9hc" Jan 17 12:02:29.431443 kubelet[3414]: I0117 12:02:29.429828 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-xtables-lock\") pod \"calico-node-4m9hc\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " pod="calico-system/calico-node-4m9hc" Jan 17 12:02:29.431764 kubelet[3414]: I0117 12:02:29.429864 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-cni-log-dir\") pod \"calico-node-4m9hc\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " pod="calico-system/calico-node-4m9hc" Jan 17 12:02:29.431764 kubelet[3414]: I0117 12:02:29.429910 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-node-certs\") pod \"calico-node-4m9hc\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " pod="calico-system/calico-node-4m9hc" Jan 17 12:02:29.481143 containerd[2012]: time="2025-01-17T12:02:29.479826742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:29.481790 containerd[2012]: time="2025-01-17T12:02:29.480643078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:29.482647 containerd[2012]: time="2025-01-17T12:02:29.482004742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:29.483706 containerd[2012]: time="2025-01-17T12:02:29.483330226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:29.512169 kubelet[3414]: E0117 12:02:29.512077 3414 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7t526" podUID="29853ccc-52db-41c3-8f83-89ba39b0f309" Jan 17 12:02:29.547468 systemd[1]: Started cri-containerd-51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5.scope - libcontainer container 51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5. Jan 17 12:02:29.553857 kubelet[3414]: E0117 12:02:29.553781 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.554805 kubelet[3414]: W0117 12:02:29.554370 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.554805 kubelet[3414]: E0117 12:02:29.554457 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:29.557409 kubelet[3414]: E0117 12:02:29.557146 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.557409 kubelet[3414]: W0117 12:02:29.557182 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.557409 kubelet[3414]: E0117 12:02:29.557288 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.558979 kubelet[3414]: E0117 12:02:29.558711 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.558979 kubelet[3414]: W0117 12:02:29.558748 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.558979 kubelet[3414]: E0117 12:02:29.558822 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.560219 kubelet[3414]: E0117 12:02:29.559805 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.560219 kubelet[3414]: W0117 12:02:29.559858 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.560219 kubelet[3414]: E0117 12:02:29.559898 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.562316 kubelet[3414]: E0117 12:02:29.561052 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.562316 kubelet[3414]: W0117 12:02:29.561085 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.562316 kubelet[3414]: E0117 12:02:29.561179 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.633850 kubelet[3414]: E0117 12:02:29.633797 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.633850 kubelet[3414]: W0117 12:02:29.633834 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.634216 kubelet[3414]: E0117 12:02:29.633866 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:29.634216 kubelet[3414]: I0117 12:02:29.633916 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/29853ccc-52db-41c3-8f83-89ba39b0f309-socket-dir\") pod \"csi-node-driver-7t526\" (UID: \"29853ccc-52db-41c3-8f83-89ba39b0f309\") " pod="calico-system/csi-node-driver-7t526" Jan 17 12:02:29.635249 kubelet[3414]: E0117 12:02:29.634938 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.635249 kubelet[3414]: W0117 12:02:29.634971 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.636773 kubelet[3414]: E0117 12:02:29.636664 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.636773 kubelet[3414]: I0117 12:02:29.636730 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/29853ccc-52db-41c3-8f83-89ba39b0f309-registration-dir\") pod \"csi-node-driver-7t526\" (UID: \"29853ccc-52db-41c3-8f83-89ba39b0f309\") " pod="calico-system/csi-node-driver-7t526" Jan 17 12:02:29.638207 kubelet[3414]: E0117 12:02:29.637064 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.638207 kubelet[3414]: W0117 12:02:29.637086 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.638207 kubelet[3414]: E0117 12:02:29.637152 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.638562 kubelet[3414]: E0117 12:02:29.638240 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.638562 kubelet[3414]: W0117 12:02:29.638267 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.638562 kubelet[3414]: E0117 12:02:29.638297 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:29.638562 kubelet[3414]: I0117 12:02:29.638346 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29853ccc-52db-41c3-8f83-89ba39b0f309-kubelet-dir\") pod \"csi-node-driver-7t526\" (UID: \"29853ccc-52db-41c3-8f83-89ba39b0f309\") " pod="calico-system/csi-node-driver-7t526" Jan 17 12:02:29.640674 kubelet[3414]: E0117 12:02:29.640615 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.640674 kubelet[3414]: W0117 12:02:29.640661 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.642138 kubelet[3414]: E0117 12:02:29.642065 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.643209 kubelet[3414]: W0117 12:02:29.642149 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.643209 kubelet[3414]: E0117 12:02:29.642781 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.643209 kubelet[3414]: I0117 12:02:29.642838 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/29853ccc-52db-41c3-8f83-89ba39b0f309-varrun\") pod \"csi-node-driver-7t526\" (UID: \"29853ccc-52db-41c3-8f83-89ba39b0f309\") " pod="calico-system/csi-node-driver-7t526" Jan 17 12:02:29.643209 kubelet[3414]: E0117 12:02:29.642889 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.643681 kubelet[3414]: E0117 12:02:29.643545 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.643681 kubelet[3414]: W0117 12:02:29.643570 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.643885 kubelet[3414]: E0117 12:02:29.643842 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.644301 kubelet[3414]: E0117 12:02:29.644252 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.644301 kubelet[3414]: W0117 12:02:29.644282 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.645067 kubelet[3414]: E0117 12:02:29.644871 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:29.646335 kubelet[3414]: E0117 12:02:29.645264 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.646335 kubelet[3414]: W0117 12:02:29.645296 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.646335 kubelet[3414]: E0117 12:02:29.645411 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.646335 kubelet[3414]: I0117 12:02:29.645456 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t77g2\" (UniqueName: \"kubernetes.io/projected/29853ccc-52db-41c3-8f83-89ba39b0f309-kube-api-access-t77g2\") pod \"csi-node-driver-7t526\" (UID: \"29853ccc-52db-41c3-8f83-89ba39b0f309\") " pod="calico-system/csi-node-driver-7t526" Jan 17 12:02:29.647082 kubelet[3414]: E0117 12:02:29.647033 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.647082 kubelet[3414]: W0117 12:02:29.647080 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.647275 kubelet[3414]: E0117 12:02:29.647245 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.647674 kubelet[3414]: E0117 12:02:29.647636 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.647674 kubelet[3414]: W0117 12:02:29.647666 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.647840 kubelet[3414]: E0117 12:02:29.647694 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.649864 kubelet[3414]: E0117 12:02:29.649792 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.649864 kubelet[3414]: W0117 12:02:29.649835 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.650049 kubelet[3414]: E0117 12:02:29.649880 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:29.652344 kubelet[3414]: E0117 12:02:29.652286 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.652344 kubelet[3414]: W0117 12:02:29.652331 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.652585 kubelet[3414]: E0117 12:02:29.652366 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.653163 kubelet[3414]: E0117 12:02:29.652872 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.653163 kubelet[3414]: W0117 12:02:29.652901 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.653163 kubelet[3414]: E0117 12:02:29.652927 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.653457 kubelet[3414]: E0117 12:02:29.653418 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.653457 kubelet[3414]: W0117 12:02:29.653447 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.653589 kubelet[3414]: E0117 12:02:29.653473 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.653895 kubelet[3414]: E0117 12:02:29.653857 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.653895 kubelet[3414]: W0117 12:02:29.653888 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.654023 kubelet[3414]: E0117 12:02:29.653913 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.756145 kubelet[3414]: E0117 12:02:29.755683 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.756145 kubelet[3414]: W0117 12:02:29.755843 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.756145 kubelet[3414]: E0117 12:02:29.755891 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:29.758495 kubelet[3414]: E0117 12:02:29.758441 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.758495 kubelet[3414]: W0117 12:02:29.758482 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.758725 kubelet[3414]: E0117 12:02:29.758540 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.761731 kubelet[3414]: E0117 12:02:29.761672 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.761731 kubelet[3414]: W0117 12:02:29.761717 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.761985 kubelet[3414]: E0117 12:02:29.761763 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.762807 kubelet[3414]: E0117 12:02:29.762759 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.762807 kubelet[3414]: W0117 12:02:29.762795 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.763142 kubelet[3414]: E0117 12:02:29.763080 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.766052 kubelet[3414]: E0117 12:02:29.764256 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.766052 kubelet[3414]: W0117 12:02:29.764294 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.766052 kubelet[3414]: E0117 12:02:29.764539 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.767846 kubelet[3414]: E0117 12:02:29.767771 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.767846 kubelet[3414]: W0117 12:02:29.767824 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.768156 kubelet[3414]: E0117 12:02:29.768067 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:29.768456 kubelet[3414]: E0117 12:02:29.768419 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.768456 kubelet[3414]: W0117 12:02:29.768448 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.768771 kubelet[3414]: E0117 12:02:29.768644 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.768849 kubelet[3414]: E0117 12:02:29.768821 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.768849 kubelet[3414]: W0117 12:02:29.768837 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.769209 kubelet[3414]: E0117 12:02:29.769143 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.770350 kubelet[3414]: E0117 12:02:29.770291 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.770350 kubelet[3414]: W0117 12:02:29.770336 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.770629 kubelet[3414]: E0117 12:02:29.770459 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.771036 kubelet[3414]: E0117 12:02:29.770968 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.771036 kubelet[3414]: W0117 12:02:29.771008 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.771203 kubelet[3414]: E0117 12:02:29.771170 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.771680 kubelet[3414]: E0117 12:02:29.771640 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.771680 kubelet[3414]: W0117 12:02:29.771672 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.771836 kubelet[3414]: E0117 12:02:29.771785 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:29.772662 kubelet[3414]: E0117 12:02:29.772361 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.773304 kubelet[3414]: W0117 12:02:29.773246 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.773444 kubelet[3414]: E0117 12:02:29.773400 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.773974 kubelet[3414]: E0117 12:02:29.773930 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.773974 kubelet[3414]: W0117 12:02:29.773965 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.774328 kubelet[3414]: E0117 12:02:29.774183 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.774398 kubelet[3414]: E0117 12:02:29.774380 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.774454 kubelet[3414]: W0117 12:02:29.774396 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.774615 kubelet[3414]: E0117 12:02:29.774564 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.775463 kubelet[3414]: E0117 12:02:29.775409 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.775463 kubelet[3414]: W0117 12:02:29.775449 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.776595 kubelet[3414]: E0117 12:02:29.775655 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.776595 kubelet[3414]: E0117 12:02:29.775836 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.776595 kubelet[3414]: W0117 12:02:29.775852 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.776595 kubelet[3414]: E0117 12:02:29.776533 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:29.776908 kubelet[3414]: E0117 12:02:29.776861 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.776908 kubelet[3414]: W0117 12:02:29.776891 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.778189 kubelet[3414]: E0117 12:02:29.777328 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.778515 kubelet[3414]: E0117 12:02:29.778322 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.778515 kubelet[3414]: W0117 12:02:29.778346 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.778515 kubelet[3414]: E0117 12:02:29.778458 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.780025 kubelet[3414]: E0117 12:02:29.779971 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.780025 kubelet[3414]: W0117 12:02:29.780011 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.780648 kubelet[3414]: E0117 12:02:29.780596 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.782168 kubelet[3414]: E0117 12:02:29.781883 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.782168 kubelet[3414]: W0117 12:02:29.781919 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.782168 kubelet[3414]: E0117 12:02:29.781982 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.783155 kubelet[3414]: E0117 12:02:29.783093 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.783506 kubelet[3414]: W0117 12:02:29.783269 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.783506 kubelet[3414]: E0117 12:02:29.783342 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:29.784040 kubelet[3414]: E0117 12:02:29.783858 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.785130 kubelet[3414]: W0117 12:02:29.784160 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.785341 kubelet[3414]: E0117 12:02:29.785297 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.786235 kubelet[3414]: E0117 12:02:29.785897 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.786235 kubelet[3414]: W0117 12:02:29.785925 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.786235 kubelet[3414]: E0117 12:02:29.785985 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.786582 kubelet[3414]: E0117 12:02:29.786559 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.786906 kubelet[3414]: W0117 12:02:29.786668 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.787055 kubelet[3414]: E0117 12:02:29.787029 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.790169 kubelet[3414]: E0117 12:02:29.788502 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.790169 kubelet[3414]: W0117 12:02:29.788541 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.790169 kubelet[3414]: E0117 12:02:29.788575 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:29.815892 kubelet[3414]: E0117 12:02:29.815840 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:29.815892 kubelet[3414]: W0117 12:02:29.815878 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:29.816129 kubelet[3414]: E0117 12:02:29.815913 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:29.869166 containerd[2012]: time="2025-01-17T12:02:29.868846524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84677f87bd-7lz8b,Uid:5b81bbcf-5d04-46fd-b144-0bcb7c79e164,Namespace:calico-system,Attempt:0,} returns sandbox id \"51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5\"" Jan 17 12:02:29.874834 containerd[2012]: time="2025-01-17T12:02:29.874238124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 17 12:02:29.912236 containerd[2012]: time="2025-01-17T12:02:29.912059029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4m9hc,Uid:1e033f8f-29ae-4b6f-bde0-0458f4589e6b,Namespace:calico-system,Attempt:0,}" Jan 17 12:02:29.955137 containerd[2012]: time="2025-01-17T12:02:29.954910945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:29.955137 containerd[2012]: time="2025-01-17T12:02:29.955013821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:29.955137 containerd[2012]: time="2025-01-17T12:02:29.955051837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:29.955459 containerd[2012]: time="2025-01-17T12:02:29.955278265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:29.986487 systemd[1]: Started cri-containerd-8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403.scope - libcontainer container 8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403. Jan 17 12:02:30.062805 containerd[2012]: time="2025-01-17T12:02:30.062740173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4m9hc,Uid:1e033f8f-29ae-4b6f-bde0-0458f4589e6b,Namespace:calico-system,Attempt:0,} returns sandbox id \"8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403\"" Jan 17 12:02:31.099313 kubelet[3414]: E0117 12:02:31.099095 3414 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7t526" podUID="29853ccc-52db-41c3-8f83-89ba39b0f309" Jan 17 12:02:31.194680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4223653999.mount: Deactivated successfully. 
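The block of repeated driver-call errors above comes from the kubelet's FlexVolume plugin prober. It executes each driver it finds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/ with the single argument init and expects a JSON status object on stdout; here the nodeagent~uds directory is present but the uds executable is not, so the call fails, stdout stays empty, and unmarshalling reports "unexpected end of JSON input". Purely as an illustration of that call contract (not a fix for this node), a driver that would satisfy the init probe could be as small as the sketch below; the capability flag shown is the commonly documented one for non-attachable drivers and is an assumption, not taken from this log.

    #!/usr/bin/env python3
    # Minimal FlexVolume driver sketch: answers only the "init" probe that the
    # kubelet issues, printing the JSON status it expects on stdout.
    import json
    import sys

    def main() -> int:
        if len(sys.argv) >= 2 and sys.argv[1] == "init":
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        # All other FlexVolume calls are out of scope for this sketch.
        print(json.dumps({"status": "Not supported"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())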
Jan 17 12:02:32.197454 containerd[2012]: time="2025-01-17T12:02:32.197397228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:32.199515 containerd[2012]: time="2025-01-17T12:02:32.199456068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Jan 17 12:02:32.202137 containerd[2012]: time="2025-01-17T12:02:32.200488452Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:32.205194 containerd[2012]: time="2025-01-17T12:02:32.205097616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:32.206769 containerd[2012]: time="2025-01-17T12:02:32.206577168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.332264728s" Jan 17 12:02:32.206769 containerd[2012]: time="2025-01-17T12:02:32.206629224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 17 12:02:32.209419 containerd[2012]: time="2025-01-17T12:02:32.209366388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 12:02:32.243581 containerd[2012]: time="2025-01-17T12:02:32.240864384Z" level=info msg="CreateContainer within sandbox \"51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 12:02:32.278013 containerd[2012]: time="2025-01-17T12:02:32.276904104Z" level=info msg="CreateContainer within sandbox \"51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483\"" Jan 17 12:02:32.278778 containerd[2012]: time="2025-01-17T12:02:32.278724192Z" level=info msg="StartContainer for \"be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483\"" Jan 17 12:02:32.343495 systemd[1]: Started cri-containerd-be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483.scope - libcontainer container be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483. 
Jan 17 12:02:32.444217 containerd[2012]: time="2025-01-17T12:02:32.444039109Z" level=info msg="StartContainer for \"be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483\" returns successfully" Jan 17 12:02:33.097534 kubelet[3414]: E0117 12:02:33.097027 3414 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7t526" podUID="29853ccc-52db-41c3-8f83-89ba39b0f309" Jan 17 12:02:33.221257 systemd[1]: run-containerd-runc-k8s.io-be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483-runc.Go9Tmj.mount: Deactivated successfully. Jan 17 12:02:33.266497 kubelet[3414]: E0117 12:02:33.266446 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.266667 kubelet[3414]: W0117 12:02:33.266487 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.266667 kubelet[3414]: E0117 12:02:33.266634 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.268070 kubelet[3414]: E0117 12:02:33.267839 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.268070 kubelet[3414]: W0117 12:02:33.267871 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.268070 kubelet[3414]: E0117 12:02:33.267901 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.268965 kubelet[3414]: E0117 12:02:33.268646 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.268965 kubelet[3414]: W0117 12:02:33.268673 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.268965 kubelet[3414]: E0117 12:02:33.268699 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.269806 kubelet[3414]: E0117 12:02:33.269535 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.269806 kubelet[3414]: W0117 12:02:33.269581 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.269806 kubelet[3414]: E0117 12:02:33.269609 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:33.271002 kubelet[3414]: E0117 12:02:33.270845 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.271002 kubelet[3414]: W0117 12:02:33.270878 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.271002 kubelet[3414]: E0117 12:02:33.270909 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.271905 kubelet[3414]: E0117 12:02:33.271678 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.271905 kubelet[3414]: W0117 12:02:33.271710 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.271905 kubelet[3414]: E0117 12:02:33.271739 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.272435 kubelet[3414]: E0117 12:02:33.272395 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.272719 kubelet[3414]: W0117 12:02:33.272585 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.272719 kubelet[3414]: E0117 12:02:33.272622 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.273464 kubelet[3414]: E0117 12:02:33.273248 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.273464 kubelet[3414]: W0117 12:02:33.273276 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.273464 kubelet[3414]: E0117 12:02:33.273306 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.274777 kubelet[3414]: E0117 12:02:33.274609 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.274777 kubelet[3414]: W0117 12:02:33.274644 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.274777 kubelet[3414]: E0117 12:02:33.274675 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:33.275951 kubelet[3414]: E0117 12:02:33.275695 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.275951 kubelet[3414]: W0117 12:02:33.275748 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.275951 kubelet[3414]: E0117 12:02:33.275779 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.277023 kubelet[3414]: E0117 12:02:33.276863 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.277023 kubelet[3414]: W0117 12:02:33.276895 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.277023 kubelet[3414]: E0117 12:02:33.276925 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.278341 kubelet[3414]: E0117 12:02:33.277819 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.278341 kubelet[3414]: W0117 12:02:33.277863 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.278341 kubelet[3414]: E0117 12:02:33.277893 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.279563 kubelet[3414]: E0117 12:02:33.279271 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.279563 kubelet[3414]: W0117 12:02:33.279335 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.279563 kubelet[3414]: E0117 12:02:33.279367 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.280555 kubelet[3414]: E0117 12:02:33.280523 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.280775 kubelet[3414]: W0117 12:02:33.280623 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.280775 kubelet[3414]: E0117 12:02:33.280655 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:33.281965 kubelet[3414]: E0117 12:02:33.281611 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.281965 kubelet[3414]: W0117 12:02:33.281642 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.281965 kubelet[3414]: E0117 12:02:33.281800 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.303338 kubelet[3414]: E0117 12:02:33.303286 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.303524 kubelet[3414]: W0117 12:02:33.303352 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.303524 kubelet[3414]: E0117 12:02:33.303388 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.304599 kubelet[3414]: E0117 12:02:33.304562 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.304746 kubelet[3414]: W0117 12:02:33.304676 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.304952 kubelet[3414]: E0117 12:02:33.304812 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.307666 kubelet[3414]: E0117 12:02:33.307468 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.307666 kubelet[3414]: W0117 12:02:33.307505 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.307666 kubelet[3414]: E0117 12:02:33.307552 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.308475 kubelet[3414]: E0117 12:02:33.308266 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.308475 kubelet[3414]: W0117 12:02:33.308293 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.308475 kubelet[3414]: E0117 12:02:33.308354 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:33.309059 kubelet[3414]: E0117 12:02:33.308827 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.309059 kubelet[3414]: W0117 12:02:33.308848 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.309059 kubelet[3414]: E0117 12:02:33.308909 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.309680 kubelet[3414]: E0117 12:02:33.309653 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.309936 kubelet[3414]: W0117 12:02:33.309871 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.310579 kubelet[3414]: E0117 12:02:33.310325 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.311214 kubelet[3414]: E0117 12:02:33.311013 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.311214 kubelet[3414]: W0117 12:02:33.311040 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.311461 kubelet[3414]: E0117 12:02:33.311368 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.312087 kubelet[3414]: E0117 12:02:33.311899 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.312087 kubelet[3414]: W0117 12:02:33.311925 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.312576 kubelet[3414]: E0117 12:02:33.312448 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.312820 kubelet[3414]: E0117 12:02:33.312697 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.312820 kubelet[3414]: W0117 12:02:33.312721 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.312820 kubelet[3414]: E0117 12:02:33.312793 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:33.315507 kubelet[3414]: E0117 12:02:33.314984 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.315507 kubelet[3414]: W0117 12:02:33.315017 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.315507 kubelet[3414]: E0117 12:02:33.315233 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.318779 kubelet[3414]: E0117 12:02:33.317604 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.318779 kubelet[3414]: W0117 12:02:33.317637 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.320717 kubelet[3414]: E0117 12:02:33.319894 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.320717 kubelet[3414]: W0117 12:02:33.319930 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.323669 kubelet[3414]: E0117 12:02:33.321997 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.323669 kubelet[3414]: W0117 12:02:33.322057 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.324190 kubelet[3414]: E0117 12:02:33.322089 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.324190 kubelet[3414]: E0117 12:02:33.323971 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.326129 kubelet[3414]: E0117 12:02:33.325357 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.326129 kubelet[3414]: W0117 12:02:33.325388 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.326129 kubelet[3414]: E0117 12:02:33.325417 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.326129 kubelet[3414]: E0117 12:02:33.325491 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:33.328331 kubelet[3414]: E0117 12:02:33.327396 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.328331 kubelet[3414]: W0117 12:02:33.327456 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.328331 kubelet[3414]: E0117 12:02:33.327615 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.332314 kubelet[3414]: E0117 12:02:33.332233 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.332314 kubelet[3414]: W0117 12:02:33.332268 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.333027 kubelet[3414]: E0117 12:02:33.332526 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.334561 kubelet[3414]: E0117 12:02:33.334501 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.334561 kubelet[3414]: W0117 12:02:33.334551 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.335052 kubelet[3414]: E0117 12:02:33.334605 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:02:33.335269 kubelet[3414]: E0117 12:02:33.335192 3414 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:02:33.335269 kubelet[3414]: W0117 12:02:33.335216 3414 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:02:33.335269 kubelet[3414]: E0117 12:02:33.335265 3414 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:02:33.525858 containerd[2012]: time="2025-01-17T12:02:33.525778023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:33.527590 containerd[2012]: time="2025-01-17T12:02:33.527524719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 17 12:02:33.529507 containerd[2012]: time="2025-01-17T12:02:33.529413039Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:33.535137 containerd[2012]: time="2025-01-17T12:02:33.534047019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:33.535932 containerd[2012]: time="2025-01-17T12:02:33.535868883Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.326234895s" Jan 17 12:02:33.536016 containerd[2012]: time="2025-01-17T12:02:33.535930839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 17 12:02:33.543540 containerd[2012]: time="2025-01-17T12:02:33.543484923Z" level=info msg="CreateContainer within sandbox \"8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:02:33.577349 containerd[2012]: time="2025-01-17T12:02:33.577290879Z" level=info msg="CreateContainer within sandbox \"8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c4e35f3babb8a9dc9a127fb233b7cc6ba52744f601a99945bf89beb504820d70\"" Jan 17 12:02:33.578447 containerd[2012]: time="2025-01-17T12:02:33.578241543Z" level=info msg="StartContainer for \"c4e35f3babb8a9dc9a127fb233b7cc6ba52744f601a99945bf89beb504820d70\"" Jan 17 12:02:33.651799 systemd[1]: Started cri-containerd-c4e35f3babb8a9dc9a127fb233b7cc6ba52744f601a99945bf89beb504820d70.scope - libcontainer container c4e35f3babb8a9dc9a127fb233b7cc6ba52744f601a99945bf89beb504820d70. Jan 17 12:02:33.716799 containerd[2012]: time="2025-01-17T12:02:33.716712496Z" level=info msg="StartContainer for \"c4e35f3babb8a9dc9a127fb233b7cc6ba52744f601a99945bf89beb504820d70\" returns successfully" Jan 17 12:02:33.762092 systemd[1]: cri-containerd-c4e35f3babb8a9dc9a127fb233b7cc6ba52744f601a99945bf89beb504820d70.scope: Deactivated successfully. 
Jan 17 12:02:34.110005 containerd[2012]: time="2025-01-17T12:02:34.109894849Z" level=info msg="shim disconnected" id=c4e35f3babb8a9dc9a127fb233b7cc6ba52744f601a99945bf89beb504820d70 namespace=k8s.io Jan 17 12:02:34.110344 containerd[2012]: time="2025-01-17T12:02:34.109994377Z" level=warning msg="cleaning up after shim disconnected" id=c4e35f3babb8a9dc9a127fb233b7cc6ba52744f601a99945bf89beb504820d70 namespace=k8s.io Jan 17 12:02:34.110344 containerd[2012]: time="2025-01-17T12:02:34.110047201Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:02:34.224086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4e35f3babb8a9dc9a127fb233b7cc6ba52744f601a99945bf89beb504820d70-rootfs.mount: Deactivated successfully. Jan 17 12:02:34.267365 kubelet[3414]: I0117 12:02:34.267052 3414 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:02:34.278871 containerd[2012]: time="2025-01-17T12:02:34.274801982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:02:34.330890 kubelet[3414]: I0117 12:02:34.330793 3414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84677f87bd-7lz8b" podStartSLOduration=2.995204403 podStartE2EDuration="5.330768087s" podCreationTimestamp="2025-01-17 12:02:29 +0000 UTC" firstStartedPulling="2025-01-17 12:02:29.872787912 +0000 UTC m=+17.056236421" lastFinishedPulling="2025-01-17 12:02:32.2083515 +0000 UTC m=+19.391800105" observedRunningTime="2025-01-17 12:02:33.290823433 +0000 UTC m=+20.474271966" watchObservedRunningTime="2025-01-17 12:02:34.330768087 +0000 UTC m=+21.514216608" Jan 17 12:02:35.097865 kubelet[3414]: E0117 12:02:35.096184 3414 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7t526" podUID="29853ccc-52db-41c3-8f83-89ba39b0f309" Jan 17 12:02:37.095844 kubelet[3414]: E0117 12:02:37.095788 3414 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7t526" podUID="29853ccc-52db-41c3-8f83-89ba39b0f309" Jan 17 12:02:38.102377 containerd[2012]: time="2025-01-17T12:02:38.102086537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:38.103647 containerd[2012]: time="2025-01-17T12:02:38.103595897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 17 12:02:38.104468 containerd[2012]: time="2025-01-17T12:02:38.104356925Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:38.109639 containerd[2012]: time="2025-01-17T12:02:38.109516361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:38.111370 containerd[2012]: time="2025-01-17T12:02:38.111165353Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id 
\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.836117179s" Jan 17 12:02:38.111370 containerd[2012]: time="2025-01-17T12:02:38.111222965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 17 12:02:38.116077 containerd[2012]: time="2025-01-17T12:02:38.115734329Z" level=info msg="CreateContainer within sandbox \"8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:02:38.137558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3271269942.mount: Deactivated successfully. Jan 17 12:02:38.141747 containerd[2012]: time="2025-01-17T12:02:38.139751933Z" level=info msg="CreateContainer within sandbox \"8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ca9d89894cda14c48d4e7ce62dd81e05c80b326bac21a8b2154fcad0e92780c8\"" Jan 17 12:02:38.142436 containerd[2012]: time="2025-01-17T12:02:38.142385849Z" level=info msg="StartContainer for \"ca9d89894cda14c48d4e7ce62dd81e05c80b326bac21a8b2154fcad0e92780c8\"" Jan 17 12:02:38.202448 systemd[1]: run-containerd-runc-k8s.io-ca9d89894cda14c48d4e7ce62dd81e05c80b326bac21a8b2154fcad0e92780c8-runc.xJK0yh.mount: Deactivated successfully. Jan 17 12:02:38.213497 systemd[1]: Started cri-containerd-ca9d89894cda14c48d4e7ce62dd81e05c80b326bac21a8b2154fcad0e92780c8.scope - libcontainer container ca9d89894cda14c48d4e7ce62dd81e05c80b326bac21a8b2154fcad0e92780c8. Jan 17 12:02:38.267120 containerd[2012]: time="2025-01-17T12:02:38.267028506Z" level=info msg="StartContainer for \"ca9d89894cda14c48d4e7ce62dd81e05c80b326bac21a8b2154fcad0e92780c8\" returns successfully" Jan 17 12:02:39.097730 kubelet[3414]: E0117 12:02:39.096278 3414 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7t526" podUID="29853ccc-52db-41c3-8f83-89ba39b0f309" Jan 17 12:02:39.215093 containerd[2012]: time="2025-01-17T12:02:39.214978063Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:02:39.222428 systemd[1]: cri-containerd-ca9d89894cda14c48d4e7ce62dd81e05c80b326bac21a8b2154fcad0e92780c8.scope: Deactivated successfully. Jan 17 12:02:39.276561 kubelet[3414]: I0117 12:02:39.272700 3414 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 17 12:02:39.273480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca9d89894cda14c48d4e7ce62dd81e05c80b326bac21a8b2154fcad0e92780c8-rootfs.mount: Deactivated successfully. 
Jan 17 12:02:39.363050 kubelet[3414]: I0117 12:02:39.358300 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf9520af-e465-4fb3-a051-dd1a6f804ee7-config-volume\") pod \"coredns-6f6b679f8f-lfw8l\" (UID: \"bf9520af-e465-4fb3-a051-dd1a6f804ee7\") " pod="kube-system/coredns-6f6b679f8f-lfw8l" Jan 17 12:02:39.363050 kubelet[3414]: I0117 12:02:39.362999 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79zgp\" (UniqueName: \"kubernetes.io/projected/bf9520af-e465-4fb3-a051-dd1a6f804ee7-kube-api-access-79zgp\") pod \"coredns-6f6b679f8f-lfw8l\" (UID: \"bf9520af-e465-4fb3-a051-dd1a6f804ee7\") " pod="kube-system/coredns-6f6b679f8f-lfw8l" Jan 17 12:02:39.363390 kubelet[3414]: W0117 12:02:39.362860 3414 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-18-94" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-94' and this object Jan 17 12:02:39.363390 kubelet[3414]: E0117 12:02:39.363094 3414 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-18-94\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-18-94' and this object" logger="UnhandledError" Jan 17 12:02:39.374155 systemd[1]: Created slice kubepods-burstable-podbf9520af_e465_4fb3_a051_dd1a6f804ee7.slice - libcontainer container kubepods-burstable-podbf9520af_e465_4fb3_a051_dd1a6f804ee7.slice. Jan 17 12:02:39.392081 systemd[1]: Created slice kubepods-besteffort-podc15c08ff_5739_4693_9145_35518ef5e967.slice - libcontainer container kubepods-besteffort-podc15c08ff_5739_4693_9145_35518ef5e967.slice. Jan 17 12:02:39.411878 systemd[1]: Created slice kubepods-besteffort-pod2f8e24b8_bf29_49fa_8b29_d56d50f12a1a.slice - libcontainer container kubepods-besteffort-pod2f8e24b8_bf29_49fa_8b29_d56d50f12a1a.slice. Jan 17 12:02:39.435885 systemd[1]: Created slice kubepods-besteffort-pode1cc8dfd_5e75_4e6f_8077_56dce433bbfe.slice - libcontainer container kubepods-besteffort-pode1cc8dfd_5e75_4e6f_8077_56dce433bbfe.slice. Jan 17 12:02:39.455898 systemd[1]: Created slice kubepods-burstable-pod5fdb21f1_f207_473a_b571_9a91d733fe50.slice - libcontainer container kubepods-burstable-pod5fdb21f1_f207_473a_b571_9a91d733fe50.slice. 
Jan 17 12:02:39.473417 kubelet[3414]: I0117 12:02:39.473362 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2f8e24b8-bf29-49fa-8b29-d56d50f12a1a-calico-apiserver-certs\") pod \"calico-apiserver-db4fb4c5-2g54x\" (UID: \"2f8e24b8-bf29-49fa-8b29-d56d50f12a1a\") " pod="calico-apiserver/calico-apiserver-db4fb4c5-2g54x" Jan 17 12:02:39.473586 kubelet[3414]: I0117 12:02:39.473448 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rplrg\" (UniqueName: \"kubernetes.io/projected/e1cc8dfd-5e75-4e6f-8077-56dce433bbfe-kube-api-access-rplrg\") pod \"calico-apiserver-db4fb4c5-qr4qm\" (UID: \"e1cc8dfd-5e75-4e6f-8077-56dce433bbfe\") " pod="calico-apiserver/calico-apiserver-db4fb4c5-qr4qm" Jan 17 12:02:39.473586 kubelet[3414]: I0117 12:02:39.473501 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fdb21f1-f207-473a-b571-9a91d733fe50-config-volume\") pod \"coredns-6f6b679f8f-sgl59\" (UID: \"5fdb21f1-f207-473a-b571-9a91d733fe50\") " pod="kube-system/coredns-6f6b679f8f-sgl59" Jan 17 12:02:39.473586 kubelet[3414]: I0117 12:02:39.473555 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r2f5\" (UniqueName: \"kubernetes.io/projected/5fdb21f1-f207-473a-b571-9a91d733fe50-kube-api-access-7r2f5\") pod \"coredns-6f6b679f8f-sgl59\" (UID: \"5fdb21f1-f207-473a-b571-9a91d733fe50\") " pod="kube-system/coredns-6f6b679f8f-sgl59" Jan 17 12:02:39.473886 kubelet[3414]: I0117 12:02:39.473600 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c15c08ff-5739-4693-9145-35518ef5e967-tigera-ca-bundle\") pod \"calico-kube-controllers-6c7869774c-nt5td\" (UID: \"c15c08ff-5739-4693-9145-35518ef5e967\") " pod="calico-system/calico-kube-controllers-6c7869774c-nt5td" Jan 17 12:02:39.473886 kubelet[3414]: I0117 12:02:39.473654 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84pnm\" (UniqueName: \"kubernetes.io/projected/c15c08ff-5739-4693-9145-35518ef5e967-kube-api-access-84pnm\") pod \"calico-kube-controllers-6c7869774c-nt5td\" (UID: \"c15c08ff-5739-4693-9145-35518ef5e967\") " pod="calico-system/calico-kube-controllers-6c7869774c-nt5td" Jan 17 12:02:39.473886 kubelet[3414]: I0117 12:02:39.473711 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e1cc8dfd-5e75-4e6f-8077-56dce433bbfe-calico-apiserver-certs\") pod \"calico-apiserver-db4fb4c5-qr4qm\" (UID: \"e1cc8dfd-5e75-4e6f-8077-56dce433bbfe\") " pod="calico-apiserver/calico-apiserver-db4fb4c5-qr4qm" Jan 17 12:02:39.473886 kubelet[3414]: I0117 12:02:39.473763 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nhcq\" (UniqueName: \"kubernetes.io/projected/2f8e24b8-bf29-49fa-8b29-d56d50f12a1a-kube-api-access-8nhcq\") pod \"calico-apiserver-db4fb4c5-2g54x\" (UID: \"2f8e24b8-bf29-49fa-8b29-d56d50f12a1a\") " pod="calico-apiserver/calico-apiserver-db4fb4c5-2g54x" Jan 17 12:02:39.706210 containerd[2012]: time="2025-01-17T12:02:39.705335421Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6c7869774c-nt5td,Uid:c15c08ff-5739-4693-9145-35518ef5e967,Namespace:calico-system,Attempt:0,}" Jan 17 12:02:39.720830 containerd[2012]: time="2025-01-17T12:02:39.720739821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db4fb4c5-2g54x,Uid:2f8e24b8-bf29-49fa-8b29-d56d50f12a1a,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:02:39.746391 containerd[2012]: time="2025-01-17T12:02:39.746295213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db4fb4c5-qr4qm,Uid:e1cc8dfd-5e75-4e6f-8077-56dce433bbfe,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:02:40.139579 containerd[2012]: time="2025-01-17T12:02:40.139440331Z" level=info msg="shim disconnected" id=ca9d89894cda14c48d4e7ce62dd81e05c80b326bac21a8b2154fcad0e92780c8 namespace=k8s.io Jan 17 12:02:40.139579 containerd[2012]: time="2025-01-17T12:02:40.139532923Z" level=warning msg="cleaning up after shim disconnected" id=ca9d89894cda14c48d4e7ce62dd81e05c80b326bac21a8b2154fcad0e92780c8 namespace=k8s.io Jan 17 12:02:40.139579 containerd[2012]: time="2025-01-17T12:02:40.139555687Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:02:40.323579 containerd[2012]: time="2025-01-17T12:02:40.322664684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:02:40.397058 containerd[2012]: time="2025-01-17T12:02:40.396782133Z" level=error msg="Failed to destroy network for sandbox \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:40.398314 containerd[2012]: time="2025-01-17T12:02:40.397646049Z" level=error msg="encountered an error cleaning up failed sandbox \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:40.401692 containerd[2012]: time="2025-01-17T12:02:40.400298337Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c7869774c-nt5td,Uid:c15c08ff-5739-4693-9145-35518ef5e967,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:40.401936 kubelet[3414]: E0117 12:02:40.401803 3414 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:40.401936 kubelet[3414]: E0117 12:02:40.401898 3414 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c7869774c-nt5td" Jan 17 12:02:40.404930 kubelet[3414]: E0117 12:02:40.401932 3414 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c7869774c-nt5td" Jan 17 12:02:40.404930 kubelet[3414]: E0117 12:02:40.402009 3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c7869774c-nt5td_calico-system(c15c08ff-5739-4693-9145-35518ef5e967)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c7869774c-nt5td_calico-system(c15c08ff-5739-4693-9145-35518ef5e967)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c7869774c-nt5td" podUID="c15c08ff-5739-4693-9145-35518ef5e967" Jan 17 12:02:40.406888 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54-shm.mount: Deactivated successfully. Jan 17 12:02:40.426164 containerd[2012]: time="2025-01-17T12:02:40.425593209Z" level=error msg="Failed to destroy network for sandbox \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:40.428034 containerd[2012]: time="2025-01-17T12:02:40.427934553Z" level=error msg="encountered an error cleaning up failed sandbox \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:40.429399 containerd[2012]: time="2025-01-17T12:02:40.428055069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db4fb4c5-2g54x,Uid:2f8e24b8-bf29-49fa-8b29-d56d50f12a1a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:40.430656 kubelet[3414]: E0117 12:02:40.429616 3414 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:40.430656 kubelet[3414]: 
E0117 12:02:40.429707 3414 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db4fb4c5-2g54x" Jan 17 12:02:40.430656 kubelet[3414]: E0117 12:02:40.429739 3414 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db4fb4c5-2g54x" Jan 17 12:02:40.430917 containerd[2012]: time="2025-01-17T12:02:40.430384977Z" level=error msg="Failed to destroy network for sandbox \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:40.430985 kubelet[3414]: E0117 12:02:40.429820 3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-db4fb4c5-2g54x_calico-apiserver(2f8e24b8-bf29-49fa-8b29-d56d50f12a1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-db4fb4c5-2g54x_calico-apiserver(2f8e24b8-bf29-49fa-8b29-d56d50f12a1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-db4fb4c5-2g54x" podUID="2f8e24b8-bf29-49fa-8b29-d56d50f12a1a" Jan 17 12:02:40.432533 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760-shm.mount: Deactivated successfully. 
Jan 17 12:02:40.434557 containerd[2012]: time="2025-01-17T12:02:40.433524897Z" level=error msg="encountered an error cleaning up failed sandbox \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:40.434557 containerd[2012]: time="2025-01-17T12:02:40.433629633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db4fb4c5-qr4qm,Uid:e1cc8dfd-5e75-4e6f-8077-56dce433bbfe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:40.434819 kubelet[3414]: E0117 12:02:40.434307 3414 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:40.434819 kubelet[3414]: E0117 12:02:40.434382 3414 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db4fb4c5-qr4qm" Jan 17 12:02:40.434819 kubelet[3414]: E0117 12:02:40.434420 3414 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db4fb4c5-qr4qm" Jan 17 12:02:40.435014 kubelet[3414]: E0117 12:02:40.434478 3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-db4fb4c5-qr4qm_calico-apiserver(e1cc8dfd-5e75-4e6f-8077-56dce433bbfe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-db4fb4c5-qr4qm_calico-apiserver(e1cc8dfd-5e75-4e6f-8077-56dce433bbfe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-db4fb4c5-qr4qm" podUID="e1cc8dfd-5e75-4e6f-8077-56dce433bbfe" Jan 17 12:02:40.443189 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf-shm.mount: Deactivated successfully. 
Jan 17 12:02:40.481074 kubelet[3414]: E0117 12:02:40.481025 3414 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:02:40.481261 kubelet[3414]: E0117 12:02:40.481156 3414 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bf9520af-e465-4fb3-a051-dd1a6f804ee7-config-volume podName:bf9520af-e465-4fb3-a051-dd1a6f804ee7 nodeName:}" failed. No retries permitted until 2025-01-17 12:02:40.981125501 +0000 UTC m=+28.164574022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bf9520af-e465-4fb3-a051-dd1a6f804ee7-config-volume") pod "coredns-6f6b679f8f-lfw8l" (UID: "bf9520af-e465-4fb3-a051-dd1a6f804ee7") : failed to sync configmap cache: timed out waiting for the condition Jan 17 12:02:40.578925 kubelet[3414]: E0117 12:02:40.578860 3414 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:02:40.579091 kubelet[3414]: E0117 12:02:40.578980 3414 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fdb21f1-f207-473a-b571-9a91d733fe50-config-volume podName:5fdb21f1-f207-473a-b571-9a91d733fe50 nodeName:}" failed. No retries permitted until 2025-01-17 12:02:41.07895249 +0000 UTC m=+28.262401011 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5fdb21f1-f207-473a-b571-9a91d733fe50-config-volume") pod "coredns-6f6b679f8f-sgl59" (UID: "5fdb21f1-f207-473a-b571-9a91d733fe50") : failed to sync configmap cache: timed out waiting for the condition Jan 17 12:02:41.106800 systemd[1]: Created slice kubepods-besteffort-pod29853ccc_52db_41c3_8f83_89ba39b0f309.slice - libcontainer container kubepods-besteffort-pod29853ccc_52db_41c3_8f83_89ba39b0f309.slice. 
Jan 17 12:02:41.111643 containerd[2012]: time="2025-01-17T12:02:41.111572228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7t526,Uid:29853ccc-52db-41c3-8f83-89ba39b0f309,Namespace:calico-system,Attempt:0,}" Jan 17 12:02:41.186567 containerd[2012]: time="2025-01-17T12:02:41.185917413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lfw8l,Uid:bf9520af-e465-4fb3-a051-dd1a6f804ee7,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:41.238517 containerd[2012]: time="2025-01-17T12:02:41.238440069Z" level=error msg="Failed to destroy network for sandbox \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:41.239083 containerd[2012]: time="2025-01-17T12:02:41.239011329Z" level=error msg="encountered an error cleaning up failed sandbox \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:41.239222 containerd[2012]: time="2025-01-17T12:02:41.239160957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7t526,Uid:29853ccc-52db-41c3-8f83-89ba39b0f309,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:41.239527 kubelet[3414]: E0117 12:02:41.239461 3414 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:41.239638 kubelet[3414]: E0117 12:02:41.239549 3414 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7t526" Jan 17 12:02:41.239638 kubelet[3414]: E0117 12:02:41.239590 3414 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7t526" Jan 17 12:02:41.239755 kubelet[3414]: E0117 12:02:41.239657 3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7t526_calico-system(29853ccc-52db-41c3-8f83-89ba39b0f309)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"csi-node-driver-7t526_calico-system(29853ccc-52db-41c3-8f83-89ba39b0f309)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7t526" podUID="29853ccc-52db-41c3-8f83-89ba39b0f309" Jan 17 12:02:41.277479 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048-shm.mount: Deactivated successfully. Jan 17 12:02:41.282534 containerd[2012]: time="2025-01-17T12:02:41.282206973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sgl59,Uid:5fdb21f1-f207-473a-b571-9a91d733fe50,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:41.341893 containerd[2012]: time="2025-01-17T12:02:41.338692245Z" level=error msg="Failed to destroy network for sandbox \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:41.343913 kubelet[3414]: I0117 12:02:41.340406 3414 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Jan 17 12:02:41.351221 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826-shm.mount: Deactivated successfully. Jan 17 12:02:41.353008 containerd[2012]: time="2025-01-17T12:02:41.352138401Z" level=info msg="StopPodSandbox for \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\"" Jan 17 12:02:41.353008 containerd[2012]: time="2025-01-17T12:02:41.352467753Z" level=info msg="Ensure that sandbox 8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760 in task-service has been cleanup successfully" Jan 17 12:02:41.360129 containerd[2012]: time="2025-01-17T12:02:41.359881437Z" level=error msg="encountered an error cleaning up failed sandbox \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:41.360269 containerd[2012]: time="2025-01-17T12:02:41.360077457Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lfw8l,Uid:bf9520af-e465-4fb3-a051-dd1a6f804ee7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:41.362615 kubelet[3414]: E0117 12:02:41.361920 3414 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 17 12:02:41.362615 kubelet[3414]: E0117 12:02:41.362018 3414 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lfw8l" Jan 17 12:02:41.362615 kubelet[3414]: E0117 12:02:41.362086 3414 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lfw8l" Jan 17 12:02:41.362903 kubelet[3414]: E0117 12:02:41.362240 3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-lfw8l_kube-system(bf9520af-e465-4fb3-a051-dd1a6f804ee7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-lfw8l_kube-system(bf9520af-e465-4fb3-a051-dd1a6f804ee7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-lfw8l" podUID="bf9520af-e465-4fb3-a051-dd1a6f804ee7" Jan 17 12:02:41.369434 kubelet[3414]: I0117 12:02:41.369034 3414 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Jan 17 12:02:41.379608 containerd[2012]: time="2025-01-17T12:02:41.379449862Z" level=info msg="StopPodSandbox for \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\"" Jan 17 12:02:41.380310 containerd[2012]: time="2025-01-17T12:02:41.380072950Z" level=info msg="Ensure that sandbox 88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf in task-service has been cleanup successfully" Jan 17 12:02:41.390599 kubelet[3414]: I0117 12:02:41.388923 3414 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Jan 17 12:02:41.394157 containerd[2012]: time="2025-01-17T12:02:41.393712402Z" level=info msg="StopPodSandbox for \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\"" Jan 17 12:02:41.399679 containerd[2012]: time="2025-01-17T12:02:41.398016550Z" level=info msg="Ensure that sandbox ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54 in task-service has been cleanup successfully" Jan 17 12:02:41.405532 kubelet[3414]: I0117 12:02:41.405474 3414 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Jan 17 12:02:41.411403 containerd[2012]: time="2025-01-17T12:02:41.411008074Z" level=info msg="StopPodSandbox for \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\"" Jan 17 12:02:41.418155 containerd[2012]: time="2025-01-17T12:02:41.417999730Z" level=info 
msg="Ensure that sandbox d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048 in task-service has been cleanup successfully" Jan 17 12:02:41.530743 containerd[2012]: time="2025-01-17T12:02:41.530177962Z" level=error msg="StopPodSandbox for \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\" failed" error="failed to destroy network for sandbox \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:41.531640 kubelet[3414]: E0117 12:02:41.531381 3414 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Jan 17 12:02:41.531640 kubelet[3414]: E0117 12:02:41.531461 3414 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760"} Jan 17 12:02:41.531640 kubelet[3414]: E0117 12:02:41.531546 3414 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2f8e24b8-bf29-49fa-8b29-d56d50f12a1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:02:41.531640 kubelet[3414]: E0117 12:02:41.531591 3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2f8e24b8-bf29-49fa-8b29-d56d50f12a1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-db4fb4c5-2g54x" podUID="2f8e24b8-bf29-49fa-8b29-d56d50f12a1a" Jan 17 12:02:41.590272 containerd[2012]: time="2025-01-17T12:02:41.590192627Z" level=error msg="StopPodSandbox for \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\" failed" error="failed to destroy network for sandbox \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:41.591065 kubelet[3414]: E0117 12:02:41.590573 3414 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Jan 17 12:02:41.591065 kubelet[3414]: E0117 12:02:41.590659 3414 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf"} Jan 17 12:02:41.591065 kubelet[3414]: E0117 12:02:41.590715 3414 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e1cc8dfd-5e75-4e6f-8077-56dce433bbfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:02:41.591065 kubelet[3414]: E0117 12:02:41.590753 3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e1cc8dfd-5e75-4e6f-8077-56dce433bbfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-db4fb4c5-qr4qm" podUID="e1cc8dfd-5e75-4e6f-8077-56dce433bbfe" Jan 17 12:02:41.612040 containerd[2012]: time="2025-01-17T12:02:41.611824415Z" level=error msg="StopPodSandbox for \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\" failed" error="failed to destroy network for sandbox \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:41.614640 kubelet[3414]: E0117 12:02:41.614417 3414 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Jan 17 12:02:41.614640 kubelet[3414]: E0117 12:02:41.614493 3414 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54"} Jan 17 12:02:41.614640 kubelet[3414]: E0117 12:02:41.614546 3414 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c15c08ff-5739-4693-9145-35518ef5e967\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:02:41.614640 kubelet[3414]: E0117 12:02:41.614591 3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c15c08ff-5739-4693-9145-35518ef5e967\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c7869774c-nt5td" podUID="c15c08ff-5739-4693-9145-35518ef5e967" Jan 17 12:02:41.642884 containerd[2012]: time="2025-01-17T12:02:41.641311259Z" level=error msg="StopPodSandbox for \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\" failed" error="failed to destroy network for sandbox \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:41.643038 kubelet[3414]: E0117 12:02:41.642908 3414 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Jan 17 12:02:41.643038 kubelet[3414]: E0117 12:02:41.642982 3414 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048"} Jan 17 12:02:41.643234 kubelet[3414]: E0117 12:02:41.643038 3414 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"29853ccc-52db-41c3-8f83-89ba39b0f309\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:02:41.643234 kubelet[3414]: E0117 12:02:41.643076 3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"29853ccc-52db-41c3-8f83-89ba39b0f309\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7t526" podUID="29853ccc-52db-41c3-8f83-89ba39b0f309" Jan 17 12:02:41.671179 containerd[2012]: time="2025-01-17T12:02:41.671058983Z" level=error msg="Failed to destroy network for sandbox \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:41.675197 containerd[2012]: time="2025-01-17T12:02:41.671926091Z" level=error msg="encountered an error cleaning up failed sandbox \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:41.675197 containerd[2012]: time="2025-01-17T12:02:41.672029207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sgl59,Uid:5fdb21f1-f207-473a-b571-9a91d733fe50,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:41.675404 kubelet[3414]: E0117 12:02:41.673186 3414 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:41.675404 kubelet[3414]: E0117 12:02:41.673261 3414 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-sgl59" Jan 17 12:02:41.675404 kubelet[3414]: E0117 12:02:41.673293 3414 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-sgl59" Jan 17 12:02:41.675605 kubelet[3414]: E0117 12:02:41.673366 3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-sgl59_kube-system(5fdb21f1-f207-473a-b571-9a91d733fe50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-sgl59_kube-system(5fdb21f1-f207-473a-b571-9a91d733fe50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-sgl59" podUID="5fdb21f1-f207-473a-b571-9a91d733fe50" Jan 17 12:02:41.679655 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908-shm.mount: Deactivated successfully. 
Jan 17 12:02:42.409852 kubelet[3414]: I0117 12:02:42.409805 3414 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Jan 17 12:02:42.412851 containerd[2012]: time="2025-01-17T12:02:42.412751111Z" level=info msg="StopPodSandbox for \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\"" Jan 17 12:02:42.417545 containerd[2012]: time="2025-01-17T12:02:42.413064131Z" level=info msg="Ensure that sandbox cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908 in task-service has been cleanup successfully" Jan 17 12:02:42.423742 kubelet[3414]: I0117 12:02:42.423648 3414 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Jan 17 12:02:42.426196 containerd[2012]: time="2025-01-17T12:02:42.425125055Z" level=info msg="StopPodSandbox for \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\"" Jan 17 12:02:42.426196 containerd[2012]: time="2025-01-17T12:02:42.425468987Z" level=info msg="Ensure that sandbox 1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826 in task-service has been cleanup successfully" Jan 17 12:02:42.510569 containerd[2012]: time="2025-01-17T12:02:42.510398963Z" level=error msg="StopPodSandbox for \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\" failed" error="failed to destroy network for sandbox \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:42.510852 kubelet[3414]: E0117 12:02:42.510718 3414 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Jan 17 12:02:42.510852 kubelet[3414]: E0117 12:02:42.510791 3414 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908"} Jan 17 12:02:42.510852 kubelet[3414]: E0117 12:02:42.510844 3414 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5fdb21f1-f207-473a-b571-9a91d733fe50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:02:42.511473 kubelet[3414]: E0117 12:02:42.510881 3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5fdb21f1-f207-473a-b571-9a91d733fe50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-sgl59" podUID="5fdb21f1-f207-473a-b571-9a91d733fe50" Jan 17 12:02:42.522947 containerd[2012]: time="2025-01-17T12:02:42.522552755Z" level=error msg="StopPodSandbox for \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\" failed" error="failed to destroy network for sandbox \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:02:42.523419 kubelet[3414]: E0117 12:02:42.523297 3414 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Jan 17 12:02:42.523419 kubelet[3414]: E0117 12:02:42.523379 3414 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826"} Jan 17 12:02:42.523651 kubelet[3414]: E0117 12:02:42.523434 3414 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf9520af-e465-4fb3-a051-dd1a6f804ee7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:02:42.523651 kubelet[3414]: E0117 12:02:42.523478 3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf9520af-e465-4fb3-a051-dd1a6f804ee7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-lfw8l" podUID="bf9520af-e465-4fb3-a051-dd1a6f804ee7" Jan 17 12:02:46.995890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288508519.mount: Deactivated successfully. 
Jan 17 12:02:47.075973 containerd[2012]: time="2025-01-17T12:02:47.075883334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:47.077574 containerd[2012]: time="2025-01-17T12:02:47.077486042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 17 12:02:47.078607 containerd[2012]: time="2025-01-17T12:02:47.078514514Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:47.083290 containerd[2012]: time="2025-01-17T12:02:47.083218706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:47.084757 containerd[2012]: time="2025-01-17T12:02:47.084529886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.761795878s" Jan 17 12:02:47.084757 containerd[2012]: time="2025-01-17T12:02:47.084613526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 17 12:02:47.116032 containerd[2012]: time="2025-01-17T12:02:47.115459346Z" level=info msg="CreateContainer within sandbox \"8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:02:47.142473 containerd[2012]: time="2025-01-17T12:02:47.142333814Z" level=info msg="CreateContainer within sandbox \"8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c908c64cc7cfd2edd6b3dfb9116609b69cebc2303cccac844b965f8543db9b2d\"" Jan 17 12:02:47.145314 containerd[2012]: time="2025-01-17T12:02:47.144271898Z" level=info msg="StartContainer for \"c908c64cc7cfd2edd6b3dfb9116609b69cebc2303cccac844b965f8543db9b2d\"" Jan 17 12:02:47.190417 systemd[1]: Started cri-containerd-c908c64cc7cfd2edd6b3dfb9116609b69cebc2303cccac844b965f8543db9b2d.scope - libcontainer container c908c64cc7cfd2edd6b3dfb9116609b69cebc2303cccac844b965f8543db9b2d. Jan 17 12:02:47.250190 containerd[2012]: time="2025-01-17T12:02:47.248733123Z" level=info msg="StartContainer for \"c908c64cc7cfd2edd6b3dfb9116609b69cebc2303cccac844b965f8543db9b2d\" returns successfully" Jan 17 12:02:47.385335 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:02:47.386337 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
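For a sense of scale, the pull above reports 137671762 bytes read for ghcr.io/flatcar/calico/node:v3.29.1 in 6.761795878s, i.e. roughly 20 MB/s. A trivial Go check of that arithmetic, using only the figures from the log:

    package main

    import "fmt"

    func main() {
    	// Figures copied from the containerd log lines above.
    	const bytesRead = 137671762 // "bytes read=137671762"
    	const seconds = 6.761795878 // "in 6.761795878s"

    	fmt.Printf("~%.1f MB/s (~%.1f MiB/s)\n",
    		float64(bytesRead)/1e6/seconds,
    		float64(bytesRead)/(1<<20)/seconds)
    }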
Jan 17 12:02:47.503788 kubelet[3414]: I0117 12:02:47.502256 3414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4m9hc" podStartSLOduration=1.481911803 podStartE2EDuration="18.502204852s" podCreationTimestamp="2025-01-17 12:02:29 +0000 UTC" firstStartedPulling="2025-01-17 12:02:30.065836797 +0000 UTC m=+17.249285318" lastFinishedPulling="2025-01-17 12:02:47.086129846 +0000 UTC m=+34.269578367" observedRunningTime="2025-01-17 12:02:47.501583324 +0000 UTC m=+34.685031857" watchObservedRunningTime="2025-01-17 12:02:47.502204852 +0000 UTC m=+34.685653493" Jan 17 12:02:51.665954 kubelet[3414]: I0117 12:02:51.664633 3414 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:02:52.773175 kernel: bpftool[4751]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:02:53.100705 containerd[2012]: time="2025-01-17T12:02:53.099636344Z" level=info msg="StopPodSandbox for \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\"" Jan 17 12:02:53.107649 containerd[2012]: time="2025-01-17T12:02:53.102129272Z" level=info msg="StopPodSandbox for \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\"" Jan 17 12:02:53.160032 systemd-networkd[1917]: vxlan.calico: Link UP Jan 17 12:02:53.163009 systemd-networkd[1917]: vxlan.calico: Gained carrier Jan 17 12:02:53.174642 (udev-worker)[4804]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:02:53.271852 (udev-worker)[4820]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:02:53.591390 containerd[2012]: 2025-01-17 12:02:53.426 [INFO][4796] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Jan 17 12:02:53.591390 containerd[2012]: 2025-01-17 12:02:53.426 [INFO][4796] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" iface="eth0" netns="/var/run/netns/cni-860e947f-f8bd-85da-6394-2d7af23f16da" Jan 17 12:02:53.591390 containerd[2012]: 2025-01-17 12:02:53.428 [INFO][4796] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" iface="eth0" netns="/var/run/netns/cni-860e947f-f8bd-85da-6394-2d7af23f16da" Jan 17 12:02:53.591390 containerd[2012]: 2025-01-17 12:02:53.432 [INFO][4796] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" iface="eth0" netns="/var/run/netns/cni-860e947f-f8bd-85da-6394-2d7af23f16da" Jan 17 12:02:53.591390 containerd[2012]: 2025-01-17 12:02:53.433 [INFO][4796] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Jan 17 12:02:53.591390 containerd[2012]: 2025-01-17 12:02:53.433 [INFO][4796] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Jan 17 12:02:53.591390 containerd[2012]: 2025-01-17 12:02:53.534 [INFO][4831] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" HandleID="k8s-pod-network.88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0" Jan 17 12:02:53.591390 containerd[2012]: 2025-01-17 12:02:53.535 [INFO][4831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:53.591390 containerd[2012]: 2025-01-17 12:02:53.535 [INFO][4831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:53.591390 containerd[2012]: 2025-01-17 12:02:53.565 [WARNING][4831] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" HandleID="k8s-pod-network.88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0" Jan 17 12:02:53.591390 containerd[2012]: 2025-01-17 12:02:53.565 [INFO][4831] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" HandleID="k8s-pod-network.88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0" Jan 17 12:02:53.591390 containerd[2012]: 2025-01-17 12:02:53.571 [INFO][4831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:53.591390 containerd[2012]: 2025-01-17 12:02:53.581 [INFO][4796] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Jan 17 12:02:53.596472 containerd[2012]: time="2025-01-17T12:02:53.594630838Z" level=info msg="TearDown network for sandbox \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\" successfully" Jan 17 12:02:53.596472 containerd[2012]: time="2025-01-17T12:02:53.594682102Z" level=info msg="StopPodSandbox for \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\" returns successfully" Jan 17 12:02:53.597218 containerd[2012]: time="2025-01-17T12:02:53.597011434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db4fb4c5-qr4qm,Uid:e1cc8dfd-5e75-4e6f-8077-56dce433bbfe,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:02:53.603925 systemd[1]: run-netns-cni\x2d860e947f\x2df8bd\x2d85da\x2d6394\x2d2d7af23f16da.mount: Deactivated successfully. Jan 17 12:02:53.614912 containerd[2012]: 2025-01-17 12:02:53.418 [INFO][4791] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Jan 17 12:02:53.614912 containerd[2012]: 2025-01-17 12:02:53.420 [INFO][4791] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" iface="eth0" netns="/var/run/netns/cni-43e8a40d-46c3-c3ad-0802-63e61a3adea2" Jan 17 12:02:53.614912 containerd[2012]: 2025-01-17 12:02:53.429 [INFO][4791] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" iface="eth0" netns="/var/run/netns/cni-43e8a40d-46c3-c3ad-0802-63e61a3adea2" Jan 17 12:02:53.614912 containerd[2012]: 2025-01-17 12:02:53.433 [INFO][4791] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" iface="eth0" netns="/var/run/netns/cni-43e8a40d-46c3-c3ad-0802-63e61a3adea2" Jan 17 12:02:53.614912 containerd[2012]: 2025-01-17 12:02:53.433 [INFO][4791] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Jan 17 12:02:53.614912 containerd[2012]: 2025-01-17 12:02:53.433 [INFO][4791] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Jan 17 12:02:53.614912 containerd[2012]: 2025-01-17 12:02:53.551 [INFO][4830] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" HandleID="k8s-pod-network.cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0" Jan 17 12:02:53.614912 containerd[2012]: 2025-01-17 12:02:53.555 [INFO][4830] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:53.614912 containerd[2012]: 2025-01-17 12:02:53.571 [INFO][4830] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:53.614912 containerd[2012]: 2025-01-17 12:02:53.593 [WARNING][4830] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" HandleID="k8s-pod-network.cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0" Jan 17 12:02:53.614912 containerd[2012]: 2025-01-17 12:02:53.593 [INFO][4830] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" HandleID="k8s-pod-network.cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0" Jan 17 12:02:53.614912 containerd[2012]: 2025-01-17 12:02:53.598 [INFO][4830] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:53.614912 containerd[2012]: 2025-01-17 12:02:53.609 [INFO][4791] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Jan 17 12:02:53.616492 containerd[2012]: time="2025-01-17T12:02:53.616341166Z" level=info msg="TearDown network for sandbox \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\" successfully" Jan 17 12:02:53.616492 containerd[2012]: time="2025-01-17T12:02:53.616398418Z" level=info msg="StopPodSandbox for \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\" returns successfully" Jan 17 12:02:53.618144 containerd[2012]: time="2025-01-17T12:02:53.617577070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sgl59,Uid:5fdb21f1-f207-473a-b571-9a91d733fe50,Namespace:kube-system,Attempt:1,}" Jan 17 12:02:53.622993 systemd[1]: run-netns-cni\x2d43e8a40d\x2d46c3\x2dc3ad\x2d0802\x2d63e61a3adea2.mount: Deactivated successfully. Jan 17 12:02:54.008537 (udev-worker)[4827]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:02:54.019906 systemd-networkd[1917]: cali9f0df4398c7: Link UP Jan 17 12:02:54.021151 systemd-networkd[1917]: cali9f0df4398c7: Gained carrier Jan 17 12:02:54.065479 containerd[2012]: 2025-01-17 12:02:53.800 [INFO][4852] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0 coredns-6f6b679f8f- kube-system 5fdb21f1-f207-473a-b571-9a91d733fe50 815 0 2025-01-17 12:02:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-94 coredns-6f6b679f8f-sgl59 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9f0df4398c7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f" Namespace="kube-system" Pod="coredns-6f6b679f8f-sgl59" WorkloadEndpoint="ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-" Jan 17 12:02:54.065479 containerd[2012]: 2025-01-17 12:02:53.800 [INFO][4852] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f" Namespace="kube-system" Pod="coredns-6f6b679f8f-sgl59" WorkloadEndpoint="ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0" Jan 17 12:02:54.065479 containerd[2012]: 2025-01-17 12:02:53.903 [INFO][4883] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f" HandleID="k8s-pod-network.2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0" Jan 17 12:02:54.065479 containerd[2012]: 2025-01-17 12:02:53.929 [INFO][4883] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f" HandleID="k8s-pod-network.2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005c43d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-94", "pod":"coredns-6f6b679f8f-sgl59", "timestamp":"2025-01-17 12:02:53.903273984 +0000 UTC"}, Hostname:"ip-172-31-18-94", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:02:54.065479 containerd[2012]: 2025-01-17 12:02:53.929 [INFO][4883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:54.065479 containerd[2012]: 2025-01-17 12:02:53.929 [INFO][4883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:54.065479 containerd[2012]: 2025-01-17 12:02:53.929 [INFO][4883] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-94' Jan 17 12:02:54.065479 containerd[2012]: 2025-01-17 12:02:53.934 [INFO][4883] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f" host="ip-172-31-18-94" Jan 17 12:02:54.065479 containerd[2012]: 2025-01-17 12:02:53.945 [INFO][4883] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-94" Jan 17 12:02:54.065479 containerd[2012]: 2025-01-17 12:02:53.954 [INFO][4883] ipam/ipam.go 489: Trying affinity for 192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:02:54.065479 containerd[2012]: 2025-01-17 12:02:53.960 [INFO][4883] ipam/ipam.go 155: Attempting to load block cidr=192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:02:54.065479 containerd[2012]: 2025-01-17 12:02:53.963 [INFO][4883] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:02:54.065479 containerd[2012]: 2025-01-17 12:02:53.964 [INFO][4883] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f" host="ip-172-31-18-94" Jan 17 12:02:54.065479 containerd[2012]: 2025-01-17 12:02:53.972 [INFO][4883] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f Jan 17 12:02:54.065479 containerd[2012]: 2025-01-17 12:02:53.985 [INFO][4883] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.29.0/26 handle="k8s-pod-network.2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f" host="ip-172-31-18-94" Jan 17 12:02:54.065479 containerd[2012]: 2025-01-17 12:02:53.997 [INFO][4883] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.29.1/26] block=192.168.29.0/26 handle="k8s-pod-network.2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f" host="ip-172-31-18-94" Jan 17 12:02:54.065479 containerd[2012]: 2025-01-17 12:02:53.998 [INFO][4883] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.29.1/26] handle="k8s-pod-network.2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f" host="ip-172-31-18-94" Jan 17 12:02:54.065479 containerd[2012]: 2025-01-17 12:02:53.999 [INFO][4883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:02:54.065479 containerd[2012]: 2025-01-17 12:02:53.999 [INFO][4883] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.1/26] IPv6=[] ContainerID="2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f" HandleID="k8s-pod-network.2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0" Jan 17 12:02:54.068386 containerd[2012]: 2025-01-17 12:02:54.003 [INFO][4852] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f" Namespace="kube-system" Pod="coredns-6f6b679f8f-sgl59" WorkloadEndpoint="ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5fdb21f1-f207-473a-b571-9a91d733fe50", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"", Pod:"coredns-6f6b679f8f-sgl59", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9f0df4398c7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:54.068386 containerd[2012]: 2025-01-17 12:02:54.003 [INFO][4852] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.29.1/32] ContainerID="2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f" Namespace="kube-system" Pod="coredns-6f6b679f8f-sgl59" WorkloadEndpoint="ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0" Jan 17 12:02:54.068386 containerd[2012]: 2025-01-17 12:02:54.003 [INFO][4852] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f0df4398c7 ContainerID="2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f" Namespace="kube-system" Pod="coredns-6f6b679f8f-sgl59" WorkloadEndpoint="ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0" Jan 17 12:02:54.068386 containerd[2012]: 2025-01-17 12:02:54.016 [INFO][4852] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f" Namespace="kube-system" Pod="coredns-6f6b679f8f-sgl59" WorkloadEndpoint="ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0" Jan 17 
12:02:54.068386 containerd[2012]: 2025-01-17 12:02:54.016 [INFO][4852] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f" Namespace="kube-system" Pod="coredns-6f6b679f8f-sgl59" WorkloadEndpoint="ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5fdb21f1-f207-473a-b571-9a91d733fe50", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f", Pod:"coredns-6f6b679f8f-sgl59", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9f0df4398c7", MAC:"5e:41:9a:81:84:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:54.068386 containerd[2012]: 2025-01-17 12:02:54.056 [INFO][4852] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f" Namespace="kube-system" Pod="coredns-6f6b679f8f-sgl59" WorkloadEndpoint="ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0" Jan 17 12:02:54.099762 containerd[2012]: time="2025-01-17T12:02:54.098380329Z" level=info msg="StopPodSandbox for \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\"" Jan 17 12:02:54.148533 systemd-networkd[1917]: cali79fa5281821: Link UP Jan 17 12:02:54.153088 systemd-networkd[1917]: cali79fa5281821: Gained carrier Jan 17 12:02:54.187891 containerd[2012]: time="2025-01-17T12:02:54.183087789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:54.201779 containerd[2012]: time="2025-01-17T12:02:54.198081105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:54.201779 containerd[2012]: time="2025-01-17T12:02:54.201706125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:54.205299 containerd[2012]: 2025-01-17 12:02:53.773 [INFO][4843] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0 calico-apiserver-db4fb4c5- calico-apiserver e1cc8dfd-5e75-4e6f-8077-56dce433bbfe 816 0 2025-01-17 12:02:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:db4fb4c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-94 calico-apiserver-db4fb4c5-qr4qm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali79fa5281821 [] []}} ContainerID="15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020" Namespace="calico-apiserver" Pod="calico-apiserver-db4fb4c5-qr4qm" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-" Jan 17 12:02:54.205299 containerd[2012]: 2025-01-17 12:02:53.774 [INFO][4843] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020" Namespace="calico-apiserver" Pod="calico-apiserver-db4fb4c5-qr4qm" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0" Jan 17 12:02:54.205299 containerd[2012]: 2025-01-17 12:02:53.925 [INFO][4877] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020" HandleID="k8s-pod-network.15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0" Jan 17 12:02:54.205299 containerd[2012]: 2025-01-17 12:02:53.948 [INFO][4877] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020" HandleID="k8s-pod-network.15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004da70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-94", "pod":"calico-apiserver-db4fb4c5-qr4qm", "timestamp":"2025-01-17 12:02:53.924988368 +0000 UTC"}, Hostname:"ip-172-31-18-94", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:02:54.205299 containerd[2012]: 2025-01-17 12:02:53.948 [INFO][4877] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:54.205299 containerd[2012]: 2025-01-17 12:02:53.999 [INFO][4877] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:02:54.205299 containerd[2012]: 2025-01-17 12:02:53.999 [INFO][4877] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-94' Jan 17 12:02:54.205299 containerd[2012]: 2025-01-17 12:02:54.037 [INFO][4877] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020" host="ip-172-31-18-94" Jan 17 12:02:54.205299 containerd[2012]: 2025-01-17 12:02:54.062 [INFO][4877] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-94" Jan 17 12:02:54.205299 containerd[2012]: 2025-01-17 12:02:54.077 [INFO][4877] ipam/ipam.go 489: Trying affinity for 192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:02:54.205299 containerd[2012]: 2025-01-17 12:02:54.083 [INFO][4877] ipam/ipam.go 155: Attempting to load block cidr=192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:02:54.205299 containerd[2012]: 2025-01-17 12:02:54.090 [INFO][4877] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:02:54.205299 containerd[2012]: 2025-01-17 12:02:54.091 [INFO][4877] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020" host="ip-172-31-18-94" Jan 17 12:02:54.205299 containerd[2012]: 2025-01-17 12:02:54.095 [INFO][4877] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020 Jan 17 12:02:54.205299 containerd[2012]: 2025-01-17 12:02:54.115 [INFO][4877] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.29.0/26 handle="k8s-pod-network.15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020" host="ip-172-31-18-94" Jan 17 12:02:54.205299 containerd[2012]: 2025-01-17 12:02:54.134 [INFO][4877] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.29.2/26] block=192.168.29.0/26 handle="k8s-pod-network.15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020" host="ip-172-31-18-94" Jan 17 12:02:54.205299 containerd[2012]: 2025-01-17 12:02:54.134 [INFO][4877] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.29.2/26] handle="k8s-pod-network.15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020" host="ip-172-31-18-94" Jan 17 12:02:54.205299 containerd[2012]: 2025-01-17 12:02:54.134 [INFO][4877] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
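With calico-node running, the IPAM entries above show both pending pods being assigned from the host-affine block 192.168.29.0/26: 192.168.29.1 for coredns-6f6b679f8f-sgl59 and 192.168.29.2 for calico-apiserver-db4fb4c5-qr4qm. The short Go sketch below, standard library only, restates what a /26 block implies (64 addresses) and that both assignments fall inside it; it illustrates the logged values, not Calico's allocation logic.

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// Values taken from the ipam/ipam.go lines above.
    	block := netip.MustParsePrefix("192.168.29.0/26")
    	assigned := map[string]netip.Addr{
    		"coredns-6f6b679f8f-sgl59":        netip.MustParseAddr("192.168.29.1"),
    		"calico-apiserver-db4fb4c5-qr4qm": netip.MustParseAddr("192.168.29.2"),
    	}

    	// A /26 leaves 32-26 = 6 host bits, so the block spans 64 addresses.
    	fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits()))

    	for pod, addr := range assigned {
    		fmt.Printf("%s -> %s (inside block: %v)\n", pod, addr, block.Contains(addr))
    	}
    }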
Jan 17 12:02:54.205299 containerd[2012]: 2025-01-17 12:02:54.134 [INFO][4877] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.2/26] IPv6=[] ContainerID="15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020" HandleID="k8s-pod-network.15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0" Jan 17 12:02:54.206582 containerd[2012]: 2025-01-17 12:02:54.141 [INFO][4843] cni-plugin/k8s.go 386: Populated endpoint ContainerID="15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020" Namespace="calico-apiserver" Pod="calico-apiserver-db4fb4c5-qr4qm" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0", GenerateName:"calico-apiserver-db4fb4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1cc8dfd-5e75-4e6f-8077-56dce433bbfe", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db4fb4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"", Pod:"calico-apiserver-db4fb4c5-qr4qm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79fa5281821", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:54.206582 containerd[2012]: 2025-01-17 12:02:54.142 [INFO][4843] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.29.2/32] ContainerID="15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020" Namespace="calico-apiserver" Pod="calico-apiserver-db4fb4c5-qr4qm" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0" Jan 17 12:02:54.206582 containerd[2012]: 2025-01-17 12:02:54.142 [INFO][4843] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79fa5281821 ContainerID="15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020" Namespace="calico-apiserver" Pod="calico-apiserver-db4fb4c5-qr4qm" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0" Jan 17 12:02:54.206582 containerd[2012]: 2025-01-17 12:02:54.152 [INFO][4843] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020" Namespace="calico-apiserver" Pod="calico-apiserver-db4fb4c5-qr4qm" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0" Jan 17 12:02:54.206582 containerd[2012]: 2025-01-17 12:02:54.156 [INFO][4843] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020" Namespace="calico-apiserver" Pod="calico-apiserver-db4fb4c5-qr4qm" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0", GenerateName:"calico-apiserver-db4fb4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1cc8dfd-5e75-4e6f-8077-56dce433bbfe", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db4fb4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020", Pod:"calico-apiserver-db4fb4c5-qr4qm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79fa5281821", MAC:"32:94:8f:71:4d:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:54.206582 containerd[2012]: 2025-01-17 12:02:54.199 [INFO][4843] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020" Namespace="calico-apiserver" Pod="calico-apiserver-db4fb4c5-qr4qm" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0" Jan 17 12:02:54.208319 containerd[2012]: time="2025-01-17T12:02:54.206697141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:54.269497 systemd[1]: Started cri-containerd-2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f.scope - libcontainer container 2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f. Jan 17 12:02:54.334719 containerd[2012]: time="2025-01-17T12:02:54.333464002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:54.334719 containerd[2012]: time="2025-01-17T12:02:54.333578854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:54.334719 containerd[2012]: time="2025-01-17T12:02:54.333605206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:54.335085 containerd[2012]: time="2025-01-17T12:02:54.334733134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:54.405446 systemd[1]: Started cri-containerd-15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020.scope - libcontainer container 15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020. Jan 17 12:02:54.425879 containerd[2012]: time="2025-01-17T12:02:54.425663326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sgl59,Uid:5fdb21f1-f207-473a-b571-9a91d733fe50,Namespace:kube-system,Attempt:1,} returns sandbox id \"2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f\"" Jan 17 12:02:54.434738 containerd[2012]: time="2025-01-17T12:02:54.434613922Z" level=info msg="CreateContainer within sandbox \"2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:02:54.482589 containerd[2012]: time="2025-01-17T12:02:54.482510639Z" level=info msg="CreateContainer within sandbox \"2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"199536c59ab2595f97aa37191d13e33ca0cdce68f210ed5173dc324a998d3233\"" Jan 17 12:02:54.488652 containerd[2012]: time="2025-01-17T12:02:54.487051127Z" level=info msg="StartContainer for \"199536c59ab2595f97aa37191d13e33ca0cdce68f210ed5173dc324a998d3233\"" Jan 17 12:02:54.529325 containerd[2012]: time="2025-01-17T12:02:54.529178603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db4fb4c5-qr4qm,Uid:e1cc8dfd-5e75-4e6f-8077-56dce433bbfe,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020\"" Jan 17 12:02:54.534744 containerd[2012]: time="2025-01-17T12:02:54.534697319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:02:54.566236 containerd[2012]: 2025-01-17 12:02:54.389 [INFO][4944] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Jan 17 12:02:54.566236 containerd[2012]: 2025-01-17 12:02:54.390 [INFO][4944] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" iface="eth0" netns="/var/run/netns/cni-cd7991c2-4144-b2b3-f33e-d652895331ee" Jan 17 12:02:54.566236 containerd[2012]: 2025-01-17 12:02:54.391 [INFO][4944] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" iface="eth0" netns="/var/run/netns/cni-cd7991c2-4144-b2b3-f33e-d652895331ee" Jan 17 12:02:54.566236 containerd[2012]: 2025-01-17 12:02:54.399 [INFO][4944] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" iface="eth0" netns="/var/run/netns/cni-cd7991c2-4144-b2b3-f33e-d652895331ee" Jan 17 12:02:54.566236 containerd[2012]: 2025-01-17 12:02:54.400 [INFO][4944] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Jan 17 12:02:54.566236 containerd[2012]: 2025-01-17 12:02:54.400 [INFO][4944] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Jan 17 12:02:54.566236 containerd[2012]: 2025-01-17 12:02:54.539 [INFO][5024] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" HandleID="k8s-pod-network.d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Workload="ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0" Jan 17 12:02:54.566236 containerd[2012]: 2025-01-17 12:02:54.540 [INFO][5024] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:54.566236 containerd[2012]: 2025-01-17 12:02:54.540 [INFO][5024] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:54.566236 containerd[2012]: 2025-01-17 12:02:54.554 [WARNING][5024] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" HandleID="k8s-pod-network.d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Workload="ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0" Jan 17 12:02:54.566236 containerd[2012]: 2025-01-17 12:02:54.554 [INFO][5024] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" HandleID="k8s-pod-network.d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Workload="ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0" Jan 17 12:02:54.566236 containerd[2012]: 2025-01-17 12:02:54.557 [INFO][5024] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:54.566236 containerd[2012]: 2025-01-17 12:02:54.561 [INFO][4944] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Jan 17 12:02:54.567053 containerd[2012]: time="2025-01-17T12:02:54.566611691Z" level=info msg="TearDown network for sandbox \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\" successfully" Jan 17 12:02:54.567053 containerd[2012]: time="2025-01-17T12:02:54.566650259Z" level=info msg="StopPodSandbox for \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\" returns successfully" Jan 17 12:02:54.568845 containerd[2012]: time="2025-01-17T12:02:54.568655603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7t526,Uid:29853ccc-52db-41c3-8f83-89ba39b0f309,Namespace:calico-system,Attempt:1,}" Jan 17 12:02:54.586478 systemd[1]: Started cri-containerd-199536c59ab2595f97aa37191d13e33ca0cdce68f210ed5173dc324a998d3233.scope - libcontainer container 199536c59ab2595f97aa37191d13e33ca0cdce68f210ed5173dc324a998d3233. Jan 17 12:02:54.605202 systemd[1]: run-netns-cni\x2dcd7991c2\x2d4144\x2db2b3\x2df33e\x2dd652895331ee.mount: Deactivated successfully. 
Jan 17 12:02:54.682484 systemd-networkd[1917]: vxlan.calico: Gained IPv6LL Jan 17 12:02:54.709524 containerd[2012]: time="2025-01-17T12:02:54.708538632Z" level=info msg="StartContainer for \"199536c59ab2595f97aa37191d13e33ca0cdce68f210ed5173dc324a998d3233\" returns successfully" Jan 17 12:02:54.897056 systemd-networkd[1917]: caliea7b69308ac: Link UP Jan 17 12:02:54.903095 systemd-networkd[1917]: caliea7b69308ac: Gained carrier Jan 17 12:02:54.934586 containerd[2012]: 2025-01-17 12:02:54.739 [INFO][5072] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0 csi-node-driver- calico-system 29853ccc-52db-41c3-8f83-89ba39b0f309 827 0 2025-01-17 12:02:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-94 csi-node-driver-7t526 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliea7b69308ac [] []}} ContainerID="41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e" Namespace="calico-system" Pod="csi-node-driver-7t526" WorkloadEndpoint="ip--172--31--18--94-k8s-csi--node--driver--7t526-" Jan 17 12:02:54.934586 containerd[2012]: 2025-01-17 12:02:54.740 [INFO][5072] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e" Namespace="calico-system" Pod="csi-node-driver-7t526" WorkloadEndpoint="ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0" Jan 17 12:02:54.934586 containerd[2012]: 2025-01-17 12:02:54.819 [INFO][5093] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e" HandleID="k8s-pod-network.41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e" Workload="ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0" Jan 17 12:02:54.934586 containerd[2012]: 2025-01-17 12:02:54.839 [INFO][5093] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e" HandleID="k8s-pod-network.41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e" Workload="ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d260), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-94", "pod":"csi-node-driver-7t526", "timestamp":"2025-01-17 12:02:54.819912 +0000 UTC"}, Hostname:"ip-172-31-18-94", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:02:54.934586 containerd[2012]: 2025-01-17 12:02:54.839 [INFO][5093] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:54.934586 containerd[2012]: 2025-01-17 12:02:54.840 [INFO][5093] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:02:54.934586 containerd[2012]: 2025-01-17 12:02:54.840 [INFO][5093] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-94' Jan 17 12:02:54.934586 containerd[2012]: 2025-01-17 12:02:54.843 [INFO][5093] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e" host="ip-172-31-18-94" Jan 17 12:02:54.934586 containerd[2012]: 2025-01-17 12:02:54.850 [INFO][5093] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-94" Jan 17 12:02:54.934586 containerd[2012]: 2025-01-17 12:02:54.857 [INFO][5093] ipam/ipam.go 489: Trying affinity for 192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:02:54.934586 containerd[2012]: 2025-01-17 12:02:54.860 [INFO][5093] ipam/ipam.go 155: Attempting to load block cidr=192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:02:54.934586 containerd[2012]: 2025-01-17 12:02:54.864 [INFO][5093] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:02:54.934586 containerd[2012]: 2025-01-17 12:02:54.864 [INFO][5093] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e" host="ip-172-31-18-94" Jan 17 12:02:54.934586 containerd[2012]: 2025-01-17 12:02:54.866 [INFO][5093] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e Jan 17 12:02:54.934586 containerd[2012]: 2025-01-17 12:02:54.873 [INFO][5093] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.29.0/26 handle="k8s-pod-network.41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e" host="ip-172-31-18-94" Jan 17 12:02:54.934586 containerd[2012]: 2025-01-17 12:02:54.887 [INFO][5093] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.29.3/26] block=192.168.29.0/26 handle="k8s-pod-network.41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e" host="ip-172-31-18-94" Jan 17 12:02:54.934586 containerd[2012]: 2025-01-17 12:02:54.887 [INFO][5093] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.29.3/26] handle="k8s-pod-network.41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e" host="ip-172-31-18-94" Jan 17 12:02:54.934586 containerd[2012]: 2025-01-17 12:02:54.887 [INFO][5093] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:02:54.934586 containerd[2012]: 2025-01-17 12:02:54.887 [INFO][5093] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.3/26] IPv6=[] ContainerID="41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e" HandleID="k8s-pod-network.41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e" Workload="ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0" Jan 17 12:02:54.937602 containerd[2012]: 2025-01-17 12:02:54.892 [INFO][5072] cni-plugin/k8s.go 386: Populated endpoint ContainerID="41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e" Namespace="calico-system" Pod="csi-node-driver-7t526" WorkloadEndpoint="ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"29853ccc-52db-41c3-8f83-89ba39b0f309", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"", Pod:"csi-node-driver-7t526", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliea7b69308ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:54.937602 containerd[2012]: 2025-01-17 12:02:54.892 [INFO][5072] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.29.3/32] ContainerID="41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e" Namespace="calico-system" Pod="csi-node-driver-7t526" WorkloadEndpoint="ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0" Jan 17 12:02:54.937602 containerd[2012]: 2025-01-17 12:02:54.892 [INFO][5072] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea7b69308ac ContainerID="41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e" Namespace="calico-system" Pod="csi-node-driver-7t526" WorkloadEndpoint="ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0" Jan 17 12:02:54.937602 containerd[2012]: 2025-01-17 12:02:54.905 [INFO][5072] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e" Namespace="calico-system" Pod="csi-node-driver-7t526" WorkloadEndpoint="ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0" Jan 17 12:02:54.937602 containerd[2012]: 2025-01-17 12:02:54.907 [INFO][5072] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e" Namespace="calico-system" Pod="csi-node-driver-7t526" 
WorkloadEndpoint="ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"29853ccc-52db-41c3-8f83-89ba39b0f309", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e", Pod:"csi-node-driver-7t526", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliea7b69308ac", MAC:"06:e6:51:6e:9f:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:54.937602 containerd[2012]: 2025-01-17 12:02:54.929 [INFO][5072] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e" Namespace="calico-system" Pod="csi-node-driver-7t526" WorkloadEndpoint="ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0" Jan 17 12:02:54.977618 containerd[2012]: time="2025-01-17T12:02:54.976948225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:54.977618 containerd[2012]: time="2025-01-17T12:02:54.977064361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:54.977618 containerd[2012]: time="2025-01-17T12:02:54.977156833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:54.977618 containerd[2012]: time="2025-01-17T12:02:54.977332813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:55.027428 systemd[1]: Started cri-containerd-41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e.scope - libcontainer container 41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e. 
Jan 17 12:02:55.072054 containerd[2012]: time="2025-01-17T12:02:55.071986882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7t526,Uid:29853ccc-52db-41c3-8f83-89ba39b0f309,Namespace:calico-system,Attempt:1,} returns sandbox id \"41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e\"" Jan 17 12:02:55.100867 containerd[2012]: time="2025-01-17T12:02:55.099677398Z" level=info msg="StopPodSandbox for \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\"" Jan 17 12:02:55.257989 containerd[2012]: 2025-01-17 12:02:55.188 [INFO][5169] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Jan 17 12:02:55.257989 containerd[2012]: 2025-01-17 12:02:55.189 [INFO][5169] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" iface="eth0" netns="/var/run/netns/cni-d0dd72f8-b031-b1bd-b02d-c60c5f764e65" Jan 17 12:02:55.257989 containerd[2012]: 2025-01-17 12:02:55.191 [INFO][5169] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" iface="eth0" netns="/var/run/netns/cni-d0dd72f8-b031-b1bd-b02d-c60c5f764e65" Jan 17 12:02:55.257989 containerd[2012]: 2025-01-17 12:02:55.192 [INFO][5169] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" iface="eth0" netns="/var/run/netns/cni-d0dd72f8-b031-b1bd-b02d-c60c5f764e65" Jan 17 12:02:55.257989 containerd[2012]: 2025-01-17 12:02:55.192 [INFO][5169] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Jan 17 12:02:55.257989 containerd[2012]: 2025-01-17 12:02:55.192 [INFO][5169] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Jan 17 12:02:55.257989 containerd[2012]: 2025-01-17 12:02:55.237 [INFO][5175] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" HandleID="k8s-pod-network.8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0" Jan 17 12:02:55.257989 containerd[2012]: 2025-01-17 12:02:55.237 [INFO][5175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:55.257989 containerd[2012]: 2025-01-17 12:02:55.238 [INFO][5175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:55.257989 containerd[2012]: 2025-01-17 12:02:55.249 [WARNING][5175] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" HandleID="k8s-pod-network.8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0" Jan 17 12:02:55.257989 containerd[2012]: 2025-01-17 12:02:55.249 [INFO][5175] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" HandleID="k8s-pod-network.8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0" Jan 17 12:02:55.257989 containerd[2012]: 2025-01-17 12:02:55.252 [INFO][5175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:55.257989 containerd[2012]: 2025-01-17 12:02:55.255 [INFO][5169] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Jan 17 12:02:55.260373 containerd[2012]: time="2025-01-17T12:02:55.260306267Z" level=info msg="TearDown network for sandbox \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\" successfully" Jan 17 12:02:55.261041 containerd[2012]: time="2025-01-17T12:02:55.260372099Z" level=info msg="StopPodSandbox for \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\" returns successfully" Jan 17 12:02:55.261699 containerd[2012]: time="2025-01-17T12:02:55.261621899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db4fb4c5-2g54x,Uid:2f8e24b8-bf29-49fa-8b29-d56d50f12a1a,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:02:55.487942 systemd-networkd[1917]: cali15ab67e928a: Link UP Jan 17 12:02:55.492567 systemd-networkd[1917]: cali15ab67e928a: Gained carrier Jan 17 12:02:55.547520 containerd[2012]: 2025-01-17 12:02:55.348 [INFO][5182] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0 calico-apiserver-db4fb4c5- calico-apiserver 2f8e24b8-bf29-49fa-8b29-d56d50f12a1a 838 0 2025-01-17 12:02:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:db4fb4c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-94 calico-apiserver-db4fb4c5-2g54x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali15ab67e928a [] []}} ContainerID="ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d" Namespace="calico-apiserver" Pod="calico-apiserver-db4fb4c5-2g54x" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-" Jan 17 12:02:55.547520 containerd[2012]: 2025-01-17 12:02:55.348 [INFO][5182] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d" Namespace="calico-apiserver" Pod="calico-apiserver-db4fb4c5-2g54x" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0" Jan 17 12:02:55.547520 containerd[2012]: 2025-01-17 12:02:55.403 [INFO][5192] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d" HandleID="k8s-pod-network.ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d" 
Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0" Jan 17 12:02:55.547520 containerd[2012]: 2025-01-17 12:02:55.427 [INFO][5192] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d" HandleID="k8s-pod-network.ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000332450), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-94", "pod":"calico-apiserver-db4fb4c5-2g54x", "timestamp":"2025-01-17 12:02:55.403028879 +0000 UTC"}, Hostname:"ip-172-31-18-94", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:02:55.547520 containerd[2012]: 2025-01-17 12:02:55.427 [INFO][5192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:55.547520 containerd[2012]: 2025-01-17 12:02:55.427 [INFO][5192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:55.547520 containerd[2012]: 2025-01-17 12:02:55.427 [INFO][5192] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-94' Jan 17 12:02:55.547520 containerd[2012]: 2025-01-17 12:02:55.430 [INFO][5192] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d" host="ip-172-31-18-94" Jan 17 12:02:55.547520 containerd[2012]: 2025-01-17 12:02:55.438 [INFO][5192] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-94" Jan 17 12:02:55.547520 containerd[2012]: 2025-01-17 12:02:55.446 [INFO][5192] ipam/ipam.go 489: Trying affinity for 192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:02:55.547520 containerd[2012]: 2025-01-17 12:02:55.450 [INFO][5192] ipam/ipam.go 155: Attempting to load block cidr=192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:02:55.547520 containerd[2012]: 2025-01-17 12:02:55.454 [INFO][5192] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:02:55.547520 containerd[2012]: 2025-01-17 12:02:55.454 [INFO][5192] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d" host="ip-172-31-18-94" Jan 17 12:02:55.547520 containerd[2012]: 2025-01-17 12:02:55.457 [INFO][5192] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d Jan 17 12:02:55.547520 containerd[2012]: 2025-01-17 12:02:55.463 [INFO][5192] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.29.0/26 handle="k8s-pod-network.ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d" host="ip-172-31-18-94" Jan 17 12:02:55.547520 containerd[2012]: 2025-01-17 12:02:55.474 [INFO][5192] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.29.4/26] block=192.168.29.0/26 handle="k8s-pod-network.ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d" host="ip-172-31-18-94" Jan 17 12:02:55.547520 containerd[2012]: 2025-01-17 12:02:55.474 [INFO][5192] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.29.4/26] handle="k8s-pod-network.ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d" host="ip-172-31-18-94" 
Jan 17 12:02:55.547520 containerd[2012]: 2025-01-17 12:02:55.474 [INFO][5192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:55.547520 containerd[2012]: 2025-01-17 12:02:55.474 [INFO][5192] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.4/26] IPv6=[] ContainerID="ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d" HandleID="k8s-pod-network.ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0" Jan 17 12:02:55.552422 containerd[2012]: 2025-01-17 12:02:55.478 [INFO][5182] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d" Namespace="calico-apiserver" Pod="calico-apiserver-db4fb4c5-2g54x" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0", GenerateName:"calico-apiserver-db4fb4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f8e24b8-bf29-49fa-8b29-d56d50f12a1a", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db4fb4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"", Pod:"calico-apiserver-db4fb4c5-2g54x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali15ab67e928a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:55.552422 containerd[2012]: 2025-01-17 12:02:55.479 [INFO][5182] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.29.4/32] ContainerID="ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d" Namespace="calico-apiserver" Pod="calico-apiserver-db4fb4c5-2g54x" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0" Jan 17 12:02:55.552422 containerd[2012]: 2025-01-17 12:02:55.479 [INFO][5182] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15ab67e928a ContainerID="ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d" Namespace="calico-apiserver" Pod="calico-apiserver-db4fb4c5-2g54x" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0" Jan 17 12:02:55.552422 containerd[2012]: 2025-01-17 12:02:55.493 [INFO][5182] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d" Namespace="calico-apiserver" Pod="calico-apiserver-db4fb4c5-2g54x" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0" Jan 17 12:02:55.552422 containerd[2012]: 2025-01-17 12:02:55.498 
[INFO][5182] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d" Namespace="calico-apiserver" Pod="calico-apiserver-db4fb4c5-2g54x" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0", GenerateName:"calico-apiserver-db4fb4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f8e24b8-bf29-49fa-8b29-d56d50f12a1a", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db4fb4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d", Pod:"calico-apiserver-db4fb4c5-2g54x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali15ab67e928a", MAC:"72:3e:8f:8e:92:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:55.552422 containerd[2012]: 2025-01-17 12:02:55.536 [INFO][5182] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d" Namespace="calico-apiserver" Pod="calico-apiserver-db4fb4c5-2g54x" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0" Jan 17 12:02:55.558533 kubelet[3414]: I0117 12:02:55.557204 3414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-sgl59" podStartSLOduration=38.556869036 podStartE2EDuration="38.556869036s" podCreationTimestamp="2025-01-17 12:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:55.550717344 +0000 UTC m=+42.734165889" watchObservedRunningTime="2025-01-17 12:02:55.556869036 +0000 UTC m=+42.740317581" Jan 17 12:02:55.609763 systemd[1]: run-netns-cni\x2dd0dd72f8\x2db031\x2db1bd\x2db02d\x2dc60c5f764e65.mount: Deactivated successfully. Jan 17 12:02:55.652525 containerd[2012]: time="2025-01-17T12:02:55.652287948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:55.652525 containerd[2012]: time="2025-01-17T12:02:55.652446984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:55.652525 containerd[2012]: time="2025-01-17T12:02:55.652508532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:55.653200 containerd[2012]: time="2025-01-17T12:02:55.652716144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:55.723660 systemd[1]: Started cri-containerd-ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d.scope - libcontainer container ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d. Jan 17 12:02:55.801065 containerd[2012]: time="2025-01-17T12:02:55.800882029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db4fb4c5-2g54x,Uid:2f8e24b8-bf29-49fa-8b29-d56d50f12a1a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d\"" Jan 17 12:02:55.961451 systemd-networkd[1917]: cali9f0df4398c7: Gained IPv6LL Jan 17 12:02:56.025577 systemd-networkd[1917]: cali79fa5281821: Gained IPv6LL Jan 17 12:02:56.096688 containerd[2012]: time="2025-01-17T12:02:56.095977115Z" level=info msg="StopPodSandbox for \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\"" Jan 17 12:02:56.218471 systemd-networkd[1917]: caliea7b69308ac: Gained IPv6LL Jan 17 12:02:56.248790 containerd[2012]: 2025-01-17 12:02:56.179 [INFO][5271] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Jan 17 12:02:56.248790 containerd[2012]: 2025-01-17 12:02:56.179 [INFO][5271] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" iface="eth0" netns="/var/run/netns/cni-2542a59b-ca82-fb67-de72-89d6784bf2cb" Jan 17 12:02:56.248790 containerd[2012]: 2025-01-17 12:02:56.181 [INFO][5271] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" iface="eth0" netns="/var/run/netns/cni-2542a59b-ca82-fb67-de72-89d6784bf2cb" Jan 17 12:02:56.248790 containerd[2012]: 2025-01-17 12:02:56.181 [INFO][5271] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" iface="eth0" netns="/var/run/netns/cni-2542a59b-ca82-fb67-de72-89d6784bf2cb" Jan 17 12:02:56.248790 containerd[2012]: 2025-01-17 12:02:56.181 [INFO][5271] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Jan 17 12:02:56.248790 containerd[2012]: 2025-01-17 12:02:56.181 [INFO][5271] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Jan 17 12:02:56.248790 containerd[2012]: 2025-01-17 12:02:56.227 [INFO][5277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" HandleID="k8s-pod-network.ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:02:56.248790 containerd[2012]: 2025-01-17 12:02:56.227 [INFO][5277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:56.248790 containerd[2012]: 2025-01-17 12:02:56.227 [INFO][5277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:02:56.248790 containerd[2012]: 2025-01-17 12:02:56.240 [WARNING][5277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" HandleID="k8s-pod-network.ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:02:56.248790 containerd[2012]: 2025-01-17 12:02:56.240 [INFO][5277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" HandleID="k8s-pod-network.ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:02:56.248790 containerd[2012]: 2025-01-17 12:02:56.242 [INFO][5277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:56.248790 containerd[2012]: 2025-01-17 12:02:56.245 [INFO][5271] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Jan 17 12:02:56.251259 containerd[2012]: time="2025-01-17T12:02:56.249176891Z" level=info msg="TearDown network for sandbox \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\" successfully" Jan 17 12:02:56.251404 containerd[2012]: time="2025-01-17T12:02:56.251242259Z" level=info msg="StopPodSandbox for \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\" returns successfully" Jan 17 12:02:56.253150 containerd[2012]: time="2025-01-17T12:02:56.252713543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c7869774c-nt5td,Uid:c15c08ff-5739-4693-9145-35518ef5e967,Namespace:calico-system,Attempt:1,}" Jan 17 12:02:56.257244 systemd[1]: run-netns-cni\x2d2542a59b\x2dca82\x2dfb67\x2dde72\x2d89d6784bf2cb.mount: Deactivated successfully. 
Jan 17 12:02:56.521769 systemd-networkd[1917]: cali1ee17564c7f: Link UP Jan 17 12:02:56.524577 systemd-networkd[1917]: cali1ee17564c7f: Gained carrier Jan 17 12:02:56.568047 containerd[2012]: 2025-01-17 12:02:56.373 [INFO][5284] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0 calico-kube-controllers-6c7869774c- calico-system c15c08ff-5739-4693-9145-35518ef5e967 853 0 2025-01-17 12:02:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c7869774c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-94 calico-kube-controllers-6c7869774c-nt5td eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1ee17564c7f [] []}} ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Namespace="calico-system" Pod="calico-kube-controllers-6c7869774c-nt5td" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-" Jan 17 12:02:56.568047 containerd[2012]: 2025-01-17 12:02:56.374 [INFO][5284] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Namespace="calico-system" Pod="calico-kube-controllers-6c7869774c-nt5td" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:02:56.568047 containerd[2012]: 2025-01-17 12:02:56.436 [INFO][5295] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" HandleID="k8s-pod-network.64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:02:56.568047 containerd[2012]: 2025-01-17 12:02:56.454 [INFO][5295] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" HandleID="k8s-pod-network.64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000406640), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-94", "pod":"calico-kube-controllers-6c7869774c-nt5td", "timestamp":"2025-01-17 12:02:56.436445796 +0000 UTC"}, Hostname:"ip-172-31-18-94", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:02:56.568047 containerd[2012]: 2025-01-17 12:02:56.455 [INFO][5295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:56.568047 containerd[2012]: 2025-01-17 12:02:56.455 [INFO][5295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:02:56.568047 containerd[2012]: 2025-01-17 12:02:56.455 [INFO][5295] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-94' Jan 17 12:02:56.568047 containerd[2012]: 2025-01-17 12:02:56.458 [INFO][5295] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" host="ip-172-31-18-94" Jan 17 12:02:56.568047 containerd[2012]: 2025-01-17 12:02:56.465 [INFO][5295] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-94" Jan 17 12:02:56.568047 containerd[2012]: 2025-01-17 12:02:56.472 [INFO][5295] ipam/ipam.go 489: Trying affinity for 192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:02:56.568047 containerd[2012]: 2025-01-17 12:02:56.475 [INFO][5295] ipam/ipam.go 155: Attempting to load block cidr=192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:02:56.568047 containerd[2012]: 2025-01-17 12:02:56.479 [INFO][5295] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:02:56.568047 containerd[2012]: 2025-01-17 12:02:56.480 [INFO][5295] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" host="ip-172-31-18-94" Jan 17 12:02:56.568047 containerd[2012]: 2025-01-17 12:02:56.482 [INFO][5295] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872 Jan 17 12:02:56.568047 containerd[2012]: 2025-01-17 12:02:56.490 [INFO][5295] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.29.0/26 handle="k8s-pod-network.64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" host="ip-172-31-18-94" Jan 17 12:02:56.568047 containerd[2012]: 2025-01-17 12:02:56.504 [INFO][5295] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.29.5/26] block=192.168.29.0/26 handle="k8s-pod-network.64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" host="ip-172-31-18-94" Jan 17 12:02:56.568047 containerd[2012]: 2025-01-17 12:02:56.505 [INFO][5295] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.29.5/26] handle="k8s-pod-network.64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" host="ip-172-31-18-94" Jan 17 12:02:56.568047 containerd[2012]: 2025-01-17 12:02:56.505 [INFO][5295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:02:56.568047 containerd[2012]: 2025-01-17 12:02:56.505 [INFO][5295] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.5/26] IPv6=[] ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" HandleID="k8s-pod-network.64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:02:56.570993 containerd[2012]: 2025-01-17 12:02:56.510 [INFO][5284] cni-plugin/k8s.go 386: Populated endpoint ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Namespace="calico-system" Pod="calico-kube-controllers-6c7869774c-nt5td" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0", GenerateName:"calico-kube-controllers-6c7869774c-", Namespace:"calico-system", SelfLink:"", UID:"c15c08ff-5739-4693-9145-35518ef5e967", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c7869774c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"", Pod:"calico-kube-controllers-6c7869774c-nt5td", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1ee17564c7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:56.570993 containerd[2012]: 2025-01-17 12:02:56.510 [INFO][5284] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.29.5/32] ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Namespace="calico-system" Pod="calico-kube-controllers-6c7869774c-nt5td" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:02:56.570993 containerd[2012]: 2025-01-17 12:02:56.510 [INFO][5284] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ee17564c7f ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Namespace="calico-system" Pod="calico-kube-controllers-6c7869774c-nt5td" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:02:56.570993 containerd[2012]: 2025-01-17 12:02:56.527 [INFO][5284] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Namespace="calico-system" Pod="calico-kube-controllers-6c7869774c-nt5td" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:02:56.570993 containerd[2012]: 2025-01-17 12:02:56.530 [INFO][5284] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Namespace="calico-system" Pod="calico-kube-controllers-6c7869774c-nt5td" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0", GenerateName:"calico-kube-controllers-6c7869774c-", Namespace:"calico-system", SelfLink:"", UID:"c15c08ff-5739-4693-9145-35518ef5e967", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c7869774c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872", Pod:"calico-kube-controllers-6c7869774c-nt5td", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1ee17564c7f", MAC:"06:22:3b:ca:c9:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:56.570993 containerd[2012]: 2025-01-17 12:02:56.556 [INFO][5284] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Namespace="calico-system" Pod="calico-kube-controllers-6c7869774c-nt5td" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:02:56.650284 containerd[2012]: time="2025-01-17T12:02:56.646154641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:56.650284 containerd[2012]: time="2025-01-17T12:02:56.646355605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:56.650284 containerd[2012]: time="2025-01-17T12:02:56.646438453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:56.650284 containerd[2012]: time="2025-01-17T12:02:56.647033125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:56.707721 systemd[1]: Started cri-containerd-64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872.scope - libcontainer container 64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872. 
Jan 17 12:02:56.774291 containerd[2012]: time="2025-01-17T12:02:56.774081974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c7869774c-nt5td,Uid:c15c08ff-5739-4693-9145-35518ef5e967,Namespace:calico-system,Attempt:1,} returns sandbox id \"64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872\"" Jan 17 12:02:57.097484 containerd[2012]: time="2025-01-17T12:02:57.096401100Z" level=info msg="StopPodSandbox for \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\"" Jan 17 12:02:57.242849 systemd-networkd[1917]: cali15ab67e928a: Gained IPv6LL Jan 17 12:02:57.251435 containerd[2012]: 2025-01-17 12:02:57.182 [INFO][5366] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Jan 17 12:02:57.251435 containerd[2012]: 2025-01-17 12:02:57.184 [INFO][5366] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" iface="eth0" netns="/var/run/netns/cni-eb7dac38-cf05-9707-6e7e-9d432bf16089" Jan 17 12:02:57.251435 containerd[2012]: 2025-01-17 12:02:57.184 [INFO][5366] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" iface="eth0" netns="/var/run/netns/cni-eb7dac38-cf05-9707-6e7e-9d432bf16089" Jan 17 12:02:57.251435 containerd[2012]: 2025-01-17 12:02:57.185 [INFO][5366] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" iface="eth0" netns="/var/run/netns/cni-eb7dac38-cf05-9707-6e7e-9d432bf16089" Jan 17 12:02:57.251435 containerd[2012]: 2025-01-17 12:02:57.185 [INFO][5366] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Jan 17 12:02:57.251435 containerd[2012]: 2025-01-17 12:02:57.185 [INFO][5366] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Jan 17 12:02:57.251435 containerd[2012]: 2025-01-17 12:02:57.228 [INFO][5373] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" HandleID="k8s-pod-network.1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0" Jan 17 12:02:57.251435 containerd[2012]: 2025-01-17 12:02:57.228 [INFO][5373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:57.251435 containerd[2012]: 2025-01-17 12:02:57.228 [INFO][5373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:57.251435 containerd[2012]: 2025-01-17 12:02:57.240 [WARNING][5373] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" HandleID="k8s-pod-network.1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0" Jan 17 12:02:57.251435 containerd[2012]: 2025-01-17 12:02:57.240 [INFO][5373] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" HandleID="k8s-pod-network.1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0" Jan 17 12:02:57.251435 containerd[2012]: 2025-01-17 12:02:57.245 [INFO][5373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:57.251435 containerd[2012]: 2025-01-17 12:02:57.248 [INFO][5366] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Jan 17 12:02:57.254939 containerd[2012]: time="2025-01-17T12:02:57.251761608Z" level=info msg="TearDown network for sandbox \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\" successfully" Jan 17 12:02:57.254939 containerd[2012]: time="2025-01-17T12:02:57.251802564Z" level=info msg="StopPodSandbox for \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\" returns successfully" Jan 17 12:02:57.254939 containerd[2012]: time="2025-01-17T12:02:57.253761288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lfw8l,Uid:bf9520af-e465-4fb3-a051-dd1a6f804ee7,Namespace:kube-system,Attempt:1,}" Jan 17 12:02:57.259306 systemd[1]: run-netns-cni\x2deb7dac38\x2dcf05\x2d9707\x2d6e7e\x2d9d432bf16089.mount: Deactivated successfully. Jan 17 12:02:57.521741 systemd-networkd[1917]: cali451d43471ac: Link UP Jan 17 12:02:57.522687 systemd-networkd[1917]: cali451d43471ac: Gained carrier Jan 17 12:02:57.559078 containerd[2012]: 2025-01-17 12:02:57.350 [INFO][5380] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0 coredns-6f6b679f8f- kube-system bf9520af-e465-4fb3-a051-dd1a6f804ee7 861 0 2025-01-17 12:02:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-94 coredns-6f6b679f8f-lfw8l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali451d43471ac [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010" Namespace="kube-system" Pod="coredns-6f6b679f8f-lfw8l" WorkloadEndpoint="ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-" Jan 17 12:02:57.559078 containerd[2012]: 2025-01-17 12:02:57.350 [INFO][5380] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010" Namespace="kube-system" Pod="coredns-6f6b679f8f-lfw8l" WorkloadEndpoint="ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0" Jan 17 12:02:57.559078 containerd[2012]: 2025-01-17 12:02:57.409 [INFO][5390] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010" HandleID="k8s-pod-network.826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0" 
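The teardown and re-creation entries around the coredns sandbox all follow the same bracketed layout as the rest of the Calico CNI output: date, time, [LEVEL][pid], source file and line, then the message. A short Go sketch that splits one of these lines into fields, assuming only the format visible in this excerpt (the pattern is not an official Calico log specification):

package main

import (
	"fmt"
	"regexp"
)

// Rough pattern for the bracketed CNI plugin lines seen above:
//   <date> <time> [LEVEL][pid] <file.go> <line>: <message>
var cniLine = regexp.MustCompile(`^(\S+ \S+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)$`)

func main() {
	line := `2025-01-17 12:02:57.240 [WARNING][5373] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring`
	if m := cniLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("time=%s level=%s pid=%s src=%s:%s msg=%q\n", m[1], m[2], m[3], m[4], m[5], m[6])
	}
}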
Jan 17 12:02:57.559078 containerd[2012]: 2025-01-17 12:02:57.436 [INFO][5390] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010" HandleID="k8s-pod-network.826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000332ba0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-94", "pod":"coredns-6f6b679f8f-lfw8l", "timestamp":"2025-01-17 12:02:57.409678417 +0000 UTC"}, Hostname:"ip-172-31-18-94", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:02:57.559078 containerd[2012]: 2025-01-17 12:02:57.436 [INFO][5390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:02:57.559078 containerd[2012]: 2025-01-17 12:02:57.436 [INFO][5390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:02:57.559078 containerd[2012]: 2025-01-17 12:02:57.436 [INFO][5390] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-94' Jan 17 12:02:57.559078 containerd[2012]: 2025-01-17 12:02:57.440 [INFO][5390] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010" host="ip-172-31-18-94" Jan 17 12:02:57.559078 containerd[2012]: 2025-01-17 12:02:57.452 [INFO][5390] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-94" Jan 17 12:02:57.559078 containerd[2012]: 2025-01-17 12:02:57.463 [INFO][5390] ipam/ipam.go 489: Trying affinity for 192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:02:57.559078 containerd[2012]: 2025-01-17 12:02:57.468 [INFO][5390] ipam/ipam.go 155: Attempting to load block cidr=192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:02:57.559078 containerd[2012]: 2025-01-17 12:02:57.477 [INFO][5390] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:02:57.559078 containerd[2012]: 2025-01-17 12:02:57.477 [INFO][5390] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010" host="ip-172-31-18-94" Jan 17 12:02:57.559078 containerd[2012]: 2025-01-17 12:02:57.481 [INFO][5390] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010 Jan 17 12:02:57.559078 containerd[2012]: 2025-01-17 12:02:57.495 [INFO][5390] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.29.0/26 handle="k8s-pod-network.826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010" host="ip-172-31-18-94" Jan 17 12:02:57.559078 containerd[2012]: 2025-01-17 12:02:57.509 [INFO][5390] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.29.6/26] block=192.168.29.0/26 handle="k8s-pod-network.826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010" host="ip-172-31-18-94" Jan 17 12:02:57.559078 containerd[2012]: 2025-01-17 12:02:57.509 [INFO][5390] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.29.6/26] handle="k8s-pod-network.826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010" host="ip-172-31-18-94" Jan 17 12:02:57.559078 containerd[2012]: 2025-01-17 12:02:57.509 [INFO][5390] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:02:57.559078 containerd[2012]: 2025-01-17 12:02:57.509 [INFO][5390] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.6/26] IPv6=[] ContainerID="826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010" HandleID="k8s-pod-network.826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0" Jan 17 12:02:57.560455 containerd[2012]: 2025-01-17 12:02:57.514 [INFO][5380] cni-plugin/k8s.go 386: Populated endpoint ContainerID="826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010" Namespace="kube-system" Pod="coredns-6f6b679f8f-lfw8l" WorkloadEndpoint="ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf9520af-e465-4fb3-a051-dd1a6f804ee7", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"", Pod:"coredns-6f6b679f8f-lfw8l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali451d43471ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:57.560455 containerd[2012]: 2025-01-17 12:02:57.514 [INFO][5380] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.29.6/32] ContainerID="826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010" Namespace="kube-system" Pod="coredns-6f6b679f8f-lfw8l" WorkloadEndpoint="ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0" Jan 17 12:02:57.560455 containerd[2012]: 2025-01-17 12:02:57.514 [INFO][5380] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali451d43471ac ContainerID="826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010" Namespace="kube-system" Pod="coredns-6f6b679f8f-lfw8l" WorkloadEndpoint="ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0" Jan 17 12:02:57.560455 containerd[2012]: 2025-01-17 12:02:57.519 [INFO][5380] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010" Namespace="kube-system" Pod="coredns-6f6b679f8f-lfw8l" 
WorkloadEndpoint="ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0" Jan 17 12:02:57.560455 containerd[2012]: 2025-01-17 12:02:57.521 [INFO][5380] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010" Namespace="kube-system" Pod="coredns-6f6b679f8f-lfw8l" WorkloadEndpoint="ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf9520af-e465-4fb3-a051-dd1a6f804ee7", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010", Pod:"coredns-6f6b679f8f-lfw8l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali451d43471ac", MAC:"52:6f:bb:c8:8e:1f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:02:57.560455 containerd[2012]: 2025-01-17 12:02:57.554 [INFO][5380] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010" Namespace="kube-system" Pod="coredns-6f6b679f8f-lfw8l" WorkloadEndpoint="ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0" Jan 17 12:02:57.605779 containerd[2012]: time="2025-01-17T12:02:57.604307954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:57.605779 containerd[2012]: time="2025-01-17T12:02:57.604432262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:57.605779 containerd[2012]: time="2025-01-17T12:02:57.604470206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:57.605779 containerd[2012]: time="2025-01-17T12:02:57.604672778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:57.647465 systemd[1]: Started cri-containerd-826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010.scope - libcontainer container 826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010. Jan 17 12:02:57.743900 containerd[2012]: time="2025-01-17T12:02:57.743818887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lfw8l,Uid:bf9520af-e465-4fb3-a051-dd1a6f804ee7,Namespace:kube-system,Attempt:1,} returns sandbox id \"826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010\"" Jan 17 12:02:57.751702 containerd[2012]: time="2025-01-17T12:02:57.750943767Z" level=info msg="CreateContainer within sandbox \"826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:02:57.800034 containerd[2012]: time="2025-01-17T12:02:57.799863831Z" level=info msg="CreateContainer within sandbox \"826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f6f78196eaa9be6871d1d832002b93cbbeb48c50d6d1ebb5b63f315b45d96077\"" Jan 17 12:02:57.803097 containerd[2012]: time="2025-01-17T12:02:57.803031219Z" level=info msg="StartContainer for \"f6f78196eaa9be6871d1d832002b93cbbeb48c50d6d1ebb5b63f315b45d96077\"" Jan 17 12:02:57.880523 systemd[1]: Started cri-containerd-f6f78196eaa9be6871d1d832002b93cbbeb48c50d6d1ebb5b63f315b45d96077.scope - libcontainer container f6f78196eaa9be6871d1d832002b93cbbeb48c50d6d1ebb5b63f315b45d96077. Jan 17 12:02:57.881639 systemd-networkd[1917]: cali1ee17564c7f: Gained IPv6LL Jan 17 12:02:57.966898 containerd[2012]: time="2025-01-17T12:02:57.966162472Z" level=info msg="StartContainer for \"f6f78196eaa9be6871d1d832002b93cbbeb48c50d6d1ebb5b63f315b45d96077\" returns successfully" Jan 17 12:02:58.582452 kubelet[3414]: I0117 12:02:58.582068 3414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-lfw8l" podStartSLOduration=41.582045363 podStartE2EDuration="41.582045363s" podCreationTimestamp="2025-01-17 12:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:58.581405571 +0000 UTC m=+45.764854104" watchObservedRunningTime="2025-01-17 12:02:58.582045363 +0000 UTC m=+45.765493884" Jan 17 12:02:59.007083 containerd[2012]: time="2025-01-17T12:02:59.006905989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:59.011742 containerd[2012]: time="2025-01-17T12:02:59.011312089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 17 12:02:59.013970 containerd[2012]: time="2025-01-17T12:02:59.013333717Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:59.020999 containerd[2012]: time="2025-01-17T12:02:59.020927113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:59.024178 containerd[2012]: time="2025-01-17T12:02:59.023856409Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 4.488897118s" Jan 17 12:02:59.024434 containerd[2012]: time="2025-01-17T12:02:59.024394465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 17 12:02:59.040765 containerd[2012]: time="2025-01-17T12:02:59.040711129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:02:59.043661 containerd[2012]: time="2025-01-17T12:02:59.043590493Z" level=info msg="CreateContainer within sandbox \"15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:02:59.070937 containerd[2012]: time="2025-01-17T12:02:59.070161901Z" level=info msg="CreateContainer within sandbox \"15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"548d3ba76aa3cf88d58b2da11f0f6c88fa15f66fc2f877cb748d25b4560632bf\"" Jan 17 12:02:59.075251 containerd[2012]: time="2025-01-17T12:02:59.075189361Z" level=info msg="StartContainer for \"548d3ba76aa3cf88d58b2da11f0f6c88fa15f66fc2f877cb748d25b4560632bf\"" Jan 17 12:02:59.082165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount985400509.mount: Deactivated successfully. Jan 17 12:02:59.154476 systemd[1]: Started cri-containerd-548d3ba76aa3cf88d58b2da11f0f6c88fa15f66fc2f877cb748d25b4560632bf.scope - libcontainer container 548d3ba76aa3cf88d58b2da11f0f6c88fa15f66fc2f877cb748d25b4560632bf. Jan 17 12:02:59.228161 containerd[2012]: time="2025-01-17T12:02:59.228070082Z" level=info msg="StartContainer for \"548d3ba76aa3cf88d58b2da11f0f6c88fa15f66fc2f877cb748d25b4560632bf\" returns successfully" Jan 17 12:02:59.353395 systemd-networkd[1917]: cali451d43471ac: Gained IPv6LL Jan 17 12:03:00.119008 systemd[1]: run-containerd-runc-k8s.io-c908c64cc7cfd2edd6b3dfb9116609b69cebc2303cccac844b965f8543db9b2d-runc.MUZ887.mount: Deactivated successfully. Jan 17 12:03:00.292309 kubelet[3414]: I0117 12:03:00.291846 3414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-db4fb4c5-qr4qm" podStartSLOduration=25.78537761 podStartE2EDuration="30.291823156s" podCreationTimestamp="2025-01-17 12:02:30 +0000 UTC" firstStartedPulling="2025-01-17 12:02:54.533941187 +0000 UTC m=+41.717389708" lastFinishedPulling="2025-01-17 12:02:59.040386697 +0000 UTC m=+46.223835254" observedRunningTime="2025-01-17 12:02:59.590576608 +0000 UTC m=+46.774025153" watchObservedRunningTime="2025-01-17 12:03:00.291823156 +0000 UTC m=+47.475271677" Jan 17 12:03:00.566715 kubelet[3414]: I0117 12:03:00.566219 3414 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:03:01.294710 systemd[1]: Started sshd@9-172.31.18.94:22-139.178.68.195:59376.service - OpenSSH per-connection server daemon (139.178.68.195:59376). 
Jan 17 12:03:01.530415 sshd[5581]: Accepted publickey for core from 139.178.68.195 port 59376 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:01.537598 sshd[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:01.558652 systemd-logind[1998]: New session 10 of user core. Jan 17 12:03:01.564478 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:03:01.572842 containerd[2012]: time="2025-01-17T12:03:01.571324122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:01.576952 containerd[2012]: time="2025-01-17T12:03:01.575330370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 17 12:03:01.584255 containerd[2012]: time="2025-01-17T12:03:01.582642054Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:01.599134 containerd[2012]: time="2025-01-17T12:03:01.598227138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:01.599649 containerd[2012]: time="2025-01-17T12:03:01.599594538Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 2.556458629s" Jan 17 12:03:01.599831 containerd[2012]: time="2025-01-17T12:03:01.599799534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 17 12:03:01.625338 containerd[2012]: time="2025-01-17T12:03:01.625278210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:03:01.648817 containerd[2012]: time="2025-01-17T12:03:01.648762282Z" level=info msg="CreateContainer within sandbox \"41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:03:01.697042 containerd[2012]: time="2025-01-17T12:03:01.696971382Z" level=info msg="CreateContainer within sandbox \"41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3a4c9c63fca73061b69f1832a6e6871d0a15242b81635e386957a115f5ef37bf\"" Jan 17 12:03:01.698652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount302483987.mount: Deactivated successfully. Jan 17 12:03:01.702885 containerd[2012]: time="2025-01-17T12:03:01.702117943Z" level=info msg="StartContainer for \"3a4c9c63fca73061b69f1832a6e6871d0a15242b81635e386957a115f5ef37bf\"" Jan 17 12:03:01.831201 systemd[1]: run-containerd-runc-k8s.io-3a4c9c63fca73061b69f1832a6e6871d0a15242b81635e386957a115f5ef37bf-runc.yx3vuu.mount: Deactivated successfully. Jan 17 12:03:01.861319 systemd[1]: Started cri-containerd-3a4c9c63fca73061b69f1832a6e6871d0a15242b81635e386957a115f5ef37bf.scope - libcontainer container 3a4c9c63fca73061b69f1832a6e6871d0a15242b81635e386957a115f5ef37bf. 
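The mount units systemd reports as deactivated here, such as var-lib-containerd-tmpmounts-containerd\x2dmount302483987.mount and the run-containerd-runc-k8s.io-...-runc.yx3vuu.mount unit, carry systemd's unit-name escaping: the leading slash is dropped, literal dashes become \x2d, and path separators become dashes. A simplified Go sketch of that convention (real systemd-escape also handles dots, empty paths, and non-ASCII bytes, so treat this as an approximation):

package main

import (
	"fmt"
	"strings"
)

// mountUnitName approximates how systemd derives a mount unit name from a path.
func mountUnitName(path string) string {
	p := strings.TrimPrefix(path, "/")
	p = strings.ReplaceAll(p, "-", `\x2d`) // escape existing dashes first
	p = strings.ReplaceAll(p, "/", "-")    // then turn separators into dashes
	return p + ".mount"
}

func main() {
	fmt.Println(mountUnitName("/var/lib/containerd/tmpmounts/containerd-mount302483987"))
	// var-lib-containerd-tmpmounts-containerd\x2dmount302483987.mount
}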
Jan 17 12:03:02.050016 containerd[2012]: time="2025-01-17T12:03:02.049394332Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:02.053536 containerd[2012]: time="2025-01-17T12:03:02.053060500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:03:02.058189 sshd[5581]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:02.068948 containerd[2012]: time="2025-01-17T12:03:02.067268992Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 441.72083ms" Jan 17 12:03:02.068948 containerd[2012]: time="2025-01-17T12:03:02.067344868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 17 12:03:02.073056 systemd[1]: sshd@9-172.31.18.94:22-139.178.68.195:59376.service: Deactivated successfully. Jan 17 12:03:02.081546 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:03:02.085190 containerd[2012]: time="2025-01-17T12:03:02.082753936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:03:02.091334 systemd-logind[1998]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:03:02.098887 systemd-logind[1998]: Removed session 10. Jan 17 12:03:02.099826 ntpd[1992]: Listen normally on 7 vxlan.calico 192.168.29.0:123 Jan 17 12:03:02.103197 ntpd[1992]: 17 Jan 12:03:02 ntpd[1992]: Listen normally on 7 vxlan.calico 192.168.29.0:123 Jan 17 12:03:02.103197 ntpd[1992]: 17 Jan 12:03:02 ntpd[1992]: Listen normally on 8 vxlan.calico [fe80::64bd:2dff:fe56:35cc%4]:123 Jan 17 12:03:02.103197 ntpd[1992]: 17 Jan 12:03:02 ntpd[1992]: Listen normally on 9 cali9f0df4398c7 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 12:03:02.103197 ntpd[1992]: 17 Jan 12:03:02 ntpd[1992]: Listen normally on 10 cali79fa5281821 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 12:03:02.103197 ntpd[1992]: 17 Jan 12:03:02 ntpd[1992]: Listen normally on 11 caliea7b69308ac [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 12:03:02.103197 ntpd[1992]: 17 Jan 12:03:02 ntpd[1992]: Listen normally on 12 cali15ab67e928a [fe80::ecee:eeff:feee:eeee%10]:123 Jan 17 12:03:02.103197 ntpd[1992]: 17 Jan 12:03:02 ntpd[1992]: Listen normally on 13 cali1ee17564c7f [fe80::ecee:eeff:feee:eeee%11]:123 Jan 17 12:03:02.103197 ntpd[1992]: 17 Jan 12:03:02 ntpd[1992]: Listen normally on 14 cali451d43471ac [fe80::ecee:eeff:feee:eeee%12]:123 Jan 17 12:03:02.100001 ntpd[1992]: Listen normally on 8 vxlan.calico [fe80::64bd:2dff:fe56:35cc%4]:123 Jan 17 12:03:02.100088 ntpd[1992]: Listen normally on 9 cali9f0df4398c7 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 12:03:02.100186 ntpd[1992]: Listen normally on 10 cali79fa5281821 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 12:03:02.100258 ntpd[1992]: Listen normally on 11 caliea7b69308ac [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 12:03:02.100345 ntpd[1992]: Listen normally on 12 cali15ab67e928a [fe80::ecee:eeff:feee:eeee%10]:123 Jan 17 12:03:02.100417 ntpd[1992]: Listen normally on 13 cali1ee17564c7f [fe80::ecee:eeff:feee:eeee%11]:123 Jan 17 12:03:02.100485 ntpd[1992]: Listen normally 
on 14 cali451d43471ac [fe80::ecee:eeff:feee:eeee%12]:123 Jan 17 12:03:02.107732 containerd[2012]: time="2025-01-17T12:03:02.107271701Z" level=info msg="CreateContainer within sandbox \"ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:03:02.150253 containerd[2012]: time="2025-01-17T12:03:02.150178193Z" level=info msg="StartContainer for \"3a4c9c63fca73061b69f1832a6e6871d0a15242b81635e386957a115f5ef37bf\" returns successfully" Jan 17 12:03:02.153087 containerd[2012]: time="2025-01-17T12:03:02.152427905Z" level=info msg="CreateContainer within sandbox \"ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"47e848b268455448dce0662f08f489b3eddbae9e8cddb8e80c84847710bcf4c9\"" Jan 17 12:03:02.156732 containerd[2012]: time="2025-01-17T12:03:02.155915969Z" level=info msg="StartContainer for \"47e848b268455448dce0662f08f489b3eddbae9e8cddb8e80c84847710bcf4c9\"" Jan 17 12:03:02.248935 systemd[1]: Started cri-containerd-47e848b268455448dce0662f08f489b3eddbae9e8cddb8e80c84847710bcf4c9.scope - libcontainer container 47e848b268455448dce0662f08f489b3eddbae9e8cddb8e80c84847710bcf4c9. Jan 17 12:03:02.429857 containerd[2012]: time="2025-01-17T12:03:02.429593982Z" level=info msg="StartContainer for \"47e848b268455448dce0662f08f489b3eddbae9e8cddb8e80c84847710bcf4c9\" returns successfully" Jan 17 12:03:03.639191 kubelet[3414]: I0117 12:03:03.635435 3414 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:03:04.813250 containerd[2012]: time="2025-01-17T12:03:04.813176662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:04.815852 containerd[2012]: time="2025-01-17T12:03:04.815789926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 17 12:03:04.817341 containerd[2012]: time="2025-01-17T12:03:04.817283722Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:04.822387 containerd[2012]: time="2025-01-17T12:03:04.822319498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:04.824237 containerd[2012]: time="2025-01-17T12:03:04.824179342Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.731179662s" Jan 17 12:03:04.824395 containerd[2012]: time="2025-01-17T12:03:04.824239150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 17 12:03:04.827224 containerd[2012]: time="2025-01-17T12:03:04.826795834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:03:04.856797 containerd[2012]: 
time="2025-01-17T12:03:04.856733002Z" level=info msg="CreateContainer within sandbox \"64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:03:04.890178 containerd[2012]: time="2025-01-17T12:03:04.890023114Z" level=info msg="CreateContainer within sandbox \"64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ca1295965302fbcbb0b8365c227640289de4099ef86173c2f6c80d8bee5288e2\"" Jan 17 12:03:04.893915 containerd[2012]: time="2025-01-17T12:03:04.893847658Z" level=info msg="StartContainer for \"ca1295965302fbcbb0b8365c227640289de4099ef86173c2f6c80d8bee5288e2\"" Jan 17 12:03:04.975058 systemd[1]: Started cri-containerd-ca1295965302fbcbb0b8365c227640289de4099ef86173c2f6c80d8bee5288e2.scope - libcontainer container ca1295965302fbcbb0b8365c227640289de4099ef86173c2f6c80d8bee5288e2. Jan 17 12:03:05.048207 containerd[2012]: time="2025-01-17T12:03:05.048059323Z" level=info msg="StartContainer for \"ca1295965302fbcbb0b8365c227640289de4099ef86173c2f6c80d8bee5288e2\" returns successfully" Jan 17 12:03:05.673377 kubelet[3414]: I0117 12:03:05.672340 3414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-db4fb4c5-2g54x" podStartSLOduration=29.404742183 podStartE2EDuration="35.672296026s" podCreationTimestamp="2025-01-17 12:02:30 +0000 UTC" firstStartedPulling="2025-01-17 12:02:55.805185505 +0000 UTC m=+42.988634014" lastFinishedPulling="2025-01-17 12:03:02.072739264 +0000 UTC m=+49.256187857" observedRunningTime="2025-01-17 12:03:02.677603671 +0000 UTC m=+49.861052204" watchObservedRunningTime="2025-01-17 12:03:05.672296026 +0000 UTC m=+52.855744547" Jan 17 12:03:05.757798 kubelet[3414]: I0117 12:03:05.757655 3414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c7869774c-nt5td" podStartSLOduration=28.708996147 podStartE2EDuration="36.757607303s" podCreationTimestamp="2025-01-17 12:02:29 +0000 UTC" firstStartedPulling="2025-01-17 12:02:56.777699986 +0000 UTC m=+43.961148507" lastFinishedPulling="2025-01-17 12:03:04.826311142 +0000 UTC m=+52.009759663" observedRunningTime="2025-01-17 12:03:05.674456362 +0000 UTC m=+52.857904907" watchObservedRunningTime="2025-01-17 12:03:05.757607303 +0000 UTC m=+52.941055848" Jan 17 12:03:06.329162 containerd[2012]: time="2025-01-17T12:03:06.329023149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:06.331244 containerd[2012]: time="2025-01-17T12:03:06.331058325Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:06.331244 containerd[2012]: time="2025-01-17T12:03:06.331178878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 17 12:03:06.335511 containerd[2012]: time="2025-01-17T12:03:06.335395042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:06.337136 containerd[2012]: time="2025-01-17T12:03:06.336925942Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.510066556s" Jan 17 12:03:06.337136 containerd[2012]: time="2025-01-17T12:03:06.336983098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 17 12:03:06.342401 containerd[2012]: time="2025-01-17T12:03:06.342332110Z" level=info msg="CreateContainer within sandbox \"41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:03:06.365398 containerd[2012]: time="2025-01-17T12:03:06.365339494Z" level=info msg="CreateContainer within sandbox \"41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7490383bb741fd6a73ad7c4249d4874d22810aa377ebf4d849282c64bc510842\"" Jan 17 12:03:06.367224 containerd[2012]: time="2025-01-17T12:03:06.367154794Z" level=info msg="StartContainer for \"7490383bb741fd6a73ad7c4249d4874d22810aa377ebf4d849282c64bc510842\"" Jan 17 12:03:06.438427 systemd[1]: Started cri-containerd-7490383bb741fd6a73ad7c4249d4874d22810aa377ebf4d849282c64bc510842.scope - libcontainer container 7490383bb741fd6a73ad7c4249d4874d22810aa377ebf4d849282c64bc510842. Jan 17 12:03:06.495201 containerd[2012]: time="2025-01-17T12:03:06.494986630Z" level=info msg="StartContainer for \"7490383bb741fd6a73ad7c4249d4874d22810aa377ebf4d849282c64bc510842\" returns successfully" Jan 17 12:03:07.098733 systemd[1]: Started sshd@10-172.31.18.94:22-139.178.68.195:37148.service - OpenSSH per-connection server daemon (139.178.68.195:37148). Jan 17 12:03:07.280058 kubelet[3414]: I0117 12:03:07.279967 3414 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:03:07.280058 kubelet[3414]: I0117 12:03:07.280035 3414 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:03:07.289811 sshd[5785]: Accepted publickey for core from 139.178.68.195 port 37148 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:07.294618 sshd[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:07.314617 systemd-logind[1998]: New session 11 of user core. Jan 17 12:03:07.320738 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:03:07.586068 sshd[5785]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:07.592776 systemd[1]: sshd@10-172.31.18.94:22-139.178.68.195:37148.service: Deactivated successfully. Jan 17 12:03:07.597326 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:03:07.598890 systemd-logind[1998]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:03:07.600736 systemd-logind[1998]: Removed session 11. Jan 17 12:03:12.626652 systemd[1]: Started sshd@11-172.31.18.94:22-139.178.68.195:37152.service - OpenSSH per-connection server daemon (139.178.68.195:37152). 
Jan 17 12:03:12.806031 sshd[5800]: Accepted publickey for core from 139.178.68.195 port 37152 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:12.808746 sshd[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:12.817706 systemd-logind[1998]: New session 12 of user core. Jan 17 12:03:12.825379 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:03:13.078471 sshd[5800]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:13.085510 systemd-logind[1998]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:03:13.086271 systemd[1]: sshd@11-172.31.18.94:22-139.178.68.195:37152.service: Deactivated successfully. Jan 17 12:03:13.092089 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:03:13.094737 systemd-logind[1998]: Removed session 12. Jan 17 12:03:13.104210 containerd[2012]: time="2025-01-17T12:03:13.103681083Z" level=info msg="StopPodSandbox for \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\"" Jan 17 12:03:13.120710 systemd[1]: Started sshd@12-172.31.18.94:22-139.178.68.195:37168.service - OpenSSH per-connection server daemon (139.178.68.195:37168). Jan 17 12:03:13.265776 containerd[2012]: 2025-01-17 12:03:13.192 [WARNING][5826] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0", GenerateName:"calico-kube-controllers-6c7869774c-", Namespace:"calico-system", SelfLink:"", UID:"c15c08ff-5739-4693-9145-35518ef5e967", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c7869774c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872", Pod:"calico-kube-controllers-6c7869774c-nt5td", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1ee17564c7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:13.265776 containerd[2012]: 2025-01-17 12:03:13.193 [INFO][5826] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Jan 17 12:03:13.265776 containerd[2012]: 2025-01-17 12:03:13.193 [INFO][5826] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" iface="eth0" netns="" Jan 17 12:03:13.265776 containerd[2012]: 2025-01-17 12:03:13.193 [INFO][5826] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Jan 17 12:03:13.265776 containerd[2012]: 2025-01-17 12:03:13.193 [INFO][5826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Jan 17 12:03:13.265776 containerd[2012]: 2025-01-17 12:03:13.241 [INFO][5835] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" HandleID="k8s-pod-network.ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:03:13.265776 containerd[2012]: 2025-01-17 12:03:13.241 [INFO][5835] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:13.265776 containerd[2012]: 2025-01-17 12:03:13.241 [INFO][5835] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:13.265776 containerd[2012]: 2025-01-17 12:03:13.253 [WARNING][5835] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" HandleID="k8s-pod-network.ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:03:13.265776 containerd[2012]: 2025-01-17 12:03:13.254 [INFO][5835] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" HandleID="k8s-pod-network.ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:03:13.265776 containerd[2012]: 2025-01-17 12:03:13.257 [INFO][5835] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:13.265776 containerd[2012]: 2025-01-17 12:03:13.260 [INFO][5826] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Jan 17 12:03:13.268496 containerd[2012]: time="2025-01-17T12:03:13.265818736Z" level=info msg="TearDown network for sandbox \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\" successfully" Jan 17 12:03:13.268496 containerd[2012]: time="2025-01-17T12:03:13.265865476Z" level=info msg="StopPodSandbox for \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\" returns successfully" Jan 17 12:03:13.268496 containerd[2012]: time="2025-01-17T12:03:13.267839260Z" level=info msg="RemovePodSandbox for \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\"" Jan 17 12:03:13.268496 containerd[2012]: time="2025-01-17T12:03:13.267892768Z" level=info msg="Forcibly stopping sandbox \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\"" Jan 17 12:03:13.312720 sshd[5822]: Accepted publickey for core from 139.178.68.195 port 37168 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:13.316034 sshd[5822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:13.331217 systemd-logind[1998]: New session 13 of user core. 
Jan 17 12:03:13.335784 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:03:13.443236 containerd[2012]: 2025-01-17 12:03:13.373 [WARNING][5854] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0", GenerateName:"calico-kube-controllers-6c7869774c-", Namespace:"calico-system", SelfLink:"", UID:"c15c08ff-5739-4693-9145-35518ef5e967", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c7869774c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872", Pod:"calico-kube-controllers-6c7869774c-nt5td", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1ee17564c7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:13.443236 containerd[2012]: 2025-01-17 12:03:13.373 [INFO][5854] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Jan 17 12:03:13.443236 containerd[2012]: 2025-01-17 12:03:13.373 [INFO][5854] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" iface="eth0" netns="" Jan 17 12:03:13.443236 containerd[2012]: 2025-01-17 12:03:13.373 [INFO][5854] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Jan 17 12:03:13.443236 containerd[2012]: 2025-01-17 12:03:13.373 [INFO][5854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Jan 17 12:03:13.443236 containerd[2012]: 2025-01-17 12:03:13.415 [INFO][5862] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" HandleID="k8s-pod-network.ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:03:13.443236 containerd[2012]: 2025-01-17 12:03:13.415 [INFO][5862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:13.443236 containerd[2012]: 2025-01-17 12:03:13.415 [INFO][5862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
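The "CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP" warnings in this teardown mean the delete request came from the old sandbox (ae57dda1...) while the endpoint record is now owned by the live sandbox (64c6102c...), so the endpoint is left in place and only the already-gone IPAM handle is checked. A toy Go illustration of that guard, not Calico's actual implementation:

package main

import "fmt"

// shouldDeleteWEP keeps the endpoint unless the delete request comes from the
// sandbox that currently owns it (or the ownership field is empty).
func shouldDeleteWEP(requestContainerID, wepContainerID string) bool {
	return wepContainerID == "" || requestContainerID == wepContainerID
}

func main() {
	oldSandbox := "ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54"
	liveSandbox := "64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872"
	fmt.Println(shouldDeleteWEP(oldSandbox, liveSandbox)) // false: keep the WEP
}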
Jan 17 12:03:13.443236 containerd[2012]: 2025-01-17 12:03:13.430 [WARNING][5862] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" HandleID="k8s-pod-network.ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:03:13.443236 containerd[2012]: 2025-01-17 12:03:13.430 [INFO][5862] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" HandleID="k8s-pod-network.ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:03:13.443236 containerd[2012]: 2025-01-17 12:03:13.432 [INFO][5862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:13.443236 containerd[2012]: 2025-01-17 12:03:13.439 [INFO][5854] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54" Jan 17 12:03:13.445353 containerd[2012]: time="2025-01-17T12:03:13.444211205Z" level=info msg="TearDown network for sandbox \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\" successfully" Jan 17 12:03:13.449849 containerd[2012]: time="2025-01-17T12:03:13.449781149Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:03:13.450416 containerd[2012]: time="2025-01-17T12:03:13.450207989Z" level=info msg="RemovePodSandbox \"ae57dda1796298f94567e8b170a4b15d8d4d72b962451e90d21071bcaefd3a54\" returns successfully" Jan 17 12:03:13.452135 containerd[2012]: time="2025-01-17T12:03:13.451756493Z" level=info msg="StopPodSandbox for \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\"" Jan 17 12:03:13.654891 containerd[2012]: 2025-01-17 12:03:13.542 [WARNING][5884] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"29853ccc-52db-41c3-8f83-89ba39b0f309", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e", Pod:"csi-node-driver-7t526", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliea7b69308ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:13.654891 containerd[2012]: 2025-01-17 12:03:13.542 [INFO][5884] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Jan 17 12:03:13.654891 containerd[2012]: 2025-01-17 12:03:13.542 [INFO][5884] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" iface="eth0" netns="" Jan 17 12:03:13.654891 containerd[2012]: 2025-01-17 12:03:13.542 [INFO][5884] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Jan 17 12:03:13.654891 containerd[2012]: 2025-01-17 12:03:13.543 [INFO][5884] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Jan 17 12:03:13.654891 containerd[2012]: 2025-01-17 12:03:13.628 [INFO][5892] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" HandleID="k8s-pod-network.d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Workload="ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0" Jan 17 12:03:13.654891 containerd[2012]: 2025-01-17 12:03:13.629 [INFO][5892] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:13.654891 containerd[2012]: 2025-01-17 12:03:13.629 [INFO][5892] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:13.654891 containerd[2012]: 2025-01-17 12:03:13.641 [WARNING][5892] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" HandleID="k8s-pod-network.d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Workload="ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0" Jan 17 12:03:13.654891 containerd[2012]: 2025-01-17 12:03:13.641 [INFO][5892] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" HandleID="k8s-pod-network.d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Workload="ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0" Jan 17 12:03:13.654891 containerd[2012]: 2025-01-17 12:03:13.644 [INFO][5892] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:13.654891 containerd[2012]: 2025-01-17 12:03:13.647 [INFO][5884] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Jan 17 12:03:13.658732 containerd[2012]: time="2025-01-17T12:03:13.655295754Z" level=info msg="TearDown network for sandbox \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\" successfully" Jan 17 12:03:13.658732 containerd[2012]: time="2025-01-17T12:03:13.655336734Z" level=info msg="StopPodSandbox for \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\" returns successfully" Jan 17 12:03:13.658732 containerd[2012]: time="2025-01-17T12:03:13.657410058Z" level=info msg="RemovePodSandbox for \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\"" Jan 17 12:03:13.658732 containerd[2012]: time="2025-01-17T12:03:13.657461442Z" level=info msg="Forcibly stopping sandbox \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\"" Jan 17 12:03:13.786301 sshd[5822]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:13.800888 systemd-logind[1998]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:03:13.802399 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:03:13.813375 systemd[1]: sshd@12-172.31.18.94:22-139.178.68.195:37168.service: Deactivated successfully. Jan 17 12:03:13.858735 systemd[1]: Started sshd@13-172.31.18.94:22-139.178.68.195:37182.service - OpenSSH per-connection server daemon (139.178.68.195:37182). Jan 17 12:03:13.861580 systemd-logind[1998]: Removed session 13. Jan 17 12:03:14.024003 containerd[2012]: 2025-01-17 12:03:13.835 [WARNING][5910] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"29853ccc-52db-41c3-8f83-89ba39b0f309", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"41fc0d29512b1cfb2f881aaa8ad8981325ef1e681547f293dcd101f1d1f7180e", Pod:"csi-node-driver-7t526", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliea7b69308ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:14.024003 containerd[2012]: 2025-01-17 12:03:13.841 [INFO][5910] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Jan 17 12:03:14.024003 containerd[2012]: 2025-01-17 12:03:13.841 [INFO][5910] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" iface="eth0" netns="" Jan 17 12:03:14.024003 containerd[2012]: 2025-01-17 12:03:13.842 [INFO][5910] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Jan 17 12:03:14.024003 containerd[2012]: 2025-01-17 12:03:13.842 [INFO][5910] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Jan 17 12:03:14.024003 containerd[2012]: 2025-01-17 12:03:13.971 [INFO][5920] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" HandleID="k8s-pod-network.d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Workload="ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0" Jan 17 12:03:14.024003 containerd[2012]: 2025-01-17 12:03:13.974 [INFO][5920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:14.024003 containerd[2012]: 2025-01-17 12:03:13.974 [INFO][5920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:14.024003 containerd[2012]: 2025-01-17 12:03:14.004 [WARNING][5920] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" HandleID="k8s-pod-network.d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Workload="ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0" Jan 17 12:03:14.024003 containerd[2012]: 2025-01-17 12:03:14.005 [INFO][5920] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" HandleID="k8s-pod-network.d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Workload="ip--172--31--18--94-k8s-csi--node--driver--7t526-eth0" Jan 17 12:03:14.024003 containerd[2012]: 2025-01-17 12:03:14.011 [INFO][5920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:14.024003 containerd[2012]: 2025-01-17 12:03:14.016 [INFO][5910] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048" Jan 17 12:03:14.025777 containerd[2012]: time="2025-01-17T12:03:14.024041032Z" level=info msg="TearDown network for sandbox \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\" successfully" Jan 17 12:03:14.034774 containerd[2012]: time="2025-01-17T12:03:14.034334020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:03:14.034774 containerd[2012]: time="2025-01-17T12:03:14.034499212Z" level=info msg="RemovePodSandbox \"d0a8df385ca9dde3da13d2befe8d2ce66e187e6a3821194e82088cc0c995a048\" returns successfully" Jan 17 12:03:14.037550 containerd[2012]: time="2025-01-17T12:03:14.035295820Z" level=info msg="StopPodSandbox for \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\"" Jan 17 12:03:14.092512 sshd[5921]: Accepted publickey for core from 139.178.68.195 port 37182 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:14.098409 sshd[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:14.109819 systemd-logind[1998]: New session 14 of user core. Jan 17 12:03:14.118443 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:03:14.188733 containerd[2012]: 2025-01-17 12:03:14.121 [WARNING][5947] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0", GenerateName:"calico-apiserver-db4fb4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f8e24b8-bf29-49fa-8b29-d56d50f12a1a", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db4fb4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d", Pod:"calico-apiserver-db4fb4c5-2g54x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali15ab67e928a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:14.188733 containerd[2012]: 2025-01-17 12:03:14.122 [INFO][5947] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Jan 17 12:03:14.188733 containerd[2012]: 2025-01-17 12:03:14.122 [INFO][5947] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" iface="eth0" netns="" Jan 17 12:03:14.188733 containerd[2012]: 2025-01-17 12:03:14.122 [INFO][5947] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Jan 17 12:03:14.188733 containerd[2012]: 2025-01-17 12:03:14.122 [INFO][5947] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Jan 17 12:03:14.188733 containerd[2012]: 2025-01-17 12:03:14.166 [INFO][5953] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" HandleID="k8s-pod-network.8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0" Jan 17 12:03:14.188733 containerd[2012]: 2025-01-17 12:03:14.167 [INFO][5953] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:14.188733 containerd[2012]: 2025-01-17 12:03:14.167 [INFO][5953] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:14.188733 containerd[2012]: 2025-01-17 12:03:14.181 [WARNING][5953] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" HandleID="k8s-pod-network.8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0" Jan 17 12:03:14.188733 containerd[2012]: 2025-01-17 12:03:14.181 [INFO][5953] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" HandleID="k8s-pod-network.8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0" Jan 17 12:03:14.188733 containerd[2012]: 2025-01-17 12:03:14.183 [INFO][5953] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:14.188733 containerd[2012]: 2025-01-17 12:03:14.186 [INFO][5947] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Jan 17 12:03:14.190262 containerd[2012]: time="2025-01-17T12:03:14.188943125Z" level=info msg="TearDown network for sandbox \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\" successfully" Jan 17 12:03:14.190262 containerd[2012]: time="2025-01-17T12:03:14.188983421Z" level=info msg="StopPodSandbox for \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\" returns successfully" Jan 17 12:03:14.190477 containerd[2012]: time="2025-01-17T12:03:14.190334861Z" level=info msg="RemovePodSandbox for \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\"" Jan 17 12:03:14.190477 containerd[2012]: time="2025-01-17T12:03:14.190385069Z" level=info msg="Forcibly stopping sandbox \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\"" Jan 17 12:03:14.411307 containerd[2012]: 2025-01-17 12:03:14.272 [WARNING][5972] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0", GenerateName:"calico-apiserver-db4fb4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f8e24b8-bf29-49fa-8b29-d56d50f12a1a", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db4fb4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"ee0a44f9541e9f29c48e3e911cf6d6732404edbd13c48277ce404e6206b17b1d", Pod:"calico-apiserver-db4fb4c5-2g54x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali15ab67e928a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:14.411307 containerd[2012]: 2025-01-17 12:03:14.273 [INFO][5972] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Jan 17 12:03:14.411307 containerd[2012]: 2025-01-17 12:03:14.273 [INFO][5972] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" iface="eth0" netns="" Jan 17 12:03:14.411307 containerd[2012]: 2025-01-17 12:03:14.273 [INFO][5972] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Jan 17 12:03:14.411307 containerd[2012]: 2025-01-17 12:03:14.273 [INFO][5972] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Jan 17 12:03:14.411307 containerd[2012]: 2025-01-17 12:03:14.373 [INFO][5985] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" HandleID="k8s-pod-network.8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0" Jan 17 12:03:14.411307 containerd[2012]: 2025-01-17 12:03:14.376 [INFO][5985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:14.411307 containerd[2012]: 2025-01-17 12:03:14.376 [INFO][5985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:14.411307 containerd[2012]: 2025-01-17 12:03:14.394 [WARNING][5985] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" HandleID="k8s-pod-network.8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0" Jan 17 12:03:14.411307 containerd[2012]: 2025-01-17 12:03:14.395 [INFO][5985] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" HandleID="k8s-pod-network.8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--2g54x-eth0" Jan 17 12:03:14.411307 containerd[2012]: 2025-01-17 12:03:14.400 [INFO][5985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:14.411307 containerd[2012]: 2025-01-17 12:03:14.406 [INFO][5972] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760" Jan 17 12:03:14.411307 containerd[2012]: time="2025-01-17T12:03:14.410666286Z" level=info msg="TearDown network for sandbox \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\" successfully" Jan 17 12:03:14.421079 containerd[2012]: time="2025-01-17T12:03:14.420822150Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:03:14.421079 containerd[2012]: time="2025-01-17T12:03:14.420921894Z" level=info msg="RemovePodSandbox \"8195b771af309e81e83b4266f52d6f8fddf52b1896cdaaf2740afef4eb82c760\" returns successfully" Jan 17 12:03:14.423496 containerd[2012]: time="2025-01-17T12:03:14.423364134Z" level=info msg="StopPodSandbox for \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\"" Jan 17 12:03:14.483455 sshd[5921]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:14.493769 systemd[1]: sshd@13-172.31.18.94:22-139.178.68.195:37182.service: Deactivated successfully. Jan 17 12:03:14.500850 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:03:14.505553 systemd-logind[1998]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:03:14.508632 systemd-logind[1998]: Removed session 14. Jan 17 12:03:14.590713 containerd[2012]: 2025-01-17 12:03:14.529 [WARNING][6003] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5fdb21f1-f207-473a-b571-9a91d733fe50", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f", Pod:"coredns-6f6b679f8f-sgl59", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9f0df4398c7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:14.590713 containerd[2012]: 2025-01-17 12:03:14.529 [INFO][6003] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Jan 17 12:03:14.590713 containerd[2012]: 2025-01-17 12:03:14.529 [INFO][6003] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" iface="eth0" netns="" Jan 17 12:03:14.590713 containerd[2012]: 2025-01-17 12:03:14.529 [INFO][6003] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Jan 17 12:03:14.590713 containerd[2012]: 2025-01-17 12:03:14.529 [INFO][6003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Jan 17 12:03:14.590713 containerd[2012]: 2025-01-17 12:03:14.567 [INFO][6012] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" HandleID="k8s-pod-network.cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0" Jan 17 12:03:14.590713 containerd[2012]: 2025-01-17 12:03:14.567 [INFO][6012] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:14.590713 containerd[2012]: 2025-01-17 12:03:14.567 [INFO][6012] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:03:14.590713 containerd[2012]: 2025-01-17 12:03:14.582 [WARNING][6012] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" HandleID="k8s-pod-network.cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0" Jan 17 12:03:14.590713 containerd[2012]: 2025-01-17 12:03:14.582 [INFO][6012] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" HandleID="k8s-pod-network.cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0" Jan 17 12:03:14.590713 containerd[2012]: 2025-01-17 12:03:14.585 [INFO][6012] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:14.590713 containerd[2012]: 2025-01-17 12:03:14.588 [INFO][6003] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Jan 17 12:03:14.592273 containerd[2012]: time="2025-01-17T12:03:14.590734459Z" level=info msg="TearDown network for sandbox \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\" successfully" Jan 17 12:03:14.592273 containerd[2012]: time="2025-01-17T12:03:14.590771155Z" level=info msg="StopPodSandbox for \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\" returns successfully" Jan 17 12:03:14.592273 containerd[2012]: time="2025-01-17T12:03:14.592185283Z" level=info msg="RemovePodSandbox for \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\"" Jan 17 12:03:14.592273 containerd[2012]: time="2025-01-17T12:03:14.592234783Z" level=info msg="Forcibly stopping sandbox \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\"" Jan 17 12:03:14.734566 containerd[2012]: 2025-01-17 12:03:14.661 [WARNING][6031] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5fdb21f1-f207-473a-b571-9a91d733fe50", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"2025399a5b0f2fee9594fe1d0c01df5505a0d67d376a5308c2c30b9dcd9bcf4f", Pod:"coredns-6f6b679f8f-sgl59", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9f0df4398c7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:14.734566 containerd[2012]: 2025-01-17 12:03:14.661 [INFO][6031] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Jan 17 12:03:14.734566 containerd[2012]: 2025-01-17 12:03:14.661 [INFO][6031] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" iface="eth0" netns="" Jan 17 12:03:14.734566 containerd[2012]: 2025-01-17 12:03:14.662 [INFO][6031] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Jan 17 12:03:14.734566 containerd[2012]: 2025-01-17 12:03:14.662 [INFO][6031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Jan 17 12:03:14.734566 containerd[2012]: 2025-01-17 12:03:14.712 [INFO][6037] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" HandleID="k8s-pod-network.cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0" Jan 17 12:03:14.734566 containerd[2012]: 2025-01-17 12:03:14.713 [INFO][6037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:14.734566 containerd[2012]: 2025-01-17 12:03:14.713 [INFO][6037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:03:14.734566 containerd[2012]: 2025-01-17 12:03:14.726 [WARNING][6037] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" HandleID="k8s-pod-network.cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0" Jan 17 12:03:14.734566 containerd[2012]: 2025-01-17 12:03:14.726 [INFO][6037] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" HandleID="k8s-pod-network.cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--sgl59-eth0" Jan 17 12:03:14.734566 containerd[2012]: 2025-01-17 12:03:14.729 [INFO][6037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:14.734566 containerd[2012]: 2025-01-17 12:03:14.731 [INFO][6031] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908" Jan 17 12:03:14.734566 containerd[2012]: time="2025-01-17T12:03:14.734460007Z" level=info msg="TearDown network for sandbox \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\" successfully" Jan 17 12:03:14.742647 containerd[2012]: time="2025-01-17T12:03:14.742549135Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:03:14.742919 containerd[2012]: time="2025-01-17T12:03:14.742654219Z" level=info msg="RemovePodSandbox \"cab78efb21b39e1f869bca25e613171ba878b6926b41428b1d8da21a94a8d908\" returns successfully" Jan 17 12:03:14.743970 containerd[2012]: time="2025-01-17T12:03:14.743907031Z" level=info msg="StopPodSandbox for \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\"" Jan 17 12:03:14.910735 containerd[2012]: 2025-01-17 12:03:14.838 [WARNING][6055] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0", GenerateName:"calico-apiserver-db4fb4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1cc8dfd-5e75-4e6f-8077-56dce433bbfe", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db4fb4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020", Pod:"calico-apiserver-db4fb4c5-qr4qm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79fa5281821", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:14.910735 containerd[2012]: 2025-01-17 12:03:14.838 [INFO][6055] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Jan 17 12:03:14.910735 containerd[2012]: 2025-01-17 12:03:14.838 [INFO][6055] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" iface="eth0" netns="" Jan 17 12:03:14.910735 containerd[2012]: 2025-01-17 12:03:14.838 [INFO][6055] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Jan 17 12:03:14.910735 containerd[2012]: 2025-01-17 12:03:14.838 [INFO][6055] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Jan 17 12:03:14.910735 containerd[2012]: 2025-01-17 12:03:14.886 [INFO][6061] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" HandleID="k8s-pod-network.88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0" Jan 17 12:03:14.910735 containerd[2012]: 2025-01-17 12:03:14.886 [INFO][6061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:14.910735 containerd[2012]: 2025-01-17 12:03:14.886 [INFO][6061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:14.910735 containerd[2012]: 2025-01-17 12:03:14.900 [WARNING][6061] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" HandleID="k8s-pod-network.88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0" Jan 17 12:03:14.910735 containerd[2012]: 2025-01-17 12:03:14.900 [INFO][6061] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" HandleID="k8s-pod-network.88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0" Jan 17 12:03:14.910735 containerd[2012]: 2025-01-17 12:03:14.903 [INFO][6061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:14.910735 containerd[2012]: 2025-01-17 12:03:14.908 [INFO][6055] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Jan 17 12:03:14.912234 containerd[2012]: time="2025-01-17T12:03:14.911665916Z" level=info msg="TearDown network for sandbox \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\" successfully" Jan 17 12:03:14.912234 containerd[2012]: time="2025-01-17T12:03:14.911729012Z" level=info msg="StopPodSandbox for \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\" returns successfully" Jan 17 12:03:14.912633 containerd[2012]: time="2025-01-17T12:03:14.912569552Z" level=info msg="RemovePodSandbox for \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\"" Jan 17 12:03:14.912720 containerd[2012]: time="2025-01-17T12:03:14.912636416Z" level=info msg="Forcibly stopping sandbox \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\"" Jan 17 12:03:15.048455 containerd[2012]: 2025-01-17 12:03:14.980 [WARNING][6079] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0", GenerateName:"calico-apiserver-db4fb4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1cc8dfd-5e75-4e6f-8077-56dce433bbfe", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db4fb4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"15337170a778d65381db21c5a7fe19c7fb07e11ebbfaadd9db9957eec8675020", Pod:"calico-apiserver-db4fb4c5-qr4qm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79fa5281821", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:15.048455 containerd[2012]: 2025-01-17 12:03:14.981 [INFO][6079] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Jan 17 12:03:15.048455 containerd[2012]: 2025-01-17 12:03:14.981 [INFO][6079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" iface="eth0" netns="" Jan 17 12:03:15.048455 containerd[2012]: 2025-01-17 12:03:14.981 [INFO][6079] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Jan 17 12:03:15.048455 containerd[2012]: 2025-01-17 12:03:14.981 [INFO][6079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Jan 17 12:03:15.048455 containerd[2012]: 2025-01-17 12:03:15.027 [INFO][6086] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" HandleID="k8s-pod-network.88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0" Jan 17 12:03:15.048455 containerd[2012]: 2025-01-17 12:03:15.027 [INFO][6086] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:15.048455 containerd[2012]: 2025-01-17 12:03:15.027 [INFO][6086] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:15.048455 containerd[2012]: 2025-01-17 12:03:15.040 [WARNING][6086] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" HandleID="k8s-pod-network.88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0" Jan 17 12:03:15.048455 containerd[2012]: 2025-01-17 12:03:15.040 [INFO][6086] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" HandleID="k8s-pod-network.88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Workload="ip--172--31--18--94-k8s-calico--apiserver--db4fb4c5--qr4qm-eth0" Jan 17 12:03:15.048455 containerd[2012]: 2025-01-17 12:03:15.042 [INFO][6086] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:15.048455 containerd[2012]: 2025-01-17 12:03:15.045 [INFO][6079] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf" Jan 17 12:03:15.048455 containerd[2012]: time="2025-01-17T12:03:15.048175829Z" level=info msg="TearDown network for sandbox \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\" successfully" Jan 17 12:03:15.054521 containerd[2012]: time="2025-01-17T12:03:15.054438473Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:03:15.055965 containerd[2012]: time="2025-01-17T12:03:15.054563381Z" level=info msg="RemovePodSandbox \"88a8f1bc0e5d6657339907b4a43f9c112202ccc72e52d345083cbdab11ae39cf\" returns successfully" Jan 17 12:03:15.055965 containerd[2012]: time="2025-01-17T12:03:15.055452293Z" level=info msg="StopPodSandbox for \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\"" Jan 17 12:03:15.202625 containerd[2012]: 2025-01-17 12:03:15.128 [WARNING][6104] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf9520af-e465-4fb3-a051-dd1a6f804ee7", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010", Pod:"coredns-6f6b679f8f-lfw8l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali451d43471ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:15.202625 containerd[2012]: 2025-01-17 12:03:15.129 [INFO][6104] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Jan 17 12:03:15.202625 containerd[2012]: 2025-01-17 12:03:15.129 [INFO][6104] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" iface="eth0" netns="" Jan 17 12:03:15.202625 containerd[2012]: 2025-01-17 12:03:15.129 [INFO][6104] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Jan 17 12:03:15.202625 containerd[2012]: 2025-01-17 12:03:15.129 [INFO][6104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Jan 17 12:03:15.202625 containerd[2012]: 2025-01-17 12:03:15.171 [INFO][6110] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" HandleID="k8s-pod-network.1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0" Jan 17 12:03:15.202625 containerd[2012]: 2025-01-17 12:03:15.171 [INFO][6110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:15.202625 containerd[2012]: 2025-01-17 12:03:15.171 [INFO][6110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:03:15.202625 containerd[2012]: 2025-01-17 12:03:15.190 [WARNING][6110] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" HandleID="k8s-pod-network.1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0" Jan 17 12:03:15.202625 containerd[2012]: 2025-01-17 12:03:15.190 [INFO][6110] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" HandleID="k8s-pod-network.1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0" Jan 17 12:03:15.202625 containerd[2012]: 2025-01-17 12:03:15.194 [INFO][6110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:15.202625 containerd[2012]: 2025-01-17 12:03:15.199 [INFO][6104] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Jan 17 12:03:15.204571 containerd[2012]: time="2025-01-17T12:03:15.202679874Z" level=info msg="TearDown network for sandbox \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\" successfully" Jan 17 12:03:15.204571 containerd[2012]: time="2025-01-17T12:03:15.202722186Z" level=info msg="StopPodSandbox for \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\" returns successfully" Jan 17 12:03:15.204571 containerd[2012]: time="2025-01-17T12:03:15.203421678Z" level=info msg="RemovePodSandbox for \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\"" Jan 17 12:03:15.204571 containerd[2012]: time="2025-01-17T12:03:15.203471238Z" level=info msg="Forcibly stopping sandbox \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\"" Jan 17 12:03:15.335492 containerd[2012]: 2025-01-17 12:03:15.269 [WARNING][6128] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf9520af-e465-4fb3-a051-dd1a6f804ee7", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"826482b71d9521587e774761f332a2b2a8be28f5ae8b99fc8d38ce151dbe2010", Pod:"coredns-6f6b679f8f-lfw8l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali451d43471ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:15.335492 containerd[2012]: 2025-01-17 12:03:15.270 [INFO][6128] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Jan 17 12:03:15.335492 containerd[2012]: 2025-01-17 12:03:15.270 [INFO][6128] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" iface="eth0" netns="" Jan 17 12:03:15.335492 containerd[2012]: 2025-01-17 12:03:15.270 [INFO][6128] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Jan 17 12:03:15.335492 containerd[2012]: 2025-01-17 12:03:15.270 [INFO][6128] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Jan 17 12:03:15.335492 containerd[2012]: 2025-01-17 12:03:15.313 [INFO][6134] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" HandleID="k8s-pod-network.1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0" Jan 17 12:03:15.335492 containerd[2012]: 2025-01-17 12:03:15.313 [INFO][6134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:15.335492 containerd[2012]: 2025-01-17 12:03:15.313 [INFO][6134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:03:15.335492 containerd[2012]: 2025-01-17 12:03:15.327 [WARNING][6134] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" HandleID="k8s-pod-network.1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0" Jan 17 12:03:15.335492 containerd[2012]: 2025-01-17 12:03:15.327 [INFO][6134] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" HandleID="k8s-pod-network.1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Workload="ip--172--31--18--94-k8s-coredns--6f6b679f8f--lfw8l-eth0" Jan 17 12:03:15.335492 containerd[2012]: 2025-01-17 12:03:15.330 [INFO][6134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:15.335492 containerd[2012]: 2025-01-17 12:03:15.333 [INFO][6128] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826" Jan 17 12:03:15.335492 containerd[2012]: time="2025-01-17T12:03:15.335442414Z" level=info msg="TearDown network for sandbox \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\" successfully" Jan 17 12:03:15.343951 containerd[2012]: time="2025-01-17T12:03:15.343696878Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:03:15.343951 containerd[2012]: time="2025-01-17T12:03:15.343790634Z" level=info msg="RemovePodSandbox \"1fa4355c2c862b8ce5572d68abc8703b12abf5e5c2ab917fbb46356600ab5826\" returns successfully" Jan 17 12:03:15.661520 kubelet[3414]: I0117 12:03:15.661243 3414 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:03:15.702223 kubelet[3414]: I0117 12:03:15.701433 3414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7t526" podStartSLOduration=35.437014256 podStartE2EDuration="46.701410292s" podCreationTimestamp="2025-01-17 12:02:29 +0000 UTC" firstStartedPulling="2025-01-17 12:02:55.074223754 +0000 UTC m=+42.257672287" lastFinishedPulling="2025-01-17 12:03:06.338619802 +0000 UTC m=+53.522068323" observedRunningTime="2025-01-17 12:03:06.674604239 +0000 UTC m=+53.858052772" watchObservedRunningTime="2025-01-17 12:03:15.701410292 +0000 UTC m=+62.884858813" Jan 17 12:03:19.525646 systemd[1]: Started sshd@14-172.31.18.94:22-139.178.68.195:57344.service - OpenSSH per-connection server daemon (139.178.68.195:57344). Jan 17 12:03:19.710237 sshd[6149]: Accepted publickey for core from 139.178.68.195 port 57344 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:19.712928 sshd[6149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:19.721458 systemd-logind[1998]: New session 15 of user core. Jan 17 12:03:19.728385 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:03:20.086965 sshd[6149]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:20.093361 systemd[1]: run-containerd-runc-k8s.io-ca1295965302fbcbb0b8365c227640289de4099ef86173c2f6c80d8bee5288e2-runc.TUxLMr.mount: Deactivated successfully. 
Jan 17 12:03:20.097541 systemd[1]: sshd@14-172.31.18.94:22-139.178.68.195:57344.service: Deactivated successfully. Jan 17 12:03:20.102653 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:03:20.106836 systemd-logind[1998]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:03:20.112878 systemd-logind[1998]: Removed session 15. Jan 17 12:03:25.124690 systemd[1]: Started sshd@15-172.31.18.94:22-139.178.68.195:41172.service - OpenSSH per-connection server daemon (139.178.68.195:41172). Jan 17 12:03:25.309956 sshd[6186]: Accepted publickey for core from 139.178.68.195 port 41172 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:25.312780 sshd[6186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:25.321233 systemd-logind[1998]: New session 16 of user core. Jan 17 12:03:25.329378 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:03:25.440985 kubelet[3414]: I0117 12:03:25.440835 3414 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:03:25.633035 sshd[6186]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:25.639634 systemd-logind[1998]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:03:25.639723 systemd[1]: sshd@15-172.31.18.94:22-139.178.68.195:41172.service: Deactivated successfully. Jan 17 12:03:25.644333 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:03:25.646709 systemd-logind[1998]: Removed session 16. Jan 17 12:03:30.676388 systemd[1]: Started sshd@16-172.31.18.94:22-139.178.68.195:41180.service - OpenSSH per-connection server daemon (139.178.68.195:41180). Jan 17 12:03:30.866442 sshd[6225]: Accepted publickey for core from 139.178.68.195 port 41180 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:30.870139 sshd[6225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:30.885443 systemd-logind[1998]: New session 17 of user core. Jan 17 12:03:30.891394 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:03:31.173421 sshd[6225]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:31.180002 systemd[1]: sshd@16-172.31.18.94:22-139.178.68.195:41180.service: Deactivated successfully. Jan 17 12:03:31.187705 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:03:31.191676 systemd-logind[1998]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:03:31.195347 systemd-logind[1998]: Removed session 17. Jan 17 12:03:32.008303 containerd[2012]: time="2025-01-17T12:03:32.007611789Z" level=info msg="StopContainer for \"be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483\" with timeout 300 (s)" Jan 17 12:03:32.010224 containerd[2012]: time="2025-01-17T12:03:32.009387693Z" level=info msg="Stop container \"be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483\" with signal terminated" Jan 17 12:03:32.399498 containerd[2012]: time="2025-01-17T12:03:32.399435011Z" level=info msg="StopContainer for \"ca1295965302fbcbb0b8365c227640289de4099ef86173c2f6c80d8bee5288e2\" with timeout 30 (s)" Jan 17 12:03:32.400214 containerd[2012]: time="2025-01-17T12:03:32.400132799Z" level=info msg="Stop container \"ca1295965302fbcbb0b8365c227640289de4099ef86173c2f6c80d8bee5288e2\" with signal terminated" Jan 17 12:03:32.428658 systemd[1]: cri-containerd-ca1295965302fbcbb0b8365c227640289de4099ef86173c2f6c80d8bee5288e2.scope: Deactivated successfully. 
Jan 17 12:03:32.501810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca1295965302fbcbb0b8365c227640289de4099ef86173c2f6c80d8bee5288e2-rootfs.mount: Deactivated successfully. Jan 17 12:03:32.523233 containerd[2012]: time="2025-01-17T12:03:32.523079784Z" level=info msg="shim disconnected" id=ca1295965302fbcbb0b8365c227640289de4099ef86173c2f6c80d8bee5288e2 namespace=k8s.io Jan 17 12:03:32.523233 containerd[2012]: time="2025-01-17T12:03:32.523276044Z" level=warning msg="cleaning up after shim disconnected" id=ca1295965302fbcbb0b8365c227640289de4099ef86173c2f6c80d8bee5288e2 namespace=k8s.io Jan 17 12:03:32.523233 containerd[2012]: time="2025-01-17T12:03:32.523301352Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:03:32.611814 containerd[2012]: time="2025-01-17T12:03:32.611714184Z" level=info msg="StopContainer for \"ca1295965302fbcbb0b8365c227640289de4099ef86173c2f6c80d8bee5288e2\" returns successfully" Jan 17 12:03:32.612768 containerd[2012]: time="2025-01-17T12:03:32.612637020Z" level=info msg="StopPodSandbox for \"64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872\"" Jan 17 12:03:32.612768 containerd[2012]: time="2025-01-17T12:03:32.612705792Z" level=info msg="Container to stop \"ca1295965302fbcbb0b8365c227640289de4099ef86173c2f6c80d8bee5288e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:03:32.620800 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872-shm.mount: Deactivated successfully. Jan 17 12:03:32.638986 systemd[1]: cri-containerd-64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872.scope: Deactivated successfully. Jan 17 12:03:32.696514 containerd[2012]: time="2025-01-17T12:03:32.696330360Z" level=info msg="shim disconnected" id=64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872 namespace=k8s.io Jan 17 12:03:32.696514 containerd[2012]: time="2025-01-17T12:03:32.696413568Z" level=warning msg="cleaning up after shim disconnected" id=64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872 namespace=k8s.io Jan 17 12:03:32.696514 containerd[2012]: time="2025-01-17T12:03:32.696436404Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:03:32.710404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872-rootfs.mount: Deactivated successfully. Jan 17 12:03:32.795704 kubelet[3414]: I0117 12:03:32.795646 3414 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Jan 17 12:03:32.874460 systemd-networkd[1917]: cali1ee17564c7f: Link DOWN Jan 17 12:03:32.874480 systemd-networkd[1917]: cali1ee17564c7f: Lost carrier Jan 17 12:03:33.043524 containerd[2012]: 2025-01-17 12:03:32.870 [INFO][6320] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Jan 17 12:03:33.043524 containerd[2012]: 2025-01-17 12:03:32.871 [INFO][6320] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" iface="eth0" netns="/var/run/netns/cni-5495c8a3-966f-bda3-df34-b89dbd93f950" Jan 17 12:03:33.043524 containerd[2012]: 2025-01-17 12:03:32.871 [INFO][6320] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" iface="eth0" netns="/var/run/netns/cni-5495c8a3-966f-bda3-df34-b89dbd93f950" Jan 17 12:03:33.043524 containerd[2012]: 2025-01-17 12:03:32.882 [INFO][6320] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" after=11.06124ms iface="eth0" netns="/var/run/netns/cni-5495c8a3-966f-bda3-df34-b89dbd93f950" Jan 17 12:03:33.043524 containerd[2012]: 2025-01-17 12:03:32.882 [INFO][6320] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Jan 17 12:03:33.043524 containerd[2012]: 2025-01-17 12:03:32.882 [INFO][6320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Jan 17 12:03:33.043524 containerd[2012]: 2025-01-17 12:03:32.952 [INFO][6326] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" HandleID="k8s-pod-network.64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:03:33.043524 containerd[2012]: 2025-01-17 12:03:32.953 [INFO][6326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:33.043524 containerd[2012]: 2025-01-17 12:03:32.953 [INFO][6326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:33.043524 containerd[2012]: 2025-01-17 12:03:33.035 [INFO][6326] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" HandleID="k8s-pod-network.64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:03:33.043524 containerd[2012]: 2025-01-17 12:03:33.035 [INFO][6326] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" HandleID="k8s-pod-network.64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:03:33.043524 containerd[2012]: 2025-01-17 12:03:33.037 [INFO][6326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:33.043524 containerd[2012]: 2025-01-17 12:03:33.041 [INFO][6320] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Jan 17 12:03:33.047410 containerd[2012]: time="2025-01-17T12:03:33.046429270Z" level=info msg="TearDown network for sandbox \"64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872\" successfully" Jan 17 12:03:33.047410 containerd[2012]: time="2025-01-17T12:03:33.046489042Z" level=info msg="StopPodSandbox for \"64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872\" returns successfully" Jan 17 12:03:33.052061 systemd[1]: run-netns-cni\x2d5495c8a3\x2d966f\x2dbda3\x2ddf34\x2db89dbd93f950.mount: Deactivated successfully. 
Jan 17 12:03:33.193609 kubelet[3414]: I0117 12:03:33.192979 3414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84pnm\" (UniqueName: \"kubernetes.io/projected/c15c08ff-5739-4693-9145-35518ef5e967-kube-api-access-84pnm\") pod \"c15c08ff-5739-4693-9145-35518ef5e967\" (UID: \"c15c08ff-5739-4693-9145-35518ef5e967\") " Jan 17 12:03:33.193609 kubelet[3414]: I0117 12:03:33.193065 3414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c15c08ff-5739-4693-9145-35518ef5e967-tigera-ca-bundle\") pod \"c15c08ff-5739-4693-9145-35518ef5e967\" (UID: \"c15c08ff-5739-4693-9145-35518ef5e967\") " Jan 17 12:03:33.200318 kubelet[3414]: I0117 12:03:33.199944 3414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c15c08ff-5739-4693-9145-35518ef5e967-kube-api-access-84pnm" (OuterVolumeSpecName: "kube-api-access-84pnm") pod "c15c08ff-5739-4693-9145-35518ef5e967" (UID: "c15c08ff-5739-4693-9145-35518ef5e967"). InnerVolumeSpecName "kube-api-access-84pnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:03:33.205457 systemd[1]: var-lib-kubelet-pods-c15c08ff\x2d5739\x2d4693\x2d9145\x2d35518ef5e967-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d84pnm.mount: Deactivated successfully. Jan 17 12:03:33.207488 kubelet[3414]: I0117 12:03:33.206935 3414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15c08ff-5739-4693-9145-35518ef5e967-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "c15c08ff-5739-4693-9145-35518ef5e967" (UID: "c15c08ff-5739-4693-9145-35518ef5e967"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:03:33.294681 kubelet[3414]: I0117 12:03:33.294468 3414 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-84pnm\" (UniqueName: \"kubernetes.io/projected/c15c08ff-5739-4693-9145-35518ef5e967-kube-api-access-84pnm\") on node \"ip-172-31-18-94\" DevicePath \"\"" Jan 17 12:03:33.294681 kubelet[3414]: I0117 12:03:33.294514 3414 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c15c08ff-5739-4693-9145-35518ef5e967-tigera-ca-bundle\") on node \"ip-172-31-18-94\" DevicePath \"\"" Jan 17 12:03:33.493938 systemd[1]: var-lib-kubelet-pods-c15c08ff\x2d5739\x2d4693\x2d9145\x2d35518ef5e967-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Jan 17 12:03:33.806752 systemd[1]: Removed slice kubepods-besteffort-podc15c08ff_5739_4693_9145_35518ef5e967.slice - libcontainer container kubepods-besteffort-podc15c08ff_5739_4693_9145_35518ef5e967.slice. Jan 17 12:03:33.871844 kubelet[3414]: E0117 12:03:33.871769 3414 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c15c08ff-5739-4693-9145-35518ef5e967" containerName="calico-kube-controllers" Jan 17 12:03:33.872534 kubelet[3414]: I0117 12:03:33.872051 3414 memory_manager.go:354] "RemoveStaleState removing state" podUID="c15c08ff-5739-4693-9145-35518ef5e967" containerName="calico-kube-controllers" Jan 17 12:03:33.910244 systemd[1]: Created slice kubepods-besteffort-pode4eed7a8_3c33_4ff0_8eb1_109a36497577.slice - libcontainer container kubepods-besteffort-pode4eed7a8_3c33_4ff0_8eb1_109a36497577.slice. 
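[editor's note] The long .mount unit names above are systemd's escaped form of the kubelet volume paths: "/" becomes "-", and reserved characters such as "-" and "~" become \x2d and \x7e. The sketch below approximates that escaping (assumed to follow the systemd-escape --path rules; not a substitute for the real tool) and reproduces the kube-api-access unit name seen above:

    import re

    def systemd_escape_path(path: str) -> str:
        # Approximation of systemd path escaping: strip slashes, map "/" to "-",
        # keep [A-Za-z0-9_.] (except a leading "."), hex-escape everything else.
        path = path.strip("/")
        out = []
        for i, ch in enumerate(path):
            if ch == "/":
                out.append("-")
            elif re.match(r"[A-Za-z0-9_.]", ch) and not (ch == "." and i == 0):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out)

    p = ("/var/lib/kubelet/pods/c15c08ff-5739-4693-9145-35518ef5e967"
         "/volumes/kubernetes.io~projected/kube-api-access-84pnm")
    print(systemd_escape_path(p) + ".mount")
    # -> var-lib-kubelet-pods-c15c08ff\x2d5739\x2d...-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d84pnm.mount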
Jan 17 12:03:34.000278 kubelet[3414]: I0117 12:03:34.000173 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv9fm\" (UniqueName: \"kubernetes.io/projected/e4eed7a8-3c33-4ff0-8eb1-109a36497577-kube-api-access-xv9fm\") pod \"calico-kube-controllers-c747cfd4d-fdcqv\" (UID: \"e4eed7a8-3c33-4ff0-8eb1-109a36497577\") " pod="calico-system/calico-kube-controllers-c747cfd4d-fdcqv" Jan 17 12:03:34.000278 kubelet[3414]: I0117 12:03:34.000301 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4eed7a8-3c33-4ff0-8eb1-109a36497577-tigera-ca-bundle\") pod \"calico-kube-controllers-c747cfd4d-fdcqv\" (UID: \"e4eed7a8-3c33-4ff0-8eb1-109a36497577\") " pod="calico-system/calico-kube-controllers-c747cfd4d-fdcqv" Jan 17 12:03:34.217001 containerd[2012]: time="2025-01-17T12:03:34.216802272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c747cfd4d-fdcqv,Uid:e4eed7a8-3c33-4ff0-8eb1-109a36497577,Namespace:calico-system,Attempt:0,}" Jan 17 12:03:34.538262 (udev-worker)[6333]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:03:34.542269 systemd-networkd[1917]: calif6e669fb885: Link UP Jan 17 12:03:34.543232 systemd-networkd[1917]: calif6e669fb885: Gained carrier Jan 17 12:03:34.580814 containerd[2012]: 2025-01-17 12:03:34.364 [INFO][6354] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--94-k8s-calico--kube--controllers--c747cfd4d--fdcqv-eth0 calico-kube-controllers-c747cfd4d- calico-system e4eed7a8-3c33-4ff0-8eb1-109a36497577 1195 0 2025-01-17 12:03:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c747cfd4d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-94 calico-kube-controllers-c747cfd4d-fdcqv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif6e669fb885 [] []}} ContainerID="f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04" Namespace="calico-system" Pod="calico-kube-controllers-c747cfd4d-fdcqv" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--kube--controllers--c747cfd4d--fdcqv-" Jan 17 12:03:34.580814 containerd[2012]: 2025-01-17 12:03:34.365 [INFO][6354] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04" Namespace="calico-system" Pod="calico-kube-controllers-c747cfd4d-fdcqv" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--kube--controllers--c747cfd4d--fdcqv-eth0" Jan 17 12:03:34.580814 containerd[2012]: 2025-01-17 12:03:34.451 [INFO][6368] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04" HandleID="k8s-pod-network.f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--c747cfd4d--fdcqv-eth0" Jan 17 12:03:34.580814 containerd[2012]: 2025-01-17 12:03:34.471 [INFO][6368] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04" HandleID="k8s-pod-network.f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04" 
Workload="ip--172--31--18--94-k8s-calico--kube--controllers--c747cfd4d--fdcqv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ea710), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-94", "pod":"calico-kube-controllers-c747cfd4d-fdcqv", "timestamp":"2025-01-17 12:03:34.451377313 +0000 UTC"}, Hostname:"ip-172-31-18-94", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:03:34.580814 containerd[2012]: 2025-01-17 12:03:34.471 [INFO][6368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:34.580814 containerd[2012]: 2025-01-17 12:03:34.471 [INFO][6368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:34.580814 containerd[2012]: 2025-01-17 12:03:34.471 [INFO][6368] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-94' Jan 17 12:03:34.580814 containerd[2012]: 2025-01-17 12:03:34.475 [INFO][6368] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04" host="ip-172-31-18-94" Jan 17 12:03:34.580814 containerd[2012]: 2025-01-17 12:03:34.485 [INFO][6368] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-18-94" Jan 17 12:03:34.580814 containerd[2012]: 2025-01-17 12:03:34.494 [INFO][6368] ipam/ipam.go 489: Trying affinity for 192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:03:34.580814 containerd[2012]: 2025-01-17 12:03:34.497 [INFO][6368] ipam/ipam.go 155: Attempting to load block cidr=192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:03:34.580814 containerd[2012]: 2025-01-17 12:03:34.501 [INFO][6368] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="ip-172-31-18-94" Jan 17 12:03:34.580814 containerd[2012]: 2025-01-17 12:03:34.502 [INFO][6368] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04" host="ip-172-31-18-94" Jan 17 12:03:34.580814 containerd[2012]: 2025-01-17 12:03:34.504 [INFO][6368] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04 Jan 17 12:03:34.580814 containerd[2012]: 2025-01-17 12:03:34.513 [INFO][6368] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.29.0/26 handle="k8s-pod-network.f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04" host="ip-172-31-18-94" Jan 17 12:03:34.580814 containerd[2012]: 2025-01-17 12:03:34.528 [INFO][6368] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.29.7/26] block=192.168.29.0/26 handle="k8s-pod-network.f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04" host="ip-172-31-18-94" Jan 17 12:03:34.580814 containerd[2012]: 2025-01-17 12:03:34.528 [INFO][6368] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.29.7/26] handle="k8s-pod-network.f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04" host="ip-172-31-18-94" Jan 17 12:03:34.580814 containerd[2012]: 2025-01-17 12:03:34.528 [INFO][6368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:03:34.580814 containerd[2012]: 2025-01-17 12:03:34.528 [INFO][6368] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.29.7/26] IPv6=[] ContainerID="f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04" HandleID="k8s-pod-network.f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--c747cfd4d--fdcqv-eth0" Jan 17 12:03:34.585671 containerd[2012]: 2025-01-17 12:03:34.532 [INFO][6354] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04" Namespace="calico-system" Pod="calico-kube-controllers-c747cfd4d-fdcqv" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--kube--controllers--c747cfd4d--fdcqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-calico--kube--controllers--c747cfd4d--fdcqv-eth0", GenerateName:"calico-kube-controllers-c747cfd4d-", Namespace:"calico-system", SelfLink:"", UID:"e4eed7a8-3c33-4ff0-8eb1-109a36497577", ResourceVersion:"1195", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 3, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c747cfd4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"", Pod:"calico-kube-controllers-c747cfd4d-fdcqv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif6e669fb885", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:34.585671 containerd[2012]: 2025-01-17 12:03:34.532 [INFO][6354] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.29.7/32] ContainerID="f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04" Namespace="calico-system" Pod="calico-kube-controllers-c747cfd4d-fdcqv" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--kube--controllers--c747cfd4d--fdcqv-eth0" Jan 17 12:03:34.585671 containerd[2012]: 2025-01-17 12:03:34.532 [INFO][6354] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6e669fb885 ContainerID="f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04" Namespace="calico-system" Pod="calico-kube-controllers-c747cfd4d-fdcqv" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--kube--controllers--c747cfd4d--fdcqv-eth0" Jan 17 12:03:34.585671 containerd[2012]: 2025-01-17 12:03:34.543 [INFO][6354] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04" Namespace="calico-system" Pod="calico-kube-controllers-c747cfd4d-fdcqv" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--kube--controllers--c747cfd4d--fdcqv-eth0" Jan 17 12:03:34.585671 containerd[2012]: 2025-01-17 12:03:34.547 [INFO][6354] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04" Namespace="calico-system" Pod="calico-kube-controllers-c747cfd4d-fdcqv" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--kube--controllers--c747cfd4d--fdcqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--94-k8s-calico--kube--controllers--c747cfd4d--fdcqv-eth0", GenerateName:"calico-kube-controllers-c747cfd4d-", Namespace:"calico-system", SelfLink:"", UID:"e4eed7a8-3c33-4ff0-8eb1-109a36497577", ResourceVersion:"1195", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 3, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c747cfd4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-94", ContainerID:"f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04", Pod:"calico-kube-controllers-c747cfd4d-fdcqv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif6e669fb885", MAC:"d2:b4:5f:c1:49:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:34.585671 containerd[2012]: 2025-01-17 12:03:34.576 [INFO][6354] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04" Namespace="calico-system" Pod="calico-kube-controllers-c747cfd4d-fdcqv" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--kube--controllers--c747cfd4d--fdcqv-eth0" Jan 17 12:03:34.651700 containerd[2012]: time="2025-01-17T12:03:34.649578362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:03:34.651700 containerd[2012]: time="2025-01-17T12:03:34.649692482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:03:34.651700 containerd[2012]: time="2025-01-17T12:03:34.649729190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:34.651700 containerd[2012]: time="2025-01-17T12:03:34.649917398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:34.711468 systemd[1]: Started cri-containerd-f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04.scope - libcontainer container f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04. 
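[editor's note] The IPAM trace above walks the usual Calico flow: look up the node's block affinities, confirm the 192.168.29.0/26 block, then claim the next free address (192.168.29.7) from it and record the handle. A quick sanity check of that block arithmetic with the standard library (no Calico code involved):

    import ipaddress

    block = ipaddress.ip_network("192.168.29.0/26")
    pod_ip = ipaddress.ip_address("192.168.29.7")

    print(block.num_addresses)       # 64 addresses per /26 affinity block
    print(block[0], "-", block[-1])  # 192.168.29.0 - 192.168.29.63
    print(pod_ip in block)           # True: the claimed address is inside the node's block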
Jan 17 12:03:34.817259 containerd[2012]: time="2025-01-17T12:03:34.815560767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c747cfd4d-fdcqv,Uid:e4eed7a8-3c33-4ff0-8eb1-109a36497577,Namespace:calico-system,Attempt:0,} returns sandbox id \"f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04\"" Jan 17 12:03:34.853770 containerd[2012]: time="2025-01-17T12:03:34.853698099Z" level=info msg="CreateContainer within sandbox \"f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:03:34.886592 containerd[2012]: time="2025-01-17T12:03:34.886512975Z" level=info msg="CreateContainer within sandbox \"f38e424d1b29af27872d0baf53802d6056445da7f1680d1963813d52ef339b04\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"87cb6e2193eedc5c0499d908652e4a27e36069c3a784047bb8f9323e60dd039d\"" Jan 17 12:03:34.887776 containerd[2012]: time="2025-01-17T12:03:34.887312079Z" level=info msg="StartContainer for \"87cb6e2193eedc5c0499d908652e4a27e36069c3a784047bb8f9323e60dd039d\"" Jan 17 12:03:34.944446 systemd[1]: Started cri-containerd-87cb6e2193eedc5c0499d908652e4a27e36069c3a784047bb8f9323e60dd039d.scope - libcontainer container 87cb6e2193eedc5c0499d908652e4a27e36069c3a784047bb8f9323e60dd039d. Jan 17 12:03:35.030696 containerd[2012]: time="2025-01-17T12:03:35.028988748Z" level=info msg="StartContainer for \"87cb6e2193eedc5c0499d908652e4a27e36069c3a784047bb8f9323e60dd039d\" returns successfully" Jan 17 12:03:35.106980 kubelet[3414]: I0117 12:03:35.106677 3414 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c15c08ff-5739-4693-9145-35518ef5e967" path="/var/lib/kubelet/pods/c15c08ff-5739-4693-9145-35518ef5e967/volumes" Jan 17 12:03:35.831508 kubelet[3414]: I0117 12:03:35.831371 3414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c747cfd4d-fdcqv" podStartSLOduration=2.83134342 podStartE2EDuration="2.83134342s" podCreationTimestamp="2025-01-17 12:03:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:03:35.829596952 +0000 UTC m=+83.013045497" watchObservedRunningTime="2025-01-17 12:03:35.83134342 +0000 UTC m=+83.014791977" Jan 17 12:03:36.216693 systemd[1]: Started sshd@17-172.31.18.94:22-139.178.68.195:43996.service - OpenSSH per-connection server daemon (139.178.68.195:43996). Jan 17 12:03:36.281432 systemd-networkd[1917]: calif6e669fb885: Gained IPv6LL Jan 17 12:03:36.394145 sshd[6505]: Accepted publickey for core from 139.178.68.195 port 43996 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:36.397605 sshd[6505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:36.405938 systemd-logind[1998]: New session 18 of user core. Jan 17 12:03:36.413448 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:03:36.696986 sshd[6505]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:36.703447 systemd[1]: sshd@17-172.31.18.94:22-139.178.68.195:43996.service: Deactivated successfully. Jan 17 12:03:36.707414 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:03:36.708970 systemd-logind[1998]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:03:36.711050 systemd-logind[1998]: Removed session 18. 
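[editor's note] The sshd/pam_unix pairs above ("session opened" / "session closed", keyed by the sshd PID) are enough to measure how long each SSH session lasted. A small parser sketch for journal lines in the format shown here; the sample entries are copied from session 18 above, and the year is assumed since these timestamps carry none:

    import re
    from datetime import datetime

    OPEN  = re.compile(r"^(\w+ \d+ [\d:.]+) .*sshd\[(\d+)\]: pam_unix\(sshd:session\): session opened")
    CLOSE = re.compile(r"^(\w+ \d+ [\d:.]+) .*sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed")

    def ts(s):  # journal timestamps here carry no year; 2025 assumed, as in the log
        return datetime.strptime("2025 " + s, "%Y %b %d %H:%M:%S.%f")

    def session_durations(lines):
        opened, durations = {}, []
        for line in lines:
            if m := OPEN.search(line):
                opened[m.group(2)] = ts(m.group(1))
            elif (m := CLOSE.search(line)) and m.group(2) in opened:
                durations.append((m.group(2), (ts(m.group(1)) - opened.pop(m.group(2))).total_seconds()))
        return durations

    sample = [
        "Jan 17 12:03:36.397605 sshd[6505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)",
        "Jan 17 12:03:36.696986 sshd[6505]: pam_unix(sshd:session): session closed for user core",
    ]
    print(session_durations(sample))  # [('6505', 0.299381)] -> session 18 lasted ~0.3s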
Jan 17 12:03:36.737614 systemd[1]: Started sshd@18-172.31.18.94:22-139.178.68.195:44004.service - OpenSSH per-connection server daemon (139.178.68.195:44004). Jan 17 12:03:36.920198 sshd[6525]: Accepted publickey for core from 139.178.68.195 port 44004 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:36.923342 sshd[6525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:36.931437 systemd-logind[1998]: New session 19 of user core. Jan 17 12:03:36.940402 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:03:37.497427 sshd[6525]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:37.503558 systemd[1]: sshd@18-172.31.18.94:22-139.178.68.195:44004.service: Deactivated successfully. Jan 17 12:03:37.509836 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:03:37.511208 systemd-logind[1998]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:03:37.512850 systemd-logind[1998]: Removed session 19. Jan 17 12:03:37.539613 systemd[1]: Started sshd@19-172.31.18.94:22-139.178.68.195:44012.service - OpenSSH per-connection server daemon (139.178.68.195:44012). Jan 17 12:03:37.717398 sshd[6556]: Accepted publickey for core from 139.178.68.195 port 44012 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:37.722236 sshd[6556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:37.732355 systemd-logind[1998]: New session 20 of user core. Jan 17 12:03:37.741424 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:03:38.007254 systemd[1]: cri-containerd-be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483.scope: Deactivated successfully. Jan 17 12:03:38.064430 containerd[2012]: time="2025-01-17T12:03:38.064325631Z" level=info msg="shim disconnected" id=be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483 namespace=k8s.io Jan 17 12:03:38.064430 containerd[2012]: time="2025-01-17T12:03:38.064423095Z" level=warning msg="cleaning up after shim disconnected" id=be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483 namespace=k8s.io Jan 17 12:03:38.066655 containerd[2012]: time="2025-01-17T12:03:38.064445487Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:03:38.068907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483-rootfs.mount: Deactivated successfully. Jan 17 12:03:38.134142 containerd[2012]: time="2025-01-17T12:03:38.132634779Z" level=info msg="StopContainer for \"be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483\" returns successfully" Jan 17 12:03:38.134692 containerd[2012]: time="2025-01-17T12:03:38.134558415Z" level=info msg="StopPodSandbox for \"51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5\"" Jan 17 12:03:38.134692 containerd[2012]: time="2025-01-17T12:03:38.134625519Z" level=info msg="Container to stop \"be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:03:38.143022 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5-shm.mount: Deactivated successfully. Jan 17 12:03:38.159557 systemd[1]: cri-containerd-51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5.scope: Deactivated successfully. 
Jan 17 12:03:38.201190 containerd[2012]: time="2025-01-17T12:03:38.199224376Z" level=info msg="shim disconnected" id=51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5 namespace=k8s.io Jan 17 12:03:38.203378 containerd[2012]: time="2025-01-17T12:03:38.201185140Z" level=warning msg="cleaning up after shim disconnected" id=51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5 namespace=k8s.io Jan 17 12:03:38.203378 containerd[2012]: time="2025-01-17T12:03:38.203192728Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:03:38.206534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5-rootfs.mount: Deactivated successfully. Jan 17 12:03:38.244561 containerd[2012]: time="2025-01-17T12:03:38.244478896Z" level=info msg="TearDown network for sandbox \"51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5\" successfully" Jan 17 12:03:38.244561 containerd[2012]: time="2025-01-17T12:03:38.244550128Z" level=info msg="StopPodSandbox for \"51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5\" returns successfully" Jan 17 12:03:38.444683 kubelet[3414]: I0117 12:03:38.443306 3414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b81bbcf-5d04-46fd-b144-0bcb7c79e164-tigera-ca-bundle\") pod \"5b81bbcf-5d04-46fd-b144-0bcb7c79e164\" (UID: \"5b81bbcf-5d04-46fd-b144-0bcb7c79e164\") " Jan 17 12:03:38.444683 kubelet[3414]: I0117 12:03:38.443376 3414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7b7g\" (UniqueName: \"kubernetes.io/projected/5b81bbcf-5d04-46fd-b144-0bcb7c79e164-kube-api-access-z7b7g\") pod \"5b81bbcf-5d04-46fd-b144-0bcb7c79e164\" (UID: \"5b81bbcf-5d04-46fd-b144-0bcb7c79e164\") " Jan 17 12:03:38.444683 kubelet[3414]: I0117 12:03:38.443418 3414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5b81bbcf-5d04-46fd-b144-0bcb7c79e164-typha-certs\") pod \"5b81bbcf-5d04-46fd-b144-0bcb7c79e164\" (UID: \"5b81bbcf-5d04-46fd-b144-0bcb7c79e164\") " Jan 17 12:03:38.463276 systemd[1]: var-lib-kubelet-pods-5b81bbcf\x2d5d04\x2d46fd\x2db144\x2d0bcb7c79e164-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jan 17 12:03:38.467088 kubelet[3414]: I0117 12:03:38.465315 3414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b81bbcf-5d04-46fd-b144-0bcb7c79e164-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "5b81bbcf-5d04-46fd-b144-0bcb7c79e164" (UID: "5b81bbcf-5d04-46fd-b144-0bcb7c79e164"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 17 12:03:38.479151 kubelet[3414]: I0117 12:03:38.476146 3414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b81bbcf-5d04-46fd-b144-0bcb7c79e164-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "5b81bbcf-5d04-46fd-b144-0bcb7c79e164" (UID: "5b81bbcf-5d04-46fd-b144-0bcb7c79e164"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:03:38.479151 kubelet[3414]: I0117 12:03:38.478842 3414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b81bbcf-5d04-46fd-b144-0bcb7c79e164-kube-api-access-z7b7g" (OuterVolumeSpecName: "kube-api-access-z7b7g") pod "5b81bbcf-5d04-46fd-b144-0bcb7c79e164" (UID: "5b81bbcf-5d04-46fd-b144-0bcb7c79e164"). InnerVolumeSpecName "kube-api-access-z7b7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:03:38.484725 systemd[1]: var-lib-kubelet-pods-5b81bbcf\x2d5d04\x2d46fd\x2db144\x2d0bcb7c79e164-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jan 17 12:03:38.484961 systemd[1]: var-lib-kubelet-pods-5b81bbcf\x2d5d04\x2d46fd\x2db144\x2d0bcb7c79e164-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz7b7g.mount: Deactivated successfully. Jan 17 12:03:38.544369 kubelet[3414]: I0117 12:03:38.544311 3414 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b81bbcf-5d04-46fd-b144-0bcb7c79e164-tigera-ca-bundle\") on node \"ip-172-31-18-94\" DevicePath \"\"" Jan 17 12:03:38.544369 kubelet[3414]: I0117 12:03:38.544362 3414 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-z7b7g\" (UniqueName: \"kubernetes.io/projected/5b81bbcf-5d04-46fd-b144-0bcb7c79e164-kube-api-access-z7b7g\") on node \"ip-172-31-18-94\" DevicePath \"\"" Jan 17 12:03:38.544657 kubelet[3414]: I0117 12:03:38.544386 3414 reconciler_common.go:288] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5b81bbcf-5d04-46fd-b144-0bcb7c79e164-typha-certs\") on node \"ip-172-31-18-94\" DevicePath \"\"" Jan 17 12:03:38.829247 kubelet[3414]: I0117 12:03:38.827560 3414 scope.go:117] "RemoveContainer" containerID="be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483" Jan 17 12:03:38.836254 containerd[2012]: time="2025-01-17T12:03:38.836196307Z" level=info msg="RemoveContainer for \"be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483\"" Jan 17 12:03:38.852038 containerd[2012]: time="2025-01-17T12:03:38.851976367Z" level=info msg="RemoveContainer for \"be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483\" returns successfully" Jan 17 12:03:38.853256 kubelet[3414]: I0117 12:03:38.853058 3414 scope.go:117] "RemoveContainer" containerID="be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483" Jan 17 12:03:38.854120 containerd[2012]: time="2025-01-17T12:03:38.853789591Z" level=error msg="ContainerStatus for \"be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483\": not found" Jan 17 12:03:38.857749 kubelet[3414]: E0117 12:03:38.856946 3414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483\": not found" containerID="be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483" Jan 17 12:03:38.858215 kubelet[3414]: I0117 12:03:38.858167 3414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483"} err="failed to get container status 
\"be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483\": rpc error: code = NotFound desc = an error occurred when try to find container \"be121c37b27f81db3d4fbb9f48345e9bc23e99d35cef0be32a3f709480695483\": not found" Jan 17 12:03:38.862283 systemd[1]: Removed slice kubepods-besteffort-pod5b81bbcf_5d04_46fd_b144_0bcb7c79e164.slice - libcontainer container kubepods-besteffort-pod5b81bbcf_5d04_46fd_b144_0bcb7c79e164.slice. Jan 17 12:03:39.097285 ntpd[1992]: Listen normally on 15 calif6e669fb885 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 17 12:03:39.097923 ntpd[1992]: 17 Jan 12:03:39 ntpd[1992]: Listen normally on 15 calif6e669fb885 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 17 12:03:39.097923 ntpd[1992]: 17 Jan 12:03:39 ntpd[1992]: Deleting interface #13 cali1ee17564c7f, fe80::ecee:eeff:feee:eeee%11#123, interface stats: received=0, sent=0, dropped=0, active_time=37 secs Jan 17 12:03:39.097374 ntpd[1992]: Deleting interface #13 cali1ee17564c7f, fe80::ecee:eeff:feee:eeee%11#123, interface stats: received=0, sent=0, dropped=0, active_time=37 secs Jan 17 12:03:39.105611 kubelet[3414]: I0117 12:03:39.105332 3414 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b81bbcf-5d04-46fd-b144-0bcb7c79e164" path="/var/lib/kubelet/pods/5b81bbcf-5d04-46fd-b144-0bcb7c79e164/volumes" Jan 17 12:03:41.877143 sshd[6556]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:41.886739 systemd[1]: sshd@19-172.31.18.94:22-139.178.68.195:44012.service: Deactivated successfully. Jan 17 12:03:41.894741 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:03:41.896689 systemd[1]: session-20.scope: Consumed 1.103s CPU time. Jan 17 12:03:41.903434 systemd-logind[1998]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:03:41.931033 systemd[1]: Started sshd@20-172.31.18.94:22-139.178.68.195:44026.service - OpenSSH per-connection server daemon (139.178.68.195:44026). Jan 17 12:03:41.934040 systemd-logind[1998]: Removed session 20. Jan 17 12:03:42.122600 sshd[6718]: Accepted publickey for core from 139.178.68.195 port 44026 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:42.125893 sshd[6718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:42.135663 systemd-logind[1998]: New session 21 of user core. Jan 17 12:03:42.144796 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:03:42.735262 sshd[6718]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:42.747902 systemd[1]: sshd@20-172.31.18.94:22-139.178.68.195:44026.service: Deactivated successfully. Jan 17 12:03:42.748231 systemd-logind[1998]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:03:42.757024 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:03:42.789504 systemd[1]: Started sshd@21-172.31.18.94:22-139.178.68.195:44036.service - OpenSSH per-connection server daemon (139.178.68.195:44036). Jan 17 12:03:42.792221 systemd-logind[1998]: Removed session 21. Jan 17 12:03:42.992672 sshd[6760]: Accepted publickey for core from 139.178.68.195 port 44036 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:42.995969 sshd[6760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:43.004791 systemd-logind[1998]: New session 22 of user core. Jan 17 12:03:43.010418 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 17 12:03:43.248600 sshd[6760]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:43.255004 systemd[1]: sshd@21-172.31.18.94:22-139.178.68.195:44036.service: Deactivated successfully. Jan 17 12:03:43.258997 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:03:43.264599 systemd-logind[1998]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:03:43.266922 systemd-logind[1998]: Removed session 22. Jan 17 12:03:48.288696 systemd[1]: Started sshd@22-172.31.18.94:22-139.178.68.195:60998.service - OpenSSH per-connection server daemon (139.178.68.195:60998). Jan 17 12:03:48.466776 sshd[6857]: Accepted publickey for core from 139.178.68.195 port 60998 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:48.469573 sshd[6857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:48.483043 systemd-logind[1998]: New session 23 of user core. Jan 17 12:03:48.492859 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:03:48.743342 sshd[6857]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:48.749874 systemd[1]: sshd@22-172.31.18.94:22-139.178.68.195:60998.service: Deactivated successfully. Jan 17 12:03:48.755704 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:03:48.759210 systemd-logind[1998]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:03:48.761578 systemd-logind[1998]: Removed session 23. Jan 17 12:03:53.785730 systemd[1]: Started sshd@23-172.31.18.94:22-139.178.68.195:32772.service - OpenSSH per-connection server daemon (139.178.68.195:32772). Jan 17 12:03:53.969164 sshd[6961]: Accepted publickey for core from 139.178.68.195 port 32772 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:53.973061 sshd[6961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:53.985367 systemd-logind[1998]: New session 24 of user core. Jan 17 12:03:53.992500 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 12:03:54.240517 sshd[6961]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:54.252745 systemd[1]: sshd@23-172.31.18.94:22-139.178.68.195:32772.service: Deactivated successfully. Jan 17 12:03:54.258905 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 12:03:54.264752 systemd-logind[1998]: Session 24 logged out. Waiting for processes to exit. Jan 17 12:03:54.267807 systemd-logind[1998]: Removed session 24. Jan 17 12:03:59.287725 systemd[1]: Started sshd@24-172.31.18.94:22-139.178.68.195:54484.service - OpenSSH per-connection server daemon (139.178.68.195:54484). Jan 17 12:03:59.454979 sshd[7069]: Accepted publickey for core from 139.178.68.195 port 54484 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:59.457730 sshd[7069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:59.466505 systemd-logind[1998]: New session 25 of user core. Jan 17 12:03:59.471383 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 12:03:59.724460 sshd[7069]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:59.731216 systemd[1]: sshd@24-172.31.18.94:22-139.178.68.195:54484.service: Deactivated successfully. Jan 17 12:03:59.736060 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 12:03:59.742077 systemd-logind[1998]: Session 25 logged out. Waiting for processes to exit. Jan 17 12:03:59.746092 systemd-logind[1998]: Removed session 25. 
Jan 17 12:04:04.746303 containerd[2012]: time="2025-01-17T12:04:04.746147480Z" level=info msg="StopContainer for \"c908c64cc7cfd2edd6b3dfb9116609b69cebc2303cccac844b965f8543db9b2d\" with timeout 5 (s)" Jan 17 12:04:04.750645 containerd[2012]: time="2025-01-17T12:04:04.750561296Z" level=info msg="Stop container \"c908c64cc7cfd2edd6b3dfb9116609b69cebc2303cccac844b965f8543db9b2d\" with signal terminated" Jan 17 12:04:04.788233 systemd[1]: Started sshd@25-172.31.18.94:22-139.178.68.195:54040.service - OpenSSH per-connection server daemon (139.178.68.195:54040). Jan 17 12:04:04.803640 systemd[1]: cri-containerd-c908c64cc7cfd2edd6b3dfb9116609b69cebc2303cccac844b965f8543db9b2d.scope: Deactivated successfully. Jan 17 12:04:04.804569 systemd[1]: cri-containerd-c908c64cc7cfd2edd6b3dfb9116609b69cebc2303cccac844b965f8543db9b2d.scope: Consumed 11.827s CPU time. Jan 17 12:04:04.846709 containerd[2012]: time="2025-01-17T12:04:04.846578588Z" level=info msg="shim disconnected" id=c908c64cc7cfd2edd6b3dfb9116609b69cebc2303cccac844b965f8543db9b2d namespace=k8s.io Jan 17 12:04:04.847014 containerd[2012]: time="2025-01-17T12:04:04.846857660Z" level=warning msg="cleaning up after shim disconnected" id=c908c64cc7cfd2edd6b3dfb9116609b69cebc2303cccac844b965f8543db9b2d namespace=k8s.io Jan 17 12:04:04.847014 containerd[2012]: time="2025-01-17T12:04:04.846885668Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:04:04.849681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c908c64cc7cfd2edd6b3dfb9116609b69cebc2303cccac844b965f8543db9b2d-rootfs.mount: Deactivated successfully. Jan 17 12:04:04.904722 containerd[2012]: time="2025-01-17T12:04:04.904599260Z" level=info msg="StopContainer for \"c908c64cc7cfd2edd6b3dfb9116609b69cebc2303cccac844b965f8543db9b2d\" returns successfully" Jan 17 12:04:04.905399 containerd[2012]: time="2025-01-17T12:04:04.905348864Z" level=info msg="StopPodSandbox for \"8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403\"" Jan 17 12:04:04.905530 containerd[2012]: time="2025-01-17T12:04:04.905421776Z" level=info msg="Container to stop \"c4e35f3babb8a9dc9a127fb233b7cc6ba52744f601a99945bf89beb504820d70\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:04:04.905530 containerd[2012]: time="2025-01-17T12:04:04.905460416Z" level=info msg="Container to stop \"c908c64cc7cfd2edd6b3dfb9116609b69cebc2303cccac844b965f8543db9b2d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:04:04.905530 containerd[2012]: time="2025-01-17T12:04:04.905498876Z" level=info msg="Container to stop \"ca9d89894cda14c48d4e7ce62dd81e05c80b326bac21a8b2154fcad0e92780c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:04:04.912422 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403-shm.mount: Deactivated successfully. Jan 17 12:04:04.928154 systemd[1]: cri-containerd-8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403.scope: Deactivated successfully. 
Jan 17 12:04:04.968243 containerd[2012]: time="2025-01-17T12:04:04.967915869Z" level=info msg="shim disconnected" id=8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403 namespace=k8s.io Jan 17 12:04:04.968243 containerd[2012]: time="2025-01-17T12:04:04.968007309Z" level=warning msg="cleaning up after shim disconnected" id=8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403 namespace=k8s.io Jan 17 12:04:04.968243 containerd[2012]: time="2025-01-17T12:04:04.968032833Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:04:04.972286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403-rootfs.mount: Deactivated successfully. Jan 17 12:04:04.973564 sshd[7237]: Accepted publickey for core from 139.178.68.195 port 54040 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:04:04.979623 sshd[7237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:04.990024 systemd-logind[1998]: New session 26 of user core. Jan 17 12:04:04.996621 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 12:04:05.018694 containerd[2012]: time="2025-01-17T12:04:05.018625889Z" level=info msg="TearDown network for sandbox \"8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403\" successfully" Jan 17 12:04:05.018694 containerd[2012]: time="2025-01-17T12:04:05.018692201Z" level=info msg="StopPodSandbox for \"8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403\" returns successfully" Jan 17 12:04:05.128145 kubelet[3414]: I0117 12:04:05.124977 3414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-cni-bin-dir\") pod \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " Jan 17 12:04:05.128145 kubelet[3414]: I0117 12:04:05.125932 3414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x69d7\" (UniqueName: \"kubernetes.io/projected/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-kube-api-access-x69d7\") pod \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " Jan 17 12:04:05.128145 kubelet[3414]: I0117 12:04:05.126086 3414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-policysync\") pod \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " Jan 17 12:04:05.128145 kubelet[3414]: I0117 12:04:05.126158 3414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-var-lib-calico\") pod \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " Jan 17 12:04:05.128145 kubelet[3414]: I0117 12:04:05.126202 3414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-flexvol-driver-host\") pod \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " Jan 17 12:04:05.128145 kubelet[3414]: I0117 12:04:05.126258 3414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-tigera-ca-bundle\") pod \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " Jan 17 12:04:05.129036 kubelet[3414]: I0117 12:04:05.126305 3414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-var-run-calico\") pod \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " Jan 17 12:04:05.129036 kubelet[3414]: I0117 12:04:05.126350 3414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-lib-modules\") pod \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " Jan 17 12:04:05.130171 kubelet[3414]: I0117 12:04:05.125675 3414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "1e033f8f-29ae-4b6f-bde0-0458f4589e6b" (UID: "1e033f8f-29ae-4b6f-bde0-0458f4589e6b"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:04:05.131262 kubelet[3414]: E0117 12:04:05.130351 3414 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e033f8f-29ae-4b6f-bde0-0458f4589e6b" containerName="flexvol-driver" Jan 17 12:04:05.131262 kubelet[3414]: E0117 12:04:05.130401 3414 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e033f8f-29ae-4b6f-bde0-0458f4589e6b" containerName="install-cni" Jan 17 12:04:05.131262 kubelet[3414]: E0117 12:04:05.130643 3414 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5b81bbcf-5d04-46fd-b144-0bcb7c79e164" containerName="calico-typha" Jan 17 12:04:05.131262 kubelet[3414]: E0117 12:04:05.130662 3414 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e033f8f-29ae-4b6f-bde0-0458f4589e6b" containerName="calico-node" Jan 17 12:04:05.131262 kubelet[3414]: I0117 12:04:05.131139 3414 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e033f8f-29ae-4b6f-bde0-0458f4589e6b" containerName="calico-node" Jan 17 12:04:05.131262 kubelet[3414]: I0117 12:04:05.131165 3414 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b81bbcf-5d04-46fd-b144-0bcb7c79e164" containerName="calico-typha" Jan 17 12:04:05.136747 kubelet[3414]: I0117 12:04:05.135869 3414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-cni-net-dir\") pod \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " Jan 17 12:04:05.136747 kubelet[3414]: I0117 12:04:05.136024 3414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-node-certs\") pod \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " Jan 17 12:04:05.136747 kubelet[3414]: I0117 12:04:05.136066 3414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-cni-log-dir\") pod \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " Jan 17 12:04:05.137899 
kubelet[3414]: I0117 12:04:05.136951 3414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-xtables-lock\") pod \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\" (UID: \"1e033f8f-29ae-4b6f-bde0-0458f4589e6b\") " Jan 17 12:04:05.137899 kubelet[3414]: I0117 12:04:05.137565 3414 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-cni-bin-dir\") on node \"ip-172-31-18-94\" DevicePath \"\"" Jan 17 12:04:05.139820 kubelet[3414]: I0117 12:04:05.139632 3414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "1e033f8f-29ae-4b6f-bde0-0458f4589e6b" (UID: "1e033f8f-29ae-4b6f-bde0-0458f4589e6b"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:04:05.140753 kubelet[3414]: I0117 12:04:05.140703 3414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-kube-api-access-x69d7" (OuterVolumeSpecName: "kube-api-access-x69d7") pod "1e033f8f-29ae-4b6f-bde0-0458f4589e6b" (UID: "1e033f8f-29ae-4b6f-bde0-0458f4589e6b"). InnerVolumeSpecName "kube-api-access-x69d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:04:05.141342 kubelet[3414]: I0117 12:04:05.140949 3414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-policysync" (OuterVolumeSpecName: "policysync") pod "1e033f8f-29ae-4b6f-bde0-0458f4589e6b" (UID: "1e033f8f-29ae-4b6f-bde0-0458f4589e6b"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:04:05.141342 kubelet[3414]: I0117 12:04:05.141005 3414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "1e033f8f-29ae-4b6f-bde0-0458f4589e6b" (UID: "1e033f8f-29ae-4b6f-bde0-0458f4589e6b"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:04:05.141342 kubelet[3414]: I0117 12:04:05.141046 3414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "1e033f8f-29ae-4b6f-bde0-0458f4589e6b" (UID: "1e033f8f-29ae-4b6f-bde0-0458f4589e6b"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:04:05.150839 kubelet[3414]: I0117 12:04:05.150784 3414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "1e033f8f-29ae-4b6f-bde0-0458f4589e6b" (UID: "1e033f8f-29ae-4b6f-bde0-0458f4589e6b"). InnerVolumeSpecName "var-run-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:04:05.152132 kubelet[3414]: I0117 12:04:05.151058 3414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1e033f8f-29ae-4b6f-bde0-0458f4589e6b" (UID: "1e033f8f-29ae-4b6f-bde0-0458f4589e6b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:04:05.152132 kubelet[3414]: I0117 12:04:05.151918 3414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "1e033f8f-29ae-4b6f-bde0-0458f4589e6b" (UID: "1e033f8f-29ae-4b6f-bde0-0458f4589e6b"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:04:05.155696 kubelet[3414]: I0117 12:04:05.155540 3414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "1e033f8f-29ae-4b6f-bde0-0458f4589e6b" (UID: "1e033f8f-29ae-4b6f-bde0-0458f4589e6b"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:04:05.156264 kubelet[3414]: I0117 12:04:05.156209 3414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1e033f8f-29ae-4b6f-bde0-0458f4589e6b" (UID: "1e033f8f-29ae-4b6f-bde0-0458f4589e6b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:04:05.159499 kubelet[3414]: I0117 12:04:05.159433 3414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-node-certs" (OuterVolumeSpecName: "node-certs") pod "1e033f8f-29ae-4b6f-bde0-0458f4589e6b" (UID: "1e033f8f-29ae-4b6f-bde0-0458f4589e6b"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 17 12:04:05.177028 systemd[1]: Created slice kubepods-besteffort-pod32dc076d_2084_49b7_b3b7_1e097f4277cb.slice - libcontainer container kubepods-besteffort-pod32dc076d_2084_49b7_b3b7_1e097f4277cb.slice. 
Jan 17 12:04:05.241559 kubelet[3414]: I0117 12:04:05.241313 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/32dc076d-2084-49b7-b3b7-1e097f4277cb-policysync\") pod \"calico-node-6lbbr\" (UID: \"32dc076d-2084-49b7-b3b7-1e097f4277cb\") " pod="calico-system/calico-node-6lbbr" Jan 17 12:04:05.241559 kubelet[3414]: I0117 12:04:05.241388 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/32dc076d-2084-49b7-b3b7-1e097f4277cb-node-certs\") pod \"calico-node-6lbbr\" (UID: \"32dc076d-2084-49b7-b3b7-1e097f4277cb\") " pod="calico-system/calico-node-6lbbr" Jan 17 12:04:05.241559 kubelet[3414]: I0117 12:04:05.241429 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32dc076d-2084-49b7-b3b7-1e097f4277cb-xtables-lock\") pod \"calico-node-6lbbr\" (UID: \"32dc076d-2084-49b7-b3b7-1e097f4277cb\") " pod="calico-system/calico-node-6lbbr" Jan 17 12:04:05.241559 kubelet[3414]: I0117 12:04:05.241466 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32dc076d-2084-49b7-b3b7-1e097f4277cb-tigera-ca-bundle\") pod \"calico-node-6lbbr\" (UID: \"32dc076d-2084-49b7-b3b7-1e097f4277cb\") " pod="calico-system/calico-node-6lbbr" Jan 17 12:04:05.241559 kubelet[3414]: I0117 12:04:05.241521 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32dc076d-2084-49b7-b3b7-1e097f4277cb-lib-modules\") pod \"calico-node-6lbbr\" (UID: \"32dc076d-2084-49b7-b3b7-1e097f4277cb\") " pod="calico-system/calico-node-6lbbr" Jan 17 12:04:05.241964 kubelet[3414]: I0117 12:04:05.241562 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/32dc076d-2084-49b7-b3b7-1e097f4277cb-var-lib-calico\") pod \"calico-node-6lbbr\" (UID: \"32dc076d-2084-49b7-b3b7-1e097f4277cb\") " pod="calico-system/calico-node-6lbbr" Jan 17 12:04:05.241964 kubelet[3414]: I0117 12:04:05.241623 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/32dc076d-2084-49b7-b3b7-1e097f4277cb-var-run-calico\") pod \"calico-node-6lbbr\" (UID: \"32dc076d-2084-49b7-b3b7-1e097f4277cb\") " pod="calico-system/calico-node-6lbbr" Jan 17 12:04:05.241964 kubelet[3414]: I0117 12:04:05.241707 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/32dc076d-2084-49b7-b3b7-1e097f4277cb-cni-log-dir\") pod \"calico-node-6lbbr\" (UID: \"32dc076d-2084-49b7-b3b7-1e097f4277cb\") " pod="calico-system/calico-node-6lbbr" Jan 17 12:04:05.241964 kubelet[3414]: I0117 12:04:05.241744 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/32dc076d-2084-49b7-b3b7-1e097f4277cb-flexvol-driver-host\") pod \"calico-node-6lbbr\" (UID: \"32dc076d-2084-49b7-b3b7-1e097f4277cb\") " pod="calico-system/calico-node-6lbbr" Jan 17 12:04:05.241964 kubelet[3414]: I0117 12:04:05.241827 3414 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/32dc076d-2084-49b7-b3b7-1e097f4277cb-cni-bin-dir\") pod \"calico-node-6lbbr\" (UID: \"32dc076d-2084-49b7-b3b7-1e097f4277cb\") " pod="calico-system/calico-node-6lbbr" Jan 17 12:04:05.242267 kubelet[3414]: I0117 12:04:05.241885 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/32dc076d-2084-49b7-b3b7-1e097f4277cb-cni-net-dir\") pod \"calico-node-6lbbr\" (UID: \"32dc076d-2084-49b7-b3b7-1e097f4277cb\") " pod="calico-system/calico-node-6lbbr" Jan 17 12:04:05.242267 kubelet[3414]: I0117 12:04:05.241926 3414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8phsx\" (UniqueName: \"kubernetes.io/projected/32dc076d-2084-49b7-b3b7-1e097f4277cb-kube-api-access-8phsx\") pod \"calico-node-6lbbr\" (UID: \"32dc076d-2084-49b7-b3b7-1e097f4277cb\") " pod="calico-system/calico-node-6lbbr" Jan 17 12:04:05.242267 kubelet[3414]: I0117 12:04:05.241980 3414 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-x69d7\" (UniqueName: \"kubernetes.io/projected/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-kube-api-access-x69d7\") on node \"ip-172-31-18-94\" DevicePath \"\"" Jan 17 12:04:05.242267 kubelet[3414]: I0117 12:04:05.242012 3414 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-policysync\") on node \"ip-172-31-18-94\" DevicePath \"\"" Jan 17 12:04:05.242267 kubelet[3414]: I0117 12:04:05.242035 3414 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-flexvol-driver-host\") on node \"ip-172-31-18-94\" DevicePath \"\"" Jan 17 12:04:05.242267 kubelet[3414]: I0117 12:04:05.242055 3414 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-tigera-ca-bundle\") on node \"ip-172-31-18-94\" DevicePath \"\"" Jan 17 12:04:05.242267 kubelet[3414]: I0117 12:04:05.242076 3414 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-var-lib-calico\") on node \"ip-172-31-18-94\" DevicePath \"\"" Jan 17 12:04:05.242622 kubelet[3414]: I0117 12:04:05.242097 3414 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-var-run-calico\") on node \"ip-172-31-18-94\" DevicePath \"\"" Jan 17 12:04:05.242622 kubelet[3414]: I0117 12:04:05.242153 3414 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-lib-modules\") on node \"ip-172-31-18-94\" DevicePath \"\"" Jan 17 12:04:05.242622 kubelet[3414]: I0117 12:04:05.242175 3414 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-cni-net-dir\") on node \"ip-172-31-18-94\" DevicePath \"\"" Jan 17 12:04:05.242622 kubelet[3414]: I0117 12:04:05.242196 3414 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-node-certs\") on node \"ip-172-31-18-94\" 
DevicePath \"\"" Jan 17 12:04:05.242622 kubelet[3414]: I0117 12:04:05.242215 3414 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-cni-log-dir\") on node \"ip-172-31-18-94\" DevicePath \"\"" Jan 17 12:04:05.242622 kubelet[3414]: I0117 12:04:05.242235 3414 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e033f8f-29ae-4b6f-bde0-0458f4589e6b-xtables-lock\") on node \"ip-172-31-18-94\" DevicePath \"\"" Jan 17 12:04:05.243904 systemd[1]: var-lib-kubelet-pods-1e033f8f\x2d29ae\x2d4b6f\x2dbde0\x2d0458f4589e6b-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Jan 17 12:04:05.244145 systemd[1]: var-lib-kubelet-pods-1e033f8f\x2d29ae\x2d4b6f\x2dbde0\x2d0458f4589e6b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx69d7.mount: Deactivated successfully. Jan 17 12:04:05.244303 systemd[1]: var-lib-kubelet-pods-1e033f8f\x2d29ae\x2d4b6f\x2dbde0\x2d0458f4589e6b-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jan 17 12:04:05.327365 sshd[7237]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:05.332298 systemd-logind[1998]: Session 26 logged out. Waiting for processes to exit. Jan 17 12:04:05.332983 systemd[1]: sshd@25-172.31.18.94:22-139.178.68.195:54040.service: Deactivated successfully. Jan 17 12:04:05.337961 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 12:04:05.343356 systemd-logind[1998]: Removed session 26. Jan 17 12:04:05.491213 containerd[2012]: time="2025-01-17T12:04:05.490997167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6lbbr,Uid:32dc076d-2084-49b7-b3b7-1e097f4277cb,Namespace:calico-system,Attempt:0,}" Jan 17 12:04:05.532688 containerd[2012]: time="2025-01-17T12:04:05.532504364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:04:05.532688 containerd[2012]: time="2025-01-17T12:04:05.532617344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:04:05.533215 containerd[2012]: time="2025-01-17T12:04:05.532690328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:04:05.533215 containerd[2012]: time="2025-01-17T12:04:05.532858304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:04:05.576436 systemd[1]: Started cri-containerd-2909a1a19d93b4e289ad50f492ce2f1d6968631136f35b7ebc74e8d9e306833e.scope - libcontainer container 2909a1a19d93b4e289ad50f492ce2f1d6968631136f35b7ebc74e8d9e306833e. 
Jan 17 12:04:05.617327 containerd[2012]: time="2025-01-17T12:04:05.616458356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6lbbr,Uid:32dc076d-2084-49b7-b3b7-1e097f4277cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"2909a1a19d93b4e289ad50f492ce2f1d6968631136f35b7ebc74e8d9e306833e\"" Jan 17 12:04:05.638215 containerd[2012]: time="2025-01-17T12:04:05.638122796Z" level=info msg="CreateContainer within sandbox \"2909a1a19d93b4e289ad50f492ce2f1d6968631136f35b7ebc74e8d9e306833e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:04:05.662165 containerd[2012]: time="2025-01-17T12:04:05.662033192Z" level=info msg="CreateContainer within sandbox \"2909a1a19d93b4e289ad50f492ce2f1d6968631136f35b7ebc74e8d9e306833e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0abe4aa24b0d30505301721578960b45535b0df01447ba7dcee023662aceb900\"" Jan 17 12:04:05.663238 containerd[2012]: time="2025-01-17T12:04:05.662979224Z" level=info msg="StartContainer for \"0abe4aa24b0d30505301721578960b45535b0df01447ba7dcee023662aceb900\"" Jan 17 12:04:05.709842 systemd[1]: Started cri-containerd-0abe4aa24b0d30505301721578960b45535b0df01447ba7dcee023662aceb900.scope - libcontainer container 0abe4aa24b0d30505301721578960b45535b0df01447ba7dcee023662aceb900. Jan 17 12:04:05.758676 containerd[2012]: time="2025-01-17T12:04:05.758553921Z" level=info msg="StartContainer for \"0abe4aa24b0d30505301721578960b45535b0df01447ba7dcee023662aceb900\" returns successfully" Jan 17 12:04:05.802592 systemd[1]: cri-containerd-0abe4aa24b0d30505301721578960b45535b0df01447ba7dcee023662aceb900.scope: Deactivated successfully. Jan 17 12:04:05.871421 containerd[2012]: time="2025-01-17T12:04:05.871194765Z" level=info msg="shim disconnected" id=0abe4aa24b0d30505301721578960b45535b0df01447ba7dcee023662aceb900 namespace=k8s.io Jan 17 12:04:05.871421 containerd[2012]: time="2025-01-17T12:04:05.871272441Z" level=warning msg="cleaning up after shim disconnected" id=0abe4aa24b0d30505301721578960b45535b0df01447ba7dcee023662aceb900 namespace=k8s.io Jan 17 12:04:05.871421 containerd[2012]: time="2025-01-17T12:04:05.871295385Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:04:05.945223 containerd[2012]: time="2025-01-17T12:04:05.944636194Z" level=info msg="CreateContainer within sandbox \"2909a1a19d93b4e289ad50f492ce2f1d6968631136f35b7ebc74e8d9e306833e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:04:05.949235 kubelet[3414]: I0117 12:04:05.949047 3414 scope.go:117] "RemoveContainer" containerID="c908c64cc7cfd2edd6b3dfb9116609b69cebc2303cccac844b965f8543db9b2d" Jan 17 12:04:05.962459 containerd[2012]: time="2025-01-17T12:04:05.962005114Z" level=info msg="RemoveContainer for \"c908c64cc7cfd2edd6b3dfb9116609b69cebc2303cccac844b965f8543db9b2d\"" Jan 17 12:04:05.980248 systemd[1]: Removed slice kubepods-besteffort-pod1e033f8f_29ae_4b6f_bde0_0458f4589e6b.slice - libcontainer container kubepods-besteffort-pod1e033f8f_29ae_4b6f_bde0_0458f4589e6b.slice. Jan 17 12:04:05.980503 systemd[1]: kubepods-besteffort-pod1e033f8f_29ae_4b6f_bde0_0458f4589e6b.slice: Consumed 12.740s CPU time. 
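
The containerd records above cover the RunPodSandbox / CreateContainer / StartContainer cycle for the new calico-node-6lbbr pod, with everything living in containerd's "k8s.io" namespace. A minimal sketch with the containerd Go client (illustrative, not from the log; it assumes the default /run/containerd/containerd.sock socket) that lists those containers would look roughly like this:

```go
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace,
	// matching the namespace=k8s.io fields in the shim messages above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		info, err := c.Info(ctx)
		if err != nil {
			continue
		}
		fmt.Printf("%s  runtime=%s  image=%s\n", c.ID(), info.Runtime.Name, info.Image)
	}
}
```
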
Jan 17 12:04:05.984535 containerd[2012]: time="2025-01-17T12:04:05.984433378Z" level=info msg="RemoveContainer for \"c908c64cc7cfd2edd6b3dfb9116609b69cebc2303cccac844b965f8543db9b2d\" returns successfully" Jan 17 12:04:05.986706 kubelet[3414]: I0117 12:04:05.986643 3414 scope.go:117] "RemoveContainer" containerID="ca9d89894cda14c48d4e7ce62dd81e05c80b326bac21a8b2154fcad0e92780c8" Jan 17 12:04:05.993613 containerd[2012]: time="2025-01-17T12:04:05.993524962Z" level=info msg="RemoveContainer for \"ca9d89894cda14c48d4e7ce62dd81e05c80b326bac21a8b2154fcad0e92780c8\"" Jan 17 12:04:05.995458 containerd[2012]: time="2025-01-17T12:04:05.995374438Z" level=info msg="CreateContainer within sandbox \"2909a1a19d93b4e289ad50f492ce2f1d6968631136f35b7ebc74e8d9e306833e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2c559a3da8eda05ad9b478c85fed9f0405efa1f5aff0dd86aa92cdee693c0427\"" Jan 17 12:04:05.996887 containerd[2012]: time="2025-01-17T12:04:05.996820870Z" level=info msg="StartContainer for \"2c559a3da8eda05ad9b478c85fed9f0405efa1f5aff0dd86aa92cdee693c0427\"" Jan 17 12:04:06.010800 containerd[2012]: time="2025-01-17T12:04:06.010723278Z" level=info msg="RemoveContainer for \"ca9d89894cda14c48d4e7ce62dd81e05c80b326bac21a8b2154fcad0e92780c8\" returns successfully" Jan 17 12:04:06.012747 kubelet[3414]: I0117 12:04:06.012590 3414 scope.go:117] "RemoveContainer" containerID="c4e35f3babb8a9dc9a127fb233b7cc6ba52744f601a99945bf89beb504820d70" Jan 17 12:04:06.027553 containerd[2012]: time="2025-01-17T12:04:06.027478050Z" level=info msg="RemoveContainer for \"c4e35f3babb8a9dc9a127fb233b7cc6ba52744f601a99945bf89beb504820d70\"" Jan 17 12:04:06.052511 containerd[2012]: time="2025-01-17T12:04:06.050296122Z" level=info msg="RemoveContainer for \"c4e35f3babb8a9dc9a127fb233b7cc6ba52744f601a99945bf89beb504820d70\" returns successfully" Jan 17 12:04:06.096475 systemd[1]: Started cri-containerd-2c559a3da8eda05ad9b478c85fed9f0405efa1f5aff0dd86aa92cdee693c0427.scope - libcontainer container 2c559a3da8eda05ad9b478c85fed9f0405efa1f5aff0dd86aa92cdee693c0427. Jan 17 12:04:06.160475 containerd[2012]: time="2025-01-17T12:04:06.159793435Z" level=info msg="StartContainer for \"2c559a3da8eda05ad9b478c85fed9f0405efa1f5aff0dd86aa92cdee693c0427\" returns successfully" Jan 17 12:04:07.102605 kubelet[3414]: I0117 12:04:07.102524 3414 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e033f8f-29ae-4b6f-bde0-0458f4589e6b" path="/var/lib/kubelet/pods/1e033f8f-29ae-4b6f-bde0-0458f4589e6b/volumes" Jan 17 12:04:07.180715 systemd[1]: cri-containerd-2c559a3da8eda05ad9b478c85fed9f0405efa1f5aff0dd86aa92cdee693c0427.scope: Deactivated successfully. Jan 17 12:04:07.181765 systemd[1]: cri-containerd-2c559a3da8eda05ad9b478c85fed9f0405efa1f5aff0dd86aa92cdee693c0427.scope: Consumed 1.087s CPU time. Jan 17 12:04:07.223994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c559a3da8eda05ad9b478c85fed9f0405efa1f5aff0dd86aa92cdee693c0427-rootfs.mount: Deactivated successfully. 
Jan 17 12:04:07.240563 containerd[2012]: time="2025-01-17T12:04:07.240470192Z" level=info msg="shim disconnected" id=2c559a3da8eda05ad9b478c85fed9f0405efa1f5aff0dd86aa92cdee693c0427 namespace=k8s.io Jan 17 12:04:07.240563 containerd[2012]: time="2025-01-17T12:04:07.240548144Z" level=warning msg="cleaning up after shim disconnected" id=2c559a3da8eda05ad9b478c85fed9f0405efa1f5aff0dd86aa92cdee693c0427 namespace=k8s.io Jan 17 12:04:07.241309 containerd[2012]: time="2025-01-17T12:04:07.240570752Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:04:07.989612 containerd[2012]: time="2025-01-17T12:04:07.989538768Z" level=info msg="CreateContainer within sandbox \"2909a1a19d93b4e289ad50f492ce2f1d6968631136f35b7ebc74e8d9e306833e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:04:08.033780 containerd[2012]: time="2025-01-17T12:04:08.033289136Z" level=info msg="CreateContainer within sandbox \"2909a1a19d93b4e289ad50f492ce2f1d6968631136f35b7ebc74e8d9e306833e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8add4324bdd5e768f5a8814c93b29381ade75edefb5ddadc62755a076cf9cfea\"" Jan 17 12:04:08.036155 containerd[2012]: time="2025-01-17T12:04:08.035013764Z" level=info msg="StartContainer for \"8add4324bdd5e768f5a8814c93b29381ade75edefb5ddadc62755a076cf9cfea\"" Jan 17 12:04:08.097404 systemd[1]: Started cri-containerd-8add4324bdd5e768f5a8814c93b29381ade75edefb5ddadc62755a076cf9cfea.scope - libcontainer container 8add4324bdd5e768f5a8814c93b29381ade75edefb5ddadc62755a076cf9cfea. Jan 17 12:04:08.153713 containerd[2012]: time="2025-01-17T12:04:08.153635637Z" level=info msg="StartContainer for \"8add4324bdd5e768f5a8814c93b29381ade75edefb5ddadc62755a076cf9cfea\" returns successfully" Jan 17 12:04:09.017589 kubelet[3414]: I0117 12:04:09.016637 3414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6lbbr" podStartSLOduration=4.016592745 podStartE2EDuration="4.016592745s" podCreationTimestamp="2025-01-17 12:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:04:09.011926485 +0000 UTC m=+116.195375018" watchObservedRunningTime="2025-01-17 12:04:09.016592745 +0000 UTC m=+116.200041278" Jan 17 12:04:10.379272 systemd[1]: Started sshd@26-172.31.18.94:22-139.178.68.195:54046.service - OpenSSH per-connection server daemon (139.178.68.195:54046). Jan 17 12:04:10.544704 (udev-worker)[7733]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:04:10.546499 (udev-worker)[7731]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:04:10.584387 sshd[7705]: Accepted publickey for core from 139.178.68.195 port 54046 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:04:10.588315 sshd[7705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:10.596175 systemd-logind[1998]: New session 27 of user core. Jan 17 12:04:10.602375 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 17 12:04:10.879193 sshd[7705]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:10.887265 systemd[1]: sshd@26-172.31.18.94:22-139.178.68.195:54046.service: Deactivated successfully. Jan 17 12:04:10.892516 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 12:04:10.898908 systemd-logind[1998]: Session 27 logged out. Waiting for processes to exit. 
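
At this point the flexvol-driver and install-cni init containers have run and exited and the main calico-node container is about to start; shortly afterwards the kubelet reports a podStartSLOduration of roughly 4s for calico-node-6lbbr. To confirm the same state from the API side, a hedged client-go sketch (illustrative; the kubeconfig path is an assumption, the node name is taken from the log) could be:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed admin kubeconfig location; adjust for the cluster at hand.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Node name as it appears in the kubelet/lease messages in this log.
	pods, err := clientset.CoreV1().Pods("calico-system").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=ip-172-31-18-94",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s  phase=%s  containers=%d\n", p.Name, p.Status.Phase, len(p.Spec.Containers))
	}
}
```
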
Jan 17 12:04:10.901795 systemd-logind[1998]: Removed session 27. Jan 17 12:04:15.348019 kubelet[3414]: I0117 12:04:15.347852 3414 scope.go:117] "RemoveContainer" containerID="ca1295965302fbcbb0b8365c227640289de4099ef86173c2f6c80d8bee5288e2" Jan 17 12:04:15.350662 containerd[2012]: time="2025-01-17T12:04:15.350614948Z" level=info msg="RemoveContainer for \"ca1295965302fbcbb0b8365c227640289de4099ef86173c2f6c80d8bee5288e2\"" Jan 17 12:04:15.358006 containerd[2012]: time="2025-01-17T12:04:15.357876652Z" level=info msg="RemoveContainer for \"ca1295965302fbcbb0b8365c227640289de4099ef86173c2f6c80d8bee5288e2\" returns successfully" Jan 17 12:04:15.362821 containerd[2012]: time="2025-01-17T12:04:15.362319112Z" level=info msg="StopPodSandbox for \"51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5\"" Jan 17 12:04:15.362821 containerd[2012]: time="2025-01-17T12:04:15.362569780Z" level=info msg="TearDown network for sandbox \"51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5\" successfully" Jan 17 12:04:15.362821 containerd[2012]: time="2025-01-17T12:04:15.362599684Z" level=info msg="StopPodSandbox for \"51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5\" returns successfully" Jan 17 12:04:15.365453 containerd[2012]: time="2025-01-17T12:04:15.364821748Z" level=info msg="RemovePodSandbox for \"51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5\"" Jan 17 12:04:15.365453 containerd[2012]: time="2025-01-17T12:04:15.364895512Z" level=info msg="Forcibly stopping sandbox \"51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5\"" Jan 17 12:04:15.365453 containerd[2012]: time="2025-01-17T12:04:15.365047096Z" level=info msg="TearDown network for sandbox \"51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5\" successfully" Jan 17 12:04:15.372966 containerd[2012]: time="2025-01-17T12:04:15.372909964Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:04:15.373325 containerd[2012]: time="2025-01-17T12:04:15.373289980Z" level=info msg="RemovePodSandbox \"51bbb01870515bb004aea0c4e32bcbe316a65fc7fc85a02b09e0fbec0eb6a2d5\" returns successfully" Jan 17 12:04:15.374354 containerd[2012]: time="2025-01-17T12:04:15.374027992Z" level=info msg="StopPodSandbox for \"8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403\"" Jan 17 12:04:15.374354 containerd[2012]: time="2025-01-17T12:04:15.374212156Z" level=info msg="TearDown network for sandbox \"8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403\" successfully" Jan 17 12:04:15.374354 containerd[2012]: time="2025-01-17T12:04:15.374238880Z" level=info msg="StopPodSandbox for \"8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403\" returns successfully" Jan 17 12:04:15.375483 containerd[2012]: time="2025-01-17T12:04:15.375320620Z" level=info msg="RemovePodSandbox for \"8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403\"" Jan 17 12:04:15.375483 containerd[2012]: time="2025-01-17T12:04:15.375377872Z" level=info msg="Forcibly stopping sandbox \"8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403\"" Jan 17 12:04:15.375995 containerd[2012]: time="2025-01-17T12:04:15.375482200Z" level=info msg="TearDown network for sandbox \"8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403\" successfully" Jan 17 12:04:15.382322 containerd[2012]: time="2025-01-17T12:04:15.382218220Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:04:15.382322 containerd[2012]: time="2025-01-17T12:04:15.382318948Z" level=info msg="RemovePodSandbox \"8fca25f4f8bbff1a868029f95a8d75366ae87ca7baf13b7b27e6b54a0c68f403\" returns successfully" Jan 17 12:04:15.383158 containerd[2012]: time="2025-01-17T12:04:15.382986412Z" level=info msg="StopPodSandbox for \"64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872\"" Jan 17 12:04:15.528560 containerd[2012]: 2025-01-17 12:04:15.468 [WARNING][7799] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:04:15.528560 containerd[2012]: 2025-01-17 12:04:15.469 [INFO][7799] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Jan 17 12:04:15.528560 containerd[2012]: 2025-01-17 12:04:15.469 [INFO][7799] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" iface="eth0" netns="" Jan 17 12:04:15.528560 containerd[2012]: 2025-01-17 12:04:15.469 [INFO][7799] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Jan 17 12:04:15.528560 containerd[2012]: 2025-01-17 12:04:15.469 [INFO][7799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Jan 17 12:04:15.528560 containerd[2012]: 2025-01-17 12:04:15.507 [INFO][7805] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" HandleID="k8s-pod-network.64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:04:15.528560 containerd[2012]: 2025-01-17 12:04:15.507 [INFO][7805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:04:15.528560 containerd[2012]: 2025-01-17 12:04:15.507 [INFO][7805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:04:15.528560 containerd[2012]: 2025-01-17 12:04:15.521 [WARNING][7805] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" HandleID="k8s-pod-network.64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:04:15.528560 containerd[2012]: 2025-01-17 12:04:15.521 [INFO][7805] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" HandleID="k8s-pod-network.64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:04:15.528560 containerd[2012]: 2025-01-17 12:04:15.523 [INFO][7805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:04:15.528560 containerd[2012]: 2025-01-17 12:04:15.525 [INFO][7799] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Jan 17 12:04:15.529758 containerd[2012]: time="2025-01-17T12:04:15.528613505Z" level=info msg="TearDown network for sandbox \"64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872\" successfully" Jan 17 12:04:15.529758 containerd[2012]: time="2025-01-17T12:04:15.528652661Z" level=info msg="StopPodSandbox for \"64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872\" returns successfully" Jan 17 12:04:15.530537 containerd[2012]: time="2025-01-17T12:04:15.530326373Z" level=info msg="RemovePodSandbox for \"64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872\"" Jan 17 12:04:15.530537 containerd[2012]: time="2025-01-17T12:04:15.530413805Z" level=info msg="Forcibly stopping sandbox \"64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872\"" Jan 17 12:04:15.661998 containerd[2012]: 2025-01-17 12:04:15.597 [WARNING][7824] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" WorkloadEndpoint="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:04:15.661998 containerd[2012]: 2025-01-17 12:04:15.597 [INFO][7824] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Jan 17 12:04:15.661998 containerd[2012]: 2025-01-17 12:04:15.597 [INFO][7824] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" iface="eth0" netns="" Jan 17 12:04:15.661998 containerd[2012]: 2025-01-17 12:04:15.597 [INFO][7824] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Jan 17 12:04:15.661998 containerd[2012]: 2025-01-17 12:04:15.598 [INFO][7824] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Jan 17 12:04:15.661998 containerd[2012]: 2025-01-17 12:04:15.640 [INFO][7830] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" HandleID="k8s-pod-network.64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:04:15.661998 containerd[2012]: 2025-01-17 12:04:15.640 [INFO][7830] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:04:15.661998 containerd[2012]: 2025-01-17 12:04:15.640 [INFO][7830] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:04:15.661998 containerd[2012]: 2025-01-17 12:04:15.653 [WARNING][7830] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" HandleID="k8s-pod-network.64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:04:15.661998 containerd[2012]: 2025-01-17 12:04:15.654 [INFO][7830] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" HandleID="k8s-pod-network.64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Workload="ip--172--31--18--94-k8s-calico--kube--controllers--6c7869774c--nt5td-eth0" Jan 17 12:04:15.661998 containerd[2012]: 2025-01-17 12:04:15.656 [INFO][7830] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:04:15.661998 containerd[2012]: 2025-01-17 12:04:15.659 [INFO][7824] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872" Jan 17 12:04:15.661998 containerd[2012]: time="2025-01-17T12:04:15.661921086Z" level=info msg="TearDown network for sandbox \"64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872\" successfully" Jan 17 12:04:15.672641 containerd[2012]: time="2025-01-17T12:04:15.672548766Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:04:15.672641 containerd[2012]: time="2025-01-17T12:04:15.672651150Z" level=info msg="RemovePodSandbox \"64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872\" returns successfully" Jan 17 12:04:15.917642 systemd[1]: Started sshd@27-172.31.18.94:22-139.178.68.195:34898.service - OpenSSH per-connection server daemon (139.178.68.195:34898). Jan 17 12:04:16.104712 sshd[7837]: Accepted publickey for core from 139.178.68.195 port 34898 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:04:16.107920 sshd[7837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:16.116898 systemd-logind[1998]: New session 28 of user core. Jan 17 12:04:16.129387 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 17 12:04:16.383793 sshd[7837]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:16.391289 systemd[1]: sshd@27-172.31.18.94:22-139.178.68.195:34898.service: Deactivated successfully. Jan 17 12:04:16.395570 systemd[1]: session-28.scope: Deactivated successfully. Jan 17 12:04:16.397379 systemd-logind[1998]: Session 28 logged out. Waiting for processes to exit. Jan 17 12:04:16.399087 systemd-logind[1998]: Removed session 28. Jan 17 12:04:30.098518 systemd[1]: cri-containerd-f4767e520b7fd6a78ccbf6b83b63a6be16e14350cee6f44f7adeeb3eeb467167.scope: Deactivated successfully. Jan 17 12:04:30.100226 systemd[1]: cri-containerd-f4767e520b7fd6a78ccbf6b83b63a6be16e14350cee6f44f7adeeb3eeb467167.scope: Consumed 7.390s CPU time, 18.6M memory peak, 0B memory swap peak. 
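
The cni-plugin / ipam lines above are Calico's CNI DEL path during the forced sandbox removal: with no netns left it skips namespace cleanup, takes the host-wide IPAM lock, and finds the address already released. Roughly the same DEL can be driven from Go with the libcni helper; this is a sketch under assumptions (the conventional /opt/cni/bin and /etc/cni/net.d/10-calico.conflist paths are not shown in the log), with the container ID, interface name and empty netns copied from the entries above:

```go
package main

import (
	"context"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Assumed standard paths for the CNI plugin binaries and the Calico conflist.
	cniConfig := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)
	netConf, err := libcni.ConfListFromFile("/etc/cni/net.d/10-calico.conflist")
	if err != nil {
		log.Fatal(err)
	}

	// Values mirroring the teardown logged above: sandbox ID, iface="eth0",
	// and an empty netns because the namespace is already gone.
	rt := &libcni.RuntimeConf{
		ContainerID: "64c6102c5f70daf74e39603a913f808a4a5c0edd93fc0cb08bd3ab51f8a35872",
		NetNS:       "",
		IfName:      "eth0",
	}

	if err := cniConfig.DelNetworkList(context.Background(), netConf, rt); err != nil {
		log.Fatal(err)
	}
	log.Println("CNI DEL completed")
}
```
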
Jan 17 12:04:30.146526 containerd[2012]: time="2025-01-17T12:04:30.146405886Z" level=info msg="shim disconnected" id=f4767e520b7fd6a78ccbf6b83b63a6be16e14350cee6f44f7adeeb3eeb467167 namespace=k8s.io Jan 17 12:04:30.146526 containerd[2012]: time="2025-01-17T12:04:30.146515302Z" level=warning msg="cleaning up after shim disconnected" id=f4767e520b7fd6a78ccbf6b83b63a6be16e14350cee6f44f7adeeb3eeb467167 namespace=k8s.io Jan 17 12:04:30.146526 containerd[2012]: time="2025-01-17T12:04:30.146541150Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:04:30.150838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4767e520b7fd6a78ccbf6b83b63a6be16e14350cee6f44f7adeeb3eeb467167-rootfs.mount: Deactivated successfully. Jan 17 12:04:30.170159 containerd[2012]: time="2025-01-17T12:04:30.170020038Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:04:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:04:31.030588 kubelet[3414]: I0117 12:04:31.030473 3414 scope.go:117] "RemoveContainer" containerID="f4767e520b7fd6a78ccbf6b83b63a6be16e14350cee6f44f7adeeb3eeb467167" Jan 17 12:04:31.034957 containerd[2012]: time="2025-01-17T12:04:31.034883334Z" level=info msg="CreateContainer within sandbox \"cb889563250a9cc5560490bb1c66763f04d4f843bc082d18792964ba3111dffe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 17 12:04:31.061883 containerd[2012]: time="2025-01-17T12:04:31.061801914Z" level=info msg="CreateContainer within sandbox \"cb889563250a9cc5560490bb1c66763f04d4f843bc082d18792964ba3111dffe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b515aa4e25f2e1f8b2a9469acf1164dbfd8d1d3e8b89b46076fa16afabd85c83\"" Jan 17 12:04:31.062825 containerd[2012]: time="2025-01-17T12:04:31.062766630Z" level=info msg="StartContainer for \"b515aa4e25f2e1f8b2a9469acf1164dbfd8d1d3e8b89b46076fa16afabd85c83\"" Jan 17 12:04:31.123444 systemd[1]: Started cri-containerd-b515aa4e25f2e1f8b2a9469acf1164dbfd8d1d3e8b89b46076fa16afabd85c83.scope - libcontainer container b515aa4e25f2e1f8b2a9469acf1164dbfd8d1d3e8b89b46076fa16afabd85c83. Jan 17 12:04:31.158682 systemd[1]: cri-containerd-03846f11f127a2a20f2238ccb810e703573434b8322a56ed042470072ffb7e65.scope: Deactivated successfully. Jan 17 12:04:31.160258 systemd[1]: cri-containerd-03846f11f127a2a20f2238ccb810e703573434b8322a56ed042470072ffb7e65.scope: Consumed 7.221s CPU time. Jan 17 12:04:31.215584 containerd[2012]: time="2025-01-17T12:04:31.214925431Z" level=info msg="shim disconnected" id=03846f11f127a2a20f2238ccb810e703573434b8322a56ed042470072ffb7e65 namespace=k8s.io Jan 17 12:04:31.216875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03846f11f127a2a20f2238ccb810e703573434b8322a56ed042470072ffb7e65-rootfs.mount: Deactivated successfully. 
Jan 17 12:04:31.219677 containerd[2012]: time="2025-01-17T12:04:31.215417983Z" level=warning msg="cleaning up after shim disconnected" id=03846f11f127a2a20f2238ccb810e703573434b8322a56ed042470072ffb7e65 namespace=k8s.io Jan 17 12:04:31.219677 containerd[2012]: time="2025-01-17T12:04:31.219088111Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:04:31.229937 containerd[2012]: time="2025-01-17T12:04:31.229832575Z" level=info msg="StartContainer for \"b515aa4e25f2e1f8b2a9469acf1164dbfd8d1d3e8b89b46076fa16afabd85c83\" returns successfully" Jan 17 12:04:32.037887 kubelet[3414]: I0117 12:04:32.037555 3414 scope.go:117] "RemoveContainer" containerID="03846f11f127a2a20f2238ccb810e703573434b8322a56ed042470072ffb7e65" Jan 17 12:04:32.043036 containerd[2012]: time="2025-01-17T12:04:32.041858143Z" level=info msg="CreateContainer within sandbox \"f6749621ef7239ae9236316e3ae087c047fc086bebae012e73f98cc52b5b0a5d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 17 12:04:32.074564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3568213353.mount: Deactivated successfully. Jan 17 12:04:32.079155 containerd[2012]: time="2025-01-17T12:04:32.078600775Z" level=info msg="CreateContainer within sandbox \"f6749621ef7239ae9236316e3ae087c047fc086bebae012e73f98cc52b5b0a5d\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"2ac6520019f97b9cd4adec19c121f89851ece11b974ec7d880545dbde1957efa\"" Jan 17 12:04:32.085141 containerd[2012]: time="2025-01-17T12:04:32.084096979Z" level=info msg="StartContainer for \"2ac6520019f97b9cd4adec19c121f89851ece11b974ec7d880545dbde1957efa\"" Jan 17 12:04:32.156441 systemd[1]: Started cri-containerd-2ac6520019f97b9cd4adec19c121f89851ece11b974ec7d880545dbde1957efa.scope - libcontainer container 2ac6520019f97b9cd4adec19c121f89851ece11b974ec7d880545dbde1957efa. Jan 17 12:04:32.209116 containerd[2012]: time="2025-01-17T12:04:32.209044760Z" level=info msg="StartContainer for \"2ac6520019f97b9cd4adec19c121f89851ece11b974ec7d880545dbde1957efa\" returns successfully" Jan 17 12:04:32.216207 systemd[1]: run-containerd-runc-k8s.io-2ac6520019f97b9cd4adec19c121f89851ece11b974ec7d880545dbde1957efa-runc.LIvYbA.mount: Deactivated successfully. Jan 17 12:04:35.529749 systemd[1]: run-containerd-runc-k8s.io-8add4324bdd5e768f5a8814c93b29381ade75edefb5ddadc62755a076cf9cfea-runc.QDKPgo.mount: Deactivated successfully. Jan 17 12:04:35.805443 kubelet[3414]: E0117 12:04:35.805001 3414 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-94?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 17 12:04:36.087457 systemd[1]: cri-containerd-ca1ca3d9b7177ed58c935a214c23e9d9d13506684b12f53a1f9b82b645aab145.scope: Deactivated successfully. Jan 17 12:04:36.088321 systemd[1]: cri-containerd-ca1ca3d9b7177ed58c935a214c23e9d9d13506684b12f53a1f9b82b645aab145.scope: Consumed 3.855s CPU time, 15.6M memory peak, 0B memory swap peak. 
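
Between 12:04:30 and 12:04:37 three long-running containers (kube-controller-manager, tigera-operator and, just below, kube-scheduler) exit and are recreated by the kubelet with Attempt:1. A small stdlib-only sketch (illustrative, not part of the journal; it reads a saved copy of this log from stdin) that tallies those recreations by container name:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches the containerd CreateContainer messages above, capturing the
	// container name and attempt counter from &ContainerMetadata{...}.
	re := regexp.MustCompile(`ContainerMetadata\{Name:([\w-]+),Attempt:(\d+),\}`)

	lastAttempt := map[string]string{}
	scanner := bufio.NewScanner(os.Stdin)
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for scanner.Scan() {
		for _, m := range re.FindAllStringSubmatch(scanner.Text(), -1) {
			lastAttempt[m[1]] = m[2] // keep the last attempt seen per container name
		}
	}
	for name, attempt := range lastAttempt {
		fmt.Printf("%s: last attempt %s\n", name, attempt)
	}
}
```
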
Jan 17 12:04:36.128142 containerd[2012]: time="2025-01-17T12:04:36.125978304Z" level=info msg="shim disconnected" id=ca1ca3d9b7177ed58c935a214c23e9d9d13506684b12f53a1f9b82b645aab145 namespace=k8s.io Jan 17 12:04:36.128142 containerd[2012]: time="2025-01-17T12:04:36.126081852Z" level=warning msg="cleaning up after shim disconnected" id=ca1ca3d9b7177ed58c935a214c23e9d9d13506684b12f53a1f9b82b645aab145 namespace=k8s.io Jan 17 12:04:36.128142 containerd[2012]: time="2025-01-17T12:04:36.126156408Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:04:36.129545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca1ca3d9b7177ed58c935a214c23e9d9d13506684b12f53a1f9b82b645aab145-rootfs.mount: Deactivated successfully. Jan 17 12:04:37.065138 kubelet[3414]: I0117 12:04:37.064836 3414 scope.go:117] "RemoveContainer" containerID="ca1ca3d9b7177ed58c935a214c23e9d9d13506684b12f53a1f9b82b645aab145" Jan 17 12:04:37.067927 containerd[2012]: time="2025-01-17T12:04:37.067872516Z" level=info msg="CreateContainer within sandbox \"286ffce2187eee814e3290433da7fdca8dc297168a1360f64f1a0838b044f541\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 17 12:04:37.091076 containerd[2012]: time="2025-01-17T12:04:37.090998700Z" level=info msg="CreateContainer within sandbox \"286ffce2187eee814e3290433da7fdca8dc297168a1360f64f1a0838b044f541\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"59d48c91f103a01f4b6787b0024894d57dfc904fc91240b66cd3537b30a01ef7\"" Jan 17 12:04:37.092449 containerd[2012]: time="2025-01-17T12:04:37.092036964Z" level=info msg="StartContainer for \"59d48c91f103a01f4b6787b0024894d57dfc904fc91240b66cd3537b30a01ef7\"" Jan 17 12:04:37.156400 systemd[1]: Started cri-containerd-59d48c91f103a01f4b6787b0024894d57dfc904fc91240b66cd3537b30a01ef7.scope - libcontainer container 59d48c91f103a01f4b6787b0024894d57dfc904fc91240b66cd3537b30a01ef7. Jan 17 12:04:37.218775 containerd[2012]: time="2025-01-17T12:04:37.218694625Z" level=info msg="StartContainer for \"59d48c91f103a01f4b6787b0024894d57dfc904fc91240b66cd3537b30a01ef7\" returns successfully" Jan 17 12:04:41.116892 update_engine[1999]: I20250117 12:04:41.116680 1999 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 17 12:04:41.116892 update_engine[1999]: I20250117 12:04:41.116760 1999 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 17 12:04:41.117520 update_engine[1999]: I20250117 12:04:41.117161 1999 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 17 12:04:41.119382 update_engine[1999]: I20250117 12:04:41.118986 1999 omaha_request_params.cc:62] Current group set to lts Jan 17 12:04:41.119382 update_engine[1999]: I20250117 12:04:41.119196 1999 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 17 12:04:41.119382 update_engine[1999]: I20250117 12:04:41.119221 1999 update_attempter.cc:643] Scheduling an action processor start. 
Jan 17 12:04:41.119382 update_engine[1999]: I20250117 12:04:41.119254 1999 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 12:04:41.119382 update_engine[1999]: I20250117 12:04:41.119323 1999 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 17 12:04:41.119723 update_engine[1999]: I20250117 12:04:41.119447 1999 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 12:04:41.119723 update_engine[1999]: I20250117 12:04:41.119469 1999 omaha_request_action.cc:272] Request: Jan 17 12:04:41.119723 update_engine[1999]: Jan 17 12:04:41.119723 update_engine[1999]: Jan 17 12:04:41.119723 update_engine[1999]: Jan 17 12:04:41.119723 update_engine[1999]: Jan 17 12:04:41.119723 update_engine[1999]: Jan 17 12:04:41.119723 update_engine[1999]: Jan 17 12:04:41.119723 update_engine[1999]: Jan 17 12:04:41.119723 update_engine[1999]: Jan 17 12:04:41.119723 update_engine[1999]: I20250117 12:04:41.119486 1999 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:04:41.120448 locksmithd[2034]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 17 12:04:41.123513 update_engine[1999]: I20250117 12:04:41.123441 1999 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:04:41.123952 update_engine[1999]: I20250117 12:04:41.123899 1999 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 12:04:41.126584 update_engine[1999]: E20250117 12:04:41.126523 1999 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:04:41.126680 update_engine[1999]: I20250117 12:04:41.126636 1999 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 17 12:04:45.806131 kubelet[3414]: E0117 12:04:45.805905 3414 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-94?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
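
The lease-renewal errors bracketing the update_engine output show the kubelet's PUT to the local API server (172.31.18.94:6443) timing out after 10s while the control-plane containers restart. A minimal connectivity probe in the same vein (illustrative only; a plain TCP dial with the same 10s budget, no client credentials, endpoint taken from the errors above):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// API server endpoint as reported in the kubelet lease errors.
	addr := "172.31.18.94:6443"

	// Mirror the kubelet's 10s client timeout with a plain TCP dial.
	start := time.Now()
	conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
	if err != nil {
		fmt.Printf("apiserver %s unreachable after %v: %v\n", addr, time.Since(start), err)
		return
	}
	defer conn.Close()
	fmt.Printf("apiserver %s reachable in %v\n", addr, time.Since(start))
}
```
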