Nov 12 17:42:07.198145 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Nov 12 17:42:07.198193 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Nov 12 16:24:35 -00 2024
Nov 12 17:42:07.198220 kernel: KASLR disabled due to lack of seed
Nov 12 17:42:07.198237 kernel: efi: EFI v2.7 by EDK II
Nov 12 17:42:07.198253 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Nov 12 17:42:07.198269 kernel: ACPI: Early table checksum verification disabled
Nov 12 17:42:07.198286 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Nov 12 17:42:07.198302 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Nov 12 17:42:07.198319 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Nov 12 17:42:07.198334 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Nov 12 17:42:07.198355 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Nov 12 17:42:07.198371 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Nov 12 17:42:07.198387 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Nov 12 17:42:07.198403 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Nov 12 17:42:07.198422 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Nov 12 17:42:07.198443 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Nov 12 17:42:07.198460 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Nov 12 17:42:07.198477 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Nov 12 17:42:07.198494 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Nov 12 17:42:07.198511 kernel: printk: bootconsole [uart0] enabled
Nov 12 17:42:07.198527 kernel: NUMA: Failed to initialise from firmware
Nov 12 17:42:07.198544 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Nov 12 17:42:07.198561 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Nov 12 17:42:07.198578 kernel: Zone ranges:
Nov 12 17:42:07.198594 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Nov 12 17:42:07.198611 kernel:   DMA32    empty
Nov 12 17:42:07.198632 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Nov 12 17:42:07.198649 kernel: Movable zone start for each node
Nov 12 17:42:07.198665 kernel: Early memory node ranges
Nov 12 17:42:07.198682 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Nov 12 17:42:07.198698 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Nov 12 17:42:07.198715 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Nov 12 17:42:07.198732 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Nov 12 17:42:07.198749 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Nov 12 17:42:07.198765 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Nov 12 17:42:07.198782 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Nov 12 17:42:07.198799 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Nov 12 17:42:07.198816 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Nov 12 17:42:07.198836 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Nov 12 17:42:07.198854 kernel: psci: probing for conduit method from ACPI.
Nov 12 17:42:07.198877 kernel: psci: PSCIv1.0 detected in firmware.
Nov 12 17:42:07.198895 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 12 17:42:07.198913 kernel: psci: Trusted OS migration not required
Nov 12 17:42:07.198934 kernel: psci: SMC Calling Convention v1.1
Nov 12 17:42:07.198975 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Nov 12 17:42:07.198996 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Nov 12 17:42:07.199014 kernel: pcpu-alloc: [0] 0 [0] 1
Nov 12 17:42:07.199032 kernel: Detected PIPT I-cache on CPU0
Nov 12 17:42:07.199050 kernel: CPU features: detected: GIC system register CPU interface
Nov 12 17:42:07.199067 kernel: CPU features: detected: Spectre-v2
Nov 12 17:42:07.199085 kernel: CPU features: detected: Spectre-v3a
Nov 12 17:42:07.199102 kernel: CPU features: detected: Spectre-BHB
Nov 12 17:42:07.199120 kernel: CPU features: detected: ARM erratum 1742098
Nov 12 17:42:07.199137 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Nov 12 17:42:07.199161 kernel: alternatives: applying boot alternatives
Nov 12 17:42:07.199182 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c276c03cfeb31103ba0b5f1af613bdc698463ad3d29e6750e34154929bf187e
Nov 12 17:42:07.199200 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 17:42:07.199218 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 17:42:07.199236 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 17:42:07.199254 kernel: Fallback order for Node 0: 0
Nov 12 17:42:07.199271 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Nov 12 17:42:07.199289 kernel: Policy zone: Normal
Nov 12 17:42:07.199307 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 17:42:07.199324 kernel: software IO TLB: area num 2.
Nov 12 17:42:07.199342 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Nov 12 17:42:07.199365 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Nov 12 17:42:07.199383 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 12 17:42:07.199400 kernel: trace event string verifier disabled
Nov 12 17:42:07.199418 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 17:42:07.199437 kernel: rcu: 	RCU event tracing is enabled.
Nov 12 17:42:07.199455 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 12 17:42:07.199473 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 12 17:42:07.199491 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 12 17:42:07.199509 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 17:42:07.199527 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 12 17:42:07.199544 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 12 17:42:07.199565 kernel: GICv3: 96 SPIs implemented
Nov 12 17:42:07.199601 kernel: GICv3: 0 Extended SPIs implemented
Nov 12 17:42:07.199622 kernel: Root IRQ handler: gic_handle_irq
Nov 12 17:42:07.199639 kernel: GICv3: GICv3 features: 16 PPIs
Nov 12 17:42:07.199657 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Nov 12 17:42:07.199675 kernel: ITS [mem 0x10080000-0x1009ffff]
Nov 12 17:42:07.199693 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Nov 12 17:42:07.199712 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Nov 12 17:42:07.199730 kernel: GICv3: using LPI property table @0x00000004000d0000
Nov 12 17:42:07.199747 kernel: ITS: Using hypervisor restricted LPI range [128]
Nov 12 17:42:07.199765 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Nov 12 17:42:07.199782 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 17:42:07.199806 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Nov 12 17:42:07.199824 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Nov 12 17:42:07.199842 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Nov 12 17:42:07.199860 kernel: Console: colour dummy device 80x25
Nov 12 17:42:07.199878 kernel: printk: console [tty1] enabled
Nov 12 17:42:07.199896 kernel: ACPI: Core revision 20230628
Nov 12 17:42:07.199915 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Nov 12 17:42:07.199933 kernel: pid_max: default: 32768 minimum: 301
Nov 12 17:42:07.203122 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 17:42:07.203153 kernel: landlock: Up and running.
Nov 12 17:42:07.203181 kernel: SELinux:  Initializing.
Nov 12 17:42:07.203200 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 17:42:07.203219 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 17:42:07.203237 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 17:42:07.203256 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 17:42:07.203274 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 17:42:07.203293 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 12 17:42:07.203311 kernel: Platform MSI: ITS@0x10080000 domain created
Nov 12 17:42:07.203333 kernel: PCI/MSI: ITS@0x10080000 domain created
Nov 12 17:42:07.203351 kernel: Remapping and enabling EFI services.
Nov 12 17:42:07.203369 kernel: smp: Bringing up secondary CPUs ...
Nov 12 17:42:07.203387 kernel: Detected PIPT I-cache on CPU1
Nov 12 17:42:07.203405 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Nov 12 17:42:07.203424 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Nov 12 17:42:07.203442 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Nov 12 17:42:07.203460 kernel: smp: Brought up 1 node, 2 CPUs
Nov 12 17:42:07.203478 kernel: SMP: Total of 2 processors activated.
Nov 12 17:42:07.203496 kernel: CPU features: detected: 32-bit EL0 Support
Nov 12 17:42:07.203519 kernel: CPU features: detected: 32-bit EL1 Support
Nov 12 17:42:07.203537 kernel: CPU features: detected: CRC32 instructions
Nov 12 17:42:07.203567 kernel: CPU: All CPU(s) started at EL1
Nov 12 17:42:07.203609 kernel: alternatives: applying system-wide alternatives
Nov 12 17:42:07.203628 kernel: devtmpfs: initialized
Nov 12 17:42:07.203647 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 17:42:07.203667 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 12 17:42:07.203686 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 17:42:07.203705 kernel: SMBIOS 3.0.0 present.
Nov 12 17:42:07.203730 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Nov 12 17:42:07.203749 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 17:42:07.203767 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 12 17:42:07.203787 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 12 17:42:07.203806 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 12 17:42:07.203824 kernel: audit: initializing netlink subsys (disabled)
Nov 12 17:42:07.203844 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1
Nov 12 17:42:07.203866 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 17:42:07.203885 kernel: cpuidle: using governor menu
Nov 12 17:42:07.203904 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 12 17:42:07.203923 kernel: ASID allocator initialised with 65536 entries
Nov 12 17:42:07.203942 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 17:42:07.203990 kernel: Serial: AMBA PL011 UART driver
Nov 12 17:42:07.204010 kernel: Modules: 17520 pages in range for non-PLT usage
Nov 12 17:42:07.204029 kernel: Modules: 509040 pages in range for PLT usage
Nov 12 17:42:07.204048 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 17:42:07.204073 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 17:42:07.204092 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 12 17:42:07.204111 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 12 17:42:07.204130 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 17:42:07.204149 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 17:42:07.204168 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 12 17:42:07.204186 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 12 17:42:07.204205 kernel: ACPI: Added _OSI(Module Device)
Nov 12 17:42:07.204224 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 17:42:07.204247 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 17:42:07.204265 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 17:42:07.204284 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 17:42:07.204303 kernel: ACPI: Interpreter enabled
Nov 12 17:42:07.204322 kernel: ACPI: Using GIC for interrupt routing
Nov 12 17:42:07.204340 kernel: ACPI: MCFG table detected, 1 entries
Nov 12 17:42:07.204359 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Nov 12 17:42:07.204659 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 17:42:07.204881 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 12 17:42:07.206212 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 12 17:42:07.206443 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Nov 12 17:42:07.206653 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Nov 12 17:42:07.206679 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Nov 12 17:42:07.206700 kernel: acpiphp: Slot [1] registered
Nov 12 17:42:07.206719 kernel: acpiphp: Slot [2] registered
Nov 12 17:42:07.206738 kernel: acpiphp: Slot [3] registered
Nov 12 17:42:07.206768 kernel: acpiphp: Slot [4] registered
Nov 12 17:42:07.206787 kernel: acpiphp: Slot [5] registered
Nov 12 17:42:07.206806 kernel: acpiphp: Slot [6] registered
Nov 12 17:42:07.206825 kernel: acpiphp: Slot [7] registered
Nov 12 17:42:07.206844 kernel: acpiphp: Slot [8] registered
Nov 12 17:42:07.206862 kernel: acpiphp: Slot [9] registered
Nov 12 17:42:07.206881 kernel: acpiphp: Slot [10] registered
Nov 12 17:42:07.206900 kernel: acpiphp: Slot [11] registered
Nov 12 17:42:07.206919 kernel: acpiphp: Slot [12] registered
Nov 12 17:42:07.206937 kernel: acpiphp: Slot [13] registered
Nov 12 17:42:07.206992 kernel: acpiphp: Slot [14] registered
Nov 12 17:42:07.207012 kernel: acpiphp: Slot [15] registered
Nov 12 17:42:07.207031 kernel: acpiphp: Slot [16] registered
Nov 12 17:42:07.207050 kernel: acpiphp: Slot [17] registered
Nov 12 17:42:07.207068 kernel: acpiphp: Slot [18] registered
Nov 12 17:42:07.207087 kernel: acpiphp: Slot [19] registered
Nov 12 17:42:07.207106 kernel: acpiphp: Slot [20] registered
Nov 12 17:42:07.207124 kernel: acpiphp: Slot [21] registered
Nov 12 17:42:07.207143 kernel: acpiphp: Slot [22] registered
Nov 12 17:42:07.207168 kernel: acpiphp: Slot [23] registered
Nov 12 17:42:07.207186 kernel: acpiphp: Slot [24] registered
Nov 12 17:42:07.207205 kernel: acpiphp: Slot [25] registered
Nov 12 17:42:07.207223 kernel: acpiphp: Slot [26] registered
Nov 12 17:42:07.207242 kernel: acpiphp: Slot [27] registered
Nov 12 17:42:07.207261 kernel: acpiphp: Slot [28] registered
Nov 12 17:42:07.207280 kernel: acpiphp: Slot [29] registered
Nov 12 17:42:07.207299 kernel: acpiphp: Slot [30] registered
Nov 12 17:42:07.207318 kernel: acpiphp: Slot [31] registered
Nov 12 17:42:07.207337 kernel: PCI host bridge to bus 0000:00
Nov 12 17:42:07.207567 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Nov 12 17:42:07.207808 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 12 17:42:07.209620 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Nov 12 17:42:07.209831 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Nov 12 17:42:07.210113 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Nov 12 17:42:07.210355 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Nov 12 17:42:07.210583 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Nov 12 17:42:07.210834 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Nov 12 17:42:07.211709 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Nov 12 17:42:07.211991 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 12 17:42:07.212230 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Nov 12 17:42:07.212445 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Nov 12 17:42:07.212658 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Nov 12 17:42:07.212879 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Nov 12 17:42:07.213134 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 12 17:42:07.213351 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Nov 12 17:42:07.213563 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Nov 12 17:42:07.213777 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Nov 12 17:42:07.214053 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Nov 12 17:42:07.214290 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Nov 12 17:42:07.214499 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Nov 12 17:42:07.214692 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 12 17:42:07.214887 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Nov 12 17:42:07.214914 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 12 17:42:07.214935 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 12 17:42:07.215037 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 12 17:42:07.215060 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 12 17:42:07.215079 kernel: iommu: Default domain type: Translated
Nov 12 17:42:07.215106 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 12 17:42:07.215125 kernel: efivars: Registered efivars operations
Nov 12 17:42:07.215144 kernel: vgaarb: loaded
Nov 12 17:42:07.215164 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 12 17:42:07.215183 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 17:42:07.215202 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 17:42:07.215223 kernel: pnp: PnP ACPI init
Nov 12 17:42:07.215454 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Nov 12 17:42:07.215487 kernel: pnp: PnP ACPI: found 1 devices
Nov 12 17:42:07.215506 kernel: NET: Registered PF_INET protocol family
Nov 12 17:42:07.215526 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 17:42:07.215545 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 12 17:42:07.215564 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 17:42:07.215604 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 17:42:07.215625 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 12 17:42:07.215645 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 12 17:42:07.215664 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 17:42:07.215689 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 17:42:07.215708 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 17:42:07.215727 kernel: PCI: CLS 0 bytes, default 64
Nov 12 17:42:07.215746 kernel: kvm [1]: HYP mode not available
Nov 12 17:42:07.215765 kernel: Initialise system trusted keyrings
Nov 12 17:42:07.215785 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 12 17:42:07.215803 kernel: Key type asymmetric registered
Nov 12 17:42:07.215822 kernel: Asymmetric key parser 'x509' registered
Nov 12 17:42:07.215841 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 12 17:42:07.215864 kernel: io scheduler mq-deadline registered
Nov 12 17:42:07.215883 kernel: io scheduler kyber registered
Nov 12 17:42:07.215901 kernel: io scheduler bfq registered
Nov 12 17:42:07.216176 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Nov 12 17:42:07.216207 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 12 17:42:07.216226 kernel: ACPI: button: Power Button [PWRB]
Nov 12 17:42:07.216248 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Nov 12 17:42:07.216313 kernel: ACPI: button: Sleep Button [SLPB]
Nov 12 17:42:07.216346 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 17:42:07.216367 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Nov 12 17:42:07.216594 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Nov 12 17:42:07.216621 kernel: printk: console [ttyS0] disabled
Nov 12 17:42:07.216641 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Nov 12 17:42:07.216661 kernel: printk: console [ttyS0] enabled
Nov 12 17:42:07.216680 kernel: printk: bootconsole [uart0] disabled
Nov 12 17:42:07.216699 kernel: thunder_xcv, ver 1.0
Nov 12 17:42:07.216718 kernel: thunder_bgx, ver 1.0
Nov 12 17:42:07.216736 kernel: nicpf, ver 1.0
Nov 12 17:42:07.216761 kernel: nicvf, ver 1.0
Nov 12 17:42:07.217091 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 12 17:42:07.217326 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-11-12T17:42:06 UTC (1731433326)
Nov 12 17:42:07.217356 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 12 17:42:07.217377 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Nov 12 17:42:07.217398 kernel: watchdog: Delayed init of the lockup detector failed: -19
Nov 12 17:42:07.217418 kernel: watchdog: Hard watchdog permanently disabled
Nov 12 17:42:07.217444 kernel: NET: Registered PF_INET6 protocol family
Nov 12 17:42:07.217465 kernel: Segment Routing with IPv6
Nov 12 17:42:07.217485 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 17:42:07.217504 kernel: NET: Registered PF_PACKET protocol family
Nov 12 17:42:07.217524 kernel: Key type dns_resolver registered
Nov 12 17:42:07.217543 kernel: registered taskstats version 1
Nov 12 17:42:07.217563 kernel: Loading compiled-in X.509 certificates
Nov 12 17:42:07.217582 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 277bea35d8d47c9841f307ab609d4271c3622dcb'
Nov 12 17:42:07.217602 kernel: Key type .fscrypt registered
Nov 12 17:42:07.217621 kernel: Key type fscrypt-provisioning registered
Nov 12 17:42:07.217646 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 17:42:07.217666 kernel: ima: Allocated hash algorithm: sha1
Nov 12 17:42:07.217685 kernel: ima: No architecture policies found
Nov 12 17:42:07.217707 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 12 17:42:07.217728 kernel: clk: Disabling unused clocks
Nov 12 17:42:07.217747 kernel: Freeing unused kernel memory: 39360K
Nov 12 17:42:07.217767 kernel: Run /init as init process
Nov 12 17:42:07.217787 kernel:   with arguments:
Nov 12 17:42:07.217808 kernel:     /init
Nov 12 17:42:07.217833 kernel:   with environment:
Nov 12 17:42:07.217852 kernel:     HOME=/
Nov 12 17:42:07.217872 kernel:     TERM=linux
Nov 12 17:42:07.217891 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 17:42:07.217915 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 17:42:07.217942 systemd[1]: Detected virtualization amazon.
Nov 12 17:42:07.218066 systemd[1]: Detected architecture arm64.
Nov 12 17:42:07.218096 systemd[1]: Running in initrd.
Nov 12 17:42:07.218117 systemd[1]: No hostname configured, using default hostname.
Nov 12 17:42:07.218137 systemd[1]: Hostname set to .
Nov 12 17:42:07.218158 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 17:42:07.218178 systemd[1]: Queued start job for default target initrd.target.
Nov 12 17:42:07.218199 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 17:42:07.218219 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 17:42:07.218242 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 17:42:07.218268 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 17:42:07.218289 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 17:42:07.218311 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 17:42:07.218335 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 17:42:07.218356 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 17:42:07.218377 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 17:42:07.218398 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 17:42:07.218423 systemd[1]: Reached target paths.target - Path Units.
Nov 12 17:42:07.218445 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 17:42:07.218466 systemd[1]: Reached target swap.target - Swaps.
Nov 12 17:42:07.218486 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 17:42:07.218507 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 17:42:07.218528 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 17:42:07.218549 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 17:42:07.218570 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 17:42:07.218591 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 17:42:07.218617 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 17:42:07.218638 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 17:42:07.218658 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 17:42:07.218679 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 17:42:07.218700 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 17:42:07.218720 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 17:42:07.218741 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 17:42:07.218761 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 17:42:07.218786 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 17:42:07.218807 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 17:42:07.218828 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 17:42:07.218848 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 17:42:07.218869 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 17:42:07.218891 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 17:42:07.219025 systemd-journald[250]: Collecting audit messages is disabled.
Nov 12 17:42:07.219074 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 17:42:07.219094 kernel: Bridge firewalling registered
Nov 12 17:42:07.219121 systemd-journald[250]: Journal started
Nov 12 17:42:07.219159 systemd-journald[250]: Runtime Journal (/run/log/journal/ec299f50e08c26e7b917b67356b2701d) is 8.0M, max 75.3M, 67.3M free.
Nov 12 17:42:07.189252 systemd-modules-load[251]: Inserted module 'overlay'
Nov 12 17:42:07.223018 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 17:42:07.221362 systemd-modules-load[251]: Inserted module 'br_netfilter'
Nov 12 17:42:07.228862 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 17:42:07.233758 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:42:07.239645 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 17:42:07.261457 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 17:42:07.268291 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 17:42:07.282282 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 17:42:07.290268 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 17:42:07.320045 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 17:42:07.341022 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 17:42:07.342332 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 17:42:07.375487 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 17:42:07.379823 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 17:42:07.392273 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 17:42:07.431424 dracut-cmdline[290]: dracut-dracut-053
Nov 12 17:42:07.439026 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c276c03cfeb31103ba0b5f1af613bdc698463ad3d29e6750e34154929bf187e
Nov 12 17:42:07.464320 systemd-resolved[286]: Positive Trust Anchors:
Nov 12 17:42:07.465443 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 17:42:07.465511 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 17:42:07.616985 kernel: SCSI subsystem initialized
Nov 12 17:42:07.623979 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 17:42:07.636987 kernel: iscsi: registered transport (tcp)
Nov 12 17:42:07.659533 kernel: iscsi: registered transport (qla4xxx)
Nov 12 17:42:07.659631 kernel: QLogic iSCSI HBA Driver
Nov 12 17:42:07.705000 kernel: random: crng init done
Nov 12 17:42:07.705764 systemd-resolved[286]: Defaulting to hostname 'linux'.
Nov 12 17:42:07.712840 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 17:42:07.722694 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 17:42:07.744049 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 17:42:07.755314 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 17:42:07.794366 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 17:42:07.794452 kernel: device-mapper: uevent: version 1.0.3
Nov 12 17:42:07.794480 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 17:42:07.863999 kernel: raid6: neonx8 gen() 6708 MB/s
Nov 12 17:42:07.880988 kernel: raid6: neonx4 gen() 6534 MB/s
Nov 12 17:42:07.897991 kernel: raid6: neonx2 gen() 5427 MB/s
Nov 12 17:42:07.914983 kernel: raid6: neonx1 gen() 3956 MB/s
Nov 12 17:42:07.931990 kernel: raid6: int64x8 gen() 3827 MB/s
Nov 12 17:42:07.948993 kernel: raid6: int64x4 gen() 3714 MB/s
Nov 12 17:42:07.966005 kernel: raid6: int64x2 gen() 3597 MB/s
Nov 12 17:42:07.983866 kernel: raid6: int64x1 gen() 2761 MB/s
Nov 12 17:42:07.983937 kernel: raid6: using algorithm neonx8 gen() 6708 MB/s
Nov 12 17:42:08.001813 kernel: raid6: .... xor() 4875 MB/s, rmw enabled
Nov 12 17:42:08.001866 kernel: raid6: using neon recovery algorithm
Nov 12 17:42:08.010626 kernel: xor: measuring software checksum speed
Nov 12 17:42:08.010681 kernel: 8regs : 10971 MB/sec
Nov 12 17:42:08.011816 kernel: 32regs : 11947 MB/sec
Nov 12 17:42:08.014031 kernel: arm64_neon : 8986 MB/sec
Nov 12 17:42:08.014076 kernel: xor: using function: 32regs (11947 MB/sec)
Nov 12 17:42:08.098004 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 17:42:08.118362 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 17:42:08.127277 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 17:42:08.178396 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Nov 12 17:42:08.186677 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 17:42:08.214312 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 17:42:08.249751 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Nov 12 17:42:08.308188 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 17:42:08.319239 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 17:42:08.444332 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 17:42:08.453538 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 17:42:08.500724 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 17:42:08.505553 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 17:42:08.506316 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 17:42:08.506426 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 17:42:08.520442 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 17:42:08.555612 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 17:42:08.631550 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 12 17:42:08.631652 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Nov 12 17:42:08.654304 kernel: ena 0000:00:05.0: ENA device version: 0.10
Nov 12 17:42:08.654570 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Nov 12 17:42:08.654805 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:64:c4:9a:1f:05
Nov 12 17:42:08.658475 (udev-worker)[516]: Network interface NamePolicy= disabled on kernel command line.
Nov 12 17:42:08.661628 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 17:42:08.661889 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 17:42:08.675314 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 17:42:08.679437 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 17:42:08.680226 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:42:08.695489 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Nov 12 17:42:08.695529 kernel: nvme nvme0: pci function 0000:00:04.0
Nov 12 17:42:08.688498 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 17:42:08.700368 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 17:42:08.720010 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Nov 12 17:42:08.733228 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:42:08.744549 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 17:42:08.744588 kernel: GPT:9289727 != 16777215
Nov 12 17:42:08.744614 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 17:42:08.744639 kernel: GPT:9289727 != 16777215
Nov 12 17:42:08.744664 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 17:42:08.744689 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 12 17:42:08.749287 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 17:42:08.789129 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 17:42:08.835028 kernel: BTRFS: device fsid 93a9d474-e751-47b7-a65f-e39ca9abd47a devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (533)
Nov 12 17:42:08.874979 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (526)
Nov 12 17:42:08.886448 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Nov 12 17:42:08.947509 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Nov 12 17:42:08.977794 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Nov 12 17:42:08.982868 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Nov 12 17:42:09.009935 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 12 17:42:09.021265 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 17:42:09.042559 disk-uuid[662]: Primary Header is updated.
Nov 12 17:42:09.042559 disk-uuid[662]: Secondary Entries is updated.
Nov 12 17:42:09.042559 disk-uuid[662]: Secondary Header is updated.
Nov 12 17:42:09.056162 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 12 17:42:09.064003 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 12 17:42:09.072988 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 12 17:42:10.072075 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 12 17:42:10.077068 disk-uuid[663]: The operation has completed successfully.
Nov 12 17:42:10.285719 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 17:42:10.285971 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 17:42:10.325306 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 17:42:10.334933 sh[1007]: Success
Nov 12 17:42:10.361980 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Nov 12 17:42:10.471377 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 17:42:10.489198 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 17:42:10.498633 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 17:42:10.531105 kernel: BTRFS info (device dm-0): first mount of filesystem 93a9d474-e751-47b7-a65f-e39ca9abd47a
Nov 12 17:42:10.531171 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 12 17:42:10.531210 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 17:42:10.532857 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 17:42:10.534118 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 17:42:10.558986 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 12 17:42:10.574403 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 17:42:10.576869 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 17:42:10.594335 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 17:42:10.605599 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 17:42:10.625930 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:42:10.626018 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 17:42:10.626047 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 12 17:42:10.637997 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 12 17:42:10.655089 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 17:42:10.658537 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:42:10.668879 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 17:42:10.680295 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 17:42:10.808536 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 17:42:10.835331 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 17:42:10.890319 ignition[1102]: Ignition 2.19.0
Nov 12 17:42:10.890353 ignition[1102]: Stage: fetch-offline
Nov 12 17:42:10.892090 ignition[1102]: no configs at "/usr/lib/ignition/base.d"
Nov 12 17:42:10.893757 systemd-networkd[1202]: lo: Link UP
Nov 12 17:42:10.892118 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 17:42:10.893764 systemd-networkd[1202]: lo: Gained carrier
Nov 12 17:42:10.896309 ignition[1102]: Ignition finished successfully
Nov 12 17:42:10.902190 systemd-networkd[1202]: Enumeration completed
Nov 12 17:42:10.902360 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 17:42:10.909122 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 17:42:10.909471 systemd-networkd[1202]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 17:42:10.909527 systemd-networkd[1202]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 17:42:10.911968 systemd[1]: Reached target network.target - Network.
Nov 12 17:42:10.937394 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 12 17:42:10.939790 systemd-networkd[1202]: eth0: Link UP
Nov 12 17:42:10.939803 systemd-networkd[1202]: eth0: Gained carrier
Nov 12 17:42:10.939828 systemd-networkd[1202]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 17:42:10.968130 systemd-networkd[1202]: eth0: DHCPv4 address 172.31.27.95/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 12 17:42:10.987247 ignition[1209]: Ignition 2.19.0
Nov 12 17:42:10.987820 ignition[1209]: Stage: fetch
Nov 12 17:42:10.988534 ignition[1209]: no configs at "/usr/lib/ignition/base.d"
Nov 12 17:42:10.988559 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 17:42:10.988768 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 17:42:11.004003 ignition[1209]: PUT result: OK
Nov 12 17:42:11.007382 ignition[1209]: parsed url from cmdline: ""
Nov 12 17:42:11.007405 ignition[1209]: no config URL provided
Nov 12 17:42:11.007421 ignition[1209]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 17:42:11.007452 ignition[1209]: no config at "/usr/lib/ignition/user.ign"
Nov 12 17:42:11.007485 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 17:42:11.011150 ignition[1209]: PUT result: OK
Nov 12 17:42:11.011246 ignition[1209]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Nov 12 17:42:11.015370 ignition[1209]: GET result: OK
Nov 12 17:42:11.016589 ignition[1209]: parsing config with SHA512: bf5f6d9fdc87d7c585f8ebea17ac4fa74b047c4dce77597c9c6f7b29c5e6961116d611631023013b3d304bffe79e44abfb34953087359652283ba3b0e2d36465
Nov 12 17:42:11.028236 unknown[1209]: fetched base config from "system"
Nov 12 17:42:11.028276 unknown[1209]: fetched base config from "system"
Nov 12 17:42:11.028292 unknown[1209]: fetched user config from "aws"
Nov 12 17:42:11.034793 ignition[1209]: fetch: fetch complete
Nov 12 17:42:11.034812 ignition[1209]: fetch: fetch passed
Nov 12 17:42:11.034926 ignition[1209]: Ignition finished successfully
Nov 12 17:42:11.040341 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 12 17:42:11.052245 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 17:42:11.082234 ignition[1216]: Ignition 2.19.0
Nov 12 17:42:11.082775 ignition[1216]: Stage: kargs
Nov 12 17:42:11.083448 ignition[1216]: no configs at "/usr/lib/ignition/base.d"
Nov 12 17:42:11.083474 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 17:42:11.083676 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 17:42:11.088160 ignition[1216]: PUT result: OK
Nov 12 17:42:11.096709 ignition[1216]: kargs: kargs passed
Nov 12 17:42:11.097025 ignition[1216]: Ignition finished successfully
Nov 12 17:42:11.104038 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 17:42:11.115186 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 17:42:11.140711 ignition[1222]: Ignition 2.19.0
Nov 12 17:42:11.140744 ignition[1222]: Stage: disks
Nov 12 17:42:11.142106 ignition[1222]: no configs at "/usr/lib/ignition/base.d"
Nov 12 17:42:11.142132 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 17:42:11.142292 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 17:42:11.144213 ignition[1222]: PUT result: OK
Nov 12 17:42:11.153670 ignition[1222]: disks: disks passed
Nov 12 17:42:11.153781 ignition[1222]: Ignition finished successfully
Nov 12 17:42:11.160004 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 17:42:11.162904 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 17:42:11.169709 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 17:42:11.174190 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 17:42:11.178080 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 17:42:11.180064 systemd[1]: Reached target basic.target - Basic System.
Nov 12 17:42:11.196379 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 17:42:11.238058 systemd-fsck[1230]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 17:42:11.242330 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 17:42:11.261387 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 17:42:11.349983 kernel: EXT4-fs (nvme0n1p9): mounted filesystem b3af0fd7-3c7c-4cdc-9b88-dae3d10ea922 r/w with ordered data mode. Quota mode: none.
Nov 12 17:42:11.351638 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 17:42:11.355872 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 17:42:11.374143 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 17:42:11.380810 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 17:42:11.386479 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 17:42:11.386569 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 17:42:11.386622 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 17:42:11.412793 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 17:42:11.423136 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1249)
Nov 12 17:42:11.423521 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 17:42:11.431583 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:42:11.431636 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 17:42:11.433464 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 12 17:42:11.445987 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 12 17:42:11.449738 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 17:42:11.543723 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 17:42:11.554289 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory
Nov 12 17:42:11.562977 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 17:42:11.571787 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 17:42:11.720875 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 17:42:11.731157 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 17:42:11.745506 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 17:42:11.761419 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 17:42:11.764713 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:42:11.802025 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 17:42:11.813107 ignition[1362]: INFO : Ignition 2.19.0
Nov 12 17:42:11.813107 ignition[1362]: INFO : Stage: mount
Nov 12 17:42:11.816462 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 17:42:11.816462 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 17:42:11.816462 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 17:42:11.824150 ignition[1362]: INFO : PUT result: OK
Nov 12 17:42:11.827468 ignition[1362]: INFO : mount: mount passed
Nov 12 17:42:11.829026 ignition[1362]: INFO : Ignition finished successfully
Nov 12 17:42:11.834011 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 17:42:11.843157 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 17:42:11.874406 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 17:42:11.895993 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1373)
Nov 12 17:42:11.896056 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:42:11.899423 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 17:42:11.899479 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 12 17:42:11.906991 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 12 17:42:11.910479 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 17:42:11.946629 ignition[1390]: INFO : Ignition 2.19.0
Nov 12 17:42:11.946629 ignition[1390]: INFO : Stage: files
Nov 12 17:42:11.949975 ignition[1390]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 17:42:11.949975 ignition[1390]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 17:42:11.949975 ignition[1390]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 17:42:11.956604 ignition[1390]: INFO : PUT result: OK
Nov 12 17:42:11.961575 ignition[1390]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 17:42:11.963995 ignition[1390]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 17:42:11.963995 ignition[1390]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 17:42:11.972102 ignition[1390]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 17:42:11.974870 ignition[1390]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 17:42:11.977894 unknown[1390]: wrote ssh authorized keys file for user: core
Nov 12 17:42:11.980177 ignition[1390]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 17:42:11.984789 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 12 17:42:11.988837 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 12 17:42:11.988837 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Nov 12 17:42:11.988837 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Nov 12 17:42:12.104638 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 12 17:42:12.240026 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Nov 12 17:42:12.240026 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 17:42:12.240026 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 17:42:12.240026 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 17:42:12.240026 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 17:42:12.240026 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 17:42:12.240026 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 17:42:12.240026 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 17:42:12.240026 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 17:42:12.240026 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 17:42:12.240026 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 17:42:12.240026 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Nov 12 17:42:12.240026 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Nov 12 17:42:12.240026 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Nov 12 17:42:12.240026 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Nov 12 17:42:12.606125 systemd-networkd[1202]: eth0: Gained IPv6LL
Nov 12 17:42:12.736586 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 12 17:42:13.122251 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Nov 12 17:42:13.122251 ignition[1390]: INFO : files: op(c): [started] processing unit "containerd.service"
Nov 12 17:42:13.131155 ignition[1390]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 12 17:42:13.131155 ignition[1390]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 12 17:42:13.131155 ignition[1390]: INFO : files: op(c): [finished] processing unit "containerd.service"
Nov 12 17:42:13.131155 ignition[1390]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Nov 12 17:42:13.131155 ignition[1390]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 17:42:13.131155 ignition[1390]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 17:42:13.131155 ignition[1390]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Nov 12 17:42:13.131155 ignition[1390]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 17:42:13.131155 ignition[1390]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 17:42:13.131155 ignition[1390]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 17:42:13.131155 ignition[1390]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 17:42:13.131155 ignition[1390]: INFO : files: files passed
Nov 12 17:42:13.131155 ignition[1390]: INFO : Ignition finished successfully
Nov 12 17:42:13.135774 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 17:42:13.185550 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 17:42:13.202216 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 17:42:13.212537 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 17:42:13.214460 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 17:42:13.246546 initrd-setup-root-after-ignition[1419]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 17:42:13.246546 initrd-setup-root-after-ignition[1419]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 17:42:13.255474 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 17:42:13.262033 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 17:42:13.266758 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 17:42:13.281319 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 17:42:13.327249 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 17:42:13.329124 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 17:42:13.333343 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 17:42:13.337857 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 17:42:13.340094 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 17:42:13.351237 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 17:42:13.381385 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 17:42:13.393275 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 17:42:13.426531 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 17:42:13.427851 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 17:42:13.429287 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 17:42:13.429820 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 17:42:13.430603 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 17:42:13.431650 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 17:42:13.431984 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 17:42:13.432220 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 17:42:13.432514 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 17:42:13.432823 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 17:42:13.433447 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 17:42:13.433753 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 17:42:13.434389 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 17:42:13.434819 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 17:42:13.435745 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 17:42:13.436254 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 17:42:13.436541 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 17:42:13.437466 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 17:42:13.437868 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 17:42:13.438678 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 17:42:13.457917 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 17:42:13.462640 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 17:42:13.463160 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 17:42:13.484401 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 12 17:42:13.484751 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 17:42:13.496013 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 12 17:42:13.496320 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 12 17:42:13.517396 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 12 17:42:13.533358 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 17:42:13.533750 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 17:42:13.547555 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 12 17:42:13.552116 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 12 17:42:13.554501 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 17:42:13.562070 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 12 17:42:13.564165 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 17:42:13.580305 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 12 17:42:13.582493 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 12 17:42:13.591450 ignition[1443]: INFO : Ignition 2.19.0
Nov 12 17:42:13.591450 ignition[1443]: INFO : Stage: umount
Nov 12 17:42:13.594730 ignition[1443]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 17:42:13.594730 ignition[1443]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 17:42:13.599004 ignition[1443]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 17:42:13.604118 ignition[1443]: INFO : PUT result: OK
Nov 12 17:42:13.609698 ignition[1443]: INFO : umount: umount passed
Nov 12 17:42:13.611814 ignition[1443]: INFO : Ignition finished successfully
Nov 12 17:42:13.616647 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 12 17:42:13.618017 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 12 17:42:13.623298 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 12 17:42:13.623398 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 12 17:42:13.625787 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 12 17:42:13.625873 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 12 17:42:13.628016 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 12 17:42:13.628100 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 12 17:42:13.640432 systemd[1]: Stopped target network.target - Network.
Nov 12 17:42:13.643093 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 12 17:42:13.643424 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 17:42:13.650840 systemd[1]: Stopped target paths.target - Path Units. Nov 12 17:42:13.653088 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 17:42:13.657113 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 17:42:13.663531 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 17:42:13.665435 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 17:42:13.672017 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 17:42:13.672120 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 17:42:13.676679 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 17:42:13.676766 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 17:42:13.678811 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 17:42:13.678920 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 17:42:13.682844 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 17:42:13.683031 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 17:42:13.695328 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 17:42:13.701717 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 17:42:13.707386 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 17:42:13.708575 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 17:42:13.708798 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 17:42:13.708908 systemd-networkd[1202]: eth0: DHCPv6 lease lost Nov 12 17:42:13.711878 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 17:42:13.714051 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Nov 12 17:42:13.718411 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 17:42:13.719004 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 17:42:13.729377 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 17:42:13.729673 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 17:42:13.740091 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 17:42:13.740201 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 17:42:13.760279 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 17:42:13.764172 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 17:42:13.764292 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 17:42:13.767116 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 17:42:13.767203 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 17:42:13.769584 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 17:42:13.769665 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 17:42:13.772707 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 17:42:13.772785 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 17:42:13.775869 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 17:42:13.825264 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 17:42:13.826376 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 17:42:13.833126 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 17:42:13.837051 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 17:42:13.840397 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Nov 12 17:42:13.840538 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 17:42:13.845865 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 17:42:13.845962 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 17:42:13.855093 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 17:42:13.855194 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 17:42:13.857544 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 17:42:13.857631 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 17:42:13.867292 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 17:42:13.867389 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 17:42:13.879268 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 17:42:13.883474 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 17:42:13.883623 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 17:42:13.886541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 17:42:13.886633 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 17:42:13.923676 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 17:42:13.923880 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 17:42:13.928751 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 17:42:13.948318 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 17:42:13.979293 systemd[1]: Switching root. Nov 12 17:42:14.018479 systemd-journald[250]: Journal stopped Nov 12 17:42:16.104883 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). 
Nov 12 17:42:16.105065 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 17:42:16.105109 kernel: SELinux: policy capability open_perms=1 Nov 12 17:42:16.105141 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 17:42:16.105171 kernel: SELinux: policy capability always_check_network=0 Nov 12 17:42:16.105211 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 17:42:16.105247 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 17:42:16.105276 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 17:42:16.105307 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 17:42:16.105346 kernel: audit: type=1403 audit(1731433334.628:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 17:42:16.105378 systemd[1]: Successfully loaded SELinux policy in 50.648ms. Nov 12 17:42:16.105424 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.711ms. Nov 12 17:42:16.105458 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 17:42:16.105490 systemd[1]: Detected virtualization amazon. Nov 12 17:42:16.105520 systemd[1]: Detected architecture arm64. Nov 12 17:42:16.105555 systemd[1]: Detected first boot. Nov 12 17:42:16.105595 systemd[1]: Initializing machine ID from VM UUID. Nov 12 17:42:16.105629 zram_generator::config[1506]: No configuration found. Nov 12 17:42:16.105665 systemd[1]: Populated /etc with preset unit settings. Nov 12 17:42:16.105698 systemd[1]: Queued start job for default target multi-user.target. Nov 12 17:42:16.105731 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. 
Nov 12 17:42:16.105765 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 12 17:42:16.105800 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 17:42:16.105835 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 17:42:16.105882 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 17:42:16.105912 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 17:42:16.105981 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 17:42:16.106025 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 17:42:16.106058 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 17:42:16.106093 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 17:42:16.106123 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 17:42:16.106153 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 17:42:16.106187 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 17:42:16.106217 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 12 17:42:16.106252 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 17:42:16.106281 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 12 17:42:16.106311 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 17:42:16.106342 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 17:42:16.106373 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Nov 12 17:42:16.106405 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 17:42:16.106440 systemd[1]: Reached target slices.target - Slice Units. Nov 12 17:42:16.106472 systemd[1]: Reached target swap.target - Swaps. Nov 12 17:42:16.106504 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 17:42:16.106537 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 17:42:16.106570 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 17:42:16.106603 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 17:42:16.106632 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 17:42:16.106663 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 17:42:16.106698 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 17:42:16.106732 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 17:42:16.106762 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 17:42:16.106791 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 17:42:16.106823 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 17:42:16.106855 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 12 17:42:16.106889 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 17:42:16.106918 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 17:42:16.106971 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 17:42:16.107011 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 17:42:16.107042 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Nov 12 17:42:16.107072 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 17:42:16.107102 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 17:42:16.107135 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 17:42:16.107165 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 17:42:16.107194 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 12 17:42:16.107223 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 17:42:16.107253 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 17:42:16.107288 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 12 17:42:16.107323 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Nov 12 17:42:16.107354 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 17:42:16.107385 kernel: loop: module loaded Nov 12 17:42:16.107416 kernel: fuse: init (API version 7.39) Nov 12 17:42:16.107444 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 17:42:16.107474 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 12 17:42:16.107505 kernel: ACPI: bus type drm_connector registered Nov 12 17:42:16.107551 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 17:42:16.107589 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 17:42:16.107621 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 17:42:16.107697 systemd-journald[1602]: Collecting audit messages is disabled. 
Nov 12 17:42:16.107751 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 17:42:16.107781 systemd-journald[1602]: Journal started Nov 12 17:42:16.107830 systemd-journald[1602]: Runtime Journal (/run/log/journal/ec299f50e08c26e7b917b67356b2701d) is 8.0M, max 75.3M, 67.3M free. Nov 12 17:42:16.114547 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 17:42:16.116700 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 17:42:16.121414 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 17:42:16.123826 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 17:42:16.126351 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 17:42:16.129006 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 17:42:16.135558 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 17:42:16.135940 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 17:42:16.138972 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 17:42:16.139315 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 17:42:16.142793 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 17:42:16.143203 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 17:42:16.149776 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 17:42:16.150211 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 17:42:16.153602 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 17:42:16.153971 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 17:42:16.156701 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 17:42:16.157203 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Nov 12 17:42:16.160105 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 17:42:16.165936 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 17:42:16.176801 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 17:42:16.181869 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 17:42:16.208624 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 17:42:16.221283 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 17:42:16.226297 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 17:42:16.229127 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 17:42:16.246300 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 17:42:16.269519 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 17:42:16.271883 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 17:42:16.275192 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 17:42:16.280228 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 17:42:16.294112 systemd-journald[1602]: Time spent on flushing to /var/log/journal/ec299f50e08c26e7b917b67356b2701d is 61.967ms for 891 entries. Nov 12 17:42:16.294112 systemd-journald[1602]: System Journal (/var/log/journal/ec299f50e08c26e7b917b67356b2701d) is 8.0M, max 195.6M, 187.6M free. Nov 12 17:42:16.367720 systemd-journald[1602]: Received client request to flush runtime journal. 
Nov 12 17:42:16.299274 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 17:42:16.314616 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 17:42:16.326749 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 17:42:16.331178 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 17:42:16.378709 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 17:42:16.384506 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 17:42:16.389497 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 17:42:16.414805 systemd-tmpfiles[1655]: ACLs are not supported, ignoring. Nov 12 17:42:16.417058 systemd-tmpfiles[1655]: ACLs are not supported, ignoring. Nov 12 17:42:16.429587 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 17:42:16.447253 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 17:42:16.450993 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 17:42:16.485785 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 17:42:16.500263 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 17:42:16.531420 udevadm[1674]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 12 17:42:16.561550 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 17:42:16.575278 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 17:42:16.606217 systemd-tmpfiles[1677]: ACLs are not supported, ignoring. Nov 12 17:42:16.606756 systemd-tmpfiles[1677]: ACLs are not supported, ignoring. 
Nov 12 17:42:16.618713 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 17:42:17.328414 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 17:42:17.340313 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 17:42:17.395070 systemd-udevd[1683]: Using default interface naming scheme 'v255'. Nov 12 17:42:17.437145 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 17:42:17.452233 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 17:42:17.483311 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 17:42:17.575904 (udev-worker)[1691]: Network interface NamePolicy= disabled on kernel command line. Nov 12 17:42:17.578870 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Nov 12 17:42:17.612137 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1698) Nov 12 17:42:17.622505 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 17:42:17.661999 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1698) Nov 12 17:42:17.797523 systemd-networkd[1687]: lo: Link UP Nov 12 17:42:17.797543 systemd-networkd[1687]: lo: Gained carrier Nov 12 17:42:17.800723 systemd-networkd[1687]: Enumeration completed Nov 12 17:42:17.801156 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 17:42:17.803810 systemd-networkd[1687]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 17:42:17.803833 systemd-networkd[1687]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 12 17:42:17.806554 systemd-networkd[1687]: eth0: Link UP Nov 12 17:42:17.806869 systemd-networkd[1687]: eth0: Gained carrier Nov 12 17:42:17.806901 systemd-networkd[1687]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 17:42:17.810194 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1699) Nov 12 17:42:17.812479 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 12 17:42:17.827121 systemd-networkd[1687]: eth0: DHCPv4 address 172.31.27.95/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 12 17:42:17.976392 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 17:42:18.082018 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 17:42:18.129678 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 12 17:42:18.133068 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 17:42:18.142283 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 17:42:18.170038 lvm[1812]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 17:42:18.210690 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 17:42:18.214488 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 17:42:18.227224 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 17:42:18.237763 lvm[1815]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 17:42:18.276650 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 12 17:42:18.279387 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Nov 12 17:42:18.281882 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 17:42:18.281938 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 17:42:18.284052 systemd[1]: Reached target machines.target - Containers. Nov 12 17:42:18.287961 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 17:42:18.297315 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 17:42:18.308299 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 17:42:18.311491 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 17:42:18.317717 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 17:42:18.325943 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 17:42:18.343302 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 17:42:18.352271 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 17:42:18.382851 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 17:42:18.384818 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 12 17:42:18.396232 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Nov 12 17:42:18.413996 kernel: loop0: detected capacity change from 0 to 114328 Nov 12 17:42:18.456743 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 17:42:18.496011 kernel: loop1: detected capacity change from 0 to 194512 Nov 12 17:42:18.553990 kernel: loop2: detected capacity change from 0 to 52536 Nov 12 17:42:18.603981 kernel: loop3: detected capacity change from 0 to 114432 Nov 12 17:42:18.657033 kernel: loop4: detected capacity change from 0 to 114328 Nov 12 17:42:18.681977 kernel: loop5: detected capacity change from 0 to 194512 Nov 12 17:42:18.710009 kernel: loop6: detected capacity change from 0 to 52536 Nov 12 17:42:18.735000 kernel: loop7: detected capacity change from 0 to 114432 Nov 12 17:42:18.751571 (sd-merge)[1836]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Nov 12 17:42:18.752579 (sd-merge)[1836]: Merged extensions into '/usr'. Nov 12 17:42:18.759623 systemd[1]: Reloading requested from client PID 1823 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 17:42:18.759655 systemd[1]: Reloading... Nov 12 17:42:18.913080 zram_generator::config[1866]: No configuration found. Nov 12 17:42:19.054784 ldconfig[1819]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 17:42:19.070077 systemd-networkd[1687]: eth0: Gained IPv6LL Nov 12 17:42:19.185196 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 17:42:19.326803 systemd[1]: Reloading finished in 566 ms. Nov 12 17:42:19.358228 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 17:42:19.362121 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 17:42:19.365525 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Nov 12 17:42:19.381387 systemd[1]: Starting ensure-sysext.service... Nov 12 17:42:19.388538 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 17:42:19.409110 systemd[1]: Reloading requested from client PID 1927 ('systemctl') (unit ensure-sysext.service)... Nov 12 17:42:19.409155 systemd[1]: Reloading... Nov 12 17:42:19.450727 systemd-tmpfiles[1928]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 17:42:19.452312 systemd-tmpfiles[1928]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 17:42:19.454383 systemd-tmpfiles[1928]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 17:42:19.455099 systemd-tmpfiles[1928]: ACLs are not supported, ignoring. Nov 12 17:42:19.455368 systemd-tmpfiles[1928]: ACLs are not supported, ignoring. Nov 12 17:42:19.463250 systemd-tmpfiles[1928]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 17:42:19.463467 systemd-tmpfiles[1928]: Skipping /boot Nov 12 17:42:19.486735 systemd-tmpfiles[1928]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 17:42:19.486909 systemd-tmpfiles[1928]: Skipping /boot Nov 12 17:42:19.581011 zram_generator::config[1959]: No configuration found. Nov 12 17:42:19.810769 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 17:42:19.952759 systemd[1]: Reloading finished in 542 ms. Nov 12 17:42:19.983200 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 17:42:20.000249 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 17:42:20.019215 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Nov 12 17:42:20.026261 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 17:42:20.043532 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 17:42:20.050037 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 17:42:20.075772 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 17:42:20.082117 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 17:42:20.090806 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 17:42:20.099156 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 17:42:20.101499 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 17:42:20.108362 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 17:42:20.109679 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 17:42:20.119568 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 17:42:20.139275 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 17:42:20.142930 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 17:42:20.145628 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 17:42:20.158874 systemd[1]: Finished ensure-sysext.service. Nov 12 17:42:20.180711 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 17:42:20.181127 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Nov 12 17:42:20.192875 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 17:42:20.195450 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 17:42:20.203629 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 17:42:20.206323 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 17:42:20.222175 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 12 17:42:20.228095 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 12 17:42:20.231447 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 17:42:20.231850 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 17:42:20.253088 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 17:42:20.253211 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 17:42:20.265292 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 12 17:42:20.294474 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 12 17:42:20.301537 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 12 17:42:20.305470 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 17:42:20.316987 augenrules[2060]: No rules
Nov 12 17:42:20.320720 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 17:42:20.375303 systemd-resolved[2019]: Positive Trust Anchors:
Nov 12 17:42:20.375343 systemd-resolved[2019]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 17:42:20.375407 systemd-resolved[2019]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 17:42:20.384461 systemd-resolved[2019]: Defaulting to hostname 'linux'.
Nov 12 17:42:20.387767 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 17:42:20.390527 systemd[1]: Reached target network.target - Network.
Nov 12 17:42:20.392218 systemd[1]: Reached target network-online.target - Network is Online.
Nov 12 17:42:20.394267 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 17:42:20.396535 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 17:42:20.398707 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 12 17:42:20.401134 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 12 17:42:20.403849 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 12 17:42:20.406205 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 12 17:42:20.408812 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 12 17:42:20.411524 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 12 17:42:20.411578 systemd[1]: Reached target paths.target - Path Units.
Nov 12 17:42:20.414049 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 17:42:20.417287 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 12 17:42:20.422397 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 12 17:42:20.427256 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 12 17:42:20.433804 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 12 17:42:20.436267 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 17:42:20.438138 systemd[1]: Reached target basic.target - Basic System.
Nov 12 17:42:20.440312 systemd[1]: System is tainted: cgroupsv1
Nov 12 17:42:20.440381 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 12 17:42:20.440426 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 12 17:42:20.447183 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 12 17:42:20.467247 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 12 17:42:20.474295 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 12 17:42:20.489458 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 12 17:42:20.505818 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 12 17:42:20.508264 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 12 17:42:20.520153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 17:42:20.527135 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 12 17:42:20.534715 jq[2073]: false
Nov 12 17:42:20.547176 systemd[1]: Started ntpd.service - Network Time Service.
Nov 12 17:42:20.568276 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 12 17:42:20.592976 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 12 17:42:20.616153 systemd[1]: Starting setup-oem.service - Setup OEM...
Nov 12 17:42:20.639832 dbus-daemon[2072]: [system] SELinux support is enabled
Nov 12 17:42:20.643162 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 12 17:42:20.657378 extend-filesystems[2074]: Found loop4
Nov 12 17:42:20.657378 extend-filesystems[2074]: Found loop5
Nov 12 17:42:20.657378 extend-filesystems[2074]: Found loop6
Nov 12 17:42:20.657378 extend-filesystems[2074]: Found loop7
Nov 12 17:42:20.657378 extend-filesystems[2074]: Found nvme0n1
Nov 12 17:42:20.690114 extend-filesystems[2074]: Found nvme0n1p2
Nov 12 17:42:20.690114 extend-filesystems[2074]: Found nvme0n1p3
Nov 12 17:42:20.690114 extend-filesystems[2074]: Found usr
Nov 12 17:42:20.690114 extend-filesystems[2074]: Found nvme0n1p4
Nov 12 17:42:20.690114 extend-filesystems[2074]: Found nvme0n1p6
Nov 12 17:42:20.690114 extend-filesystems[2074]: Found nvme0n1p7
Nov 12 17:42:20.690114 extend-filesystems[2074]: Found nvme0n1p9
Nov 12 17:42:20.690114 extend-filesystems[2074]: Checking size of /dev/nvme0n1p9
Nov 12 17:42:20.660230 dbus-daemon[2072]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1687 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Nov 12 17:42:20.664521 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 12 17:42:20.745578 ntpd[2080]: 12 Nov 17:42:20 ntpd[2080]: ntpd 4.2.8p17@1.4004-o Tue Nov 12 15:49:27 UTC 2024 (1): Starting
Nov 12 17:42:20.745578 ntpd[2080]: 12 Nov 17:42:20 ntpd[2080]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Nov 12 17:42:20.745578 ntpd[2080]: 12 Nov 17:42:20 ntpd[2080]: ----------------------------------------------------
Nov 12 17:42:20.745578 ntpd[2080]: 12 Nov 17:42:20 ntpd[2080]: ntp-4 is maintained by Network Time Foundation,
Nov 12 17:42:20.745578 ntpd[2080]: 12 Nov 17:42:20 ntpd[2080]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 12 17:42:20.745578 ntpd[2080]: 12 Nov 17:42:20 ntpd[2080]: corporation. Support and training for ntp-4 are
Nov 12 17:42:20.745578 ntpd[2080]: 12 Nov 17:42:20 ntpd[2080]: available at https://www.nwtime.org/support
Nov 12 17:42:20.745578 ntpd[2080]: 12 Nov 17:42:20 ntpd[2080]: ----------------------------------------------------
Nov 12 17:42:20.745578 ntpd[2080]: 12 Nov 17:42:20 ntpd[2080]: proto: precision = 0.096 usec (-23)
Nov 12 17:42:20.745578 ntpd[2080]: 12 Nov 17:42:20 ntpd[2080]: basedate set to 2024-10-31
Nov 12 17:42:20.745578 ntpd[2080]: 12 Nov 17:42:20 ntpd[2080]: gps base set to 2024-11-03 (week 2339)
Nov 12 17:42:20.745578 ntpd[2080]: 12 Nov 17:42:20 ntpd[2080]: Listen and drop on 0 v6wildcard [::]:123
Nov 12 17:42:20.745578 ntpd[2080]: 12 Nov 17:42:20 ntpd[2080]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 12 17:42:20.745578 ntpd[2080]: 12 Nov 17:42:20 ntpd[2080]: Listen normally on 2 lo 127.0.0.1:123
Nov 12 17:42:20.745578 ntpd[2080]: 12 Nov 17:42:20 ntpd[2080]: Listen normally on 3 eth0 172.31.27.95:123
Nov 12 17:42:20.745578 ntpd[2080]: 12 Nov 17:42:20 ntpd[2080]: Listen normally on 4 lo [::1]:123
Nov 12 17:42:20.745578 ntpd[2080]: 12 Nov 17:42:20 ntpd[2080]: Listen normally on 5 eth0 [fe80::464:c4ff:fe9a:1f05%2]:123
Nov 12 17:42:20.745578 ntpd[2080]: 12 Nov 17:42:20 ntpd[2080]: Listening on routing socket on fd #22 for interface updates
Nov 12 17:42:20.745578 ntpd[2080]: 12 Nov 17:42:20 ntpd[2080]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 12 17:42:20.745578 ntpd[2080]: 12 Nov 17:42:20 ntpd[2080]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 12 17:42:20.718729 ntpd[2080]: ntpd 4.2.8p17@1.4004-o Tue Nov 12 15:49:27 UTC 2024 (1): Starting
Nov 12 17:42:20.684877 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 12 17:42:20.780409 coreos-metadata[2070]: Nov 12 17:42:20.769 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Nov 12 17:42:20.780409 coreos-metadata[2070]: Nov 12 17:42:20.769 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Nov 12 17:42:20.780409 coreos-metadata[2070]: Nov 12 17:42:20.774 INFO Fetch successful
Nov 12 17:42:20.780409 coreos-metadata[2070]: Nov 12 17:42:20.775 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Nov 12 17:42:20.780409 coreos-metadata[2070]: Nov 12 17:42:20.776 INFO Fetch successful
Nov 12 17:42:20.780409 coreos-metadata[2070]: Nov 12 17:42:20.776 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Nov 12 17:42:20.718777 ntpd[2080]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Nov 12 17:42:20.692478 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 12 17:42:20.807066 coreos-metadata[2070]: Nov 12 17:42:20.780 INFO Fetch successful
Nov 12 17:42:20.807066 coreos-metadata[2070]: Nov 12 17:42:20.780 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Nov 12 17:42:20.718797 ntpd[2080]: ----------------------------------------------------
Nov 12 17:42:20.711250 systemd[1]: Starting update-engine.service - Update Engine...
Nov 12 17:42:20.718816 ntpd[2080]: ntp-4 is maintained by Network Time Foundation,
Nov 12 17:42:20.781193 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 12 17:42:20.718834 ntpd[2080]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 12 17:42:20.787153 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 12 17:42:20.818092 coreos-metadata[2070]: Nov 12 17:42:20.814 INFO Fetch successful
Nov 12 17:42:20.818092 coreos-metadata[2070]: Nov 12 17:42:20.814 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Nov 12 17:42:20.818092 coreos-metadata[2070]: Nov 12 17:42:20.814 INFO Fetch failed with 404: resource not found
Nov 12 17:42:20.818092 coreos-metadata[2070]: Nov 12 17:42:20.814 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Nov 12 17:42:20.818344 extend-filesystems[2074]: Resized partition /dev/nvme0n1p9
Nov 12 17:42:20.718852 ntpd[2080]: corporation. Support and training for ntp-4 are
Nov 12 17:42:20.799346 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 12 17:42:20.825577 coreos-metadata[2070]: Nov 12 17:42:20.822 INFO Fetch successful
Nov 12 17:42:20.825577 coreos-metadata[2070]: Nov 12 17:42:20.823 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Nov 12 17:42:20.718870 ntpd[2080]: available at https://www.nwtime.org/support
Nov 12 17:42:20.799922 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 12 17:42:20.718888 ntpd[2080]: ----------------------------------------------------
Nov 12 17:42:20.810889 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 12 17:42:20.727036 ntpd[2080]: proto: precision = 0.096 usec (-23)
Nov 12 17:42:20.728093 ntpd[2080]: basedate set to 2024-10-31
Nov 12 17:42:20.728129 ntpd[2080]: gps base set to 2024-11-03 (week 2339)
Nov 12 17:42:20.732747 ntpd[2080]: Listen and drop on 0 v6wildcard [::]:123
Nov 12 17:42:20.732825 ntpd[2080]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 12 17:42:20.734355 ntpd[2080]: Listen normally on 2 lo 127.0.0.1:123
Nov 12 17:42:20.734422 ntpd[2080]: Listen normally on 3 eth0 172.31.27.95:123
Nov 12 17:42:20.734488 ntpd[2080]: Listen normally on 4 lo [::1]:123
Nov 12 17:42:20.734562 ntpd[2080]: Listen normally on 5 eth0 [fe80::464:c4ff:fe9a:1f05%2]:123
Nov 12 17:42:20.734625 ntpd[2080]: Listening on routing socket on fd #22 for interface updates
Nov 12 17:42:20.741092 ntpd[2080]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 12 17:42:20.741145 ntpd[2080]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 12 17:42:20.840174 extend-filesystems[2119]: resize2fs 1.47.1 (20-May-2024)
Nov 12 17:42:20.843260 coreos-metadata[2070]: Nov 12 17:42:20.840 INFO Fetch successful
Nov 12 17:42:20.843260 coreos-metadata[2070]: Nov 12 17:42:20.840 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Nov 12 17:42:20.843260 coreos-metadata[2070]: Nov 12 17:42:20.842 INFO Fetch successful
Nov 12 17:42:20.843429 jq[2109]: true
Nov 12 17:42:20.852984 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Nov 12 17:42:20.843889 systemd[1]: motdgen.service: Deactivated successfully.
Nov 12 17:42:20.844423 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 12 17:42:20.849636 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 12 17:42:20.854287 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 12 17:42:20.859206 coreos-metadata[2070]: Nov 12 17:42:20.858 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Nov 12 17:42:20.865627 coreos-metadata[2070]: Nov 12 17:42:20.865 INFO Fetch successful
Nov 12 17:42:20.865627 coreos-metadata[2070]: Nov 12 17:42:20.865 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Nov 12 17:42:20.877710 coreos-metadata[2070]: Nov 12 17:42:20.877 INFO Fetch successful
Nov 12 17:42:20.913585 update_engine[2099]: I20241112 17:42:20.913275 2099 main.cc:92] Flatcar Update Engine starting
Nov 12 17:42:20.923731 update_engine[2099]: I20241112 17:42:20.917896 2099 update_check_scheduler.cc:74] Next update check in 4m0s
Nov 12 17:42:20.976620 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Nov 12 17:42:20.976642 (ntainerd)[2129]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 12 17:42:20.987872 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 12 17:42:21.009999 jq[2123]: true
Nov 12 17:42:21.009200 dbus-daemon[2072]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 12 17:42:20.988940 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 12 17:42:21.036793 extend-filesystems[2119]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Nov 12 17:42:21.036793 extend-filesystems[2119]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 12 17:42:21.036793 extend-filesystems[2119]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Nov 12 17:42:20.992183 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 12 17:42:21.072638 tar[2118]: linux-arm64/helm
Nov 12 17:42:21.073095 extend-filesystems[2074]: Resized filesystem in /dev/nvme0n1p9
Nov 12 17:42:21.073095 extend-filesystems[2074]: Found nvme0n1p1
Nov 12 17:42:20.992226 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 12 17:42:21.001087 systemd[1]: Started update-engine.service - Update Engine.
Nov 12 17:42:21.016074 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 12 17:42:21.020033 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 12 17:42:21.022929 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 12 17:42:21.025576 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 12 17:42:21.110733 systemd[1]: Finished setup-oem.service - Setup OEM.
Nov 12 17:42:21.146690 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Nov 12 17:42:21.159598 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Nov 12 17:42:21.168038 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 12 17:42:21.174864 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 12 17:42:21.327739 bash[2184]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 17:42:21.344238 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 12 17:42:21.360410 systemd[1]: Starting sshkeys.service...
Nov 12 17:42:21.394721 amazon-ssm-agent[2160]: Initializing new seelog logger
Nov 12 17:42:21.399369 amazon-ssm-agent[2160]: New Seelog Logger Creation Complete
Nov 12 17:42:21.402313 amazon-ssm-agent[2160]: 2024/11/12 17:42:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 17:42:21.402313 amazon-ssm-agent[2160]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 17:42:21.402313 amazon-ssm-agent[2160]: 2024/11/12 17:42:21 processing appconfig overrides
Nov 12 17:42:21.406011 amazon-ssm-agent[2160]: 2024/11/12 17:42:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 17:42:21.406011 amazon-ssm-agent[2160]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 17:42:21.406011 amazon-ssm-agent[2160]: 2024/11/12 17:42:21 processing appconfig overrides
Nov 12 17:42:21.406011 amazon-ssm-agent[2160]: 2024/11/12 17:42:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 17:42:21.406011 amazon-ssm-agent[2160]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 17:42:21.406011 amazon-ssm-agent[2160]: 2024/11/12 17:42:21 processing appconfig overrides
Nov 12 17:42:21.406635 systemd-logind[2095]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 12 17:42:21.411233 amazon-ssm-agent[2160]: 2024-11-12 17:42:21 INFO Proxy environment variables:
Nov 12 17:42:21.406689 systemd-logind[2095]: Watching system buttons on /dev/input/event1 (Sleep Button)
Nov 12 17:42:21.411467 systemd-logind[2095]: New seat seat0.
Nov 12 17:42:21.424208 amazon-ssm-agent[2160]: 2024/11/12 17:42:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 17:42:21.424208 amazon-ssm-agent[2160]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 17:42:21.424208 amazon-ssm-agent[2160]: 2024/11/12 17:42:21 processing appconfig overrides
Nov 12 17:42:21.424834 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 12 17:42:21.451330 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 12 17:42:21.516985 amazon-ssm-agent[2160]: 2024-11-12 17:42:21 INFO https_proxy:
Nov 12 17:42:21.568038 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2191)
Nov 12 17:42:21.568672 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 12 17:42:21.616294 amazon-ssm-agent[2160]: 2024-11-12 17:42:21 INFO http_proxy:
Nov 12 17:42:21.716479 amazon-ssm-agent[2160]: 2024-11-12 17:42:21 INFO no_proxy:
Nov 12 17:42:21.819971 amazon-ssm-agent[2160]: 2024-11-12 17:42:21 INFO Checking if agent identity type OnPrem can be assumed
Nov 12 17:42:21.844725 coreos-metadata[2197]: Nov 12 17:42:21.844 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Nov 12 17:42:21.847076 coreos-metadata[2197]: Nov 12 17:42:21.846 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Nov 12 17:42:21.850183 coreos-metadata[2197]: Nov 12 17:42:21.847 INFO Fetch successful
Nov 12 17:42:21.850183 coreos-metadata[2197]: Nov 12 17:42:21.847 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Nov 12 17:42:21.851965 coreos-metadata[2197]: Nov 12 17:42:21.851 INFO Fetch successful
Nov 12 17:42:21.867154 unknown[2197]: wrote ssh authorized keys file for user: core
Nov 12 17:42:21.919230 dbus-daemon[2072]: [system] Successfully activated service 'org.freedesktop.hostname1'
Nov 12 17:42:21.929796 dbus-daemon[2072]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2162 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Nov 12 17:42:21.937421 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Nov 12 17:42:21.939938 amazon-ssm-agent[2160]: 2024-11-12 17:42:21 INFO Checking if agent identity type EC2 can be assumed
Nov 12 17:42:21.953240 systemd[1]: Starting polkit.service - Authorization Manager...
Nov 12 17:42:21.979917 locksmithd[2145]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 12 17:42:21.990658 polkitd[2256]: Started polkitd version 121
Nov 12 17:42:21.994504 update-ssh-keys[2239]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 17:42:22.000607 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 12 17:42:22.025252 systemd[1]: Finished sshkeys.service.
Nov 12 17:42:22.032282 amazon-ssm-agent[2160]: 2024-11-12 17:42:21 INFO Agent will take identity from EC2
Nov 12 17:42:22.040516 polkitd[2256]: Loading rules from directory /etc/polkit-1/rules.d
Nov 12 17:42:22.040629 polkitd[2256]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 12 17:42:22.047563 polkitd[2256]: Finished loading, compiling and executing 2 rules
Nov 12 17:42:22.048429 dbus-daemon[2072]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 12 17:42:22.049563 systemd[1]: Started polkit.service - Authorization Manager.
Nov 12 17:42:22.052253 polkitd[2256]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 12 17:42:22.117224 systemd-resolved[2019]: System hostname changed to 'ip-172-31-27-95'.
Nov 12 17:42:22.117228 systemd-hostnamed[2162]: Hostname set to (transient)
Nov 12 17:42:22.136981 amazon-ssm-agent[2160]: 2024-11-12 17:42:21 INFO [amazon-ssm-agent] using named pipe channel for IPC
Nov 12 17:42:22.181857 containerd[2129]: time="2024-11-12T17:42:22.181697457Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 12 17:42:22.233930 amazon-ssm-agent[2160]: 2024-11-12 17:42:21 INFO [amazon-ssm-agent] using named pipe channel for IPC
Nov 12 17:42:22.332931 amazon-ssm-agent[2160]: 2024-11-12 17:42:21 INFO [amazon-ssm-agent] using named pipe channel for IPC
Nov 12 17:42:22.401684 containerd[2129]: time="2024-11-12T17:42:22.400853938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 12 17:42:22.433542 containerd[2129]: time="2024-11-12T17:42:22.425832995Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 12 17:42:22.433542 containerd[2129]: time="2024-11-12T17:42:22.425909795Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 12 17:42:22.433542 containerd[2129]: time="2024-11-12T17:42:22.425973299Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 12 17:42:22.433542 containerd[2129]: time="2024-11-12T17:42:22.426284471Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 12 17:42:22.433542 containerd[2129]: time="2024-11-12T17:42:22.426320387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 12 17:42:22.433542 containerd[2129]: time="2024-11-12T17:42:22.426438623Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 17:42:22.433542 containerd[2129]: time="2024-11-12T17:42:22.426467879Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 12 17:42:22.433542 containerd[2129]: time="2024-11-12T17:42:22.426855611Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 17:42:22.433542 containerd[2129]: time="2024-11-12T17:42:22.426888767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 12 17:42:22.433542 containerd[2129]: time="2024-11-12T17:42:22.426918551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 17:42:22.433542 containerd[2129]: time="2024-11-12T17:42:22.426942959Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 12 17:42:22.434094 amazon-ssm-agent[2160]: 2024-11-12 17:42:21 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Nov 12 17:42:22.434636 containerd[2129]: time="2024-11-12T17:42:22.427349627Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 12 17:42:22.434636 containerd[2129]: time="2024-11-12T17:42:22.427763951Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 12 17:42:22.441374 containerd[2129]: time="2024-11-12T17:42:22.435144995Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 17:42:22.441374 containerd[2129]: time="2024-11-12T17:42:22.435198551Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 12 17:42:22.441374 containerd[2129]: time="2024-11-12T17:42:22.435410339Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 12 17:42:22.441374 containerd[2129]: time="2024-11-12T17:42:22.435542075Z" level=info msg="metadata content store policy set" policy=shared
Nov 12 17:42:22.450028 containerd[2129]: time="2024-11-12T17:42:22.448449383Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 12 17:42:22.450028 containerd[2129]: time="2024-11-12T17:42:22.448558787Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 12 17:42:22.450028 containerd[2129]: time="2024-11-12T17:42:22.448594991Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 12 17:42:22.450028 containerd[2129]: time="2024-11-12T17:42:22.448715411Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 12 17:42:22.450028 containerd[2129]: time="2024-11-12T17:42:22.448749203Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 12 17:42:22.450028 containerd[2129]: time="2024-11-12T17:42:22.449045591Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 12 17:42:22.450028 containerd[2129]: time="2024-11-12T17:42:22.449616047Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 12 17:42:22.450028 containerd[2129]: time="2024-11-12T17:42:22.449813639Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 12 17:42:22.450028 containerd[2129]: time="2024-11-12T17:42:22.449857991Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 12 17:42:22.450028 containerd[2129]: time="2024-11-12T17:42:22.449889023Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 12 17:42:22.450028 containerd[2129]: time="2024-11-12T17:42:22.449921099Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 12 17:42:22.466982 containerd[2129]: time="2024-11-12T17:42:22.463392803Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 12 17:42:22.466982 containerd[2129]: time="2024-11-12T17:42:22.463460807Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 12 17:42:22.466982 containerd[2129]: time="2024-11-12T17:42:22.463516739Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 12 17:42:22.466982 containerd[2129]: time="2024-11-12T17:42:22.463553939Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 12 17:42:22.466982 containerd[2129]: time="2024-11-12T17:42:22.463585163Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 12 17:42:22.466982 containerd[2129]: time="2024-11-12T17:42:22.463620707Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 12 17:42:22.466982 containerd[2129]: time="2024-11-12T17:42:22.463653875Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 12 17:42:22.466982 containerd[2129]: time="2024-11-12T17:42:22.463699511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 12 17:42:22.466982 containerd[2129]: time="2024-11-12T17:42:22.463733723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 12 17:42:22.466982 containerd[2129]: time="2024-11-12T17:42:22.463764263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 12 17:42:22.466982 containerd[2129]: time="2024-11-12T17:42:22.463796651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 12 17:42:22.466982 containerd[2129]: time="2024-11-12T17:42:22.463835939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 12 17:42:22.466982 containerd[2129]: time="2024-11-12T17:42:22.463874783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 12 17:42:22.466982 containerd[2129]: time="2024-11-12T17:42:22.463918487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 12 17:42:22.467684 containerd[2129]: time="2024-11-12T17:42:22.463972619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 12 17:42:22.467684 containerd[2129]: time="2024-11-12T17:42:22.464008739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 12 17:42:22.467684 containerd[2129]: time="2024-11-12T17:42:22.464045579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 12 17:42:22.467684 containerd[2129]: time="2024-11-12T17:42:22.464074943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 12 17:42:22.467684 containerd[2129]: time="2024-11-12T17:42:22.464106623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 12 17:42:22.467684 containerd[2129]: time="2024-11-12T17:42:22.464139023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 12 17:42:22.467684 containerd[2129]: time="2024-11-12T17:42:22.464175563Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 12 17:42:22.467684 containerd[2129]: time="2024-11-12T17:42:22.464222963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 12 17:42:22.467684 containerd[2129]: time="2024-11-12T17:42:22.464252651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 12 17:42:22.467684 containerd[2129]: time="2024-11-12T17:42:22.464279495Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 12 17:42:22.467684 containerd[2129]: time="2024-11-12T17:42:22.464402075Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 12 17:42:22.467684 containerd[2129]: time="2024-11-12T17:42:22.464441351Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 12 17:42:22.467684 containerd[2129]: time="2024-11-12T17:42:22.464471483Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 12 17:42:22.468346 containerd[2129]: time="2024-11-12T17:42:22.464501591Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 12 17:42:22.468346 containerd[2129]: time="2024-11-12T17:42:22.464525435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 12 17:42:22.468346 containerd[2129]: time="2024-11-12T17:42:22.464565443Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 12 17:42:22.468346 containerd[2129]: time="2024-11-12T17:42:22.464590007Z" level=info msg="NRI interface is disabled by configuration."
Nov 12 17:42:22.468346 containerd[2129]: time="2024-11-12T17:42:22.464615291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 12 17:42:22.493061 containerd[2129]: time="2024-11-12T17:42:22.482360423Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 17:42:22.493061 containerd[2129]: time="2024-11-12T17:42:22.482510279Z" level=info msg="Connect containerd service" Nov 12 17:42:22.493061 containerd[2129]: time="2024-11-12T17:42:22.482583683Z" level=info msg="using legacy CRI server" Nov 12 17:42:22.493061 containerd[2129]: time="2024-11-12T17:42:22.482603531Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 17:42:22.493061 containerd[2129]: time="2024-11-12T17:42:22.482769335Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 17:42:22.493061 containerd[2129]: time="2024-11-12T17:42:22.483837839Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 17:42:22.493061 containerd[2129]: time="2024-11-12T17:42:22.484162931Z" level=info msg="Start subscribing containerd event" Nov 12 17:42:22.510427 containerd[2129]: time="2024-11-12T17:42:22.507532511Z" level=info msg="Start recovering state" Nov 12 17:42:22.531253 containerd[2129]: time="2024-11-12T17:42:22.526366811Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Nov 12 17:42:22.531253 containerd[2129]: time="2024-11-12T17:42:22.526502927Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 17:42:22.531253 containerd[2129]: time="2024-11-12T17:42:22.526569059Z" level=info msg="Start event monitor" Nov 12 17:42:22.531253 containerd[2129]: time="2024-11-12T17:42:22.526597883Z" level=info msg="Start snapshots syncer" Nov 12 17:42:22.531253 containerd[2129]: time="2024-11-12T17:42:22.526620491Z" level=info msg="Start cni network conf syncer for default" Nov 12 17:42:22.531253 containerd[2129]: time="2024-11-12T17:42:22.526640039Z" level=info msg="Start streaming server" Nov 12 17:42:22.531253 containerd[2129]: time="2024-11-12T17:42:22.526774919Z" level=info msg="containerd successfully booted in 0.352439s" Nov 12 17:42:22.528170 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 17:42:22.536985 amazon-ssm-agent[2160]: 2024-11-12 17:42:21 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Nov 12 17:42:22.633968 amazon-ssm-agent[2160]: 2024-11-12 17:42:21 INFO [amazon-ssm-agent] Starting Core Agent Nov 12 17:42:22.734129 amazon-ssm-agent[2160]: 2024-11-12 17:42:21 INFO [amazon-ssm-agent] registrar detected. Attempting registration Nov 12 17:42:22.837149 amazon-ssm-agent[2160]: 2024-11-12 17:42:21 INFO [Registrar] Starting registrar module Nov 12 17:42:22.940046 amazon-ssm-agent[2160]: 2024-11-12 17:42:21 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Nov 12 17:42:23.371571 tar[2118]: linux-arm64/LICENSE Nov 12 17:42:23.376376 tar[2118]: linux-arm64/README.md Nov 12 17:42:23.420734 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 17:42:23.448190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 17:42:23.462523 (kubelet)[2344]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:42:24.143163 amazon-ssm-agent[2160]: 2024-11-12 17:42:24 INFO [EC2Identity] EC2 registration was successful. Nov 12 17:42:24.169253 sshd_keygen[2110]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 17:42:24.181000 amazon-ssm-agent[2160]: 2024-11-12 17:42:24 INFO [CredentialRefresher] credentialRefresher has started Nov 12 17:42:24.181000 amazon-ssm-agent[2160]: 2024-11-12 17:42:24 INFO [CredentialRefresher] Starting credentials refresher loop Nov 12 17:42:24.181000 amazon-ssm-agent[2160]: 2024-11-12 17:42:24 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 12 17:42:24.218023 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 17:42:24.231594 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 17:42:24.246233 amazon-ssm-agent[2160]: 2024-11-12 17:42:24 INFO [CredentialRefresher] Next credential rotation will be in 30.091659739566666 minutes Nov 12 17:42:24.256186 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 17:42:24.256730 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 17:42:24.270543 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 17:42:24.301682 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 17:42:24.315597 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 17:42:24.328796 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 12 17:42:24.334334 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 17:42:24.337941 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 17:42:24.340203 systemd[1]: Startup finished in 8.981s (kernel) + 9.759s (userspace) = 18.741s. 
Nov 12 17:42:24.726195 kubelet[2344]: E1112 17:42:24.726075 2344 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:42:24.730356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:42:24.730735 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 17:42:25.206629 amazon-ssm-agent[2160]: 2024-11-12 17:42:25 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 12 17:42:25.307053 amazon-ssm-agent[2160]: 2024-11-12 17:42:25 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2378) started Nov 12 17:42:25.407930 amazon-ssm-agent[2160]: 2024-11-12 17:42:25 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 12 17:42:28.066334 systemd-resolved[2019]: Clock change detected. Flushing caches. Nov 12 17:42:29.791768 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 17:42:29.801026 systemd[1]: Started sshd@0-172.31.27.95:22-139.178.89.65:49508.service - OpenSSH per-connection server daemon (139.178.89.65:49508). Nov 12 17:42:29.985211 sshd[2388]: Accepted publickey for core from 139.178.89.65 port 49508 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:42:29.988763 sshd[2388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:42:30.004563 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 17:42:30.015949 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Nov 12 17:42:30.022535 systemd-logind[2095]: New session 1 of user core. Nov 12 17:42:30.040987 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 17:42:30.054093 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 17:42:30.066064 (systemd)[2394]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 17:42:30.284583 systemd[2394]: Queued start job for default target default.target. Nov 12 17:42:30.285322 systemd[2394]: Created slice app.slice - User Application Slice. Nov 12 17:42:30.285377 systemd[2394]: Reached target paths.target - Paths. Nov 12 17:42:30.285409 systemd[2394]: Reached target timers.target - Timers. Nov 12 17:42:30.301676 systemd[2394]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 17:42:30.316448 systemd[2394]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 17:42:30.316597 systemd[2394]: Reached target sockets.target - Sockets. Nov 12 17:42:30.316631 systemd[2394]: Reached target basic.target - Basic System. Nov 12 17:42:30.316732 systemd[2394]: Reached target default.target - Main User Target. Nov 12 17:42:30.316801 systemd[2394]: Startup finished in 239ms. Nov 12 17:42:30.316874 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 17:42:30.324100 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 17:42:30.473254 systemd[1]: Started sshd@1-172.31.27.95:22-139.178.89.65:49524.service - OpenSSH per-connection server daemon (139.178.89.65:49524). Nov 12 17:42:30.640090 sshd[2406]: Accepted publickey for core from 139.178.89.65 port 49524 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:42:30.642873 sshd[2406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:42:30.651975 systemd-logind[2095]: New session 2 of user core. Nov 12 17:42:30.658185 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 12 17:42:30.786060 sshd[2406]: pam_unix(sshd:session): session closed for user core Nov 12 17:42:30.793064 systemd[1]: sshd@1-172.31.27.95:22-139.178.89.65:49524.service: Deactivated successfully. Nov 12 17:42:30.799294 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 17:42:30.801362 systemd-logind[2095]: Session 2 logged out. Waiting for processes to exit. Nov 12 17:42:30.803066 systemd-logind[2095]: Removed session 2. Nov 12 17:42:30.821033 systemd[1]: Started sshd@2-172.31.27.95:22-139.178.89.65:49528.service - OpenSSH per-connection server daemon (139.178.89.65:49528). Nov 12 17:42:30.984734 sshd[2414]: Accepted publickey for core from 139.178.89.65 port 49528 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:42:30.987652 sshd[2414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:42:30.995321 systemd-logind[2095]: New session 3 of user core. Nov 12 17:42:31.007998 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 17:42:31.128807 sshd[2414]: pam_unix(sshd:session): session closed for user core Nov 12 17:42:31.135912 systemd-logind[2095]: Session 3 logged out. Waiting for processes to exit. Nov 12 17:42:31.136295 systemd[1]: sshd@2-172.31.27.95:22-139.178.89.65:49528.service: Deactivated successfully. Nov 12 17:42:31.141761 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 17:42:31.143910 systemd-logind[2095]: Removed session 3. Nov 12 17:42:31.160007 systemd[1]: Started sshd@3-172.31.27.95:22-139.178.89.65:49532.service - OpenSSH per-connection server daemon (139.178.89.65:49532). Nov 12 17:42:31.330745 sshd[2422]: Accepted publickey for core from 139.178.89.65 port 49532 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:42:31.332686 sshd[2422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:42:31.341305 systemd-logind[2095]: New session 4 of user core. 
Nov 12 17:42:31.348033 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 17:42:31.476157 sshd[2422]: pam_unix(sshd:session): session closed for user core Nov 12 17:42:31.482680 systemd[1]: sshd@3-172.31.27.95:22-139.178.89.65:49532.service: Deactivated successfully. Nov 12 17:42:31.482944 systemd-logind[2095]: Session 4 logged out. Waiting for processes to exit. Nov 12 17:42:31.488455 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 17:42:31.490295 systemd-logind[2095]: Removed session 4. Nov 12 17:42:31.509013 systemd[1]: Started sshd@4-172.31.27.95:22-139.178.89.65:49536.service - OpenSSH per-connection server daemon (139.178.89.65:49536). Nov 12 17:42:31.671477 sshd[2430]: Accepted publickey for core from 139.178.89.65 port 49536 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:42:31.673564 sshd[2430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:42:31.682446 systemd-logind[2095]: New session 5 of user core. Nov 12 17:42:31.687058 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 17:42:31.803876 sudo[2434]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 17:42:31.804487 sudo[2434]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:42:31.821625 sudo[2434]: pam_unix(sudo:session): session closed for user root Nov 12 17:42:31.844489 sshd[2430]: pam_unix(sshd:session): session closed for user core Nov 12 17:42:31.852709 systemd[1]: sshd@4-172.31.27.95:22-139.178.89.65:49536.service: Deactivated successfully. Nov 12 17:42:31.857939 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 17:42:31.859703 systemd-logind[2095]: Session 5 logged out. Waiting for processes to exit. Nov 12 17:42:31.861373 systemd-logind[2095]: Removed session 5. 
Nov 12 17:42:31.875100 systemd[1]: Started sshd@5-172.31.27.95:22-139.178.89.65:49546.service - OpenSSH per-connection server daemon (139.178.89.65:49546). Nov 12 17:42:32.053087 sshd[2439]: Accepted publickey for core from 139.178.89.65 port 49546 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:42:32.056083 sshd[2439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:42:32.065793 systemd-logind[2095]: New session 6 of user core. Nov 12 17:42:32.071164 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 17:42:32.179844 sudo[2444]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 17:42:32.181142 sudo[2444]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:42:32.187466 sudo[2444]: pam_unix(sudo:session): session closed for user root Nov 12 17:42:32.197472 sudo[2443]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 17:42:32.198195 sudo[2443]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:42:32.220060 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 17:42:32.238389 auditctl[2447]: No rules Nov 12 17:42:32.239256 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 17:42:32.239817 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 17:42:32.255471 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 17:42:32.295350 augenrules[2466]: No rules Nov 12 17:42:32.298844 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 17:42:32.302086 sudo[2443]: pam_unix(sudo:session): session closed for user root Nov 12 17:42:32.326065 sshd[2439]: pam_unix(sshd:session): session closed for user core Nov 12 17:42:32.332047 systemd-logind[2095]: Session 6 logged out. 
Waiting for processes to exit. Nov 12 17:42:32.334778 systemd[1]: sshd@5-172.31.27.95:22-139.178.89.65:49546.service: Deactivated successfully. Nov 12 17:42:32.338973 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 17:42:32.340821 systemd-logind[2095]: Removed session 6. Nov 12 17:42:32.356031 systemd[1]: Started sshd@6-172.31.27.95:22-139.178.89.65:49550.service - OpenSSH per-connection server daemon (139.178.89.65:49550). Nov 12 17:42:32.536004 sshd[2475]: Accepted publickey for core from 139.178.89.65 port 49550 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:42:32.538491 sshd[2475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:42:32.545890 systemd-logind[2095]: New session 7 of user core. Nov 12 17:42:32.557145 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 17:42:32.665302 sudo[2479]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 17:42:32.666509 sudo[2479]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:42:33.094338 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 17:42:33.094847 (dockerd)[2495]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 17:42:33.445760 dockerd[2495]: time="2024-11-12T17:42:33.445574822Z" level=info msg="Starting up" Nov 12 17:42:33.797444 systemd[1]: var-lib-docker-metacopy\x2dcheck2345920498-merged.mount: Deactivated successfully. Nov 12 17:42:33.814931 dockerd[2495]: time="2024-11-12T17:42:33.814878988Z" level=info msg="Loading containers: start." Nov 12 17:42:33.965548 kernel: Initializing XFRM netlink socket Nov 12 17:42:33.999619 (udev-worker)[2517]: Network interface NamePolicy= disabled on kernel command line. 
Nov 12 17:42:34.094156 systemd-networkd[1687]: docker0: Link UP Nov 12 17:42:34.117121 dockerd[2495]: time="2024-11-12T17:42:34.116870366Z" level=info msg="Loading containers: done." Nov 12 17:42:34.140028 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3299380280-merged.mount: Deactivated successfully. Nov 12 17:42:34.144250 dockerd[2495]: time="2024-11-12T17:42:34.144173810Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 17:42:34.144409 dockerd[2495]: time="2024-11-12T17:42:34.144330938Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 17:42:34.144657 dockerd[2495]: time="2024-11-12T17:42:34.144619838Z" level=info msg="Daemon has completed initialization" Nov 12 17:42:34.204634 dockerd[2495]: time="2024-11-12T17:42:34.201485942Z" level=info msg="API listen on /run/docker.sock" Nov 12 17:42:34.204025 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 17:42:35.325931 containerd[2129]: time="2024-11-12T17:42:35.325858852Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 17:42:35.327679 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 17:42:35.336463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:42:35.649837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 17:42:35.666156 (kubelet)[2654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:42:35.753901 kubelet[2654]: E1112 17:42:35.753712 2654 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:42:35.763456 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:42:35.763895 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 17:42:36.017177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2824987820.mount: Deactivated successfully. Nov 12 17:42:37.754083 containerd[2129]: time="2024-11-12T17:42:37.753857192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:37.756101 containerd[2129]: time="2024-11-12T17:42:37.756034688Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=32201615" Nov 12 17:42:37.757025 containerd[2129]: time="2024-11-12T17:42:37.756484076Z" level=info msg="ImageCreate event name:\"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:37.762638 containerd[2129]: time="2024-11-12T17:42:37.762548096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:37.765269 containerd[2129]: time="2024-11-12T17:42:37.764942288Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id 
\"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"32198415\" in 2.439012744s" Nov 12 17:42:37.765269 containerd[2129]: time="2024-11-12T17:42:37.765020672Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\"" Nov 12 17:42:37.803879 containerd[2129]: time="2024-11-12T17:42:37.803809484Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 17:42:40.061586 containerd[2129]: time="2024-11-12T17:42:40.060993391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:40.063243 containerd[2129]: time="2024-11-12T17:42:40.063174811Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=29381044" Nov 12 17:42:40.064471 containerd[2129]: time="2024-11-12T17:42:40.064396003Z" level=info msg="ImageCreate event name:\"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:40.070593 containerd[2129]: time="2024-11-12T17:42:40.070488523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:40.072900 containerd[2129]: time="2024-11-12T17:42:40.072720943Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo 
digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"30783669\" in 2.268842195s" Nov 12 17:42:40.072900 containerd[2129]: time="2024-11-12T17:42:40.072779839Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\"" Nov 12 17:42:40.117196 containerd[2129]: time="2024-11-12T17:42:40.117130615Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 17:42:41.576870 containerd[2129]: time="2024-11-12T17:42:41.576792239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:41.579973 containerd[2129]: time="2024-11-12T17:42:41.579899639Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=15770288" Nov 12 17:42:41.581212 containerd[2129]: time="2024-11-12T17:42:41.581141507Z" level=info msg="ImageCreate event name:\"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:41.586866 containerd[2129]: time="2024-11-12T17:42:41.586808639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:41.589819 containerd[2129]: time="2024-11-12T17:42:41.589175495Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"17172931\" in 1.471978412s" Nov 12 17:42:41.589819 
containerd[2129]: time="2024-11-12T17:42:41.589239395Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\"" Nov 12 17:42:41.626378 containerd[2129]: time="2024-11-12T17:42:41.626257655Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 12 17:42:42.968555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1576166810.mount: Deactivated successfully. Nov 12 17:42:43.475712 containerd[2129]: time="2024-11-12T17:42:43.475636032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:43.477164 containerd[2129]: time="2024-11-12T17:42:43.477090312Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=25272229" Nov 12 17:42:43.478253 containerd[2129]: time="2024-11-12T17:42:43.478183572Z" level=info msg="ImageCreate event name:\"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:43.481733 containerd[2129]: time="2024-11-12T17:42:43.481666776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:43.484055 containerd[2129]: time="2024-11-12T17:42:43.483369792Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"25271248\" in 1.856784165s" Nov 12 17:42:43.484055 containerd[2129]: time="2024-11-12T17:42:43.483427608Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\"" Nov 12 17:42:43.519657 containerd[2129]: time="2024-11-12T17:42:43.519588876Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 17:42:44.084079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2840217199.mount: Deactivated successfully. Nov 12 17:42:45.463860 containerd[2129]: time="2024-11-12T17:42:45.462716702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:45.492963 containerd[2129]: time="2024-11-12T17:42:45.492895634Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Nov 12 17:42:45.525810 containerd[2129]: time="2024-11-12T17:42:45.525726302Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:45.568803 containerd[2129]: time="2024-11-12T17:42:45.568699850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:45.571162 containerd[2129]: time="2024-11-12T17:42:45.571109558Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.051456518s" Nov 12 17:42:45.571463 containerd[2129]: time="2024-11-12T17:42:45.571320842Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image 
reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Nov 12 17:42:45.609152 containerd[2129]: time="2024-11-12T17:42:45.609022107Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 17:42:46.014280 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 17:42:46.026843 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:42:46.867825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:42:46.885118 (kubelet)[2811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:42:46.926689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount935450056.mount: Deactivated successfully. Nov 12 17:42:46.936176 containerd[2129]: time="2024-11-12T17:42:46.935644205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:46.937084 containerd[2129]: time="2024-11-12T17:42:46.937021229Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Nov 12 17:42:46.937766 containerd[2129]: time="2024-11-12T17:42:46.937705097Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:46.946346 containerd[2129]: time="2024-11-12T17:42:46.946227125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:46.948849 containerd[2129]: time="2024-11-12T17:42:46.948609917Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag 
\"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 1.339507398s" Nov 12 17:42:46.948849 containerd[2129]: time="2024-11-12T17:42:46.948668861Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Nov 12 17:42:46.995553 containerd[2129]: time="2024-11-12T17:42:46.995478222Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 12 17:42:47.000833 kubelet[2811]: E1112 17:42:47.000769 2811 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:42:47.007109 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:42:47.007842 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 17:42:47.526647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2559541574.mount: Deactivated successfully. 
Nov 12 17:42:50.760416 containerd[2129]: time="2024-11-12T17:42:50.758886032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:50.761143 containerd[2129]: time="2024-11-12T17:42:50.761090564Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Nov 12 17:42:50.761787 containerd[2129]: time="2024-11-12T17:42:50.761735216Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:50.767957 containerd[2129]: time="2024-11-12T17:42:50.767874836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:50.770460 containerd[2129]: time="2024-11-12T17:42:50.770407976Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.77485515s" Nov 12 17:42:50.770733 containerd[2129]: time="2024-11-12T17:42:50.770601872Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Nov 12 17:42:52.479233 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 12 17:42:57.116311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 12 17:42:57.126943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:42:57.449954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 17:42:57.455618 (kubelet)[2948]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:42:57.551239 kubelet[2948]: E1112 17:42:57.551147 2948 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:42:57.556845 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:42:57.557246 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 17:42:57.724131 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:42:57.736987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:42:57.780652 systemd[1]: Reloading requested from client PID 2964 ('systemctl') (unit session-7.scope)... Nov 12 17:42:57.780883 systemd[1]: Reloading... Nov 12 17:42:57.977564 zram_generator::config[3007]: No configuration found. Nov 12 17:42:58.250927 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 17:42:58.409989 systemd[1]: Reloading finished in 628 ms. Nov 12 17:42:58.499636 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 17:42:58.499909 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 17:42:58.500506 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:42:58.508871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:42:58.789972 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 17:42:58.807205 (kubelet)[3079]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 17:42:58.892258 kubelet[3079]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 17:42:58.892258 kubelet[3079]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 17:42:58.892258 kubelet[3079]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 17:42:58.894082 kubelet[3079]: I1112 17:42:58.893990 3079 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 17:42:59.690090 kubelet[3079]: I1112 17:42:59.690028 3079 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 17:42:59.690090 kubelet[3079]: I1112 17:42:59.690092 3079 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 17:42:59.690471 kubelet[3079]: I1112 17:42:59.690433 3079 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 17:42:59.719736 kubelet[3079]: I1112 17:42:59.718961 3079 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 17:42:59.720195 kubelet[3079]: E1112 17:42:59.720167 3079 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://172.31.27.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.27.95:6443: connect: connection refused Nov 12 17:42:59.733928 kubelet[3079]: I1112 17:42:59.733885 3079 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 17:42:59.736773 kubelet[3079]: I1112 17:42:59.736715 3079 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 17:42:59.737098 kubelet[3079]: I1112 17:42:59.737052 3079 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 17:42:59.737266 
kubelet[3079]: I1112 17:42:59.737103 3079 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 17:42:59.737266 kubelet[3079]: I1112 17:42:59.737125 3079 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 17:42:59.737393 kubelet[3079]: I1112 17:42:59.737331 3079 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:42:59.743378 kubelet[3079]: I1112 17:42:59.743325 3079 kubelet.go:396] "Attempting to sync node with API server" Nov 12 17:42:59.743378 kubelet[3079]: I1112 17:42:59.743379 3079 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 17:42:59.745475 kubelet[3079]: I1112 17:42:59.743422 3079 kubelet.go:312] "Adding apiserver pod source" Nov 12 17:42:59.745475 kubelet[3079]: I1112 17:42:59.743455 3079 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 17:42:59.745475 kubelet[3079]: W1112 17:42:59.744139 3079 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.27.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-95&limit=500&resourceVersion=0": dial tcp 172.31.27.95:6443: connect: connection refused Nov 12 17:42:59.745475 kubelet[3079]: E1112 17:42:59.744242 3079 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-95&limit=500&resourceVersion=0": dial tcp 172.31.27.95:6443: connect: connection refused Nov 12 17:42:59.750222 kubelet[3079]: I1112 17:42:59.750185 3079 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 17:42:59.750931 kubelet[3079]: I1112 17:42:59.750902 3079 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 17:42:59.751133 kubelet[3079]: W1112 17:42:59.751113 3079 probe.go:268] 
Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 12 17:42:59.753558 kubelet[3079]: I1112 17:42:59.753490 3079 server.go:1256] "Started kubelet" Nov 12 17:42:59.760461 kubelet[3079]: W1112 17:42:59.760368 3079 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.27.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.95:6443: connect: connection refused Nov 12 17:42:59.760461 kubelet[3079]: E1112 17:42:59.760476 3079 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.27.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.95:6443: connect: connection refused Nov 12 17:42:59.765784 kubelet[3079]: I1112 17:42:59.765746 3079 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 17:42:59.768717 kubelet[3079]: E1112 17:42:59.768664 3079 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.95:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.95:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-95.1807497c407523b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-95,UID:ip-172-31-27-95,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-95,},FirstTimestamp:2024-11-12 17:42:59.753436085 +0000 UTC m=+0.938582358,LastTimestamp:2024-11-12 17:42:59.753436085 +0000 UTC m=+0.938582358,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-95,}" Nov 12 17:42:59.773301 kubelet[3079]: I1112 17:42:59.773242 3079 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 
12 17:42:59.774730 kubelet[3079]: I1112 17:42:59.774674 3079 server.go:461] "Adding debug handlers to kubelet server" Nov 12 17:42:59.776681 kubelet[3079]: I1112 17:42:59.776472 3079 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 17:42:59.776905 kubelet[3079]: I1112 17:42:59.776868 3079 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 17:42:59.777255 kubelet[3079]: I1112 17:42:59.777214 3079 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 17:42:59.779817 kubelet[3079]: I1112 17:42:59.779220 3079 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 17:42:59.779817 kubelet[3079]: I1112 17:42:59.779344 3079 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 17:42:59.781014 kubelet[3079]: W1112 17:42:59.780941 3079 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.27.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.95:6443: connect: connection refused Nov 12 17:42:59.781221 kubelet[3079]: E1112 17:42:59.781198 3079 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.27.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.95:6443: connect: connection refused Nov 12 17:42:59.781489 kubelet[3079]: E1112 17:42:59.781463 3079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-95?timeout=10s\": dial tcp 172.31.27.95:6443: connect: connection refused" interval="200ms" Nov 12 17:42:59.782254 kubelet[3079]: I1112 17:42:59.782222 3079 factory.go:221] Registration of the systemd container factory successfully Nov 12 
17:42:59.782596 kubelet[3079]: I1112 17:42:59.782565 3079 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 17:42:59.785189 kubelet[3079]: I1112 17:42:59.785134 3079 factory.go:221] Registration of the containerd container factory successfully Nov 12 17:42:59.797146 kubelet[3079]: E1112 17:42:59.797095 3079 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 17:42:59.815439 kubelet[3079]: I1112 17:42:59.815378 3079 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 17:42:59.817651 kubelet[3079]: I1112 17:42:59.817602 3079 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 17:42:59.817651 kubelet[3079]: I1112 17:42:59.817647 3079 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 17:42:59.817851 kubelet[3079]: I1112 17:42:59.817678 3079 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 17:42:59.817851 kubelet[3079]: E1112 17:42:59.817752 3079 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 17:42:59.827090 kubelet[3079]: W1112 17:42:59.827043 3079 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.27.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.95:6443: connect: connection refused Nov 12 17:42:59.827302 kubelet[3079]: E1112 17:42:59.827280 3079 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.27.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
172.31.27.95:6443: connect: connection refused Nov 12 17:42:59.828925 kubelet[3079]: I1112 17:42:59.828891 3079 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 17:42:59.829108 kubelet[3079]: I1112 17:42:59.829088 3079 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 17:42:59.829232 kubelet[3079]: I1112 17:42:59.829214 3079 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:42:59.832254 kubelet[3079]: I1112 17:42:59.832218 3079 policy_none.go:49] "None policy: Start" Nov 12 17:42:59.833463 kubelet[3079]: I1112 17:42:59.833435 3079 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 17:42:59.833662 kubelet[3079]: I1112 17:42:59.833642 3079 state_mem.go:35] "Initializing new in-memory state store" Nov 12 17:42:59.841980 kubelet[3079]: I1112 17:42:59.841940 3079 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 17:42:59.843567 kubelet[3079]: I1112 17:42:59.842511 3079 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 17:42:59.853601 kubelet[3079]: E1112 17:42:59.853556 3079 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-95\" not found" Nov 12 17:42:59.880916 kubelet[3079]: I1112 17:42:59.880873 3079 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-95" Nov 12 17:42:59.881700 kubelet[3079]: E1112 17:42:59.881674 3079 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.95:6443/api/v1/nodes\": dial tcp 172.31.27.95:6443: connect: connection refused" node="ip-172-31-27-95" Nov 12 17:42:59.917944 kubelet[3079]: I1112 17:42:59.917897 3079 topology_manager.go:215] "Topology Admit Handler" podUID="21ee118e8b8eb730a0051efaeda1593f" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-27-95" Nov 12 17:42:59.920442 kubelet[3079]: I1112 17:42:59.920286 3079 
topology_manager.go:215] "Topology Admit Handler" podUID="f530fb823565ee5e4b05a00ac7a18235" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-27-95" Nov 12 17:42:59.924563 kubelet[3079]: I1112 17:42:59.922456 3079 topology_manager.go:215] "Topology Admit Handler" podUID="78b0e1c023a5277cd70064717c43cba9" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-27-95" Nov 12 17:42:59.982937 kubelet[3079]: E1112 17:42:59.982793 3079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-95?timeout=10s\": dial tcp 172.31.27.95:6443: connect: connection refused" interval="400ms" Nov 12 17:43:00.080358 kubelet[3079]: I1112 17:43:00.080290 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f530fb823565ee5e4b05a00ac7a18235-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-95\" (UID: \"f530fb823565ee5e4b05a00ac7a18235\") " pod="kube-system/kube-controller-manager-ip-172-31-27-95" Nov 12 17:43:00.080472 kubelet[3079]: I1112 17:43:00.080389 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78b0e1c023a5277cd70064717c43cba9-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-95\" (UID: \"78b0e1c023a5277cd70064717c43cba9\") " pod="kube-system/kube-scheduler-ip-172-31-27-95" Nov 12 17:43:00.080472 kubelet[3079]: I1112 17:43:00.080439 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21ee118e8b8eb730a0051efaeda1593f-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-95\" (UID: \"21ee118e8b8eb730a0051efaeda1593f\") " pod="kube-system/kube-apiserver-ip-172-31-27-95" Nov 12 17:43:00.080639 kubelet[3079]: I1112 17:43:00.080484 3079 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f530fb823565ee5e4b05a00ac7a18235-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-95\" (UID: \"f530fb823565ee5e4b05a00ac7a18235\") " pod="kube-system/kube-controller-manager-ip-172-31-27-95" Nov 12 17:43:00.080639 kubelet[3079]: I1112 17:43:00.080556 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f530fb823565ee5e4b05a00ac7a18235-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-95\" (UID: \"f530fb823565ee5e4b05a00ac7a18235\") " pod="kube-system/kube-controller-manager-ip-172-31-27-95" Nov 12 17:43:00.080639 kubelet[3079]: I1112 17:43:00.080606 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f530fb823565ee5e4b05a00ac7a18235-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-95\" (UID: \"f530fb823565ee5e4b05a00ac7a18235\") " pod="kube-system/kube-controller-manager-ip-172-31-27-95" Nov 12 17:43:00.080794 kubelet[3079]: I1112 17:43:00.080657 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f530fb823565ee5e4b05a00ac7a18235-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-95\" (UID: \"f530fb823565ee5e4b05a00ac7a18235\") " pod="kube-system/kube-controller-manager-ip-172-31-27-95" Nov 12 17:43:00.080794 kubelet[3079]: I1112 17:43:00.080702 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21ee118e8b8eb730a0051efaeda1593f-ca-certs\") pod \"kube-apiserver-ip-172-31-27-95\" (UID: \"21ee118e8b8eb730a0051efaeda1593f\") " 
pod="kube-system/kube-apiserver-ip-172-31-27-95" Nov 12 17:43:00.080794 kubelet[3079]: I1112 17:43:00.080746 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21ee118e8b8eb730a0051efaeda1593f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-95\" (UID: \"21ee118e8b8eb730a0051efaeda1593f\") " pod="kube-system/kube-apiserver-ip-172-31-27-95" Nov 12 17:43:00.084366 kubelet[3079]: I1112 17:43:00.084300 3079 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-95" Nov 12 17:43:00.084870 kubelet[3079]: E1112 17:43:00.084837 3079 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.95:6443/api/v1/nodes\": dial tcp 172.31.27.95:6443: connect: connection refused" node="ip-172-31-27-95" Nov 12 17:43:00.230669 containerd[2129]: time="2024-11-12T17:43:00.230606907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-95,Uid:21ee118e8b8eb730a0051efaeda1593f,Namespace:kube-system,Attempt:0,}" Nov 12 17:43:00.237222 containerd[2129]: time="2024-11-12T17:43:00.236986599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-95,Uid:f530fb823565ee5e4b05a00ac7a18235,Namespace:kube-system,Attempt:0,}" Nov 12 17:43:00.248162 containerd[2129]: time="2024-11-12T17:43:00.248096475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-95,Uid:78b0e1c023a5277cd70064717c43cba9,Namespace:kube-system,Attempt:0,}" Nov 12 17:43:00.383903 kubelet[3079]: E1112 17:43:00.383854 3079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-95?timeout=10s\": dial tcp 172.31.27.95:6443: connect: connection refused" interval="800ms" Nov 12 17:43:00.487877 kubelet[3079]: I1112 
17:43:00.487278 3079 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-95" Nov 12 17:43:00.487877 kubelet[3079]: E1112 17:43:00.487779 3079 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.95:6443/api/v1/nodes\": dial tcp 172.31.27.95:6443: connect: connection refused" node="ip-172-31-27-95" Nov 12 17:43:00.705316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount532493450.mount: Deactivated successfully. Nov 12 17:43:00.712152 containerd[2129]: time="2024-11-12T17:43:00.712076994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 17:43:00.713950 containerd[2129]: time="2024-11-12T17:43:00.713870022Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 17:43:00.715797 containerd[2129]: time="2024-11-12T17:43:00.715729554Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 17:43:00.715906 containerd[2129]: time="2024-11-12T17:43:00.715800690Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Nov 12 17:43:00.717424 containerd[2129]: time="2024-11-12T17:43:00.716987958Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 17:43:00.718905 containerd[2129]: time="2024-11-12T17:43:00.718733898Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 17:43:00.719745 containerd[2129]: time="2024-11-12T17:43:00.719300298Z" level=info msg="ImageUpdate event 
name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 17:43:00.726825 containerd[2129]: time="2024-11-12T17:43:00.726728034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 17:43:00.728929 containerd[2129]: time="2024-11-12T17:43:00.728572950Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 480.368259ms" Nov 12 17:43:00.732944 containerd[2129]: time="2024-11-12T17:43:00.732819426Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 502.098435ms" Nov 12 17:43:00.741059 containerd[2129]: time="2024-11-12T17:43:00.740039730Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 502.941459ms" Nov 12 17:43:00.807280 kubelet[3079]: W1112 17:43:00.807152 3079 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get 
"https://172.31.27.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-95&limit=500&resourceVersion=0": dial tcp 172.31.27.95:6443: connect: connection refused Nov 12 17:43:00.807280 kubelet[3079]: E1112 17:43:00.807247 3079 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-95&limit=500&resourceVersion=0": dial tcp 172.31.27.95:6443: connect: connection refused Nov 12 17:43:00.921165 kubelet[3079]: W1112 17:43:00.921011 3079 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.27.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.95:6443: connect: connection refused Nov 12 17:43:00.921165 kubelet[3079]: E1112 17:43:00.921099 3079 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.27.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.95:6443: connect: connection refused Nov 12 17:43:00.933547 containerd[2129]: time="2024-11-12T17:43:00.933275923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:43:00.934611 containerd[2129]: time="2024-11-12T17:43:00.933860791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:43:00.935464 containerd[2129]: time="2024-11-12T17:43:00.934581823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:00.935464 containerd[2129]: time="2024-11-12T17:43:00.934904239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:00.946110 containerd[2129]: time="2024-11-12T17:43:00.945704551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:43:00.946332 containerd[2129]: time="2024-11-12T17:43:00.946153195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:43:00.946332 containerd[2129]: time="2024-11-12T17:43:00.946238539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:00.947227 containerd[2129]: time="2024-11-12T17:43:00.946684747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:00.947329 containerd[2129]: time="2024-11-12T17:43:00.946890187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:43:00.947329 containerd[2129]: time="2024-11-12T17:43:00.947049319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:43:00.947329 containerd[2129]: time="2024-11-12T17:43:00.947077087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:00.949502 containerd[2129]: time="2024-11-12T17:43:00.947699971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:01.069303 containerd[2129]: time="2024-11-12T17:43:01.069155415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-95,Uid:f530fb823565ee5e4b05a00ac7a18235,Namespace:kube-system,Attempt:0,} returns sandbox id \"225f362e1e3fb9b5f9f2be29168d55a5ee26110923a22db24fa594c2c10d9a06\"" Nov 12 17:43:01.092415 containerd[2129]: time="2024-11-12T17:43:01.092226688Z" level=info msg="CreateContainer within sandbox \"225f362e1e3fb9b5f9f2be29168d55a5ee26110923a22db24fa594c2c10d9a06\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 17:43:01.125654 containerd[2129]: time="2024-11-12T17:43:01.125001796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-95,Uid:78b0e1c023a5277cd70064717c43cba9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c55232f7db04b405181c34eecd0603955d0de01b7a20512b25a2a15992e91e17\"" Nov 12 17:43:01.131041 containerd[2129]: time="2024-11-12T17:43:01.130296664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-95,Uid:21ee118e8b8eb730a0051efaeda1593f,Namespace:kube-system,Attempt:0,} returns sandbox id \"024101da2a253c3d3de7fcc1c5bbba0041d9b7d545e6316e72e52407f724bde6\"" Nov 12 17:43:01.131041 containerd[2129]: time="2024-11-12T17:43:01.130864588Z" level=info msg="CreateContainer within sandbox \"225f362e1e3fb9b5f9f2be29168d55a5ee26110923a22db24fa594c2c10d9a06\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d6f133b728144de30f580fe584ce7651f0ed2260029132e9b6c6f1512c25a44f\"" Nov 12 17:43:01.133052 containerd[2129]: time="2024-11-12T17:43:01.132976180Z" level=info msg="StartContainer for \"d6f133b728144de30f580fe584ce7651f0ed2260029132e9b6c6f1512c25a44f\"" Nov 12 17:43:01.134046 containerd[2129]: time="2024-11-12T17:43:01.133389400Z" level=info msg="CreateContainer within sandbox 
\"c55232f7db04b405181c34eecd0603955d0de01b7a20512b25a2a15992e91e17\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 17:43:01.145562 kubelet[3079]: W1112 17:43:01.145383 3079 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.27.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.95:6443: connect: connection refused Nov 12 17:43:01.145562 kubelet[3079]: E1112 17:43:01.145496 3079 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.27.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.95:6443: connect: connection refused Nov 12 17:43:01.160888 containerd[2129]: time="2024-11-12T17:43:01.160826836Z" level=info msg="CreateContainer within sandbox \"024101da2a253c3d3de7fcc1c5bbba0041d9b7d545e6316e72e52407f724bde6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 17:43:01.165175 containerd[2129]: time="2024-11-12T17:43:01.164707420Z" level=info msg="CreateContainer within sandbox \"c55232f7db04b405181c34eecd0603955d0de01b7a20512b25a2a15992e91e17\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e3522c5d908593e4ff75b835d279b9047755c92b97211b70248273e4259a562a\"" Nov 12 17:43:01.165704 containerd[2129]: time="2024-11-12T17:43:01.165656512Z" level=info msg="StartContainer for \"e3522c5d908593e4ff75b835d279b9047755c92b97211b70248273e4259a562a\"" Nov 12 17:43:01.186999 kubelet[3079]: E1112 17:43:01.186891 3079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-95?timeout=10s\": dial tcp 172.31.27.95:6443: connect: connection refused" interval="1.6s" Nov 12 17:43:01.196227 containerd[2129]: time="2024-11-12T17:43:01.196015840Z" level=info msg="CreateContainer within sandbox 
\"024101da2a253c3d3de7fcc1c5bbba0041d9b7d545e6316e72e52407f724bde6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"64d724af050a4ea1b7ab604626088483e17f77d819d8c29754eb5322b2cd6c2c\"" Nov 12 17:43:01.199948 containerd[2129]: time="2024-11-12T17:43:01.199443652Z" level=info msg="StartContainer for \"64d724af050a4ea1b7ab604626088483e17f77d819d8c29754eb5322b2cd6c2c\"" Nov 12 17:43:01.298291 kubelet[3079]: I1112 17:43:01.297350 3079 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-95" Nov 12 17:43:01.298291 kubelet[3079]: E1112 17:43:01.298230 3079 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.95:6443/api/v1/nodes\": dial tcp 172.31.27.95:6443: connect: connection refused" node="ip-172-31-27-95" Nov 12 17:43:01.313659 kubelet[3079]: W1112 17:43:01.313453 3079 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.27.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.95:6443: connect: connection refused Nov 12 17:43:01.314331 kubelet[3079]: E1112 17:43:01.314299 3079 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.27.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.95:6443: connect: connection refused Nov 12 17:43:01.349937 containerd[2129]: time="2024-11-12T17:43:01.349369265Z" level=info msg="StartContainer for \"d6f133b728144de30f580fe584ce7651f0ed2260029132e9b6c6f1512c25a44f\" returns successfully" Nov 12 17:43:01.406022 containerd[2129]: time="2024-11-12T17:43:01.405353105Z" level=info msg="StartContainer for \"e3522c5d908593e4ff75b835d279b9047755c92b97211b70248273e4259a562a\" returns successfully" Nov 12 17:43:01.479562 containerd[2129]: time="2024-11-12T17:43:01.478925945Z" level=info 
msg="StartContainer for \"64d724af050a4ea1b7ab604626088483e17f77d819d8c29754eb5322b2cd6c2c\" returns successfully" Nov 12 17:43:02.904306 kubelet[3079]: I1112 17:43:02.904259 3079 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-95" Nov 12 17:43:05.303911 kubelet[3079]: E1112 17:43:05.303854 3079 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-27-95\" not found" node="ip-172-31-27-95" Nov 12 17:43:05.397996 kubelet[3079]: I1112 17:43:05.397775 3079 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-27-95" Nov 12 17:43:05.749690 kubelet[3079]: I1112 17:43:05.749627 3079 apiserver.go:52] "Watching apiserver" Nov 12 17:43:05.779459 kubelet[3079]: I1112 17:43:05.779414 3079 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 17:43:06.955056 update_engine[2099]: I20241112 17:43:06.954061 2099 update_attempter.cc:509] Updating boot flags... Nov 12 17:43:07.071575 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3362) Nov 12 17:43:08.618929 systemd[1]: Reloading requested from client PID 3446 ('systemctl') (unit session-7.scope)... Nov 12 17:43:08.619430 systemd[1]: Reloading... Nov 12 17:43:08.784610 zram_generator::config[3486]: No configuration found. Nov 12 17:43:09.067152 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 17:43:09.274937 systemd[1]: Reloading finished in 654 ms. Nov 12 17:43:09.334109 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:43:09.348267 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 17:43:09.349751 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 17:43:09.359211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:43:09.637894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:43:09.657433 (kubelet)[3556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 17:43:09.772650 kubelet[3556]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 17:43:09.772650 kubelet[3556]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 17:43:09.772650 kubelet[3556]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 17:43:09.772650 kubelet[3556]: I1112 17:43:09.772833 3556 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 17:43:09.781808 kubelet[3556]: I1112 17:43:09.781748 3556 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 17:43:09.781984 kubelet[3556]: I1112 17:43:09.781965 3556 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 17:43:09.782401 kubelet[3556]: I1112 17:43:09.782380 3556 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 17:43:09.785500 kubelet[3556]: I1112 17:43:09.785462 3556 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 12 17:43:09.789300 kubelet[3556]: I1112 17:43:09.789260 3556 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 17:43:09.801514 kubelet[3556]: I1112 17:43:09.801455 3556 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 17:43:09.802438 kubelet[3556]: I1112 17:43:09.802406 3556 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 17:43:09.803136 kubelet[3556]: I1112 17:43:09.802770 3556 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions"
:null} Nov 12 17:43:09.803136 kubelet[3556]: I1112 17:43:09.802807 3556 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 17:43:09.803136 kubelet[3556]: I1112 17:43:09.802828 3556 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 17:43:09.803136 kubelet[3556]: I1112 17:43:09.802882 3556 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:43:09.803136 kubelet[3556]: I1112 17:43:09.803080 3556 kubelet.go:396] "Attempting to sync node with API server" Nov 12 17:43:09.803136 kubelet[3556]: I1112 17:43:09.803107 3556 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 17:43:09.803136 kubelet[3556]: I1112 17:43:09.803143 3556 kubelet.go:312] "Adding apiserver pod source" Nov 12 17:43:09.813792 kubelet[3556]: I1112 17:43:09.803166 3556 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 17:43:09.813792 kubelet[3556]: I1112 17:43:09.804952 3556 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 17:43:09.813792 kubelet[3556]: I1112 17:43:09.805283 3556 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 17:43:09.813792 kubelet[3556]: I1112 17:43:09.806064 3556 server.go:1256] "Started kubelet" Nov 12 17:43:09.813792 kubelet[3556]: I1112 17:43:09.809177 3556 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 17:43:09.820622 kubelet[3556]: I1112 17:43:09.820176 3556 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 17:43:09.826964 kubelet[3556]: I1112 17:43:09.826909 3556 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 17:43:09.827834 kubelet[3556]: I1112 17:43:09.827795 3556 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 17:43:09.842319 kubelet[3556]: I1112 
17:43:09.842275 3556 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 17:43:09.845590 kubelet[3556]: I1112 17:43:09.844706 3556 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 17:43:09.846314 kubelet[3556]: I1112 17:43:09.846270 3556 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 17:43:09.857894 kubelet[3556]: I1112 17:43:09.857852 3556 factory.go:221] Registration of the systemd container factory successfully Nov 12 17:43:09.858131 kubelet[3556]: I1112 17:43:09.858026 3556 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 17:43:09.864818 kubelet[3556]: I1112 17:43:09.864009 3556 server.go:461] "Adding debug handlers to kubelet server" Nov 12 17:43:09.885148 kubelet[3556]: I1112 17:43:09.885095 3556 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 17:43:09.888571 kubelet[3556]: I1112 17:43:09.888292 3556 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 17:43:09.888571 kubelet[3556]: I1112 17:43:09.888349 3556 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 17:43:09.888571 kubelet[3556]: I1112 17:43:09.888382 3556 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 17:43:09.888571 kubelet[3556]: E1112 17:43:09.888469 3556 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 17:43:09.918858 kubelet[3556]: I1112 17:43:09.918780 3556 factory.go:221] Registration of the containerd container factory successfully Nov 12 17:43:09.919996 kubelet[3556]: E1112 17:43:09.919843 3556 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 17:43:09.938683 kubelet[3556]: E1112 17:43:09.938634 3556 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Nov 12 17:43:09.940872 kubelet[3556]: I1112 17:43:09.940824 3556 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-95" Nov 12 17:43:09.962277 kubelet[3556]: I1112 17:43:09.962228 3556 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-27-95" Nov 12 17:43:09.962403 kubelet[3556]: I1112 17:43:09.962343 3556 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-27-95" Nov 12 17:43:09.989833 kubelet[3556]: E1112 17:43:09.989142 3556 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 17:43:10.066883 kubelet[3556]: I1112 17:43:10.066838 3556 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 17:43:10.066883 kubelet[3556]: I1112 17:43:10.066878 3556 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 17:43:10.067081 kubelet[3556]: I1112 17:43:10.066914 3556 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:43:10.067563 kubelet[3556]: I1112 17:43:10.067184 3556 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 17:43:10.067563 kubelet[3556]: I1112 17:43:10.067233 3556 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 17:43:10.067563 kubelet[3556]: I1112 17:43:10.067250 3556 policy_none.go:49] "None policy: Start" Nov 12 17:43:10.068920 kubelet[3556]: I1112 17:43:10.068677 3556 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 17:43:10.068920 kubelet[3556]: I1112 17:43:10.068731 3556 state_mem.go:35] "Initializing new in-memory state store" Nov 12 17:43:10.069104 kubelet[3556]: I1112 17:43:10.068959 3556 state_mem.go:75] "Updated machine memory state" Nov 12 
17:43:10.071751 kubelet[3556]: I1112 17:43:10.071712 3556 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 17:43:10.075282 kubelet[3556]: I1112 17:43:10.075215 3556 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 17:43:10.189559 kubelet[3556]: I1112 17:43:10.189387 3556 topology_manager.go:215] "Topology Admit Handler" podUID="f530fb823565ee5e4b05a00ac7a18235" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-27-95" Nov 12 17:43:10.193464 kubelet[3556]: I1112 17:43:10.190488 3556 topology_manager.go:215] "Topology Admit Handler" podUID="78b0e1c023a5277cd70064717c43cba9" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-27-95" Nov 12 17:43:10.193464 kubelet[3556]: I1112 17:43:10.191675 3556 topology_manager.go:215] "Topology Admit Handler" podUID="21ee118e8b8eb730a0051efaeda1593f" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-27-95" Nov 12 17:43:10.202298 kubelet[3556]: E1112 17:43:10.202255 3556 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-27-95\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-27-95" Nov 12 17:43:10.203690 kubelet[3556]: E1112 17:43:10.203634 3556 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-27-95\" already exists" pod="kube-system/kube-scheduler-ip-172-31-27-95" Nov 12 17:43:10.208843 kubelet[3556]: E1112 17:43:10.208793 3556 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-27-95\" already exists" pod="kube-system/kube-apiserver-ip-172-31-27-95" Nov 12 17:43:10.248678 kubelet[3556]: I1112 17:43:10.248624 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f530fb823565ee5e4b05a00ac7a18235-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ip-172-31-27-95\" (UID: \"f530fb823565ee5e4b05a00ac7a18235\") " pod="kube-system/kube-controller-manager-ip-172-31-27-95" Nov 12 17:43:10.248820 kubelet[3556]: I1112 17:43:10.248708 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78b0e1c023a5277cd70064717c43cba9-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-95\" (UID: \"78b0e1c023a5277cd70064717c43cba9\") " pod="kube-system/kube-scheduler-ip-172-31-27-95" Nov 12 17:43:10.248820 kubelet[3556]: I1112 17:43:10.248755 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f530fb823565ee5e4b05a00ac7a18235-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-95\" (UID: \"f530fb823565ee5e4b05a00ac7a18235\") " pod="kube-system/kube-controller-manager-ip-172-31-27-95" Nov 12 17:43:10.248820 kubelet[3556]: I1112 17:43:10.248802 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21ee118e8b8eb730a0051efaeda1593f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-95\" (UID: \"21ee118e8b8eb730a0051efaeda1593f\") " pod="kube-system/kube-apiserver-ip-172-31-27-95" Nov 12 17:43:10.249033 kubelet[3556]: I1112 17:43:10.248846 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f530fb823565ee5e4b05a00ac7a18235-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-95\" (UID: \"f530fb823565ee5e4b05a00ac7a18235\") " pod="kube-system/kube-controller-manager-ip-172-31-27-95" Nov 12 17:43:10.249033 kubelet[3556]: I1112 17:43:10.248890 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/f530fb823565ee5e4b05a00ac7a18235-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-95\" (UID: \"f530fb823565ee5e4b05a00ac7a18235\") " pod="kube-system/kube-controller-manager-ip-172-31-27-95" Nov 12 17:43:10.249033 kubelet[3556]: I1112 17:43:10.248938 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f530fb823565ee5e4b05a00ac7a18235-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-95\" (UID: \"f530fb823565ee5e4b05a00ac7a18235\") " pod="kube-system/kube-controller-manager-ip-172-31-27-95" Nov 12 17:43:10.249033 kubelet[3556]: I1112 17:43:10.248982 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21ee118e8b8eb730a0051efaeda1593f-ca-certs\") pod \"kube-apiserver-ip-172-31-27-95\" (UID: \"21ee118e8b8eb730a0051efaeda1593f\") " pod="kube-system/kube-apiserver-ip-172-31-27-95" Nov 12 17:43:10.250015 kubelet[3556]: I1112 17:43:10.249975 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21ee118e8b8eb730a0051efaeda1593f-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-95\" (UID: \"21ee118e8b8eb730a0051efaeda1593f\") " pod="kube-system/kube-apiserver-ip-172-31-27-95" Nov 12 17:43:10.804579 kubelet[3556]: I1112 17:43:10.804498 3556 apiserver.go:52] "Watching apiserver" Nov 12 17:43:10.847008 kubelet[3556]: I1112 17:43:10.846950 3556 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 17:43:10.993553 kubelet[3556]: E1112 17:43:10.993246 3556 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-27-95\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-27-95" Nov 12 17:43:11.003147 kubelet[3556]: E1112 17:43:11.002018 3556 
kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-27-95\" already exists" pod="kube-system/kube-apiserver-ip-172-31-27-95" Nov 12 17:43:11.147754 kubelet[3556]: I1112 17:43:11.147692 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-27-95" podStartSLOduration=5.145600657 podStartE2EDuration="5.145600657s" podCreationTimestamp="2024-11-12 17:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:43:11.110501125 +0000 UTC m=+1.444730984" watchObservedRunningTime="2024-11-12 17:43:11.145600657 +0000 UTC m=+1.479830528" Nov 12 17:43:11.150688 kubelet[3556]: I1112 17:43:11.150505 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-27-95" podStartSLOduration=4.15024119 podStartE2EDuration="4.15024119s" podCreationTimestamp="2024-11-12 17:43:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:43:11.149780485 +0000 UTC m=+1.484010380" watchObservedRunningTime="2024-11-12 17:43:11.15024119 +0000 UTC m=+1.484471061" Nov 12 17:43:11.209754 kubelet[3556]: I1112 17:43:11.209696 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-27-95" podStartSLOduration=5.209634458 podStartE2EDuration="5.209634458s" podCreationTimestamp="2024-11-12 17:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:43:11.207484598 +0000 UTC m=+1.541714481" watchObservedRunningTime="2024-11-12 17:43:11.209634458 +0000 UTC m=+1.543864329" Nov 12 17:43:14.619137 sudo[2479]: pam_unix(sudo:session): session closed for user root Nov 12 17:43:14.642481 sshd[2475]: 
pam_unix(sshd:session): session closed for user core Nov 12 17:43:14.648345 systemd-logind[2095]: Session 7 logged out. Waiting for processes to exit. Nov 12 17:43:14.649671 systemd[1]: sshd@6-172.31.27.95:22-139.178.89.65:49550.service: Deactivated successfully. Nov 12 17:43:14.658618 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 17:43:14.663621 systemd-logind[2095]: Removed session 7. Nov 12 17:43:24.773458 kubelet[3556]: I1112 17:43:24.772336 3556 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 17:43:24.780021 containerd[2129]: time="2024-11-12T17:43:24.775821821Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 12 17:43:24.784366 kubelet[3556]: I1112 17:43:24.777185 3556 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 17:43:25.605482 kubelet[3556]: I1112 17:43:25.604962 3556 topology_manager.go:215] "Topology Admit Handler" podUID="ac946ef0-da0a-4153-ac1e-23705ee5cb6d" podNamespace="kube-system" podName="kube-proxy-bks6s" Nov 12 17:43:25.653858 kubelet[3556]: I1112 17:43:25.653800 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac946ef0-da0a-4153-ac1e-23705ee5cb6d-xtables-lock\") pod \"kube-proxy-bks6s\" (UID: \"ac946ef0-da0a-4153-ac1e-23705ee5cb6d\") " pod="kube-system/kube-proxy-bks6s" Nov 12 17:43:25.654022 kubelet[3556]: I1112 17:43:25.653883 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cfms\" (UniqueName: \"kubernetes.io/projected/ac946ef0-da0a-4153-ac1e-23705ee5cb6d-kube-api-access-4cfms\") pod \"kube-proxy-bks6s\" (UID: \"ac946ef0-da0a-4153-ac1e-23705ee5cb6d\") " pod="kube-system/kube-proxy-bks6s" Nov 12 17:43:25.654022 kubelet[3556]: I1112 17:43:25.653934 3556 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ac946ef0-da0a-4153-ac1e-23705ee5cb6d-kube-proxy\") pod \"kube-proxy-bks6s\" (UID: \"ac946ef0-da0a-4153-ac1e-23705ee5cb6d\") " pod="kube-system/kube-proxy-bks6s" Nov 12 17:43:25.654022 kubelet[3556]: I1112 17:43:25.653981 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac946ef0-da0a-4153-ac1e-23705ee5cb6d-lib-modules\") pod \"kube-proxy-bks6s\" (UID: \"ac946ef0-da0a-4153-ac1e-23705ee5cb6d\") " pod="kube-system/kube-proxy-bks6s" Nov 12 17:43:25.909985 kubelet[3556]: I1112 17:43:25.909922 3556 topology_manager.go:215] "Topology Admit Handler" podUID="541f6e79-8969-4ec0-8be5-427a9588a548" podNamespace="tigera-operator" podName="tigera-operator-56b74f76df-j8xgl" Nov 12 17:43:25.918983 containerd[2129]: time="2024-11-12T17:43:25.917939239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bks6s,Uid:ac946ef0-da0a-4153-ac1e-23705ee5cb6d,Namespace:kube-system,Attempt:0,}" Nov 12 17:43:25.955781 kubelet[3556]: I1112 17:43:25.955481 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/541f6e79-8969-4ec0-8be5-427a9588a548-var-lib-calico\") pod \"tigera-operator-56b74f76df-j8xgl\" (UID: \"541f6e79-8969-4ec0-8be5-427a9588a548\") " pod="tigera-operator/tigera-operator-56b74f76df-j8xgl" Nov 12 17:43:25.955781 kubelet[3556]: I1112 17:43:25.955608 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2lb5\" (UniqueName: \"kubernetes.io/projected/541f6e79-8969-4ec0-8be5-427a9588a548-kube-api-access-c2lb5\") pod \"tigera-operator-56b74f76df-j8xgl\" (UID: \"541f6e79-8969-4ec0-8be5-427a9588a548\") " pod="tigera-operator/tigera-operator-56b74f76df-j8xgl" Nov 12 
17:43:25.975164 containerd[2129]: time="2024-11-12T17:43:25.975031507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:43:25.975490 containerd[2129]: time="2024-11-12T17:43:25.975368179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:43:25.975490 containerd[2129]: time="2024-11-12T17:43:25.975445015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:25.975981 containerd[2129]: time="2024-11-12T17:43:25.975857695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:26.052259 containerd[2129]: time="2024-11-12T17:43:26.052106764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bks6s,Uid:ac946ef0-da0a-4153-ac1e-23705ee5cb6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"33db8f88b45ab6550e0a3bddae13b86d638e66b7d65615c9701f947769350b9f\"" Nov 12 17:43:26.064250 containerd[2129]: time="2024-11-12T17:43:26.064171036Z" level=info msg="CreateContainer within sandbox \"33db8f88b45ab6550e0a3bddae13b86d638e66b7d65615c9701f947769350b9f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 17:43:26.096935 containerd[2129]: time="2024-11-12T17:43:26.096856360Z" level=info msg="CreateContainer within sandbox \"33db8f88b45ab6550e0a3bddae13b86d638e66b7d65615c9701f947769350b9f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"174eb489c06232855e2dd69cf02e3ee66e9062f0d8b716c4c5126a3f229f4f87\"" Nov 12 17:43:26.098941 containerd[2129]: time="2024-11-12T17:43:26.098660176Z" level=info msg="StartContainer for \"174eb489c06232855e2dd69cf02e3ee66e9062f0d8b716c4c5126a3f229f4f87\"" Nov 12 17:43:26.205121 containerd[2129]: 
time="2024-11-12T17:43:26.204904504Z" level=info msg="StartContainer for \"174eb489c06232855e2dd69cf02e3ee66e9062f0d8b716c4c5126a3f229f4f87\" returns successfully" Nov 12 17:43:26.233994 containerd[2129]: time="2024-11-12T17:43:26.233927296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-j8xgl,Uid:541f6e79-8969-4ec0-8be5-427a9588a548,Namespace:tigera-operator,Attempt:0,}" Nov 12 17:43:26.285977 containerd[2129]: time="2024-11-12T17:43:26.285690269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:43:26.286684 containerd[2129]: time="2024-11-12T17:43:26.285930977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:43:26.286684 containerd[2129]: time="2024-11-12T17:43:26.286120565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:26.288333 containerd[2129]: time="2024-11-12T17:43:26.287595677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:26.404716 containerd[2129]: time="2024-11-12T17:43:26.404625449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-j8xgl,Uid:541f6e79-8969-4ec0-8be5-427a9588a548,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1222554d87b9c11d8e3563eb512f60da0ac570fc873a5c96950592034c18c260\"" Nov 12 17:43:26.413624 containerd[2129]: time="2024-11-12T17:43:26.413372069Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\"" Nov 12 17:43:28.447470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount472172595.mount: Deactivated successfully. 
Nov 12 17:43:29.025281 containerd[2129]: time="2024-11-12T17:43:29.025226214Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:43:29.028298 containerd[2129]: time="2024-11-12T17:43:29.027905370Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=19123633" Nov 12 17:43:29.029738 containerd[2129]: time="2024-11-12T17:43:29.029630238Z" level=info msg="ImageCreate event name:\"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:43:29.034243 containerd[2129]: time="2024-11-12T17:43:29.034160790Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:43:29.036848 containerd[2129]: time="2024-11-12T17:43:29.036268890Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"19117824\" in 2.622638041s" Nov 12 17:43:29.036848 containerd[2129]: time="2024-11-12T17:43:29.036327774Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\"" Nov 12 17:43:29.041433 containerd[2129]: time="2024-11-12T17:43:29.041376462Z" level=info msg="CreateContainer within sandbox \"1222554d87b9c11d8e3563eb512f60da0ac570fc873a5c96950592034c18c260\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 12 17:43:29.057941 containerd[2129]: time="2024-11-12T17:43:29.057860430Z" level=info msg="CreateContainer within sandbox 
\"1222554d87b9c11d8e3563eb512f60da0ac570fc873a5c96950592034c18c260\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9180d690a5ad48a26b8035ae9c229ef7c057efe074466b0430f6b9c95a3e1302\"" Nov 12 17:43:29.059460 containerd[2129]: time="2024-11-12T17:43:29.059342634Z" level=info msg="StartContainer for \"9180d690a5ad48a26b8035ae9c229ef7c057efe074466b0430f6b9c95a3e1302\"" Nov 12 17:43:29.161231 containerd[2129]: time="2024-11-12T17:43:29.161027215Z" level=info msg="StartContainer for \"9180d690a5ad48a26b8035ae9c229ef7c057efe074466b0430f6b9c95a3e1302\" returns successfully" Nov 12 17:43:30.049841 kubelet[3556]: I1112 17:43:30.049491 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bks6s" podStartSLOduration=5.049408459 podStartE2EDuration="5.049408459s" podCreationTimestamp="2024-11-12 17:43:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:43:27.039678004 +0000 UTC m=+17.373907887" watchObservedRunningTime="2024-11-12 17:43:30.049408459 +0000 UTC m=+20.383638354" Nov 12 17:43:30.051142 kubelet[3556]: I1112 17:43:30.050785 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-56b74f76df-j8xgl" podStartSLOduration=2.423209394 podStartE2EDuration="5.050714827s" podCreationTimestamp="2024-11-12 17:43:25 +0000 UTC" firstStartedPulling="2024-11-12 17:43:26.409771469 +0000 UTC m=+16.744001328" lastFinishedPulling="2024-11-12 17:43:29.037276902 +0000 UTC m=+19.371506761" observedRunningTime="2024-11-12 17:43:30.048418399 +0000 UTC m=+20.382648258" watchObservedRunningTime="2024-11-12 17:43:30.050714827 +0000 UTC m=+20.384944782" Nov 12 17:43:34.313849 kubelet[3556]: I1112 17:43:34.312845 3556 topology_manager.go:215] "Topology Admit Handler" podUID="e73d8893-0de1-41d2-8454-177ed3d0dd66" podNamespace="calico-system" 
podName="calico-typha-5dd5f95b76-hrbhx" Nov 12 17:43:34.408956 kubelet[3556]: I1112 17:43:34.408891 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e73d8893-0de1-41d2-8454-177ed3d0dd66-typha-certs\") pod \"calico-typha-5dd5f95b76-hrbhx\" (UID: \"e73d8893-0de1-41d2-8454-177ed3d0dd66\") " pod="calico-system/calico-typha-5dd5f95b76-hrbhx" Nov 12 17:43:34.408956 kubelet[3556]: I1112 17:43:34.408987 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkr7j\" (UniqueName: \"kubernetes.io/projected/e73d8893-0de1-41d2-8454-177ed3d0dd66-kube-api-access-qkr7j\") pod \"calico-typha-5dd5f95b76-hrbhx\" (UID: \"e73d8893-0de1-41d2-8454-177ed3d0dd66\") " pod="calico-system/calico-typha-5dd5f95b76-hrbhx" Nov 12 17:43:34.409302 kubelet[3556]: I1112 17:43:34.409045 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e73d8893-0de1-41d2-8454-177ed3d0dd66-tigera-ca-bundle\") pod \"calico-typha-5dd5f95b76-hrbhx\" (UID: \"e73d8893-0de1-41d2-8454-177ed3d0dd66\") " pod="calico-system/calico-typha-5dd5f95b76-hrbhx" Nov 12 17:43:34.516566 kubelet[3556]: I1112 17:43:34.514878 3556 topology_manager.go:215] "Topology Admit Handler" podUID="769b504f-11f1-47a0-9b2d-ad216c9fd2f8" podNamespace="calico-system" podName="calico-node-ns7qn" Nov 12 17:43:34.616643 kubelet[3556]: I1112 17:43:34.616582 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/769b504f-11f1-47a0-9b2d-ad216c9fd2f8-xtables-lock\") pod \"calico-node-ns7qn\" (UID: \"769b504f-11f1-47a0-9b2d-ad216c9fd2f8\") " pod="calico-system/calico-node-ns7qn" Nov 12 17:43:34.619395 kubelet[3556]: I1112 17:43:34.617944 3556 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/769b504f-11f1-47a0-9b2d-ad216c9fd2f8-lib-modules\") pod \"calico-node-ns7qn\" (UID: \"769b504f-11f1-47a0-9b2d-ad216c9fd2f8\") " pod="calico-system/calico-node-ns7qn" Nov 12 17:43:34.619395 kubelet[3556]: I1112 17:43:34.618024 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/769b504f-11f1-47a0-9b2d-ad216c9fd2f8-node-certs\") pod \"calico-node-ns7qn\" (UID: \"769b504f-11f1-47a0-9b2d-ad216c9fd2f8\") " pod="calico-system/calico-node-ns7qn" Nov 12 17:43:34.619395 kubelet[3556]: I1112 17:43:34.618096 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/769b504f-11f1-47a0-9b2d-ad216c9fd2f8-cni-log-dir\") pod \"calico-node-ns7qn\" (UID: \"769b504f-11f1-47a0-9b2d-ad216c9fd2f8\") " pod="calico-system/calico-node-ns7qn" Nov 12 17:43:34.619395 kubelet[3556]: I1112 17:43:34.618165 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/769b504f-11f1-47a0-9b2d-ad216c9fd2f8-flexvol-driver-host\") pod \"calico-node-ns7qn\" (UID: \"769b504f-11f1-47a0-9b2d-ad216c9fd2f8\") " pod="calico-system/calico-node-ns7qn" Nov 12 17:43:34.619395 kubelet[3556]: I1112 17:43:34.618225 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/769b504f-11f1-47a0-9b2d-ad216c9fd2f8-cni-net-dir\") pod \"calico-node-ns7qn\" (UID: \"769b504f-11f1-47a0-9b2d-ad216c9fd2f8\") " pod="calico-system/calico-node-ns7qn" Nov 12 17:43:34.619832 kubelet[3556]: I1112 17:43:34.618282 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" 
(UniqueName: \"kubernetes.io/host-path/769b504f-11f1-47a0-9b2d-ad216c9fd2f8-policysync\") pod \"calico-node-ns7qn\" (UID: \"769b504f-11f1-47a0-9b2d-ad216c9fd2f8\") " pod="calico-system/calico-node-ns7qn" Nov 12 17:43:34.619832 kubelet[3556]: I1112 17:43:34.618331 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/769b504f-11f1-47a0-9b2d-ad216c9fd2f8-var-lib-calico\") pod \"calico-node-ns7qn\" (UID: \"769b504f-11f1-47a0-9b2d-ad216c9fd2f8\") " pod="calico-system/calico-node-ns7qn" Nov 12 17:43:34.619832 kubelet[3556]: I1112 17:43:34.618426 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/769b504f-11f1-47a0-9b2d-ad216c9fd2f8-tigera-ca-bundle\") pod \"calico-node-ns7qn\" (UID: \"769b504f-11f1-47a0-9b2d-ad216c9fd2f8\") " pod="calico-system/calico-node-ns7qn" Nov 12 17:43:34.619832 kubelet[3556]: I1112 17:43:34.618490 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/769b504f-11f1-47a0-9b2d-ad216c9fd2f8-var-run-calico\") pod \"calico-node-ns7qn\" (UID: \"769b504f-11f1-47a0-9b2d-ad216c9fd2f8\") " pod="calico-system/calico-node-ns7qn" Nov 12 17:43:34.621570 kubelet[3556]: I1112 17:43:34.620136 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/769b504f-11f1-47a0-9b2d-ad216c9fd2f8-cni-bin-dir\") pod \"calico-node-ns7qn\" (UID: \"769b504f-11f1-47a0-9b2d-ad216c9fd2f8\") " pod="calico-system/calico-node-ns7qn" Nov 12 17:43:34.621570 kubelet[3556]: I1112 17:43:34.620220 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59c69\" (UniqueName: 
\"kubernetes.io/projected/769b504f-11f1-47a0-9b2d-ad216c9fd2f8-kube-api-access-59c69\") pod \"calico-node-ns7qn\" (UID: \"769b504f-11f1-47a0-9b2d-ad216c9fd2f8\") " pod="calico-system/calico-node-ns7qn" Nov 12 17:43:34.627646 containerd[2129]: time="2024-11-12T17:43:34.627574646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5dd5f95b76-hrbhx,Uid:e73d8893-0de1-41d2-8454-177ed3d0dd66,Namespace:calico-system,Attempt:0,}" Nov 12 17:43:34.663237 kubelet[3556]: I1112 17:43:34.663072 3556 topology_manager.go:215] "Topology Admit Handler" podUID="f12c29ab-8a74-4cf9-a191-0b1413424edc" podNamespace="calico-system" podName="csi-node-driver-9dq4p" Nov 12 17:43:34.671423 kubelet[3556]: E1112 17:43:34.671281 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9dq4p" podUID="f12c29ab-8a74-4cf9-a191-0b1413424edc" Nov 12 17:43:34.723336 kubelet[3556]: I1112 17:43:34.722470 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f12c29ab-8a74-4cf9-a191-0b1413424edc-varrun\") pod \"csi-node-driver-9dq4p\" (UID: \"f12c29ab-8a74-4cf9-a191-0b1413424edc\") " pod="calico-system/csi-node-driver-9dq4p" Nov 12 17:43:34.726736 kubelet[3556]: I1112 17:43:34.726283 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f12c29ab-8a74-4cf9-a191-0b1413424edc-socket-dir\") pod \"csi-node-driver-9dq4p\" (UID: \"f12c29ab-8a74-4cf9-a191-0b1413424edc\") " pod="calico-system/csi-node-driver-9dq4p" Nov 12 17:43:34.729801 kubelet[3556]: E1112 17:43:34.729756 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Nov 12 17:43:34.731552 kubelet[3556]: W1112 17:43:34.730273 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.731552 kubelet[3556]: E1112 17:43:34.730371 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.734703 kubelet[3556]: E1112 17:43:34.734180 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.734703 kubelet[3556]: W1112 17:43:34.734212 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.734703 kubelet[3556]: E1112 17:43:34.734602 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.736957 kubelet[3556]: E1112 17:43:34.736917 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.736957 kubelet[3556]: W1112 17:43:34.736953 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.737222 kubelet[3556]: E1112 17:43:34.737044 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.741960 kubelet[3556]: E1112 17:43:34.741853 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.741960 kubelet[3556]: W1112 17:43:34.741894 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.742574 kubelet[3556]: E1112 17:43:34.742148 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.744588 kubelet[3556]: E1112 17:43:34.744509 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.744588 kubelet[3556]: W1112 17:43:34.744580 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.746762 kubelet[3556]: E1112 17:43:34.744734 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.746762 kubelet[3556]: I1112 17:43:34.744806 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f12c29ab-8a74-4cf9-a191-0b1413424edc-registration-dir\") pod \"csi-node-driver-9dq4p\" (UID: \"f12c29ab-8a74-4cf9-a191-0b1413424edc\") " pod="calico-system/csi-node-driver-9dq4p" Nov 12 17:43:34.747402 kubelet[3556]: E1112 17:43:34.747357 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.747402 kubelet[3556]: W1112 17:43:34.747394 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.747853 kubelet[3556]: E1112 17:43:34.747582 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.749728 kubelet[3556]: E1112 17:43:34.749674 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.749728 kubelet[3556]: W1112 17:43:34.749715 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.749728 kubelet[3556]: E1112 17:43:34.749769 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.756623 kubelet[3556]: E1112 17:43:34.754566 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.756623 kubelet[3556]: W1112 17:43:34.754605 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.756623 kubelet[3556]: E1112 17:43:34.754660 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.757293 kubelet[3556]: E1112 17:43:34.757037 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.757293 kubelet[3556]: W1112 17:43:34.757063 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.757293 kubelet[3556]: E1112 17:43:34.757132 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.757471 containerd[2129]: time="2024-11-12T17:43:34.752937375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:43:34.757471 containerd[2129]: time="2024-11-12T17:43:34.753041727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:43:34.757471 containerd[2129]: time="2024-11-12T17:43:34.754108359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:34.757471 containerd[2129]: time="2024-11-12T17:43:34.754317555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:34.760559 kubelet[3556]: E1112 17:43:34.757730 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.760559 kubelet[3556]: W1112 17:43:34.758042 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.760559 kubelet[3556]: E1112 17:43:34.759493 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.762959 kubelet[3556]: E1112 17:43:34.762909 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.762959 kubelet[3556]: W1112 17:43:34.762951 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.764710 kubelet[3556]: E1112 17:43:34.763336 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.765287 kubelet[3556]: E1112 17:43:34.765218 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.765287 kubelet[3556]: W1112 17:43:34.765256 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.765287 kubelet[3556]: E1112 17:43:34.765437 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.770572 kubelet[3556]: E1112 17:43:34.767720 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.770572 kubelet[3556]: W1112 17:43:34.767760 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.770572 kubelet[3556]: E1112 17:43:34.767812 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.770572 kubelet[3556]: E1112 17:43:34.768815 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.770572 kubelet[3556]: W1112 17:43:34.768843 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.770959 kubelet[3556]: E1112 17:43:34.770891 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.770959 kubelet[3556]: W1112 17:43:34.770916 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.772110 kubelet[3556]: E1112 17:43:34.771714 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.772110 kubelet[3556]: W1112 17:43:34.771760 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.772110 kubelet[3556]: E1112 17:43:34.771808 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.773284 kubelet[3556]: E1112 17:43:34.772834 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.773284 kubelet[3556]: E1112 17:43:34.772905 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.774771 kubelet[3556]: I1112 17:43:34.774599 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f12c29ab-8a74-4cf9-a191-0b1413424edc-kubelet-dir\") pod \"csi-node-driver-9dq4p\" (UID: \"f12c29ab-8a74-4cf9-a191-0b1413424edc\") " pod="calico-system/csi-node-driver-9dq4p" Nov 12 17:43:34.775124 kubelet[3556]: E1112 17:43:34.774816 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.775124 kubelet[3556]: W1112 17:43:34.774835 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.775124 kubelet[3556]: E1112 17:43:34.774880 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.777611 kubelet[3556]: E1112 17:43:34.776447 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.777611 kubelet[3556]: W1112 17:43:34.777164 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.777611 kubelet[3556]: E1112 17:43:34.777247 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.795608 kubelet[3556]: E1112 17:43:34.791511 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.795608 kubelet[3556]: W1112 17:43:34.791578 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.795608 kubelet[3556]: E1112 17:43:34.791822 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.798782 kubelet[3556]: E1112 17:43:34.798649 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.798782 kubelet[3556]: W1112 17:43:34.798725 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.800136 kubelet[3556]: E1112 17:43:34.800082 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.801714 kubelet[3556]: E1112 17:43:34.800940 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.802543 kubelet[3556]: W1112 17:43:34.802080 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.805275 kubelet[3556]: E1112 17:43:34.804341 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.806604 kubelet[3556]: E1112 17:43:34.805788 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.806604 kubelet[3556]: W1112 17:43:34.806139 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.806831 kubelet[3556]: E1112 17:43:34.806674 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.809855 kubelet[3556]: E1112 17:43:34.809308 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.809855 kubelet[3556]: W1112 17:43:34.809345 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.813230 kubelet[3556]: E1112 17:43:34.813082 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.813230 kubelet[3556]: W1112 17:43:34.813153 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.813499 kubelet[3556]: E1112 17:43:34.813275 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.813499 kubelet[3556]: E1112 17:43:34.813357 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.817684 kubelet[3556]: E1112 17:43:34.816116 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.817684 kubelet[3556]: W1112 17:43:34.816273 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.817684 kubelet[3556]: E1112 17:43:34.816638 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.820693 kubelet[3556]: E1112 17:43:34.820463 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.823290 kubelet[3556]: W1112 17:43:34.821874 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.823290 kubelet[3556]: E1112 17:43:34.821947 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.825566 kubelet[3556]: E1112 17:43:34.824788 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.825566 kubelet[3556]: W1112 17:43:34.824823 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.825566 kubelet[3556]: E1112 17:43:34.824897 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.830082 kubelet[3556]: E1112 17:43:34.829444 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.830082 kubelet[3556]: W1112 17:43:34.829478 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.830334 kubelet[3556]: E1112 17:43:34.830131 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.840184 kubelet[3556]: E1112 17:43:34.839398 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.840184 kubelet[3556]: W1112 17:43:34.839429 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.844165 kubelet[3556]: E1112 17:43:34.843669 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.844165 kubelet[3556]: W1112 17:43:34.843717 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.844990 kubelet[3556]: E1112 17:43:34.844631 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.844990 kubelet[3556]: W1112 17:43:34.844664 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.844990 kubelet[3556]: E1112 17:43:34.844700 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.845548 kubelet[3556]: E1112 17:43:34.845502 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.845769 kubelet[3556]: W1112 17:43:34.845740 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.846060 kubelet[3556]: E1112 17:43:34.846005 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.846415 kubelet[3556]: E1112 17:43:34.846219 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.847734 kubelet[3556]: E1112 17:43:34.847274 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.847734 kubelet[3556]: W1112 17:43:34.847306 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.847734 kubelet[3556]: E1112 17:43:34.847342 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.851345 kubelet[3556]: E1112 17:43:34.850881 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.851345 kubelet[3556]: W1112 17:43:34.851103 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.851345 kubelet[3556]: E1112 17:43:34.851144 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.855200 kubelet[3556]: E1112 17:43:34.852811 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.855200 kubelet[3556]: W1112 17:43:34.852993 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.855200 kubelet[3556]: E1112 17:43:34.853064 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.856019 kubelet[3556]: E1112 17:43:34.855707 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.856019 kubelet[3556]: W1112 17:43:34.855740 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.856019 kubelet[3556]: E1112 17:43:34.856174 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.857955 kubelet[3556]: I1112 17:43:34.857726 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfs2p\" (UniqueName: \"kubernetes.io/projected/f12c29ab-8a74-4cf9-a191-0b1413424edc-kube-api-access-rfs2p\") pod \"csi-node-driver-9dq4p\" (UID: \"f12c29ab-8a74-4cf9-a191-0b1413424edc\") " pod="calico-system/csi-node-driver-9dq4p" Nov 12 17:43:34.860181 kubelet[3556]: E1112 17:43:34.859781 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.863262 kubelet[3556]: W1112 17:43:34.863164 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.863674 kubelet[3556]: E1112 17:43:34.860783 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.866418 kubelet[3556]: E1112 17:43:34.864703 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.867982 kubelet[3556]: E1112 17:43:34.867693 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.867982 kubelet[3556]: W1112 17:43:34.867733 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.867982 kubelet[3556]: E1112 17:43:34.867775 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.874180 kubelet[3556]: E1112 17:43:34.872870 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.874180 kubelet[3556]: W1112 17:43:34.874163 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.875466 kubelet[3556]: E1112 17:43:34.874336 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.881401 kubelet[3556]: E1112 17:43:34.881345 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.881401 kubelet[3556]: W1112 17:43:34.881384 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.881401 kubelet[3556]: E1112 17:43:34.881437 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.884117 kubelet[3556]: E1112 17:43:34.882212 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.884117 kubelet[3556]: W1112 17:43:34.882248 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.884117 kubelet[3556]: E1112 17:43:34.882285 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.884117 kubelet[3556]: E1112 17:43:34.882729 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.884117 kubelet[3556]: W1112 17:43:34.882808 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.884117 kubelet[3556]: E1112 17:43:34.882857 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.884117 kubelet[3556]: E1112 17:43:34.883381 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.884117 kubelet[3556]: W1112 17:43:34.883413 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.884117 kubelet[3556]: E1112 17:43:34.883449 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.886313 kubelet[3556]: E1112 17:43:34.885198 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.886313 kubelet[3556]: W1112 17:43:34.885245 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.886313 kubelet[3556]: E1112 17:43:34.885308 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.886313 kubelet[3556]: E1112 17:43:34.885757 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.886313 kubelet[3556]: W1112 17:43:34.885777 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.886313 kubelet[3556]: E1112 17:43:34.885962 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.886313 kubelet[3556]: E1112 17:43:34.886295 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.886313 kubelet[3556]: W1112 17:43:34.886315 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.893104 kubelet[3556]: E1112 17:43:34.886463 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.893104 kubelet[3556]: E1112 17:43:34.887046 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.893104 kubelet[3556]: W1112 17:43:34.887083 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.893104 kubelet[3556]: E1112 17:43:34.887272 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.893104 kubelet[3556]: E1112 17:43:34.887782 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.893104 kubelet[3556]: W1112 17:43:34.887839 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.893104 kubelet[3556]: E1112 17:43:34.888029 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.893104 kubelet[3556]: E1112 17:43:34.888491 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.893104 kubelet[3556]: W1112 17:43:34.888511 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.893104 kubelet[3556]: E1112 17:43:34.888749 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.897154 kubelet[3556]: E1112 17:43:34.890191 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.897154 kubelet[3556]: W1112 17:43:34.890219 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.897154 kubelet[3556]: E1112 17:43:34.890749 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.897154 kubelet[3556]: W1112 17:43:34.890773 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.897154 kubelet[3556]: E1112 17:43:34.891201 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.897154 kubelet[3556]: W1112 17:43:34.891222 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.897154 kubelet[3556]: E1112 17:43:34.891252 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.897154 kubelet[3556]: E1112 17:43:34.891506 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.897154 kubelet[3556]: E1112 17:43:34.891613 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.897154 kubelet[3556]: E1112 17:43:34.891855 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.899003 kubelet[3556]: W1112 17:43:34.891876 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.899003 kubelet[3556]: E1112 17:43:34.891919 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.899003 kubelet[3556]: E1112 17:43:34.893137 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.899003 kubelet[3556]: W1112 17:43:34.893166 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.899003 kubelet[3556]: E1112 17:43:34.893202 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.899003 kubelet[3556]: E1112 17:43:34.895413 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.899003 kubelet[3556]: W1112 17:43:34.895444 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.899003 kubelet[3556]: E1112 17:43:34.895481 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.905347 kubelet[3556]: E1112 17:43:34.903655 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.905347 kubelet[3556]: W1112 17:43:34.903689 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.905347 kubelet[3556]: E1112 17:43:34.903723 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.920466 kubelet[3556]: E1112 17:43:34.919893 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.920466 kubelet[3556]: W1112 17:43:34.919931 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.920466 kubelet[3556]: E1112 17:43:34.919969 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.959328 kubelet[3556]: E1112 17:43:34.959060 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.959328 kubelet[3556]: W1112 17:43:34.959094 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.959328 kubelet[3556]: E1112 17:43:34.959155 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.960999 kubelet[3556]: E1112 17:43:34.960804 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.961295 kubelet[3556]: W1112 17:43:34.960837 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.961295 kubelet[3556]: E1112 17:43:34.961210 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.962394 kubelet[3556]: E1112 17:43:34.962134 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.962394 kubelet[3556]: W1112 17:43:34.962163 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.962394 kubelet[3556]: E1112 17:43:34.962201 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.963694 kubelet[3556]: E1112 17:43:34.963390 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.963902 kubelet[3556]: W1112 17:43:34.963872 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.964284 kubelet[3556]: E1112 17:43:34.964127 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.965729 kubelet[3556]: E1112 17:43:34.965682 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.965729 kubelet[3556]: W1112 17:43:34.965718 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.966713 kubelet[3556]: E1112 17:43:34.966376 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.966713 kubelet[3556]: W1112 17:43:34.966408 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.966713 kubelet[3556]: E1112 17:43:34.966636 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.966713 kubelet[3556]: E1112 17:43:34.966682 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.968700 kubelet[3556]: E1112 17:43:34.968502 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.968700 kubelet[3556]: W1112 17:43:34.968691 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.970069 kubelet[3556]: E1112 17:43:34.969231 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.971485 kubelet[3556]: E1112 17:43:34.971061 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.972757 kubelet[3556]: W1112 17:43:34.971864 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.974383 kubelet[3556]: E1112 17:43:34.973505 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:43:34.977353 kubelet[3556]: E1112 17:43:34.976540 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.978178 kubelet[3556]: W1112 17:43:34.977606 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.979196 kubelet[3556]: E1112 17:43:34.978370 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:43:34.981429 kubelet[3556]: E1112 17:43:34.981043 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.981429 kubelet[3556]: W1112 17:43:34.981077 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.986651 kubelet[3556]: E1112 17:43:34.985435 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:43:34.986651 kubelet[3556]: W1112 17:43:34.985466 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:43:34.986651 kubelet[3556]: E1112 17:43:34.986041 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 12 17:43:34.986651 kubelet[3556]: E1112 17:43:34.986115 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:34.988122 containerd[2129]: time="2024-11-12T17:43:34.987147400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5dd5f95b76-hrbhx,Uid:e73d8893-0de1-41d2-8454-177ed3d0dd66,Namespace:calico-system,Attempt:0,} returns sandbox id \"3edecf8bb0afeca74c4b37b1685e2aa81f11d128332fa62c78046e4d9c9d20e6\""
Nov 12 17:43:34.988717 kubelet[3556]: E1112 17:43:34.988352 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:34.988717 kubelet[3556]: W1112 17:43:34.988384 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:34.989459 kubelet[3556]: E1112 17:43:34.989390 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:34.989867 kubelet[3556]: W1112 17:43:34.989693 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:34.990340 kubelet[3556]: E1112 17:43:34.990314 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:34.991155 kubelet[3556]: W1112 17:43:34.990462 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:34.991874 kubelet[3556]: E1112 17:43:34.991678 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:34.991874 kubelet[3556]: W1112 17:43:34.991708 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:34.992934 kubelet[3556]: E1112 17:43:34.992460 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:34.992934 kubelet[3556]: W1112 17:43:34.992490 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:34.992934 kubelet[3556]: E1112 17:43:34.992557 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:34.993692 kubelet[3556]: E1112 17:43:34.993659 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:34.993847 kubelet[3556]: W1112 17:43:34.993820 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:34.994001 kubelet[3556]: E1112 17:43:34.993976 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:34.994138 kubelet[3556]: E1112 17:43:34.994119 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:34.994821 kubelet[3556]: E1112 17:43:34.994661 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:34.994821 kubelet[3556]: W1112 17:43:34.994688 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:34.994821 kubelet[3556]: E1112 17:43:34.994720 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:34.995648 kubelet[3556]: E1112 17:43:34.995456 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:34.995648 kubelet[3556]: W1112 17:43:34.995484 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:34.997056 kubelet[3556]: E1112 17:43:34.996428 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:34.997056 kubelet[3556]: E1112 17:43:34.996647 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:34.997056 kubelet[3556]: E1112 17:43:34.996873 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:34.997056 kubelet[3556]: E1112 17:43:34.996931 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:34.999165 kubelet[3556]: E1112 17:43:34.998582 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:34.999165 kubelet[3556]: W1112 17:43:34.998658 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:34.999165 kubelet[3556]: E1112 17:43:34.998696 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.001196 containerd[2129]: time="2024-11-12T17:43:35.000909732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\""
Nov 12 17:43:35.001840 kubelet[3556]: E1112 17:43:35.001697 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.002293 kubelet[3556]: W1112 17:43:35.002139 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.002607 kubelet[3556]: E1112 17:43:35.002464 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.003665 kubelet[3556]: E1112 17:43:35.003577 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.003665 kubelet[3556]: W1112 17:43:35.003611 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.004129 kubelet[3556]: E1112 17:43:35.003789 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.006251 kubelet[3556]: E1112 17:43:35.006204 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.006251 kubelet[3556]: W1112 17:43:35.006241 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.006251 kubelet[3556]: E1112 17:43:35.006293 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.011894 kubelet[3556]: E1112 17:43:35.011578 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.011894 kubelet[3556]: W1112 17:43:35.011614 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.011894 kubelet[3556]: E1112 17:43:35.011650 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.012974 kubelet[3556]: E1112 17:43:35.012815 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.012974 kubelet[3556]: W1112 17:43:35.012844 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.012974 kubelet[3556]: E1112 17:43:35.012897 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.037361 kubelet[3556]: E1112 17:43:35.037110 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.037361 kubelet[3556]: W1112 17:43:35.037163 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.037361 kubelet[3556]: E1112 17:43:35.037201 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.156294 containerd[2129]: time="2024-11-12T17:43:35.155360113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ns7qn,Uid:769b504f-11f1-47a0-9b2d-ad216c9fd2f8,Namespace:calico-system,Attempt:0,}"
Nov 12 17:43:35.194564 containerd[2129]: time="2024-11-12T17:43:35.194203105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:43:35.194564 containerd[2129]: time="2024-11-12T17:43:35.194334109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:43:35.194564 containerd[2129]: time="2024-11-12T17:43:35.194371465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:35.195005 containerd[2129]: time="2024-11-12T17:43:35.194618521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:35.264781 containerd[2129]: time="2024-11-12T17:43:35.264663673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ns7qn,Uid:769b504f-11f1-47a0-9b2d-ad216c9fd2f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e19021c602cf00f899ab0bb176096cb369e1c951a59ba4b7e40fc48355fab69\""
Nov 12 17:43:36.889356 kubelet[3556]: E1112 17:43:36.889216 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9dq4p" podUID="f12c29ab-8a74-4cf9-a191-0b1413424edc"
Nov 12 17:43:37.120168 containerd[2129]: time="2024-11-12T17:43:37.119893346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:37.122984 containerd[2129]: time="2024-11-12T17:43:37.122783403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=27849584"
Nov 12 17:43:37.125433 containerd[2129]: time="2024-11-12T17:43:37.125375727Z" level=info msg="ImageCreate event name:\"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:37.130807 containerd[2129]: time="2024-11-12T17:43:37.130726299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:37.132226 containerd[2129]: time="2024-11-12T17:43:37.132152919Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"29219212\" in 2.131180739s"
Nov 12 17:43:37.132317 containerd[2129]: time="2024-11-12T17:43:37.132223839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\""
Nov 12 17:43:37.133591 containerd[2129]: time="2024-11-12T17:43:37.133173771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\""
Nov 12 17:43:37.178433 containerd[2129]: time="2024-11-12T17:43:37.177967359Z" level=info msg="CreateContainer within sandbox \"3edecf8bb0afeca74c4b37b1685e2aa81f11d128332fa62c78046e4d9c9d20e6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 12 17:43:37.208163 containerd[2129]: time="2024-11-12T17:43:37.207960651Z" level=info msg="CreateContainer within sandbox \"3edecf8bb0afeca74c4b37b1685e2aa81f11d128332fa62c78046e4d9c9d20e6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a6d964f6025b774ef3be954f92cf21db0efaec2aa63ff54fddffde02775a632b\""
Nov 12 17:43:37.210962 containerd[2129]: time="2024-11-12T17:43:37.209490723Z" level=info msg="StartContainer for \"a6d964f6025b774ef3be954f92cf21db0efaec2aa63ff54fddffde02775a632b\""
Nov 12 17:43:37.218318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1057118245.mount: Deactivated successfully.
Nov 12 17:43:37.338617 containerd[2129]: time="2024-11-12T17:43:37.337258888Z" level=info msg="StartContainer for \"a6d964f6025b774ef3be954f92cf21db0efaec2aa63ff54fddffde02775a632b\" returns successfully"
Nov 12 17:43:38.095463 kubelet[3556]: I1112 17:43:38.095268 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5dd5f95b76-hrbhx" podStartSLOduration=1.962649208 podStartE2EDuration="4.095205627s" podCreationTimestamp="2024-11-12 17:43:34 +0000 UTC" firstStartedPulling="2024-11-12 17:43:35.000126936 +0000 UTC m=+25.334356795" lastFinishedPulling="2024-11-12 17:43:37.132683355 +0000 UTC m=+27.466913214" observedRunningTime="2024-11-12 17:43:38.094679847 +0000 UTC m=+28.428909730" watchObservedRunningTime="2024-11-12 17:43:38.095205627 +0000 UTC m=+28.429435510"
Nov 12 17:43:38.151143 kubelet[3556]: E1112 17:43:38.150700 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.151143 kubelet[3556]: W1112 17:43:38.150737 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.151143 kubelet[3556]: E1112 17:43:38.150796 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.151686 kubelet[3556]: E1112 17:43:38.151586 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.151686 kubelet[3556]: W1112 17:43:38.151609 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.152161 kubelet[3556]: E1112 17:43:38.151849 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.152862 kubelet[3556]: E1112 17:43:38.152589 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.152862 kubelet[3556]: W1112 17:43:38.152615 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.152862 kubelet[3556]: E1112 17:43:38.152644 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.153380 kubelet[3556]: E1112 17:43:38.153139 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.153380 kubelet[3556]: W1112 17:43:38.153160 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.153380 kubelet[3556]: E1112 17:43:38.153188 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.153773 kubelet[3556]: E1112 17:43:38.153751 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.153871 kubelet[3556]: W1112 17:43:38.153850 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.154107 kubelet[3556]: E1112 17:43:38.153963 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.154469 kubelet[3556]: E1112 17:43:38.154282 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.154469 kubelet[3556]: W1112 17:43:38.154302 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.154469 kubelet[3556]: E1112 17:43:38.154327 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.154906 kubelet[3556]: E1112 17:43:38.154885 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.155011 kubelet[3556]: W1112 17:43:38.154991 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.155120 kubelet[3556]: E1112 17:43:38.155100 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.155752 kubelet[3556]: E1112 17:43:38.155515 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.155752 kubelet[3556]: W1112 17:43:38.155574 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.155752 kubelet[3556]: E1112 17:43:38.155598 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.156033 kubelet[3556]: E1112 17:43:38.156014 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.156127 kubelet[3556]: W1112 17:43:38.156107 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.156234 kubelet[3556]: E1112 17:43:38.156216 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.156807 kubelet[3556]: E1112 17:43:38.156635 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.156807 kubelet[3556]: W1112 17:43:38.156656 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.156807 kubelet[3556]: E1112 17:43:38.156679 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.157089 kubelet[3556]: E1112 17:43:38.157069 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.157182 kubelet[3556]: W1112 17:43:38.157162 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.157281 kubelet[3556]: E1112 17:43:38.157263 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.157919 kubelet[3556]: E1112 17:43:38.157894 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.158236 kubelet[3556]: W1112 17:43:38.158027 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.158236 kubelet[3556]: E1112 17:43:38.158063 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.158487 kubelet[3556]: E1112 17:43:38.158467 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.158635 kubelet[3556]: W1112 17:43:38.158613 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.158774 kubelet[3556]: E1112 17:43:38.158720 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.159347 kubelet[3556]: E1112 17:43:38.159323 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.159621 kubelet[3556]: W1112 17:43:38.159484 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.159889 kubelet[3556]: E1112 17:43:38.159733 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.160182 kubelet[3556]: E1112 17:43:38.160068 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.160182 kubelet[3556]: W1112 17:43:38.160089 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.160182 kubelet[3556]: E1112 17:43:38.160112 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.204623 kubelet[3556]: E1112 17:43:38.203232 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.204623 kubelet[3556]: W1112 17:43:38.203329 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.204623 kubelet[3556]: E1112 17:43:38.203391 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.204623 kubelet[3556]: E1112 17:43:38.203962 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.204623 kubelet[3556]: W1112 17:43:38.204009 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.204623 kubelet[3556]: E1112 17:43:38.204041 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.204623 kubelet[3556]: E1112 17:43:38.204511 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.204623 kubelet[3556]: W1112 17:43:38.204588 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.204623 kubelet[3556]: E1112 17:43:38.204620 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.205166 kubelet[3556]: E1112 17:43:38.205027 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.205166 kubelet[3556]: W1112 17:43:38.205045 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.205166 kubelet[3556]: E1112 17:43:38.205076 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.205600 kubelet[3556]: E1112 17:43:38.205484 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.205600 kubelet[3556]: W1112 17:43:38.205596 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.205747 kubelet[3556]: E1112 17:43:38.205662 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.206211 kubelet[3556]: E1112 17:43:38.206053 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.206211 kubelet[3556]: W1112 17:43:38.206073 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.206211 kubelet[3556]: E1112 17:43:38.206108 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.208577 kubelet[3556]: E1112 17:43:38.206803 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.208577 kubelet[3556]: W1112 17:43:38.206833 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.208577 kubelet[3556]: E1112 17:43:38.207183 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.208577 kubelet[3556]: W1112 17:43:38.207202 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.208577 kubelet[3556]: E1112 17:43:38.207515 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.208577 kubelet[3556]: W1112 17:43:38.207578 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.208577 kubelet[3556]: E1112 17:43:38.207606 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.209576 kubelet[3556]: E1112 17:43:38.209122 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.209576 kubelet[3556]: E1112 17:43:38.209510 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.209980 kubelet[3556]: E1112 17:43:38.209956 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.210108 kubelet[3556]: W1112 17:43:38.210084 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.210322 kubelet[3556]: E1112 17:43:38.210299 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.211110 kubelet[3556]: E1112 17:43:38.210861 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.211110 kubelet[3556]: W1112 17:43:38.210888 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.211110 kubelet[3556]: E1112 17:43:38.210928 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.211708 kubelet[3556]: E1112 17:43:38.211650 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.211708 kubelet[3556]: W1112 17:43:38.211677 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.212121 kubelet[3556]: E1112 17:43:38.212076 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.212460 kubelet[3556]: E1112 17:43:38.212437 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.212704 kubelet[3556]: W1112 17:43:38.212594 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.212704 kubelet[3556]: E1112 17:43:38.212648 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.213428 kubelet[3556]: E1112 17:43:38.213263 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.213428 kubelet[3556]: W1112 17:43:38.213290 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.213428 kubelet[3556]: E1112 17:43:38.213342 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.214681 kubelet[3556]: E1112 17:43:38.214135 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.214681 kubelet[3556]: W1112 17:43:38.214163 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.214681 kubelet[3556]: E1112 17:43:38.214200 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.215172 kubelet[3556]: E1112 17:43:38.215144 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.215290 kubelet[3556]: W1112 17:43:38.215265 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.215547 kubelet[3556]: E1112 17:43:38.215508 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.216618 kubelet[3556]: E1112 17:43:38.216051 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.216618 kubelet[3556]: W1112 17:43:38.216078 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.216618 kubelet[3556]: E1112 17:43:38.216112 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.217597 kubelet[3556]: E1112 17:43:38.217503 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.217833 kubelet[3556]: W1112 17:43:38.217806 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.217955 kubelet[3556]: E1112 17:43:38.217934 3556 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.354333 containerd[2129]: time="2024-11-12T17:43:38.354191705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:38.358279 containerd[2129]: time="2024-11-12T17:43:38.358137137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5117816"
Nov 12 17:43:38.359728 containerd[2129]: time="2024-11-12T17:43:38.359660729Z" level=info msg="ImageCreate event name:\"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:38.365270 containerd[2129]: time="2024-11-12T17:43:38.364935977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:38.366766 containerd[2129]: time="2024-11-12T17:43:38.366700697Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\", repo tag
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6487412\" in 1.233456354s" Nov 12 17:43:38.366873 containerd[2129]: time="2024-11-12T17:43:38.366764489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\"" Nov 12 17:43:38.369757 containerd[2129]: time="2024-11-12T17:43:38.369703337Z" level=info msg="CreateContainer within sandbox \"7e19021c602cf00f899ab0bb176096cb369e1c951a59ba4b7e40fc48355fab69\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 17:43:38.395701 containerd[2129]: time="2024-11-12T17:43:38.395480861Z" level=info msg="CreateContainer within sandbox \"7e19021c602cf00f899ab0bb176096cb369e1c951a59ba4b7e40fc48355fab69\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3aa1476bcc89dee050e34439e2ca6fa288cb07374fc140b28b2461ddec9c0730\"" Nov 12 17:43:38.397254 containerd[2129]: time="2024-11-12T17:43:38.396563369Z" level=info msg="StartContainer for \"3aa1476bcc89dee050e34439e2ca6fa288cb07374fc140b28b2461ddec9c0730\"" Nov 12 17:43:38.501092 containerd[2129]: time="2024-11-12T17:43:38.500901665Z" level=info msg="StartContainer for \"3aa1476bcc89dee050e34439e2ca6fa288cb07374fc140b28b2461ddec9c0730\" returns successfully" Nov 12 17:43:38.859881 containerd[2129]: time="2024-11-12T17:43:38.859769791Z" level=info msg="shim disconnected" id=3aa1476bcc89dee050e34439e2ca6fa288cb07374fc140b28b2461ddec9c0730 namespace=k8s.io Nov 12 17:43:38.859881 containerd[2129]: time="2024-11-12T17:43:38.859842763Z" level=warning msg="cleaning up after shim disconnected" id=3aa1476bcc89dee050e34439e2ca6fa288cb07374fc140b28b2461ddec9c0730 namespace=k8s.io Nov 12 17:43:38.859881 containerd[2129]: time="2024-11-12T17:43:38.859862143Z" level=info msg="cleaning up 
dead shim" namespace=k8s.io Nov 12 17:43:38.888930 kubelet[3556]: E1112 17:43:38.888871 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9dq4p" podUID="f12c29ab-8a74-4cf9-a191-0b1413424edc" Nov 12 17:43:39.080730 kubelet[3556]: I1112 17:43:39.079885 3556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:43:39.082810 containerd[2129]: time="2024-11-12T17:43:39.082489768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 12 17:43:39.150178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3aa1476bcc89dee050e34439e2ca6fa288cb07374fc140b28b2461ddec9c0730-rootfs.mount: Deactivated successfully. Nov 12 17:43:40.889294 kubelet[3556]: E1112 17:43:40.889245 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9dq4p" podUID="f12c29ab-8a74-4cf9-a191-0b1413424edc" Nov 12 17:43:42.889536 kubelet[3556]: E1112 17:43:42.889028 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9dq4p" podUID="f12c29ab-8a74-4cf9-a191-0b1413424edc" Nov 12 17:43:42.931880 containerd[2129]: time="2024-11-12T17:43:42.931803707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:43:42.933411 containerd[2129]: time="2024-11-12T17:43:42.933344291Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=89700517" Nov 12 17:43:42.935177 containerd[2129]: time="2024-11-12T17:43:42.935100791Z" level=info msg="ImageCreate event name:\"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:43:42.941080 containerd[2129]: time="2024-11-12T17:43:42.941012399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:43:42.942840 containerd[2129]: time="2024-11-12T17:43:42.942651251Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"91070153\" in 3.860084083s" Nov 12 17:43:42.942840 containerd[2129]: time="2024-11-12T17:43:42.942705995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\"" Nov 12 17:43:42.947555 containerd[2129]: time="2024-11-12T17:43:42.947098907Z" level=info msg="CreateContainer within sandbox \"7e19021c602cf00f899ab0bb176096cb369e1c951a59ba4b7e40fc48355fab69\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 17:43:42.970187 containerd[2129]: time="2024-11-12T17:43:42.970031172Z" level=info msg="CreateContainer within sandbox \"7e19021c602cf00f899ab0bb176096cb369e1c951a59ba4b7e40fc48355fab69\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6f2450a4360b941c6618749c17f3230b3e6d0bef3dcc6ae7b527e52e2b1e22a2\"" Nov 12 17:43:42.971108 containerd[2129]: time="2024-11-12T17:43:42.970793328Z" level=info msg="StartContainer 
for \"6f2450a4360b941c6618749c17f3230b3e6d0bef3dcc6ae7b527e52e2b1e22a2\"" Nov 12 17:43:43.082172 containerd[2129]: time="2024-11-12T17:43:43.082101896Z" level=info msg="StartContainer for \"6f2450a4360b941c6618749c17f3230b3e6d0bef3dcc6ae7b527e52e2b1e22a2\" returns successfully" Nov 12 17:43:44.076098 containerd[2129]: time="2024-11-12T17:43:44.075982365Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 17:43:44.124692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f2450a4360b941c6618749c17f3230b3e6d0bef3dcc6ae7b527e52e2b1e22a2-rootfs.mount: Deactivated successfully. Nov 12 17:43:44.129111 kubelet[3556]: I1112 17:43:44.124377 3556 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 17:43:44.179541 kubelet[3556]: I1112 17:43:44.179466 3556 topology_manager.go:215] "Topology Admit Handler" podUID="bc72a84b-bc38-4114-9563-0dae6b25af79" podNamespace="kube-system" podName="coredns-76f75df574-s5hfm" Nov 12 17:43:44.190490 kubelet[3556]: I1112 17:43:44.189503 3556 topology_manager.go:215] "Topology Admit Handler" podUID="fe47042d-34e4-43bf-869d-d51013a31508" podNamespace="kube-system" podName="coredns-76f75df574-7rgfz" Nov 12 17:43:44.218405 kubelet[3556]: I1112 17:43:44.215359 3556 topology_manager.go:215] "Topology Admit Handler" podUID="7cf10bc1-4c55-4746-b2f6-5b92d051ebc0" podNamespace="calico-apiserver" podName="calico-apiserver-58c67c9d5-bpdzx" Nov 12 17:43:44.218947 kubelet[3556]: I1112 17:43:44.218912 3556 topology_manager.go:215] "Topology Admit Handler" podUID="54f551c9-643f-46fd-bc59-e46d0d7f91ac" podNamespace="calico-system" podName="calico-kube-controllers-f549c5549-4bpts" Nov 12 17:43:44.233115 kubelet[3556]: I1112 17:43:44.232558 3556 topology_manager.go:215] "Topology Admit Handler" 
podUID="0ad34195-a82e-4064-b419-91cf3b5649a7" podNamespace="calico-apiserver" podName="calico-apiserver-58c67c9d5-m2vdd" Nov 12 17:43:44.250814 kubelet[3556]: I1112 17:43:44.250496 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc72a84b-bc38-4114-9563-0dae6b25af79-config-volume\") pod \"coredns-76f75df574-s5hfm\" (UID: \"bc72a84b-bc38-4114-9563-0dae6b25af79\") " pod="kube-system/coredns-76f75df574-s5hfm" Nov 12 17:43:44.250814 kubelet[3556]: I1112 17:43:44.250596 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kgtm\" (UniqueName: \"kubernetes.io/projected/fe47042d-34e4-43bf-869d-d51013a31508-kube-api-access-2kgtm\") pod \"coredns-76f75df574-7rgfz\" (UID: \"fe47042d-34e4-43bf-869d-d51013a31508\") " pod="kube-system/coredns-76f75df574-7rgfz" Nov 12 17:43:44.250814 kubelet[3556]: I1112 17:43:44.250645 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcjgp\" (UniqueName: \"kubernetes.io/projected/bc72a84b-bc38-4114-9563-0dae6b25af79-kube-api-access-tcjgp\") pod \"coredns-76f75df574-s5hfm\" (UID: \"bc72a84b-bc38-4114-9563-0dae6b25af79\") " pod="kube-system/coredns-76f75df574-s5hfm" Nov 12 17:43:44.250814 kubelet[3556]: I1112 17:43:44.250695 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe47042d-34e4-43bf-869d-d51013a31508-config-volume\") pod \"coredns-76f75df574-7rgfz\" (UID: \"fe47042d-34e4-43bf-869d-d51013a31508\") " pod="kube-system/coredns-76f75df574-7rgfz" Nov 12 17:43:44.353204 kubelet[3556]: I1112 17:43:44.351324 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/7cf10bc1-4c55-4746-b2f6-5b92d051ebc0-calico-apiserver-certs\") pod \"calico-apiserver-58c67c9d5-bpdzx\" (UID: \"7cf10bc1-4c55-4746-b2f6-5b92d051ebc0\") " pod="calico-apiserver/calico-apiserver-58c67c9d5-bpdzx" Nov 12 17:43:44.353204 kubelet[3556]: I1112 17:43:44.351430 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54f551c9-643f-46fd-bc59-e46d0d7f91ac-tigera-ca-bundle\") pod \"calico-kube-controllers-f549c5549-4bpts\" (UID: \"54f551c9-643f-46fd-bc59-e46d0d7f91ac\") " pod="calico-system/calico-kube-controllers-f549c5549-4bpts" Nov 12 17:43:44.353204 kubelet[3556]: I1112 17:43:44.351508 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgqcs\" (UniqueName: \"kubernetes.io/projected/0ad34195-a82e-4064-b419-91cf3b5649a7-kube-api-access-pgqcs\") pod \"calico-apiserver-58c67c9d5-m2vdd\" (UID: \"0ad34195-a82e-4064-b419-91cf3b5649a7\") " pod="calico-apiserver/calico-apiserver-58c67c9d5-m2vdd" Nov 12 17:43:44.353204 kubelet[3556]: I1112 17:43:44.351592 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0ad34195-a82e-4064-b419-91cf3b5649a7-calico-apiserver-certs\") pod \"calico-apiserver-58c67c9d5-m2vdd\" (UID: \"0ad34195-a82e-4064-b419-91cf3b5649a7\") " pod="calico-apiserver/calico-apiserver-58c67c9d5-m2vdd" Nov 12 17:43:44.353204 kubelet[3556]: I1112 17:43:44.351649 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5lh7\" (UniqueName: \"kubernetes.io/projected/54f551c9-643f-46fd-bc59-e46d0d7f91ac-kube-api-access-q5lh7\") pod \"calico-kube-controllers-f549c5549-4bpts\" (UID: \"54f551c9-643f-46fd-bc59-e46d0d7f91ac\") " pod="calico-system/calico-kube-controllers-f549c5549-4bpts" Nov 12 
17:43:44.353659 kubelet[3556]: I1112 17:43:44.351725 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c5cc\" (UniqueName: \"kubernetes.io/projected/7cf10bc1-4c55-4746-b2f6-5b92d051ebc0-kube-api-access-4c5cc\") pod \"calico-apiserver-58c67c9d5-bpdzx\" (UID: \"7cf10bc1-4c55-4746-b2f6-5b92d051ebc0\") " pod="calico-apiserver/calico-apiserver-58c67c9d5-bpdzx" Nov 12 17:43:44.522703 containerd[2129]: time="2024-11-12T17:43:44.522586667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7rgfz,Uid:fe47042d-34e4-43bf-869d-d51013a31508,Namespace:kube-system,Attempt:0,}" Nov 12 17:43:44.541005 containerd[2129]: time="2024-11-12T17:43:44.540934679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s5hfm,Uid:bc72a84b-bc38-4114-9563-0dae6b25af79,Namespace:kube-system,Attempt:0,}" Nov 12 17:43:44.548776 containerd[2129]: time="2024-11-12T17:43:44.548313563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c67c9d5-bpdzx,Uid:7cf10bc1-4c55-4746-b2f6-5b92d051ebc0,Namespace:calico-apiserver,Attempt:0,}" Nov 12 17:43:44.559272 containerd[2129]: time="2024-11-12T17:43:44.559206491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f549c5549-4bpts,Uid:54f551c9-643f-46fd-bc59-e46d0d7f91ac,Namespace:calico-system,Attempt:0,}" Nov 12 17:43:44.570142 containerd[2129]: time="2024-11-12T17:43:44.570024935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c67c9d5-m2vdd,Uid:0ad34195-a82e-4064-b419-91cf3b5649a7,Namespace:calico-apiserver,Attempt:0,}" Nov 12 17:43:44.894585 containerd[2129]: time="2024-11-12T17:43:44.894244981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9dq4p,Uid:f12c29ab-8a74-4cf9-a191-0b1413424edc,Namespace:calico-system,Attempt:0,}" Nov 12 17:43:44.929985 containerd[2129]: time="2024-11-12T17:43:44.929871961Z" level=info 
msg="shim disconnected" id=6f2450a4360b941c6618749c17f3230b3e6d0bef3dcc6ae7b527e52e2b1e22a2 namespace=k8s.io Nov 12 17:43:44.929985 containerd[2129]: time="2024-11-12T17:43:44.929976853Z" level=warning msg="cleaning up after shim disconnected" id=6f2450a4360b941c6618749c17f3230b3e6d0bef3dcc6ae7b527e52e2b1e22a2 namespace=k8s.io Nov 12 17:43:44.930488 containerd[2129]: time="2024-11-12T17:43:44.929999437Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:43:44.954211 containerd[2129]: time="2024-11-12T17:43:44.954123997Z" level=warning msg="cleanup warnings time=\"2024-11-12T17:43:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 12 17:43:45.166117 containerd[2129]: time="2024-11-12T17:43:45.156698398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 17:43:45.331788 containerd[2129]: time="2024-11-12T17:43:45.331705175Z" level=error msg="Failed to destroy network for sandbox \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.340483 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d-shm.mount: Deactivated successfully. 
Nov 12 17:43:45.345196 containerd[2129]: time="2024-11-12T17:43:45.345012683Z" level=error msg="encountered an error cleaning up failed sandbox \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.345196 containerd[2129]: time="2024-11-12T17:43:45.345123623Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s5hfm,Uid:bc72a84b-bc38-4114-9563-0dae6b25af79,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.346285 kubelet[3556]: E1112 17:43:45.345741 3556 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.346285 kubelet[3556]: E1112 17:43:45.345836 3556 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-s5hfm" Nov 12 17:43:45.346285 kubelet[3556]: E1112 17:43:45.345877 3556 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-s5hfm" Nov 12 17:43:45.347077 kubelet[3556]: E1112 17:43:45.345982 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-s5hfm_kube-system(bc72a84b-bc38-4114-9563-0dae6b25af79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-s5hfm_kube-system(bc72a84b-bc38-4114-9563-0dae6b25af79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-s5hfm" podUID="bc72a84b-bc38-4114-9563-0dae6b25af79" Nov 12 17:43:45.373258 containerd[2129]: time="2024-11-12T17:43:45.369204035Z" level=error msg="Failed to destroy network for sandbox \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.373258 containerd[2129]: time="2024-11-12T17:43:45.370224539Z" level=error msg="encountered an error cleaning up failed sandbox \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.373258 containerd[2129]: time="2024-11-12T17:43:45.370313735Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c67c9d5-m2vdd,Uid:0ad34195-a82e-4064-b419-91cf3b5649a7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.373720 kubelet[3556]: E1112 17:43:45.372730 3556 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.373720 kubelet[3556]: E1112 17:43:45.372806 3556 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58c67c9d5-m2vdd" Nov 12 17:43:45.373720 kubelet[3556]: E1112 17:43:45.372843 3556 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58c67c9d5-m2vdd" Nov 12 17:43:45.373918 kubelet[3556]: E1112 17:43:45.372939 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58c67c9d5-m2vdd_calico-apiserver(0ad34195-a82e-4064-b419-91cf3b5649a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58c67c9d5-m2vdd_calico-apiserver(0ad34195-a82e-4064-b419-91cf3b5649a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58c67c9d5-m2vdd" podUID="0ad34195-a82e-4064-b419-91cf3b5649a7" Nov 12 17:43:45.379951 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52-shm.mount: Deactivated successfully. 
Nov 12 17:43:45.426873 containerd[2129]: time="2024-11-12T17:43:45.426699132Z" level=error msg="Failed to destroy network for sandbox \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.427740 containerd[2129]: time="2024-11-12T17:43:45.427484808Z" level=error msg="Failed to destroy network for sandbox \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.431413 containerd[2129]: time="2024-11-12T17:43:45.430919304Z" level=error msg="encountered an error cleaning up failed sandbox \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.434684 containerd[2129]: time="2024-11-12T17:43:45.434623896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f549c5549-4bpts,Uid:54f551c9-643f-46fd-bc59-e46d0d7f91ac,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.435031 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1-shm.mount: Deactivated successfully. 
Nov 12 17:43:45.440938 containerd[2129]: time="2024-11-12T17:43:45.433753944Z" level=error msg="encountered an error cleaning up failed sandbox \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.443667 containerd[2129]: time="2024-11-12T17:43:45.441128232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c67c9d5-bpdzx,Uid:7cf10bc1-4c55-4746-b2f6-5b92d051ebc0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.443733 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b-shm.mount: Deactivated successfully. 
Nov 12 17:43:45.444291 kubelet[3556]: E1112 17:43:45.444236 3556 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.444428 kubelet[3556]: E1112 17:43:45.444321 3556 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f549c5549-4bpts" Nov 12 17:43:45.444428 kubelet[3556]: E1112 17:43:45.444361 3556 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f549c5549-4bpts" Nov 12 17:43:45.445626 containerd[2129]: time="2024-11-12T17:43:45.445351824Z" level=error msg="Failed to destroy network for sandbox \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.448577 kubelet[3556]: E1112 17:43:45.447441 3556 remote_runtime.go:193] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.448577 kubelet[3556]: E1112 17:43:45.447571 3556 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58c67c9d5-bpdzx" Nov 12 17:43:45.448577 kubelet[3556]: E1112 17:43:45.447623 3556 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58c67c9d5-bpdzx" Nov 12 17:43:45.448832 kubelet[3556]: E1112 17:43:45.447706 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58c67c9d5-bpdzx_calico-apiserver(7cf10bc1-4c55-4746-b2f6-5b92d051ebc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58c67c9d5-bpdzx_calico-apiserver(7cf10bc1-4c55-4746-b2f6-5b92d051ebc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58c67c9d5-bpdzx" podUID="7cf10bc1-4c55-4746-b2f6-5b92d051ebc0" Nov 12 17:43:45.448950 kubelet[3556]: E1112 17:43:45.448839 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f549c5549-4bpts_calico-system(54f551c9-643f-46fd-bc59-e46d0d7f91ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f549c5549-4bpts_calico-system(54f551c9-643f-46fd-bc59-e46d0d7f91ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f549c5549-4bpts" podUID="54f551c9-643f-46fd-bc59-e46d0d7f91ac" Nov 12 17:43:45.449063 containerd[2129]: time="2024-11-12T17:43:45.448969260Z" level=error msg="encountered an error cleaning up failed sandbox \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.449149 containerd[2129]: time="2024-11-12T17:43:45.449054244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9dq4p,Uid:f12c29ab-8a74-4cf9-a191-0b1413424edc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 12 17:43:45.451351 kubelet[3556]: E1112 17:43:45.450667 3556 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.451351 kubelet[3556]: E1112 17:43:45.450752 3556 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9dq4p" Nov 12 17:43:45.451351 kubelet[3556]: E1112 17:43:45.450791 3556 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9dq4p" Nov 12 17:43:45.451777 kubelet[3556]: E1112 17:43:45.450871 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9dq4p_calico-system(f12c29ab-8a74-4cf9-a191-0b1413424edc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9dq4p_calico-system(f12c29ab-8a74-4cf9-a191-0b1413424edc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9dq4p" podUID="f12c29ab-8a74-4cf9-a191-0b1413424edc" Nov 12 17:43:45.459956 containerd[2129]: time="2024-11-12T17:43:45.459873408Z" level=error msg="Failed to destroy network for sandbox \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.460708 containerd[2129]: time="2024-11-12T17:43:45.460645188Z" level=error msg="encountered an error cleaning up failed sandbox \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.461553 containerd[2129]: time="2024-11-12T17:43:45.460765716Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7rgfz,Uid:fe47042d-34e4-43bf-869d-d51013a31508,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.462296 kubelet[3556]: E1112 17:43:45.462240 3556 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:45.462413 kubelet[3556]: E1112 17:43:45.462327 3556 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-7rgfz" Nov 12 17:43:45.462413 kubelet[3556]: E1112 17:43:45.462370 3556 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-7rgfz" Nov 12 17:43:45.462622 kubelet[3556]: E1112 17:43:45.462451 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-7rgfz_kube-system(fe47042d-34e4-43bf-869d-d51013a31508)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-7rgfz_kube-system(fe47042d-34e4-43bf-869d-d51013a31508)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-7rgfz" podUID="fe47042d-34e4-43bf-869d-d51013a31508" Nov 12 17:43:46.119865 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79-shm.mount: Deactivated successfully. Nov 12 17:43:46.120202 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c-shm.mount: Deactivated successfully. Nov 12 17:43:46.144877 kubelet[3556]: I1112 17:43:46.143647 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Nov 12 17:43:46.147809 containerd[2129]: time="2024-11-12T17:43:46.147757931Z" level=info msg="StopPodSandbox for \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\"" Nov 12 17:43:46.148363 containerd[2129]: time="2024-11-12T17:43:46.148277243Z" level=info msg="Ensure that sandbox 554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79 in task-service has been cleanup successfully" Nov 12 17:43:46.149991 kubelet[3556]: I1112 17:43:46.149951 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Nov 12 17:43:46.151575 containerd[2129]: time="2024-11-12T17:43:46.151162103Z" level=info msg="StopPodSandbox for \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\"" Nov 12 17:43:46.151575 containerd[2129]: time="2024-11-12T17:43:46.151457759Z" level=info msg="Ensure that sandbox 4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b in task-service has been cleanup successfully" Nov 12 17:43:46.166255 kubelet[3556]: I1112 17:43:46.166207 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Nov 12 17:43:46.168635 containerd[2129]: time="2024-11-12T17:43:46.167477831Z" level=info msg="StopPodSandbox for \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\"" Nov 12 
17:43:46.171537 containerd[2129]: time="2024-11-12T17:43:46.171183791Z" level=info msg="Ensure that sandbox 438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1 in task-service has been cleanup successfully" Nov 12 17:43:46.174739 kubelet[3556]: I1112 17:43:46.174698 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Nov 12 17:43:46.179964 containerd[2129]: time="2024-11-12T17:43:46.179747471Z" level=info msg="StopPodSandbox for \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\"" Nov 12 17:43:46.180799 containerd[2129]: time="2024-11-12T17:43:46.180717107Z" level=info msg="Ensure that sandbox 371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c in task-service has been cleanup successfully" Nov 12 17:43:46.182399 kubelet[3556]: I1112 17:43:46.182065 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Nov 12 17:43:46.186079 containerd[2129]: time="2024-11-12T17:43:46.185924124Z" level=info msg="StopPodSandbox for \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\"" Nov 12 17:43:46.186305 containerd[2129]: time="2024-11-12T17:43:46.186251436Z" level=info msg="Ensure that sandbox ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52 in task-service has been cleanup successfully" Nov 12 17:43:46.195828 kubelet[3556]: I1112 17:43:46.195305 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Nov 12 17:43:46.201154 containerd[2129]: time="2024-11-12T17:43:46.201098580Z" level=info msg="StopPodSandbox for \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\"" Nov 12 17:43:46.205895 containerd[2129]: time="2024-11-12T17:43:46.205630644Z" level=info msg="Ensure that sandbox 
d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d in task-service has been cleanup successfully" Nov 12 17:43:46.321942 containerd[2129]: time="2024-11-12T17:43:46.321381600Z" level=error msg="StopPodSandbox for \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\" failed" error="failed to destroy network for sandbox \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:46.322655 kubelet[3556]: E1112 17:43:46.322289 3556 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Nov 12 17:43:46.322655 kubelet[3556]: E1112 17:43:46.322601 3556 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79"} Nov 12 17:43:46.323260 kubelet[3556]: E1112 17:43:46.322936 3556 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f12c29ab-8a74-4cf9-a191-0b1413424edc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:43:46.323260 kubelet[3556]: E1112 17:43:46.323030 3556 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f12c29ab-8a74-4cf9-a191-0b1413424edc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9dq4p" podUID="f12c29ab-8a74-4cf9-a191-0b1413424edc" Nov 12 17:43:46.351779 containerd[2129]: time="2024-11-12T17:43:46.351713208Z" level=error msg="StopPodSandbox for \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\" failed" error="failed to destroy network for sandbox \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:46.352422 kubelet[3556]: E1112 17:43:46.352180 3556 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Nov 12 17:43:46.352422 kubelet[3556]: E1112 17:43:46.352245 3556 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1"} Nov 12 17:43:46.352422 kubelet[3556]: E1112 17:43:46.352325 3556 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"7cf10bc1-4c55-4746-b2f6-5b92d051ebc0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:43:46.352422 kubelet[3556]: E1112 17:43:46.352381 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7cf10bc1-4c55-4746-b2f6-5b92d051ebc0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58c67c9d5-bpdzx" podUID="7cf10bc1-4c55-4746-b2f6-5b92d051ebc0" Nov 12 17:43:46.357387 containerd[2129]: time="2024-11-12T17:43:46.357301896Z" level=error msg="StopPodSandbox for \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\" failed" error="failed to destroy network for sandbox \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:46.358256 kubelet[3556]: E1112 17:43:46.357901 3556 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Nov 12 17:43:46.358256 kubelet[3556]: E1112 17:43:46.358030 3556 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52"} Nov 12 17:43:46.358256 kubelet[3556]: E1112 17:43:46.358117 3556 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ad34195-a82e-4064-b419-91cf3b5649a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:43:46.358256 kubelet[3556]: E1112 17:43:46.358199 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ad34195-a82e-4064-b419-91cf3b5649a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58c67c9d5-m2vdd" podUID="0ad34195-a82e-4064-b419-91cf3b5649a7" Nov 12 17:43:46.385688 containerd[2129]: time="2024-11-12T17:43:46.384856885Z" level=error msg="StopPodSandbox for \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\" failed" error="failed to destroy network for sandbox \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Nov 12 17:43:46.385904 kubelet[3556]: E1112 17:43:46.385330 3556 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Nov 12 17:43:46.385904 kubelet[3556]: E1112 17:43:46.385390 3556 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b"} Nov 12 17:43:46.385904 kubelet[3556]: E1112 17:43:46.385460 3556 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"54f551c9-643f-46fd-bc59-e46d0d7f91ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:43:46.385904 kubelet[3556]: E1112 17:43:46.385512 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"54f551c9-643f-46fd-bc59-e46d0d7f91ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f549c5549-4bpts" 
podUID="54f551c9-643f-46fd-bc59-e46d0d7f91ac" Nov 12 17:43:46.418020 containerd[2129]: time="2024-11-12T17:43:46.417943033Z" level=error msg="StopPodSandbox for \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\" failed" error="failed to destroy network for sandbox \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:46.418925 kubelet[3556]: E1112 17:43:46.418751 3556 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Nov 12 17:43:46.419486 kubelet[3556]: E1112 17:43:46.419141 3556 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c"} Nov 12 17:43:46.419486 kubelet[3556]: E1112 17:43:46.419439 3556 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe47042d-34e4-43bf-869d-d51013a31508\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:43:46.419924 kubelet[3556]: E1112 17:43:46.419628 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"fe47042d-34e4-43bf-869d-d51013a31508\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-7rgfz" podUID="fe47042d-34e4-43bf-869d-d51013a31508" Nov 12 17:43:46.427575 containerd[2129]: time="2024-11-12T17:43:46.427188301Z" level=error msg="StopPodSandbox for \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\" failed" error="failed to destroy network for sandbox \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:43:46.427756 kubelet[3556]: E1112 17:43:46.427565 3556 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Nov 12 17:43:46.427756 kubelet[3556]: E1112 17:43:46.427629 3556 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d"} Nov 12 17:43:46.427756 kubelet[3556]: E1112 17:43:46.427693 3556 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bc72a84b-bc38-4114-9563-0dae6b25af79\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:43:46.427756 kubelet[3556]: E1112 17:43:46.427748 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bc72a84b-bc38-4114-9563-0dae6b25af79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-s5hfm" podUID="bc72a84b-bc38-4114-9563-0dae6b25af79" Nov 12 17:43:47.263676 kubelet[3556]: I1112 17:43:47.263610 3556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:43:51.300114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount862338964.mount: Deactivated successfully. 
Nov 12 17:43:51.355618 containerd[2129]: time="2024-11-12T17:43:51.355189649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:43:51.357374 containerd[2129]: time="2024-11-12T17:43:51.357113621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=135495328" Nov 12 17:43:51.358724 containerd[2129]: time="2024-11-12T17:43:51.358628213Z" level=info msg="ImageCreate event name:\"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:43:51.363895 containerd[2129]: time="2024-11-12T17:43:51.363821945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:43:51.365092 containerd[2129]: time="2024-11-12T17:43:51.364742309Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"135495190\" in 6.196166011s" Nov 12 17:43:51.365092 containerd[2129]: time="2024-11-12T17:43:51.364801613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\"" Nov 12 17:43:51.399339 containerd[2129]: time="2024-11-12T17:43:51.397571069Z" level=info msg="CreateContainer within sandbox \"7e19021c602cf00f899ab0bb176096cb369e1c951a59ba4b7e40fc48355fab69\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 17:43:51.440636 containerd[2129]: time="2024-11-12T17:43:51.439770798Z" level=info 
msg="CreateContainer within sandbox \"7e19021c602cf00f899ab0bb176096cb369e1c951a59ba4b7e40fc48355fab69\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d5d52ef16b8341c1f69f7ceb568f028c6672535a095b8c6b2bf97f0a4c3db943\"" Nov 12 17:43:51.443676 containerd[2129]: time="2024-11-12T17:43:51.442442610Z" level=info msg="StartContainer for \"d5d52ef16b8341c1f69f7ceb568f028c6672535a095b8c6b2bf97f0a4c3db943\"" Nov 12 17:43:51.445645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount484195552.mount: Deactivated successfully. Nov 12 17:43:51.557847 containerd[2129]: time="2024-11-12T17:43:51.557682486Z" level=info msg="StartContainer for \"d5d52ef16b8341c1f69f7ceb568f028c6672535a095b8c6b2bf97f0a4c3db943\" returns successfully" Nov 12 17:43:51.716945 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 17:43:51.717077 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 12 17:43:52.251023 kubelet[3556]: I1112 17:43:52.250963 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-ns7qn" podStartSLOduration=2.152674226 podStartE2EDuration="18.25090401s" podCreationTimestamp="2024-11-12 17:43:34 +0000 UTC" firstStartedPulling="2024-11-12 17:43:35.266968189 +0000 UTC m=+25.601198048" lastFinishedPulling="2024-11-12 17:43:51.365197961 +0000 UTC m=+41.699427832" observedRunningTime="2024-11-12 17:43:52.250252326 +0000 UTC m=+42.584482209" watchObservedRunningTime="2024-11-12 17:43:52.25090401 +0000 UTC m=+42.585133857" Nov 12 17:43:53.966558 kernel: bpftool[4846]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 17:43:54.459374 (udev-worker)[4656]: Network interface NamePolicy= disabled on kernel command line. 
Nov 12 17:43:54.484690 systemd-networkd[1687]: vxlan.calico: Link UP Nov 12 17:43:54.488610 systemd-networkd[1687]: vxlan.calico: Gained carrier Nov 12 17:43:54.532836 (udev-worker)[4661]: Network interface NamePolicy= disabled on kernel command line. Nov 12 17:43:55.511068 systemd[1]: Started sshd@7-172.31.27.95:22-139.178.89.65:54124.service - OpenSSH per-connection server daemon (139.178.89.65:54124). Nov 12 17:43:55.690819 sshd[4920]: Accepted publickey for core from 139.178.89.65 port 54124 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:43:55.693990 sshd[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:43:55.703086 systemd-logind[2095]: New session 8 of user core. Nov 12 17:43:55.711071 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 17:43:55.983850 sshd[4920]: pam_unix(sshd:session): session closed for user core Nov 12 17:43:55.990389 systemd[1]: sshd@7-172.31.27.95:22-139.178.89.65:54124.service: Deactivated successfully. Nov 12 17:43:55.990938 systemd-logind[2095]: Session 8 logged out. Waiting for processes to exit. Nov 12 17:43:55.999094 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 17:43:56.002481 systemd-logind[2095]: Removed session 8. 
Nov 12 17:43:56.313189 systemd-networkd[1687]: vxlan.calico: Gained IPv6LL Nov 12 17:43:57.891011 containerd[2129]: time="2024-11-12T17:43:57.890962814Z" level=info msg="StopPodSandbox for \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\"" Nov 12 17:43:57.895291 containerd[2129]: time="2024-11-12T17:43:57.892079162Z" level=info msg="StopPodSandbox for \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\"" Nov 12 17:43:57.899904 containerd[2129]: time="2024-11-12T17:43:57.899780174Z" level=info msg="StopPodSandbox for \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\"" Nov 12 17:43:57.901436 containerd[2129]: time="2024-11-12T17:43:57.900229826Z" level=info msg="StopPodSandbox for \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\"" Nov 12 17:43:57.902873 containerd[2129]: time="2024-11-12T17:43:57.901790906Z" level=info msg="StopPodSandbox for \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\"" Nov 12 17:43:58.379973 containerd[2129]: 2024-11-12 17:43:58.213 [INFO][5002] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Nov 12 17:43:58.379973 containerd[2129]: 2024-11-12 17:43:58.214 [INFO][5002] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" iface="eth0" netns="/var/run/netns/cni-6c92e9ce-4bb8-6cc3-4fe8-022ce02fe9e7" Nov 12 17:43:58.379973 containerd[2129]: 2024-11-12 17:43:58.214 [INFO][5002] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" iface="eth0" netns="/var/run/netns/cni-6c92e9ce-4bb8-6cc3-4fe8-022ce02fe9e7" Nov 12 17:43:58.379973 containerd[2129]: 2024-11-12 17:43:58.216 [INFO][5002] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" iface="eth0" netns="/var/run/netns/cni-6c92e9ce-4bb8-6cc3-4fe8-022ce02fe9e7" Nov 12 17:43:58.379973 containerd[2129]: 2024-11-12 17:43:58.216 [INFO][5002] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Nov 12 17:43:58.379973 containerd[2129]: 2024-11-12 17:43:58.216 [INFO][5002] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Nov 12 17:43:58.379973 containerd[2129]: 2024-11-12 17:43:58.322 [INFO][5043] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" HandleID="k8s-pod-network.ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0" Nov 12 17:43:58.379973 containerd[2129]: 2024-11-12 17:43:58.326 [INFO][5043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:43:58.379973 containerd[2129]: 2024-11-12 17:43:58.326 [INFO][5043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:43:58.379973 containerd[2129]: 2024-11-12 17:43:58.350 [WARNING][5043] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" HandleID="k8s-pod-network.ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0" Nov 12 17:43:58.379973 containerd[2129]: 2024-11-12 17:43:58.351 [INFO][5043] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" HandleID="k8s-pod-network.ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0" Nov 12 17:43:58.379973 containerd[2129]: 2024-11-12 17:43:58.359 [INFO][5043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:43:58.379973 containerd[2129]: 2024-11-12 17:43:58.369 [INFO][5002] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Nov 12 17:43:58.383025 containerd[2129]: time="2024-11-12T17:43:58.382971312Z" level=info msg="TearDown network for sandbox \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\" successfully" Nov 12 17:43:58.383206 containerd[2129]: time="2024-11-12T17:43:58.383177304Z" level=info msg="StopPodSandbox for \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\" returns successfully" Nov 12 17:43:58.391829 containerd[2129]: time="2024-11-12T17:43:58.391767540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c67c9d5-m2vdd,Uid:0ad34195-a82e-4064-b419-91cf3b5649a7,Namespace:calico-apiserver,Attempt:1,}" Nov 12 17:43:58.393776 systemd[1]: run-netns-cni\x2d6c92e9ce\x2d4bb8\x2d6cc3\x2d4fe8\x2d022ce02fe9e7.mount: Deactivated successfully. 
Nov 12 17:43:58.415996 containerd[2129]: 2024-11-12 17:43:58.195 [INFO][5005] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Nov 12 17:43:58.415996 containerd[2129]: 2024-11-12 17:43:58.196 [INFO][5005] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" iface="eth0" netns="/var/run/netns/cni-c6afbdc5-6e58-7c7f-8ddd-123e21c29b68" Nov 12 17:43:58.415996 containerd[2129]: 2024-11-12 17:43:58.204 [INFO][5005] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" iface="eth0" netns="/var/run/netns/cni-c6afbdc5-6e58-7c7f-8ddd-123e21c29b68" Nov 12 17:43:58.415996 containerd[2129]: 2024-11-12 17:43:58.212 [INFO][5005] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" iface="eth0" netns="/var/run/netns/cni-c6afbdc5-6e58-7c7f-8ddd-123e21c29b68" Nov 12 17:43:58.415996 containerd[2129]: 2024-11-12 17:43:58.213 [INFO][5005] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Nov 12 17:43:58.415996 containerd[2129]: 2024-11-12 17:43:58.213 [INFO][5005] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Nov 12 17:43:58.415996 containerd[2129]: 2024-11-12 17:43:58.360 [INFO][5036] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" HandleID="k8s-pod-network.4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Workload="ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0" Nov 12 17:43:58.415996 containerd[2129]: 2024-11-12 17:43:58.361 
[INFO][5036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:43:58.415996 containerd[2129]: 2024-11-12 17:43:58.361 [INFO][5036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:43:58.415996 containerd[2129]: 2024-11-12 17:43:58.393 [WARNING][5036] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" HandleID="k8s-pod-network.4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Workload="ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0" Nov 12 17:43:58.415996 containerd[2129]: 2024-11-12 17:43:58.393 [INFO][5036] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" HandleID="k8s-pod-network.4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Workload="ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0" Nov 12 17:43:58.415996 containerd[2129]: 2024-11-12 17:43:58.399 [INFO][5036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:43:58.415996 containerd[2129]: 2024-11-12 17:43:58.412 [INFO][5005] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Nov 12 17:43:58.417782 containerd[2129]: time="2024-11-12T17:43:58.417493524Z" level=info msg="TearDown network for sandbox \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\" successfully" Nov 12 17:43:58.417782 containerd[2129]: time="2024-11-12T17:43:58.417612156Z" level=info msg="StopPodSandbox for \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\" returns successfully" Nov 12 17:43:58.425624 containerd[2129]: time="2024-11-12T17:43:58.424853844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f549c5549-4bpts,Uid:54f551c9-643f-46fd-bc59-e46d0d7f91ac,Namespace:calico-system,Attempt:1,}" Nov 12 17:43:58.431975 systemd[1]: run-netns-cni\x2dc6afbdc5\x2d6e58\x2d7c7f\x2d8ddd\x2d123e21c29b68.mount: Deactivated successfully. Nov 12 17:43:58.483867 containerd[2129]: 2024-11-12 17:43:58.206 [INFO][5003] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Nov 12 17:43:58.483867 containerd[2129]: 2024-11-12 17:43:58.206 [INFO][5003] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" iface="eth0" netns="/var/run/netns/cni-a483755b-4cff-c63b-6937-3a32e0318831" Nov 12 17:43:58.483867 containerd[2129]: 2024-11-12 17:43:58.207 [INFO][5003] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" iface="eth0" netns="/var/run/netns/cni-a483755b-4cff-c63b-6937-3a32e0318831" Nov 12 17:43:58.483867 containerd[2129]: 2024-11-12 17:43:58.213 [INFO][5003] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" iface="eth0" netns="/var/run/netns/cni-a483755b-4cff-c63b-6937-3a32e0318831" Nov 12 17:43:58.483867 containerd[2129]: 2024-11-12 17:43:58.213 [INFO][5003] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Nov 12 17:43:58.483867 containerd[2129]: 2024-11-12 17:43:58.213 [INFO][5003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Nov 12 17:43:58.483867 containerd[2129]: 2024-11-12 17:43:58.367 [INFO][5040] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" HandleID="k8s-pod-network.d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0" Nov 12 17:43:58.483867 containerd[2129]: 2024-11-12 17:43:58.368 [INFO][5040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:43:58.483867 containerd[2129]: 2024-11-12 17:43:58.399 [INFO][5040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:43:58.483867 containerd[2129]: 2024-11-12 17:43:58.422 [WARNING][5040] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" HandleID="k8s-pod-network.d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0" Nov 12 17:43:58.483867 containerd[2129]: 2024-11-12 17:43:58.422 [INFO][5040] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" HandleID="k8s-pod-network.d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0" Nov 12 17:43:58.483867 containerd[2129]: 2024-11-12 17:43:58.429 [INFO][5040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:43:58.483867 containerd[2129]: 2024-11-12 17:43:58.449 [INFO][5003] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Nov 12 17:43:58.488595 containerd[2129]: time="2024-11-12T17:43:58.487672429Z" level=info msg="TearDown network for sandbox \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\" successfully" Nov 12 17:43:58.488595 containerd[2129]: time="2024-11-12T17:43:58.487740121Z" level=info msg="StopPodSandbox for \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\" returns successfully" Nov 12 17:43:58.490388 containerd[2129]: time="2024-11-12T17:43:58.489744709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s5hfm,Uid:bc72a84b-bc38-4114-9563-0dae6b25af79,Namespace:kube-system,Attempt:1,}" Nov 12 17:43:58.520366 containerd[2129]: 2024-11-12 17:43:58.167 [INFO][5006] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Nov 12 17:43:58.520366 containerd[2129]: 2024-11-12 17:43:58.170 [INFO][5006] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" iface="eth0" netns="/var/run/netns/cni-30a592e8-ece2-f1bd-2ef4-da59efd58419" Nov 12 17:43:58.520366 containerd[2129]: 2024-11-12 17:43:58.180 [INFO][5006] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" iface="eth0" netns="/var/run/netns/cni-30a592e8-ece2-f1bd-2ef4-da59efd58419" Nov 12 17:43:58.520366 containerd[2129]: 2024-11-12 17:43:58.185 [INFO][5006] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" iface="eth0" netns="/var/run/netns/cni-30a592e8-ece2-f1bd-2ef4-da59efd58419" Nov 12 17:43:58.520366 containerd[2129]: 2024-11-12 17:43:58.186 [INFO][5006] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Nov 12 17:43:58.520366 containerd[2129]: 2024-11-12 17:43:58.186 [INFO][5006] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Nov 12 17:43:58.520366 containerd[2129]: 2024-11-12 17:43:58.382 [INFO][5034] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" HandleID="k8s-pod-network.371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0" Nov 12 17:43:58.520366 containerd[2129]: 2024-11-12 17:43:58.382 [INFO][5034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:43:58.520366 containerd[2129]: 2024-11-12 17:43:58.431 [INFO][5034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:43:58.520366 containerd[2129]: 2024-11-12 17:43:58.470 [WARNING][5034] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" HandleID="k8s-pod-network.371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0" Nov 12 17:43:58.520366 containerd[2129]: 2024-11-12 17:43:58.470 [INFO][5034] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" HandleID="k8s-pod-network.371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0" Nov 12 17:43:58.520366 containerd[2129]: 2024-11-12 17:43:58.490 [INFO][5034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:43:58.520366 containerd[2129]: 2024-11-12 17:43:58.508 [INFO][5006] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Nov 12 17:43:58.520366 containerd[2129]: time="2024-11-12T17:43:58.520182853Z" level=info msg="TearDown network for sandbox \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\" successfully" Nov 12 17:43:58.520366 containerd[2129]: time="2024-11-12T17:43:58.520226521Z" level=info msg="StopPodSandbox for \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\" returns successfully" Nov 12 17:43:58.523559 containerd[2129]: time="2024-11-12T17:43:58.522651505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7rgfz,Uid:fe47042d-34e4-43bf-869d-d51013a31508,Namespace:kube-system,Attempt:1,}" Nov 12 17:43:58.530458 containerd[2129]: 2024-11-12 17:43:58.181 [INFO][5004] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Nov 12 17:43:58.530458 containerd[2129]: 2024-11-12 17:43:58.182 [INFO][5004] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" iface="eth0" netns="/var/run/netns/cni-be6750c5-678e-6b88-001c-b9c74a56472b" Nov 12 17:43:58.530458 containerd[2129]: 2024-11-12 17:43:58.183 [INFO][5004] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" iface="eth0" netns="/var/run/netns/cni-be6750c5-678e-6b88-001c-b9c74a56472b" Nov 12 17:43:58.530458 containerd[2129]: 2024-11-12 17:43:58.186 [INFO][5004] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" iface="eth0" netns="/var/run/netns/cni-be6750c5-678e-6b88-001c-b9c74a56472b" Nov 12 17:43:58.530458 containerd[2129]: 2024-11-12 17:43:58.186 [INFO][5004] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Nov 12 17:43:58.530458 containerd[2129]: 2024-11-12 17:43:58.186 [INFO][5004] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Nov 12 17:43:58.530458 containerd[2129]: 2024-11-12 17:43:58.407 [INFO][5033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" HandleID="k8s-pod-network.438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0" Nov 12 17:43:58.530458 containerd[2129]: 2024-11-12 17:43:58.408 [INFO][5033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:43:58.530458 containerd[2129]: 2024-11-12 17:43:58.487 [INFO][5033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:43:58.530458 containerd[2129]: 2024-11-12 17:43:58.512 [WARNING][5033] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" HandleID="k8s-pod-network.438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0" Nov 12 17:43:58.530458 containerd[2129]: 2024-11-12 17:43:58.513 [INFO][5033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" HandleID="k8s-pod-network.438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0" Nov 12 17:43:58.530458 containerd[2129]: 2024-11-12 17:43:58.516 [INFO][5033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:43:58.530458 containerd[2129]: 2024-11-12 17:43:58.523 [INFO][5004] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Nov 12 17:43:58.531774 containerd[2129]: time="2024-11-12T17:43:58.531401113Z" level=info msg="TearDown network for sandbox \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\" successfully" Nov 12 17:43:58.531774 containerd[2129]: time="2024-11-12T17:43:58.531450961Z" level=info msg="StopPodSandbox for \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\" returns successfully" Nov 12 17:43:58.535799 containerd[2129]: time="2024-11-12T17:43:58.535381345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c67c9d5-bpdzx,Uid:7cf10bc1-4c55-4746-b2f6-5b92d051ebc0,Namespace:calico-apiserver,Attempt:1,}" Nov 12 17:43:59.003489 (udev-worker)[5149]: Network interface NamePolicy= disabled on kernel command line. 
Nov 12 17:43:59.007228 systemd-networkd[1687]: calicc40397f12d: Link UP Nov 12 17:43:59.008738 systemd-networkd[1687]: calicc40397f12d: Gained carrier Nov 12 17:43:59.061555 containerd[2129]: 2024-11-12 17:43:58.647 [INFO][5076] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0 calico-kube-controllers-f549c5549- calico-system 54f551c9-643f-46fd-bc59-e46d0d7f91ac 824 0 2024-11-12 17:43:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f549c5549 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-27-95 calico-kube-controllers-f549c5549-4bpts eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicc40397f12d [] []}} ContainerID="c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c" Namespace="calico-system" Pod="calico-kube-controllers-f549c5549-4bpts" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-" Nov 12 17:43:59.061555 containerd[2129]: 2024-11-12 17:43:58.648 [INFO][5076] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c" Namespace="calico-system" Pod="calico-kube-controllers-f549c5549-4bpts" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0" Nov 12 17:43:59.061555 containerd[2129]: 2024-11-12 17:43:58.808 [INFO][5122] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c" HandleID="k8s-pod-network.c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c" Workload="ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0" Nov 12 
17:43:59.061555 containerd[2129]: 2024-11-12 17:43:58.879 [INFO][5122] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c" HandleID="k8s-pod-network.c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c" Workload="ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000102000), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-95", "pod":"calico-kube-controllers-f549c5549-4bpts", "timestamp":"2024-11-12 17:43:58.808166354 +0000 UTC"}, Hostname:"ip-172-31-27-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:43:59.061555 containerd[2129]: 2024-11-12 17:43:58.879 [INFO][5122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:43:59.061555 containerd[2129]: 2024-11-12 17:43:58.879 [INFO][5122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:43:59.061555 containerd[2129]: 2024-11-12 17:43:58.879 [INFO][5122] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-95' Nov 12 17:43:59.061555 containerd[2129]: 2024-11-12 17:43:58.886 [INFO][5122] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c" host="ip-172-31-27-95" Nov 12 17:43:59.061555 containerd[2129]: 2024-11-12 17:43:58.899 [INFO][5122] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-95" Nov 12 17:43:59.061555 containerd[2129]: 2024-11-12 17:43:58.918 [INFO][5122] ipam/ipam.go 489: Trying affinity for 192.168.110.128/26 host="ip-172-31-27-95" Nov 12 17:43:59.061555 containerd[2129]: 2024-11-12 17:43:58.927 [INFO][5122] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.128/26 host="ip-172-31-27-95" Nov 12 17:43:59.061555 containerd[2129]: 2024-11-12 17:43:58.933 [INFO][5122] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.128/26 host="ip-172-31-27-95" Nov 12 17:43:59.061555 containerd[2129]: 2024-11-12 17:43:58.934 [INFO][5122] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c" host="ip-172-31-27-95" Nov 12 17:43:59.061555 containerd[2129]: 2024-11-12 17:43:58.940 [INFO][5122] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c Nov 12 17:43:59.061555 containerd[2129]: 2024-11-12 17:43:58.950 [INFO][5122] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c" host="ip-172-31-27-95" Nov 12 17:43:59.061555 containerd[2129]: 2024-11-12 17:43:58.971 [INFO][5122] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.129/26] block=192.168.110.128/26 
handle="k8s-pod-network.c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c" host="ip-172-31-27-95" Nov 12 17:43:59.061555 containerd[2129]: 2024-11-12 17:43:58.972 [INFO][5122] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.129/26] handle="k8s-pod-network.c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c" host="ip-172-31-27-95" Nov 12 17:43:59.061555 containerd[2129]: 2024-11-12 17:43:58.973 [INFO][5122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:43:59.061555 containerd[2129]: 2024-11-12 17:43:58.973 [INFO][5122] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.129/26] IPv6=[] ContainerID="c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c" HandleID="k8s-pod-network.c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c" Workload="ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0" Nov 12 17:43:59.063217 containerd[2129]: 2024-11-12 17:43:58.982 [INFO][5076] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c" Namespace="calico-system" Pod="calico-kube-controllers-f549c5549-4bpts" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0", GenerateName:"calico-kube-controllers-f549c5549-", Namespace:"calico-system", SelfLink:"", UID:"54f551c9-643f-46fd-bc59-e46d0d7f91ac", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f549c5549", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"", Pod:"calico-kube-controllers-f549c5549-4bpts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc40397f12d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:43:59.063217 containerd[2129]: 2024-11-12 17:43:58.983 [INFO][5076] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.129/32] ContainerID="c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c" Namespace="calico-system" Pod="calico-kube-controllers-f549c5549-4bpts" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0" Nov 12 17:43:59.063217 containerd[2129]: 2024-11-12 17:43:58.983 [INFO][5076] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc40397f12d ContainerID="c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c" Namespace="calico-system" Pod="calico-kube-controllers-f549c5549-4bpts" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0" Nov 12 17:43:59.063217 containerd[2129]: 2024-11-12 17:43:59.010 [INFO][5076] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c" Namespace="calico-system" Pod="calico-kube-controllers-f549c5549-4bpts" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0" Nov 12 
17:43:59.063217 containerd[2129]: 2024-11-12 17:43:59.015 [INFO][5076] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c" Namespace="calico-system" Pod="calico-kube-controllers-f549c5549-4bpts" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0", GenerateName:"calico-kube-controllers-f549c5549-", Namespace:"calico-system", SelfLink:"", UID:"54f551c9-643f-46fd-bc59-e46d0d7f91ac", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f549c5549", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c", Pod:"calico-kube-controllers-f549c5549-4bpts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc40397f12d", MAC:"62:dc:80:32:0f:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 
17:43:59.063217 containerd[2129]: 2024-11-12 17:43:59.052 [INFO][5076] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c" Namespace="calico-system" Pod="calico-kube-controllers-f549c5549-4bpts" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0" Nov 12 17:43:59.181117 systemd-networkd[1687]: cali9d03a0b6f23: Link UP Nov 12 17:43:59.185681 systemd-networkd[1687]: cali9d03a0b6f23: Gained carrier Nov 12 17:43:59.233928 containerd[2129]: 2024-11-12 17:43:58.618 [INFO][5067] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0 calico-apiserver-58c67c9d5- calico-apiserver 0ad34195-a82e-4064-b419-91cf3b5649a7 826 0 2024-11-12 17:43:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58c67c9d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-27-95 calico-apiserver-58c67c9d5-m2vdd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9d03a0b6f23 [] []}} ContainerID="eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2" Namespace="calico-apiserver" Pod="calico-apiserver-58c67c9d5-m2vdd" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-" Nov 12 17:43:59.233928 containerd[2129]: 2024-11-12 17:43:58.619 [INFO][5067] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2" Namespace="calico-apiserver" Pod="calico-apiserver-58c67c9d5-m2vdd" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0" Nov 12 17:43:59.233928 containerd[2129]: 2024-11-12 17:43:58.913 [INFO][5121] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2" HandleID="k8s-pod-network.eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0" Nov 12 17:43:59.233928 containerd[2129]: 2024-11-12 17:43:58.941 [INFO][5121] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2" HandleID="k8s-pod-network.eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400033b180), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-27-95", "pod":"calico-apiserver-58c67c9d5-m2vdd", "timestamp":"2024-11-12 17:43:58.913378707 +0000 UTC"}, Hostname:"ip-172-31-27-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:43:59.233928 containerd[2129]: 2024-11-12 17:43:58.941 [INFO][5121] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:43:59.233928 containerd[2129]: 2024-11-12 17:43:58.973 [INFO][5121] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:43:59.233928 containerd[2129]: 2024-11-12 17:43:58.973 [INFO][5121] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-95' Nov 12 17:43:59.233928 containerd[2129]: 2024-11-12 17:43:58.979 [INFO][5121] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2" host="ip-172-31-27-95" Nov 12 17:43:59.233928 containerd[2129]: 2024-11-12 17:43:59.003 [INFO][5121] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-95" Nov 12 17:43:59.233928 containerd[2129]: 2024-11-12 17:43:59.045 [INFO][5121] ipam/ipam.go 489: Trying affinity for 192.168.110.128/26 host="ip-172-31-27-95" Nov 12 17:43:59.233928 containerd[2129]: 2024-11-12 17:43:59.054 [INFO][5121] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.128/26 host="ip-172-31-27-95" Nov 12 17:43:59.233928 containerd[2129]: 2024-11-12 17:43:59.085 [INFO][5121] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.128/26 host="ip-172-31-27-95" Nov 12 17:43:59.233928 containerd[2129]: 2024-11-12 17:43:59.086 [INFO][5121] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2" host="ip-172-31-27-95" Nov 12 17:43:59.233928 containerd[2129]: 2024-11-12 17:43:59.100 [INFO][5121] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2 Nov 12 17:43:59.233928 containerd[2129]: 2024-11-12 17:43:59.119 [INFO][5121] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2" host="ip-172-31-27-95" Nov 12 17:43:59.233928 containerd[2129]: 2024-11-12 17:43:59.137 [INFO][5121] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.130/26] block=192.168.110.128/26 
handle="k8s-pod-network.eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2" host="ip-172-31-27-95" Nov 12 17:43:59.233928 containerd[2129]: 2024-11-12 17:43:59.138 [INFO][5121] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.130/26] handle="k8s-pod-network.eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2" host="ip-172-31-27-95" Nov 12 17:43:59.233928 containerd[2129]: 2024-11-12 17:43:59.138 [INFO][5121] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:43:59.233928 containerd[2129]: 2024-11-12 17:43:59.138 [INFO][5121] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.130/26] IPv6=[] ContainerID="eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2" HandleID="k8s-pod-network.eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0" Nov 12 17:43:59.235100 containerd[2129]: 2024-11-12 17:43:59.157 [INFO][5067] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2" Namespace="calico-apiserver" Pod="calico-apiserver-58c67c9d5-m2vdd" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0", GenerateName:"calico-apiserver-58c67c9d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ad34195-a82e-4064-b419-91cf3b5649a7", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58c67c9d5", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"", Pod:"calico-apiserver-58c67c9d5-m2vdd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d03a0b6f23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:43:59.235100 containerd[2129]: 2024-11-12 17:43:59.158 [INFO][5067] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.130/32] ContainerID="eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2" Namespace="calico-apiserver" Pod="calico-apiserver-58c67c9d5-m2vdd" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0" Nov 12 17:43:59.235100 containerd[2129]: 2024-11-12 17:43:59.158 [INFO][5067] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d03a0b6f23 ContainerID="eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2" Namespace="calico-apiserver" Pod="calico-apiserver-58c67c9d5-m2vdd" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0" Nov 12 17:43:59.235100 containerd[2129]: 2024-11-12 17:43:59.194 [INFO][5067] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2" Namespace="calico-apiserver" Pod="calico-apiserver-58c67c9d5-m2vdd" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0" Nov 12 17:43:59.235100 containerd[2129]: 2024-11-12 
17:43:59.199 [INFO][5067] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2" Namespace="calico-apiserver" Pod="calico-apiserver-58c67c9d5-m2vdd" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0", GenerateName:"calico-apiserver-58c67c9d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ad34195-a82e-4064-b419-91cf3b5649a7", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58c67c9d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2", Pod:"calico-apiserver-58c67c9d5-m2vdd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d03a0b6f23", MAC:"7a:3d:46:43:10:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:43:59.235100 containerd[2129]: 2024-11-12 17:43:59.225 [INFO][5067] cni-plugin/k8s.go 500: 
Wrote updated endpoint to datastore ContainerID="eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2" Namespace="calico-apiserver" Pod="calico-apiserver-58c67c9d5-m2vdd" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0" Nov 12 17:43:59.271865 containerd[2129]: time="2024-11-12T17:43:59.270080761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:43:59.275158 containerd[2129]: time="2024-11-12T17:43:59.272908489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:43:59.275158 containerd[2129]: time="2024-11-12T17:43:59.274251037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:59.281492 containerd[2129]: time="2024-11-12T17:43:59.281362189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:59.290184 systemd-networkd[1687]: calieec26932ad2: Link UP Nov 12 17:43:59.292799 systemd-networkd[1687]: calieec26932ad2: Gained carrier Nov 12 17:43:59.340814 containerd[2129]: 2024-11-12 17:43:58.731 [INFO][5110] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0 coredns-76f75df574- kube-system fe47042d-34e4-43bf-869d-d51013a31508 822 0 2024-11-12 17:43:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-27-95 coredns-76f75df574-7rgfz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calieec26932ad2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7" Namespace="kube-system" Pod="coredns-76f75df574-7rgfz" WorkloadEndpoint="ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-" Nov 12 17:43:59.340814 containerd[2129]: 2024-11-12 17:43:58.732 [INFO][5110] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7" Namespace="kube-system" Pod="coredns-76f75df574-7rgfz" WorkloadEndpoint="ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0" Nov 12 17:43:59.340814 containerd[2129]: 2024-11-12 17:43:59.056 [INFO][5135] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7" HandleID="k8s-pod-network.cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0" Nov 12 17:43:59.340814 containerd[2129]: 2024-11-12 17:43:59.105 [INFO][5135] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7" HandleID="k8s-pod-network.cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039d960), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-27-95", "pod":"coredns-76f75df574-7rgfz", "timestamp":"2024-11-12 17:43:59.056886299 +0000 UTC"}, Hostname:"ip-172-31-27-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:43:59.340814 containerd[2129]: 2024-11-12 17:43:59.105 [INFO][5135] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:43:59.340814 containerd[2129]: 2024-11-12 17:43:59.138 [INFO][5135] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:43:59.340814 containerd[2129]: 2024-11-12 17:43:59.140 [INFO][5135] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-95' Nov 12 17:43:59.340814 containerd[2129]: 2024-11-12 17:43:59.145 [INFO][5135] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7" host="ip-172-31-27-95" Nov 12 17:43:59.340814 containerd[2129]: 2024-11-12 17:43:59.161 [INFO][5135] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-95" Nov 12 17:43:59.340814 containerd[2129]: 2024-11-12 17:43:59.188 [INFO][5135] ipam/ipam.go 489: Trying affinity for 192.168.110.128/26 host="ip-172-31-27-95" Nov 12 17:43:59.340814 containerd[2129]: 2024-11-12 17:43:59.200 [INFO][5135] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.128/26 host="ip-172-31-27-95" Nov 12 17:43:59.340814 containerd[2129]: 2024-11-12 17:43:59.210 [INFO][5135] ipam/ipam.go 232: Affinity is confirmed and 
block has been loaded cidr=192.168.110.128/26 host="ip-172-31-27-95" Nov 12 17:43:59.340814 containerd[2129]: 2024-11-12 17:43:59.210 [INFO][5135] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7" host="ip-172-31-27-95" Nov 12 17:43:59.340814 containerd[2129]: 2024-11-12 17:43:59.214 [INFO][5135] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7 Nov 12 17:43:59.340814 containerd[2129]: 2024-11-12 17:43:59.231 [INFO][5135] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7" host="ip-172-31-27-95" Nov 12 17:43:59.340814 containerd[2129]: 2024-11-12 17:43:59.253 [INFO][5135] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.131/26] block=192.168.110.128/26 handle="k8s-pod-network.cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7" host="ip-172-31-27-95" Nov 12 17:43:59.340814 containerd[2129]: 2024-11-12 17:43:59.253 [INFO][5135] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.131/26] handle="k8s-pod-network.cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7" host="ip-172-31-27-95" Nov 12 17:43:59.340814 containerd[2129]: 2024-11-12 17:43:59.255 [INFO][5135] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 17:43:59.340814 containerd[2129]: 2024-11-12 17:43:59.255 [INFO][5135] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.131/26] IPv6=[] ContainerID="cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7" HandleID="k8s-pod-network.cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0" Nov 12 17:43:59.342153 containerd[2129]: 2024-11-12 17:43:59.269 [INFO][5110] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7" Namespace="kube-system" Pod="coredns-76f75df574-7rgfz" WorkloadEndpoint="ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fe47042d-34e4-43bf-869d-d51013a31508", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"", Pod:"coredns-76f75df574-7rgfz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieec26932ad2", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:43:59.342153 containerd[2129]: 2024-11-12 17:43:59.274 [INFO][5110] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.131/32] ContainerID="cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7" Namespace="kube-system" Pod="coredns-76f75df574-7rgfz" WorkloadEndpoint="ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0" Nov 12 17:43:59.342153 containerd[2129]: 2024-11-12 17:43:59.278 [INFO][5110] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieec26932ad2 ContainerID="cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7" Namespace="kube-system" Pod="coredns-76f75df574-7rgfz" WorkloadEndpoint="ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0" Nov 12 17:43:59.342153 containerd[2129]: 2024-11-12 17:43:59.293 [INFO][5110] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7" Namespace="kube-system" Pod="coredns-76f75df574-7rgfz" WorkloadEndpoint="ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0" Nov 12 17:43:59.342153 containerd[2129]: 2024-11-12 17:43:59.297 [INFO][5110] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7" Namespace="kube-system" Pod="coredns-76f75df574-7rgfz" WorkloadEndpoint="ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fe47042d-34e4-43bf-869d-d51013a31508", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7", Pod:"coredns-76f75df574-7rgfz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieec26932ad2", MAC:"16:1f:e0:78:ca:f3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:43:59.342153 containerd[2129]: 2024-11-12 17:43:59.330 [INFO][5110] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7" Namespace="kube-system" Pod="coredns-76f75df574-7rgfz" WorkloadEndpoint="ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0" Nov 12 17:43:59.409192 systemd[1]: run-netns-cni\x2dbe6750c5\x2d678e\x2d6b88\x2d001c\x2db9c74a56472b.mount: Deactivated successfully. Nov 12 17:43:59.409558 systemd[1]: run-netns-cni\x2da483755b\x2d4cff\x2dc63b\x2d6937\x2d3a32e0318831.mount: Deactivated successfully. Nov 12 17:43:59.409774 systemd[1]: run-netns-cni\x2d30a592e8\x2dece2\x2df1bd\x2d2ef4\x2dda59efd58419.mount: Deactivated successfully. Nov 12 17:43:59.444843 containerd[2129]: time="2024-11-12T17:43:59.440820853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:43:59.444843 containerd[2129]: time="2024-11-12T17:43:59.440926453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:43:59.444843 containerd[2129]: time="2024-11-12T17:43:59.440953261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:59.444843 containerd[2129]: time="2024-11-12T17:43:59.441164089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:59.450183 systemd-networkd[1687]: calif14d8d6e697: Link UP Nov 12 17:43:59.452974 systemd-networkd[1687]: calif14d8d6e697: Gained carrier Nov 12 17:43:59.521372 containerd[2129]: 2024-11-12 17:43:58.831 [INFO][5099] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0 calico-apiserver-58c67c9d5- calico-apiserver 7cf10bc1-4c55-4746-b2f6-5b92d051ebc0 823 0 2024-11-12 17:43:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58c67c9d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-27-95 calico-apiserver-58c67c9d5-bpdzx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif14d8d6e697 [] []}} ContainerID="e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1" Namespace="calico-apiserver" Pod="calico-apiserver-58c67c9d5-bpdzx" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-" Nov 12 17:43:59.521372 containerd[2129]: 2024-11-12 17:43:58.834 [INFO][5099] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1" Namespace="calico-apiserver" Pod="calico-apiserver-58c67c9d5-bpdzx" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0" Nov 12 17:43:59.521372 containerd[2129]: 2024-11-12 17:43:59.048 [INFO][5139] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1" HandleID="k8s-pod-network.e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0" Nov 12 
17:43:59.521372 containerd[2129]: 2024-11-12 17:43:59.107 [INFO][5139] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1" HandleID="k8s-pod-network.e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011bc20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-27-95", "pod":"calico-apiserver-58c67c9d5-bpdzx", "timestamp":"2024-11-12 17:43:59.048243491 +0000 UTC"}, Hostname:"ip-172-31-27-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:43:59.521372 containerd[2129]: 2024-11-12 17:43:59.110 [INFO][5139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:43:59.521372 containerd[2129]: 2024-11-12 17:43:59.254 [INFO][5139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:43:59.521372 containerd[2129]: 2024-11-12 17:43:59.254 [INFO][5139] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-95' Nov 12 17:43:59.521372 containerd[2129]: 2024-11-12 17:43:59.262 [INFO][5139] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1" host="ip-172-31-27-95" Nov 12 17:43:59.521372 containerd[2129]: 2024-11-12 17:43:59.279 [INFO][5139] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-95" Nov 12 17:43:59.521372 containerd[2129]: 2024-11-12 17:43:59.312 [INFO][5139] ipam/ipam.go 489: Trying affinity for 192.168.110.128/26 host="ip-172-31-27-95" Nov 12 17:43:59.521372 containerd[2129]: 2024-11-12 17:43:59.332 [INFO][5139] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.128/26 host="ip-172-31-27-95" Nov 12 17:43:59.521372 containerd[2129]: 2024-11-12 17:43:59.343 [INFO][5139] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.128/26 host="ip-172-31-27-95" Nov 12 17:43:59.521372 containerd[2129]: 2024-11-12 17:43:59.347 [INFO][5139] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1" host="ip-172-31-27-95" Nov 12 17:43:59.521372 containerd[2129]: 2024-11-12 17:43:59.355 [INFO][5139] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1 Nov 12 17:43:59.521372 containerd[2129]: 2024-11-12 17:43:59.370 [INFO][5139] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1" host="ip-172-31-27-95" Nov 12 17:43:59.521372 containerd[2129]: 2024-11-12 17:43:59.396 [INFO][5139] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.132/26] block=192.168.110.128/26 
handle="k8s-pod-network.e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1" host="ip-172-31-27-95" Nov 12 17:43:59.521372 containerd[2129]: 2024-11-12 17:43:59.396 [INFO][5139] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.132/26] handle="k8s-pod-network.e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1" host="ip-172-31-27-95" Nov 12 17:43:59.521372 containerd[2129]: 2024-11-12 17:43:59.396 [INFO][5139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:43:59.521372 containerd[2129]: 2024-11-12 17:43:59.396 [INFO][5139] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.132/26] IPv6=[] ContainerID="e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1" HandleID="k8s-pod-network.e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0" Nov 12 17:43:59.523150 containerd[2129]: 2024-11-12 17:43:59.423 [INFO][5099] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1" Namespace="calico-apiserver" Pod="calico-apiserver-58c67c9d5-bpdzx" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0", GenerateName:"calico-apiserver-58c67c9d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cf10bc1-4c55-4746-b2f6-5b92d051ebc0", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58c67c9d5", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"", Pod:"calico-apiserver-58c67c9d5-bpdzx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif14d8d6e697", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:43:59.523150 containerd[2129]: 2024-11-12 17:43:59.425 [INFO][5099] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.132/32] ContainerID="e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1" Namespace="calico-apiserver" Pod="calico-apiserver-58c67c9d5-bpdzx" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0" Nov 12 17:43:59.523150 containerd[2129]: 2024-11-12 17:43:59.426 [INFO][5099] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif14d8d6e697 ContainerID="e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1" Namespace="calico-apiserver" Pod="calico-apiserver-58c67c9d5-bpdzx" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0" Nov 12 17:43:59.523150 containerd[2129]: 2024-11-12 17:43:59.454 [INFO][5099] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1" Namespace="calico-apiserver" Pod="calico-apiserver-58c67c9d5-bpdzx" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0" Nov 12 17:43:59.523150 containerd[2129]: 2024-11-12 
17:43:59.466 [INFO][5099] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1" Namespace="calico-apiserver" Pod="calico-apiserver-58c67c9d5-bpdzx" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0", GenerateName:"calico-apiserver-58c67c9d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cf10bc1-4c55-4746-b2f6-5b92d051ebc0", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58c67c9d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1", Pod:"calico-apiserver-58c67c9d5-bpdzx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif14d8d6e697", MAC:"32:fc:cf:76:5b:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:43:59.523150 containerd[2129]: 2024-11-12 17:43:59.509 [INFO][5099] cni-plugin/k8s.go 500: 
Wrote updated endpoint to datastore ContainerID="e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1" Namespace="calico-apiserver" Pod="calico-apiserver-58c67c9d5-bpdzx" WorkloadEndpoint="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0" Nov 12 17:43:59.605811 systemd-networkd[1687]: caliad6b64c2dad: Link UP Nov 12 17:43:59.608733 systemd-networkd[1687]: caliad6b64c2dad: Gained carrier Nov 12 17:43:59.639577 containerd[2129]: time="2024-11-12T17:43:59.639018782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:43:59.640147 containerd[2129]: time="2024-11-12T17:43:59.639499250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:43:59.640836 containerd[2129]: time="2024-11-12T17:43:59.640393982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:59.645499 containerd[2129]: time="2024-11-12T17:43:59.645213314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:59.686055 containerd[2129]: time="2024-11-12T17:43:59.685490199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f549c5549-4bpts,Uid:54f551c9-643f-46fd-bc59-e46d0d7f91ac,Namespace:calico-system,Attempt:1,} returns sandbox id \"c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c\"" Nov 12 17:43:59.690668 containerd[2129]: 2024-11-12 17:43:58.839 [INFO][5087] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0 coredns-76f75df574- kube-system bc72a84b-bc38-4114-9563-0dae6b25af79 825 0 2024-11-12 17:43:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-27-95 coredns-76f75df574-s5hfm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliad6b64c2dad [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044" Namespace="kube-system" Pod="coredns-76f75df574-s5hfm" WorkloadEndpoint="ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-" Nov 12 17:43:59.690668 containerd[2129]: 2024-11-12 17:43:58.839 [INFO][5087] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044" Namespace="kube-system" Pod="coredns-76f75df574-s5hfm" WorkloadEndpoint="ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0" Nov 12 17:43:59.690668 containerd[2129]: 2024-11-12 17:43:59.155 [INFO][5145] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044" HandleID="k8s-pod-network.ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044" 
Workload="ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0" Nov 12 17:43:59.690668 containerd[2129]: 2024-11-12 17:43:59.218 [INFO][5145] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044" HandleID="k8s-pod-network.ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebe00), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-27-95", "pod":"coredns-76f75df574-s5hfm", "timestamp":"2024-11-12 17:43:59.15445884 +0000 UTC"}, Hostname:"ip-172-31-27-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:43:59.690668 containerd[2129]: 2024-11-12 17:43:59.218 [INFO][5145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:43:59.690668 containerd[2129]: 2024-11-12 17:43:59.396 [INFO][5145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:43:59.690668 containerd[2129]: 2024-11-12 17:43:59.397 [INFO][5145] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-95' Nov 12 17:43:59.690668 containerd[2129]: 2024-11-12 17:43:59.407 [INFO][5145] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044" host="ip-172-31-27-95" Nov 12 17:43:59.690668 containerd[2129]: 2024-11-12 17:43:59.445 [INFO][5145] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-95" Nov 12 17:43:59.690668 containerd[2129]: 2024-11-12 17:43:59.471 [INFO][5145] ipam/ipam.go 489: Trying affinity for 192.168.110.128/26 host="ip-172-31-27-95" Nov 12 17:43:59.690668 containerd[2129]: 2024-11-12 17:43:59.478 [INFO][5145] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.128/26 host="ip-172-31-27-95" Nov 12 17:43:59.690668 containerd[2129]: 2024-11-12 17:43:59.485 [INFO][5145] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.128/26 host="ip-172-31-27-95" Nov 12 17:43:59.690668 containerd[2129]: 2024-11-12 17:43:59.486 [INFO][5145] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044" host="ip-172-31-27-95" Nov 12 17:43:59.690668 containerd[2129]: 2024-11-12 17:43:59.507 [INFO][5145] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044 Nov 12 17:43:59.690668 containerd[2129]: 2024-11-12 17:43:59.524 [INFO][5145] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044" host="ip-172-31-27-95" Nov 12 17:43:59.690668 containerd[2129]: 2024-11-12 17:43:59.541 [INFO][5145] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.133/26] block=192.168.110.128/26 
handle="k8s-pod-network.ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044" host="ip-172-31-27-95" Nov 12 17:43:59.690668 containerd[2129]: 2024-11-12 17:43:59.542 [INFO][5145] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.133/26] handle="k8s-pod-network.ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044" host="ip-172-31-27-95" Nov 12 17:43:59.690668 containerd[2129]: 2024-11-12 17:43:59.544 [INFO][5145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:43:59.690668 containerd[2129]: 2024-11-12 17:43:59.544 [INFO][5145] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.133/26] IPv6=[] ContainerID="ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044" HandleID="k8s-pod-network.ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0" Nov 12 17:43:59.693049 containerd[2129]: 2024-11-12 17:43:59.561 [INFO][5087] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044" Namespace="kube-system" Pod="coredns-76f75df574-s5hfm" WorkloadEndpoint="ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"bc72a84b-bc38-4114-9563-0dae6b25af79", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"", Pod:"coredns-76f75df574-s5hfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad6b64c2dad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:43:59.693049 containerd[2129]: 2024-11-12 17:43:59.561 [INFO][5087] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.133/32] ContainerID="ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044" Namespace="kube-system" Pod="coredns-76f75df574-s5hfm" WorkloadEndpoint="ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0" Nov 12 17:43:59.693049 containerd[2129]: 2024-11-12 17:43:59.562 [INFO][5087] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad6b64c2dad ContainerID="ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044" Namespace="kube-system" Pod="coredns-76f75df574-s5hfm" WorkloadEndpoint="ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0" Nov 12 17:43:59.693049 containerd[2129]: 2024-11-12 17:43:59.606 [INFO][5087] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044" Namespace="kube-system" Pod="coredns-76f75df574-s5hfm" 
WorkloadEndpoint="ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0" Nov 12 17:43:59.693049 containerd[2129]: 2024-11-12 17:43:59.610 [INFO][5087] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044" Namespace="kube-system" Pod="coredns-76f75df574-s5hfm" WorkloadEndpoint="ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"bc72a84b-bc38-4114-9563-0dae6b25af79", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044", Pod:"coredns-76f75df574-s5hfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad6b64c2dad", MAC:"76:c5:cc:e0:2c:2e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:43:59.693049 containerd[2129]: 2024-11-12 17:43:59.640 [INFO][5087] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044" Namespace="kube-system" Pod="coredns-76f75df574-s5hfm" WorkloadEndpoint="ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0" Nov 12 17:43:59.711425 containerd[2129]: time="2024-11-12T17:43:59.711323211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 17:43:59.768822 containerd[2129]: time="2024-11-12T17:43:59.768119751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:43:59.768822 containerd[2129]: time="2024-11-12T17:43:59.768657651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:43:59.768822 containerd[2129]: time="2024-11-12T17:43:59.768717363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:59.781224 containerd[2129]: time="2024-11-12T17:43:59.777037191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:59.843429 containerd[2129]: time="2024-11-12T17:43:59.842061759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c67c9d5-m2vdd,Uid:0ad34195-a82e-4064-b419-91cf3b5649a7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2\"" Nov 12 17:43:59.876809 containerd[2129]: time="2024-11-12T17:43:59.873999820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:43:59.876809 containerd[2129]: time="2024-11-12T17:43:59.876390016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:43:59.879156 containerd[2129]: time="2024-11-12T17:43:59.878769808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:59.880621 containerd[2129]: time="2024-11-12T17:43:59.879757696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:43:59.892752 containerd[2129]: time="2024-11-12T17:43:59.892280956Z" level=info msg="StopPodSandbox for \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\"" Nov 12 17:43:59.968543 containerd[2129]: time="2024-11-12T17:43:59.965655832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7rgfz,Uid:fe47042d-34e4-43bf-869d-d51013a31508,Namespace:kube-system,Attempt:1,} returns sandbox id \"cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7\"" Nov 12 17:43:59.982619 containerd[2129]: time="2024-11-12T17:43:59.981342004Z" level=info msg="CreateContainer within sandbox \"cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 17:44:00.049445 containerd[2129]: time="2024-11-12T17:44:00.049366380Z" level=info msg="CreateContainer within sandbox \"cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"20b22d1e44fea67856cd89c03b3117c6c9b79cde1aa5dbbcef56a4cc735a2deb\"" Nov 12 17:44:00.056702 containerd[2129]: time="2024-11-12T17:44:00.054227004Z" level=info msg="StartContainer for \"20b22d1e44fea67856cd89c03b3117c6c9b79cde1aa5dbbcef56a4cc735a2deb\"" Nov 12 17:44:00.133145 containerd[2129]: time="2024-11-12T17:44:00.132603481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58c67c9d5-bpdzx,Uid:7cf10bc1-4c55-4746-b2f6-5b92d051ebc0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1\"" Nov 12 17:44:00.178888 containerd[2129]: time="2024-11-12T17:44:00.178821805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s5hfm,Uid:bc72a84b-bc38-4114-9563-0dae6b25af79,Namespace:kube-system,Attempt:1,} returns sandbox id 
\"ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044\"" Nov 12 17:44:00.191975 containerd[2129]: time="2024-11-12T17:44:00.191910205Z" level=info msg="CreateContainer within sandbox \"ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 17:44:00.227974 containerd[2129]: time="2024-11-12T17:44:00.226994821Z" level=info msg="CreateContainer within sandbox \"ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e58c4855027e643a5b673dc4e1477f52a364ac8426828876be17bb01f492cca2\"" Nov 12 17:44:00.230500 containerd[2129]: time="2024-11-12T17:44:00.230435041Z" level=info msg="StartContainer for \"e58c4855027e643a5b673dc4e1477f52a364ac8426828876be17bb01f492cca2\"" Nov 12 17:44:00.281279 systemd-networkd[1687]: cali9d03a0b6f23: Gained IPv6LL Nov 12 17:44:00.327613 containerd[2129]: time="2024-11-12T17:44:00.327294074Z" level=info msg="StartContainer for \"20b22d1e44fea67856cd89c03b3117c6c9b79cde1aa5dbbcef56a4cc735a2deb\" returns successfully" Nov 12 17:44:00.331582 containerd[2129]: 2024-11-12 17:44:00.140 [INFO][5420] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Nov 12 17:44:00.331582 containerd[2129]: 2024-11-12 17:44:00.142 [INFO][5420] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" iface="eth0" netns="/var/run/netns/cni-a756972f-b507-e332-426a-ace07aa8675e" Nov 12 17:44:00.331582 containerd[2129]: 2024-11-12 17:44:00.144 [INFO][5420] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" iface="eth0" netns="/var/run/netns/cni-a756972f-b507-e332-426a-ace07aa8675e" Nov 12 17:44:00.331582 containerd[2129]: 2024-11-12 17:44:00.147 [INFO][5420] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" iface="eth0" netns="/var/run/netns/cni-a756972f-b507-e332-426a-ace07aa8675e" Nov 12 17:44:00.331582 containerd[2129]: 2024-11-12 17:44:00.148 [INFO][5420] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Nov 12 17:44:00.331582 containerd[2129]: 2024-11-12 17:44:00.148 [INFO][5420] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Nov 12 17:44:00.331582 containerd[2129]: 2024-11-12 17:44:00.267 [INFO][5460] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" HandleID="k8s-pod-network.554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Workload="ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0" Nov 12 17:44:00.331582 containerd[2129]: 2024-11-12 17:44:00.267 [INFO][5460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:44:00.331582 containerd[2129]: 2024-11-12 17:44:00.267 [INFO][5460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:44:00.331582 containerd[2129]: 2024-11-12 17:44:00.295 [WARNING][5460] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" HandleID="k8s-pod-network.554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Workload="ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0" Nov 12 17:44:00.331582 containerd[2129]: 2024-11-12 17:44:00.296 [INFO][5460] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" HandleID="k8s-pod-network.554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Workload="ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0" Nov 12 17:44:00.331582 containerd[2129]: 2024-11-12 17:44:00.299 [INFO][5460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:44:00.331582 containerd[2129]: 2024-11-12 17:44:00.310 [INFO][5420] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Nov 12 17:44:00.335261 containerd[2129]: time="2024-11-12T17:44:00.333232706Z" level=info msg="TearDown network for sandbox \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\" successfully" Nov 12 17:44:00.335261 containerd[2129]: time="2024-11-12T17:44:00.333294410Z" level=info msg="StopPodSandbox for \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\" returns successfully" Nov 12 17:44:00.365151 containerd[2129]: time="2024-11-12T17:44:00.363453158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9dq4p,Uid:f12c29ab-8a74-4cf9-a191-0b1413424edc,Namespace:calico-system,Attempt:1,}" Nov 12 17:44:00.409338 systemd[1]: run-netns-cni\x2da756972f\x2db507\x2de332\x2d426a\x2dace07aa8675e.mount: Deactivated successfully. 
Nov 12 17:44:00.530949 containerd[2129]: time="2024-11-12T17:44:00.530648643Z" level=info msg="StartContainer for \"e58c4855027e643a5b673dc4e1477f52a364ac8426828876be17bb01f492cca2\" returns successfully" Nov 12 17:44:00.793391 systemd-networkd[1687]: calicc40397f12d: Gained IPv6LL Nov 12 17:44:00.797732 systemd-networkd[1687]: cali8bf1556fdad: Link UP Nov 12 17:44:00.801184 systemd-networkd[1687]: cali8bf1556fdad: Gained carrier Nov 12 17:44:00.839720 containerd[2129]: 2024-11-12 17:44:00.642 [INFO][5523] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0 csi-node-driver- calico-system f12c29ab-8a74-4cf9-a191-0b1413424edc 855 0 2024-11-12 17:43:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:64dd8495dc k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-27-95 csi-node-driver-9dq4p eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8bf1556fdad [] []}} ContainerID="6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700" Namespace="calico-system" Pod="csi-node-driver-9dq4p" WorkloadEndpoint="ip--172--31--27--95-k8s-csi--node--driver--9dq4p-" Nov 12 17:44:00.839720 containerd[2129]: 2024-11-12 17:44:00.642 [INFO][5523] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700" Namespace="calico-system" Pod="csi-node-driver-9dq4p" WorkloadEndpoint="ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0" Nov 12 17:44:00.839720 containerd[2129]: 2024-11-12 17:44:00.701 [INFO][5549] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700" 
HandleID="k8s-pod-network.6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700" Workload="ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0" Nov 12 17:44:00.839720 containerd[2129]: 2024-11-12 17:44:00.723 [INFO][5549] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700" HandleID="k8s-pod-network.6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700" Workload="ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028d920), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-95", "pod":"csi-node-driver-9dq4p", "timestamp":"2024-11-12 17:44:00.700984708 +0000 UTC"}, Hostname:"ip-172-31-27-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:44:00.839720 containerd[2129]: 2024-11-12 17:44:00.723 [INFO][5549] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:44:00.839720 containerd[2129]: 2024-11-12 17:44:00.723 [INFO][5549] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:44:00.839720 containerd[2129]: 2024-11-12 17:44:00.723 [INFO][5549] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-95' Nov 12 17:44:00.839720 containerd[2129]: 2024-11-12 17:44:00.726 [INFO][5549] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700" host="ip-172-31-27-95" Nov 12 17:44:00.839720 containerd[2129]: 2024-11-12 17:44:00.734 [INFO][5549] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-95" Nov 12 17:44:00.839720 containerd[2129]: 2024-11-12 17:44:00.741 [INFO][5549] ipam/ipam.go 489: Trying affinity for 192.168.110.128/26 host="ip-172-31-27-95" Nov 12 17:44:00.839720 containerd[2129]: 2024-11-12 17:44:00.744 [INFO][5549] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.128/26 host="ip-172-31-27-95" Nov 12 17:44:00.839720 containerd[2129]: 2024-11-12 17:44:00.752 [INFO][5549] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.128/26 host="ip-172-31-27-95" Nov 12 17:44:00.839720 containerd[2129]: 2024-11-12 17:44:00.752 [INFO][5549] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700" host="ip-172-31-27-95" Nov 12 17:44:00.839720 containerd[2129]: 2024-11-12 17:44:00.757 [INFO][5549] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700 Nov 12 17:44:00.839720 containerd[2129]: 2024-11-12 17:44:00.764 [INFO][5549] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700" host="ip-172-31-27-95" Nov 12 17:44:00.839720 containerd[2129]: 2024-11-12 17:44:00.785 [INFO][5549] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.134/26] block=192.168.110.128/26 
handle="k8s-pod-network.6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700" host="ip-172-31-27-95" Nov 12 17:44:00.839720 containerd[2129]: 2024-11-12 17:44:00.785 [INFO][5549] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.134/26] handle="k8s-pod-network.6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700" host="ip-172-31-27-95" Nov 12 17:44:00.839720 containerd[2129]: 2024-11-12 17:44:00.785 [INFO][5549] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:44:00.839720 containerd[2129]: 2024-11-12 17:44:00.785 [INFO][5549] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.134/26] IPv6=[] ContainerID="6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700" HandleID="k8s-pod-network.6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700" Workload="ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0" Nov 12 17:44:00.843290 containerd[2129]: 2024-11-12 17:44:00.790 [INFO][5523] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700" Namespace="calico-system" Pod="csi-node-driver-9dq4p" WorkloadEndpoint="ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f12c29ab-8a74-4cf9-a191-0b1413424edc", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"", Pod:"csi-node-driver-9dq4p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8bf1556fdad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:44:00.843290 containerd[2129]: 2024-11-12 17:44:00.790 [INFO][5523] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.134/32] ContainerID="6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700" Namespace="calico-system" Pod="csi-node-driver-9dq4p" WorkloadEndpoint="ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0" Nov 12 17:44:00.843290 containerd[2129]: 2024-11-12 17:44:00.790 [INFO][5523] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8bf1556fdad ContainerID="6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700" Namespace="calico-system" Pod="csi-node-driver-9dq4p" WorkloadEndpoint="ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0" Nov 12 17:44:00.843290 containerd[2129]: 2024-11-12 17:44:00.797 [INFO][5523] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700" Namespace="calico-system" Pod="csi-node-driver-9dq4p" WorkloadEndpoint="ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0" Nov 12 17:44:00.843290 containerd[2129]: 2024-11-12 17:44:00.798 [INFO][5523] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700" Namespace="calico-system" Pod="csi-node-driver-9dq4p" WorkloadEndpoint="ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f12c29ab-8a74-4cf9-a191-0b1413424edc", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700", Pod:"csi-node-driver-9dq4p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8bf1556fdad", MAC:"52:e3:c8:f8:c4:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:44:00.843290 containerd[2129]: 2024-11-12 17:44:00.830 [INFO][5523] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700" Namespace="calico-system" 
Pod="csi-node-driver-9dq4p" WorkloadEndpoint="ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0" Nov 12 17:44:00.857191 systemd-networkd[1687]: calieec26932ad2: Gained IPv6LL Nov 12 17:44:00.885362 containerd[2129]: time="2024-11-12T17:44:00.885122045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:44:00.886487 containerd[2129]: time="2024-11-12T17:44:00.886363613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:44:00.886487 containerd[2129]: time="2024-11-12T17:44:00.886431365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:44:00.886741 containerd[2129]: time="2024-11-12T17:44:00.886677845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:44:00.933112 systemd-networkd[1687]: calif14d8d6e697: Gained IPv6LL Nov 12 17:44:01.016107 systemd[1]: Started sshd@8-172.31.27.95:22-139.178.89.65:50380.service - OpenSSH per-connection server daemon (139.178.89.65:50380). Nov 12 17:44:01.023727 containerd[2129]: time="2024-11-12T17:44:01.023637709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9dq4p,Uid:f12c29ab-8a74-4cf9-a191-0b1413424edc,Namespace:calico-system,Attempt:1,} returns sandbox id \"6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700\"" Nov 12 17:44:01.229361 sshd[5608]: Accepted publickey for core from 139.178.89.65 port 50380 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:44:01.234840 sshd[5608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:44:01.248315 systemd-logind[2095]: New session 9 of user core. Nov 12 17:44:01.254079 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 12 17:44:01.333552 kubelet[3556]: I1112 17:44:01.332265 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-s5hfm" podStartSLOduration=36.332201379 podStartE2EDuration="36.332201379s" podCreationTimestamp="2024-11-12 17:43:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:44:01.331095459 +0000 UTC m=+51.665325330" watchObservedRunningTime="2024-11-12 17:44:01.332201379 +0000 UTC m=+51.666431310" Nov 12 17:44:01.386595 kubelet[3556]: I1112 17:44:01.382049 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-7rgfz" podStartSLOduration=36.381986439 podStartE2EDuration="36.381986439s" podCreationTimestamp="2024-11-12 17:43:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:44:01.378540531 +0000 UTC m=+51.712770606" watchObservedRunningTime="2024-11-12 17:44:01.381986439 +0000 UTC m=+51.716216322" Nov 12 17:44:01.432948 systemd-networkd[1687]: caliad6b64c2dad: Gained IPv6LL Nov 12 17:44:01.667193 sshd[5608]: pam_unix(sshd:session): session closed for user core Nov 12 17:44:01.679618 systemd-logind[2095]: Session 9 logged out. Waiting for processes to exit. Nov 12 17:44:01.681405 systemd[1]: sshd@8-172.31.27.95:22-139.178.89.65:50380.service: Deactivated successfully. Nov 12 17:44:01.690501 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 17:44:01.694981 systemd-logind[2095]: Removed session 9. 
Nov 12 17:44:02.279640 containerd[2129]: time="2024-11-12T17:44:02.279564855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:44:02.281558 containerd[2129]: time="2024-11-12T17:44:02.281295723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=31961371" Nov 12 17:44:02.282718 containerd[2129]: time="2024-11-12T17:44:02.282631695Z" level=info msg="ImageCreate event name:\"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:44:02.286669 containerd[2129]: time="2024-11-12T17:44:02.286561959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:44:02.288582 containerd[2129]: time="2024-11-12T17:44:02.288447928Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"33330975\" in 2.576527069s" Nov 12 17:44:02.289224 containerd[2129]: time="2024-11-12T17:44:02.288504400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\"" Nov 12 17:44:02.325692 containerd[2129]: time="2024-11-12T17:44:02.325630540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 17:44:02.388614 containerd[2129]: time="2024-11-12T17:44:02.387802072Z" level=info msg="CreateContainer within sandbox 
\"c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 17:44:02.422838 containerd[2129]: time="2024-11-12T17:44:02.422764300Z" level=info msg="CreateContainer within sandbox \"c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"585f689732225701e91d9e6b5160f4c17f3e59ff6986fc2bbc55802b1bfd06f3\"" Nov 12 17:44:02.426394 containerd[2129]: time="2024-11-12T17:44:02.426325816Z" level=info msg="StartContainer for \"585f689732225701e91d9e6b5160f4c17f3e59ff6986fc2bbc55802b1bfd06f3\"" Nov 12 17:44:02.618561 containerd[2129]: time="2024-11-12T17:44:02.616718693Z" level=info msg="StartContainer for \"585f689732225701e91d9e6b5160f4c17f3e59ff6986fc2bbc55802b1bfd06f3\" returns successfully" Nov 12 17:44:02.777809 systemd-networkd[1687]: cali8bf1556fdad: Gained IPv6LL Nov 12 17:44:03.411423 systemd[1]: run-containerd-runc-k8s.io-585f689732225701e91d9e6b5160f4c17f3e59ff6986fc2bbc55802b1bfd06f3-runc.vU2DvF.mount: Deactivated successfully. 
Nov 12 17:44:03.496120 kubelet[3556]: I1112 17:44:03.494988 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-f549c5549-4bpts" podStartSLOduration=26.906779044 podStartE2EDuration="29.494899589s" podCreationTimestamp="2024-11-12 17:43:34 +0000 UTC" firstStartedPulling="2024-11-12 17:43:59.701367219 +0000 UTC m=+50.035597078" lastFinishedPulling="2024-11-12 17:44:02.28948774 +0000 UTC m=+52.623717623" observedRunningTime="2024-11-12 17:44:03.400666169 +0000 UTC m=+53.734896028" watchObservedRunningTime="2024-11-12 17:44:03.494899589 +0000 UTC m=+53.829129448" Nov 12 17:44:04.743561 containerd[2129]: time="2024-11-12T17:44:04.742729040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=39277239" Nov 12 17:44:04.747368 containerd[2129]: time="2024-11-12T17:44:04.747208376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:44:04.760918 containerd[2129]: time="2024-11-12T17:44:04.760864112Z" level=info msg="ImageCreate event name:\"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:44:04.763023 containerd[2129]: time="2024-11-12T17:44:04.762059252Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"40646891\" in 2.436358488s" Nov 12 17:44:04.763023 containerd[2129]: time="2024-11-12T17:44:04.762138380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference 
\"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\"" Nov 12 17:44:04.766571 containerd[2129]: time="2024-11-12T17:44:04.765936968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 17:44:04.767494 containerd[2129]: time="2024-11-12T17:44:04.767429024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:44:04.774734 containerd[2129]: time="2024-11-12T17:44:04.774666308Z" level=info msg="CreateContainer within sandbox \"eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 17:44:04.814178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3371821455.mount: Deactivated successfully. Nov 12 17:44:04.819568 containerd[2129]: time="2024-11-12T17:44:04.819466064Z" level=info msg="CreateContainer within sandbox \"eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cb71702f89f13ba0a14aef4fd1e915fc9304136f68106ca6086a8fd23d806973\"" Nov 12 17:44:04.821359 containerd[2129]: time="2024-11-12T17:44:04.821119172Z" level=info msg="StartContainer for \"cb71702f89f13ba0a14aef4fd1e915fc9304136f68106ca6086a8fd23d806973\"" Nov 12 17:44:04.904189 systemd[1]: run-containerd-runc-k8s.io-cb71702f89f13ba0a14aef4fd1e915fc9304136f68106ca6086a8fd23d806973-runc.k8vDJt.mount: Deactivated successfully. 
Nov 12 17:44:05.066448 ntpd[2080]: Listen normally on 6 vxlan.calico 192.168.110.128:123 Nov 12 17:44:05.069796 ntpd[2080]: 12 Nov 17:44:05 ntpd[2080]: Listen normally on 6 vxlan.calico 192.168.110.128:123 Nov 12 17:44:05.069796 ntpd[2080]: 12 Nov 17:44:05 ntpd[2080]: Listen normally on 7 vxlan.calico [fe80::6406:17ff:fe64:a9e6%4]:123 Nov 12 17:44:05.069796 ntpd[2080]: 12 Nov 17:44:05 ntpd[2080]: Listen normally on 8 calicc40397f12d [fe80::ecee:eeff:feee:eeee%7]:123 Nov 12 17:44:05.069796 ntpd[2080]: 12 Nov 17:44:05 ntpd[2080]: Listen normally on 9 cali9d03a0b6f23 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 12 17:44:05.069796 ntpd[2080]: 12 Nov 17:44:05 ntpd[2080]: Listen normally on 10 calieec26932ad2 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 12 17:44:05.069796 ntpd[2080]: 12 Nov 17:44:05 ntpd[2080]: Listen normally on 11 calif14d8d6e697 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 12 17:44:05.069796 ntpd[2080]: 12 Nov 17:44:05 ntpd[2080]: Listen normally on 12 caliad6b64c2dad [fe80::ecee:eeff:feee:eeee%11]:123 Nov 12 17:44:05.069796 ntpd[2080]: 12 Nov 17:44:05 ntpd[2080]: Listen normally on 13 cali8bf1556fdad [fe80::ecee:eeff:feee:eeee%12]:123 Nov 12 17:44:05.066603 ntpd[2080]: Listen normally on 7 vxlan.calico [fe80::6406:17ff:fe64:a9e6%4]:123 Nov 12 17:44:05.066727 ntpd[2080]: Listen normally on 8 calicc40397f12d [fe80::ecee:eeff:feee:eeee%7]:123 Nov 12 17:44:05.067828 ntpd[2080]: Listen normally on 9 cali9d03a0b6f23 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 12 17:44:05.067940 ntpd[2080]: Listen normally on 10 calieec26932ad2 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 12 17:44:05.068005 ntpd[2080]: Listen normally on 11 calif14d8d6e697 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 12 17:44:05.068075 ntpd[2080]: Listen normally on 12 caliad6b64c2dad [fe80::ecee:eeff:feee:eeee%11]:123 Nov 12 17:44:05.068993 ntpd[2080]: Listen normally on 13 cali8bf1556fdad [fe80::ecee:eeff:feee:eeee%12]:123 Nov 12 17:44:05.105738 containerd[2129]: time="2024-11-12T17:44:05.105573197Z" level=info 
msg="StartContainer for \"cb71702f89f13ba0a14aef4fd1e915fc9304136f68106ca6086a8fd23d806973\" returns successfully" Nov 12 17:44:05.142549 containerd[2129]: time="2024-11-12T17:44:05.140708466Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:44:05.143981 containerd[2129]: time="2024-11-12T17:44:05.143923026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 12 17:44:05.152662 containerd[2129]: time="2024-11-12T17:44:05.152583570Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"40646891\" in 386.572262ms" Nov 12 17:44:05.152922 containerd[2129]: time="2024-11-12T17:44:05.152888202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\"" Nov 12 17:44:05.154798 containerd[2129]: time="2024-11-12T17:44:05.154093830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 17:44:05.159601 containerd[2129]: time="2024-11-12T17:44:05.159534054Z" level=info msg="CreateContainer within sandbox \"e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 17:44:05.179610 containerd[2129]: time="2024-11-12T17:44:05.178817994Z" level=info msg="CreateContainer within sandbox \"e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"70fd922530c2ecfa0fc12afa42a055c110997fbe16ce9f4f53d5e56313647e2f\"" Nov 12 
17:44:05.180943 containerd[2129]: time="2024-11-12T17:44:05.180728778Z" level=info msg="StartContainer for \"70fd922530c2ecfa0fc12afa42a055c110997fbe16ce9f4f53d5e56313647e2f\"" Nov 12 17:44:05.406212 containerd[2129]: time="2024-11-12T17:44:05.406146931Z" level=info msg="StartContainer for \"70fd922530c2ecfa0fc12afa42a055c110997fbe16ce9f4f53d5e56313647e2f\" returns successfully" Nov 12 17:44:06.404269 kubelet[3556]: I1112 17:44:06.404216 3556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:44:06.439458 kubelet[3556]: I1112 17:44:06.439402 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58c67c9d5-m2vdd" podStartSLOduration=28.525937467 podStartE2EDuration="33.439344716s" podCreationTimestamp="2024-11-12 17:43:33 +0000 UTC" firstStartedPulling="2024-11-12 17:43:59.849177843 +0000 UTC m=+50.183407690" lastFinishedPulling="2024-11-12 17:44:04.762585092 +0000 UTC m=+55.096814939" observedRunningTime="2024-11-12 17:44:05.427557247 +0000 UTC m=+55.761787118" watchObservedRunningTime="2024-11-12 17:44:06.439344716 +0000 UTC m=+56.773574611" Nov 12 17:44:06.440552 kubelet[3556]: I1112 17:44:06.440077 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58c67c9d5-bpdzx" podStartSLOduration=28.422687439 podStartE2EDuration="33.44003462s" podCreationTimestamp="2024-11-12 17:43:33 +0000 UTC" firstStartedPulling="2024-11-12 17:44:00.136217905 +0000 UTC m=+50.470447764" lastFinishedPulling="2024-11-12 17:44:05.153565086 +0000 UTC m=+55.487794945" observedRunningTime="2024-11-12 17:44:06.439005164 +0000 UTC m=+56.773235035" watchObservedRunningTime="2024-11-12 17:44:06.44003462 +0000 UTC m=+56.774264491" Nov 12 17:44:06.696651 containerd[2129]: time="2024-11-12T17:44:06.692009097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 
17:44:06.705835 containerd[2129]: time="2024-11-12T17:44:06.705761817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7464731" Nov 12 17:44:06.710888 containerd[2129]: time="2024-11-12T17:44:06.710704353Z" level=info msg="ImageCreate event name:\"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:44:06.719678 containerd[2129]: time="2024-11-12T17:44:06.719604094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:44:06.721039 systemd[1]: Started sshd@9-172.31.27.95:22-139.178.89.65:50390.service - OpenSSH per-connection server daemon (139.178.89.65:50390). Nov 12 17:44:06.729501 containerd[2129]: time="2024-11-12T17:44:06.729426526Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"8834367\" in 1.575265784s" Nov 12 17:44:06.729729 containerd[2129]: time="2024-11-12T17:44:06.729696706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\"" Nov 12 17:44:06.736688 containerd[2129]: time="2024-11-12T17:44:06.734560318Z" level=info msg="CreateContainer within sandbox \"6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 17:44:06.794310 containerd[2129]: time="2024-11-12T17:44:06.794133346Z" level=info msg="CreateContainer within sandbox 
\"6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8184405394d3cf85ab8a48751e58bc83ed13ae4c6798c3a475b38464d8caf7b6\"" Nov 12 17:44:06.796552 containerd[2129]: time="2024-11-12T17:44:06.795899614Z" level=info msg="StartContainer for \"8184405394d3cf85ab8a48751e58bc83ed13ae4c6798c3a475b38464d8caf7b6\"" Nov 12 17:44:07.012451 sshd[5806]: Accepted publickey for core from 139.178.89.65 port 50390 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:44:07.023714 sshd[5806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:44:07.046870 systemd-logind[2095]: New session 10 of user core. Nov 12 17:44:07.053106 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 17:44:07.166898 containerd[2129]: time="2024-11-12T17:44:07.166784204Z" level=info msg="StartContainer for \"8184405394d3cf85ab8a48751e58bc83ed13ae4c6798c3a475b38464d8caf7b6\" returns successfully" Nov 12 17:44:07.179081 containerd[2129]: time="2024-11-12T17:44:07.177642464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 17:44:07.418985 kubelet[3556]: I1112 17:44:07.418949 3556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:44:07.481855 sshd[5806]: pam_unix(sshd:session): session closed for user core Nov 12 17:44:07.499772 systemd[1]: sshd@9-172.31.27.95:22-139.178.89.65:50390.service: Deactivated successfully. Nov 12 17:44:07.523917 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 17:44:07.531108 systemd-logind[2095]: Session 10 logged out. Waiting for processes to exit. Nov 12 17:44:07.538210 systemd-logind[2095]: Removed session 10. 
Nov 12 17:44:08.737781 containerd[2129]: time="2024-11-12T17:44:08.737710752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:44:08.742036 containerd[2129]: time="2024-11-12T17:44:08.741633780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=9883360" Nov 12 17:44:08.745569 containerd[2129]: time="2024-11-12T17:44:08.744614748Z" level=info msg="ImageCreate event name:\"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:44:08.751576 containerd[2129]: time="2024-11-12T17:44:08.751346808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:44:08.752661 containerd[2129]: time="2024-11-12T17:44:08.752181852Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11252948\" in 1.574469896s" Nov 12 17:44:08.752661 containerd[2129]: time="2024-11-12T17:44:08.752242860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\"" Nov 12 17:44:08.757362 containerd[2129]: time="2024-11-12T17:44:08.757152984Z" level=info msg="CreateContainer within sandbox \"6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 12 17:44:08.784426 containerd[2129]: time="2024-11-12T17:44:08.783551112Z" level=info msg="CreateContainer within sandbox \"6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"97a9392cbc321205c7a9ff882c8ae1afa0dc34b8a265b4079b95aa33a8e729a9\"" Nov 12 17:44:08.786640 containerd[2129]: time="2024-11-12T17:44:08.785612724Z" level=info msg="StartContainer for \"97a9392cbc321205c7a9ff882c8ae1afa0dc34b8a265b4079b95aa33a8e729a9\"" Nov 12 17:44:08.907598 containerd[2129]: time="2024-11-12T17:44:08.907534716Z" level=info msg="StartContainer for \"97a9392cbc321205c7a9ff882c8ae1afa0dc34b8a265b4079b95aa33a8e729a9\" returns successfully" Nov 12 17:44:09.122748 kubelet[3556]: I1112 17:44:09.122651 3556 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 12 17:44:09.124565 kubelet[3556]: I1112 17:44:09.122698 3556 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 12 17:44:09.921674 containerd[2129]: time="2024-11-12T17:44:09.921620209Z" level=info msg="StopPodSandbox for \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\"" Nov 12 17:44:10.049081 containerd[2129]: 2024-11-12 17:44:09.992 [WARNING][5916] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0", GenerateName:"calico-apiserver-58c67c9d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cf10bc1-4c55-4746-b2f6-5b92d051ebc0", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58c67c9d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1", Pod:"calico-apiserver-58c67c9d5-bpdzx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif14d8d6e697", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:44:10.049081 containerd[2129]: 2024-11-12 17:44:09.993 [INFO][5916] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Nov 12 17:44:10.049081 containerd[2129]: 2024-11-12 17:44:09.993 [INFO][5916] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" iface="eth0" netns="" Nov 12 17:44:10.049081 containerd[2129]: 2024-11-12 17:44:09.993 [INFO][5916] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Nov 12 17:44:10.049081 containerd[2129]: 2024-11-12 17:44:09.993 [INFO][5916] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Nov 12 17:44:10.049081 containerd[2129]: 2024-11-12 17:44:10.028 [INFO][5923] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" HandleID="k8s-pod-network.438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0" Nov 12 17:44:10.049081 containerd[2129]: 2024-11-12 17:44:10.029 [INFO][5923] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:44:10.049081 containerd[2129]: 2024-11-12 17:44:10.029 [INFO][5923] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:44:10.049081 containerd[2129]: 2024-11-12 17:44:10.041 [WARNING][5923] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" HandleID="k8s-pod-network.438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0" Nov 12 17:44:10.049081 containerd[2129]: 2024-11-12 17:44:10.041 [INFO][5923] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" HandleID="k8s-pod-network.438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0" Nov 12 17:44:10.049081 containerd[2129]: 2024-11-12 17:44:10.043 [INFO][5923] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:44:10.049081 containerd[2129]: 2024-11-12 17:44:10.046 [INFO][5916] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Nov 12 17:44:10.050236 containerd[2129]: time="2024-11-12T17:44:10.049126762Z" level=info msg="TearDown network for sandbox \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\" successfully" Nov 12 17:44:10.050236 containerd[2129]: time="2024-11-12T17:44:10.049172170Z" level=info msg="StopPodSandbox for \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\" returns successfully" Nov 12 17:44:10.051647 containerd[2129]: time="2024-11-12T17:44:10.050897014Z" level=info msg="RemovePodSandbox for \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\"" Nov 12 17:44:10.051647 containerd[2129]: time="2024-11-12T17:44:10.051027238Z" level=info msg="Forcibly stopping sandbox \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\"" Nov 12 17:44:10.194654 containerd[2129]: 2024-11-12 17:44:10.122 [WARNING][5941] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0", GenerateName:"calico-apiserver-58c67c9d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cf10bc1-4c55-4746-b2f6-5b92d051ebc0", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58c67c9d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"e3db553e18be92b420c6a9263c032602beed656513eaebc7a4fa708ce0cb28a1", Pod:"calico-apiserver-58c67c9d5-bpdzx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif14d8d6e697", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:44:10.194654 containerd[2129]: 2024-11-12 17:44:10.123 [INFO][5941] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Nov 12 17:44:10.194654 containerd[2129]: 2024-11-12 17:44:10.123 [INFO][5941] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" iface="eth0" netns="" Nov 12 17:44:10.194654 containerd[2129]: 2024-11-12 17:44:10.123 [INFO][5941] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Nov 12 17:44:10.194654 containerd[2129]: 2024-11-12 17:44:10.124 [INFO][5941] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Nov 12 17:44:10.194654 containerd[2129]: 2024-11-12 17:44:10.173 [INFO][5947] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" HandleID="k8s-pod-network.438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0" Nov 12 17:44:10.194654 containerd[2129]: 2024-11-12 17:44:10.173 [INFO][5947] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:44:10.194654 containerd[2129]: 2024-11-12 17:44:10.173 [INFO][5947] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:44:10.194654 containerd[2129]: 2024-11-12 17:44:10.187 [WARNING][5947] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" HandleID="k8s-pod-network.438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0" Nov 12 17:44:10.194654 containerd[2129]: 2024-11-12 17:44:10.187 [INFO][5947] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" HandleID="k8s-pod-network.438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--bpdzx-eth0" Nov 12 17:44:10.194654 containerd[2129]: 2024-11-12 17:44:10.189 [INFO][5947] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:44:10.194654 containerd[2129]: 2024-11-12 17:44:10.191 [INFO][5941] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1" Nov 12 17:44:10.194654 containerd[2129]: time="2024-11-12T17:44:10.194132855Z" level=info msg="TearDown network for sandbox \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\" successfully" Nov 12 17:44:10.199989 containerd[2129]: time="2024-11-12T17:44:10.199935419Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 17:44:10.201697 containerd[2129]: time="2024-11-12T17:44:10.200252279Z" level=info msg="RemovePodSandbox \"438dada15c6e588b05e68d12d23253fbb87a0a7abc8179f933f83ea2c3ce8cf1\" returns successfully" Nov 12 17:44:10.201697 containerd[2129]: time="2024-11-12T17:44:10.200986775Z" level=info msg="StopPodSandbox for \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\"" Nov 12 17:44:10.330951 containerd[2129]: 2024-11-12 17:44:10.270 [WARNING][5965] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f12c29ab-8a74-4cf9-a191-0b1413424edc", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700", Pod:"csi-node-driver-9dq4p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8bf1556fdad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:44:10.330951 containerd[2129]: 2024-11-12 17:44:10.271 [INFO][5965] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Nov 12 17:44:10.330951 containerd[2129]: 2024-11-12 17:44:10.271 [INFO][5965] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" iface="eth0" netns="" Nov 12 17:44:10.330951 containerd[2129]: 2024-11-12 17:44:10.271 [INFO][5965] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Nov 12 17:44:10.330951 containerd[2129]: 2024-11-12 17:44:10.271 [INFO][5965] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Nov 12 17:44:10.330951 containerd[2129]: 2024-11-12 17:44:10.311 [INFO][5971] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" HandleID="k8s-pod-network.554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Workload="ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0" Nov 12 17:44:10.330951 containerd[2129]: 2024-11-12 17:44:10.311 [INFO][5971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:44:10.330951 containerd[2129]: 2024-11-12 17:44:10.311 [INFO][5971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:44:10.330951 containerd[2129]: 2024-11-12 17:44:10.323 [WARNING][5971] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" HandleID="k8s-pod-network.554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Workload="ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0" Nov 12 17:44:10.330951 containerd[2129]: 2024-11-12 17:44:10.323 [INFO][5971] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" HandleID="k8s-pod-network.554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Workload="ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0" Nov 12 17:44:10.330951 containerd[2129]: 2024-11-12 17:44:10.326 [INFO][5971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:44:10.330951 containerd[2129]: 2024-11-12 17:44:10.328 [INFO][5965] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Nov 12 17:44:10.332055 containerd[2129]: time="2024-11-12T17:44:10.331659359Z" level=info msg="TearDown network for sandbox \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\" successfully" Nov 12 17:44:10.332055 containerd[2129]: time="2024-11-12T17:44:10.331700771Z" level=info msg="StopPodSandbox for \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\" returns successfully" Nov 12 17:44:10.333129 containerd[2129]: time="2024-11-12T17:44:10.332815259Z" level=info msg="RemovePodSandbox for \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\"" Nov 12 17:44:10.333129 containerd[2129]: time="2024-11-12T17:44:10.332887031Z" level=info msg="Forcibly stopping sandbox \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\"" Nov 12 17:44:10.462790 containerd[2129]: 2024-11-12 17:44:10.395 [WARNING][5989] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f12c29ab-8a74-4cf9-a191-0b1413424edc", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"6d06ba6aa8c176487553a4eb9d0de6e2125b6599e68e372cc504c6f460601700", Pod:"csi-node-driver-9dq4p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8bf1556fdad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:44:10.462790 containerd[2129]: 2024-11-12 17:44:10.395 [INFO][5989] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Nov 12 17:44:10.462790 containerd[2129]: 2024-11-12 17:44:10.395 [INFO][5989] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" iface="eth0" netns="" Nov 12 17:44:10.462790 containerd[2129]: 2024-11-12 17:44:10.396 [INFO][5989] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Nov 12 17:44:10.462790 containerd[2129]: 2024-11-12 17:44:10.396 [INFO][5989] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Nov 12 17:44:10.462790 containerd[2129]: 2024-11-12 17:44:10.433 [INFO][5995] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" HandleID="k8s-pod-network.554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Workload="ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0" Nov 12 17:44:10.462790 containerd[2129]: 2024-11-12 17:44:10.434 [INFO][5995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:44:10.462790 containerd[2129]: 2024-11-12 17:44:10.434 [INFO][5995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:44:10.462790 containerd[2129]: 2024-11-12 17:44:10.448 [WARNING][5995] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" HandleID="k8s-pod-network.554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Workload="ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0" Nov 12 17:44:10.462790 containerd[2129]: 2024-11-12 17:44:10.448 [INFO][5995] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" HandleID="k8s-pod-network.554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Workload="ip--172--31--27--95-k8s-csi--node--driver--9dq4p-eth0" Nov 12 17:44:10.462790 containerd[2129]: 2024-11-12 17:44:10.453 [INFO][5995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:44:10.462790 containerd[2129]: 2024-11-12 17:44:10.460 [INFO][5989] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79" Nov 12 17:44:10.464616 containerd[2129]: time="2024-11-12T17:44:10.462747048Z" level=info msg="TearDown network for sandbox \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\" successfully" Nov 12 17:44:10.468780 containerd[2129]: time="2024-11-12T17:44:10.468703152Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 17:44:10.469138 containerd[2129]: time="2024-11-12T17:44:10.468871152Z" level=info msg="RemovePodSandbox \"554dd4bc79b0bff9f13a17487d62f4b37a326dd3ba448e314e41a7a882be5f79\" returns successfully" Nov 12 17:44:10.469957 containerd[2129]: time="2024-11-12T17:44:10.469851144Z" level=info msg="StopPodSandbox for \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\"" Nov 12 17:44:10.592983 containerd[2129]: 2024-11-12 17:44:10.532 [WARNING][6015] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fe47042d-34e4-43bf-869d-d51013a31508", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7", Pod:"coredns-76f75df574-7rgfz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieec26932ad2", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:44:10.592983 containerd[2129]: 2024-11-12 17:44:10.533 [INFO][6015] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Nov 12 17:44:10.592983 containerd[2129]: 2024-11-12 17:44:10.533 [INFO][6015] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" iface="eth0" netns="" Nov 12 17:44:10.592983 containerd[2129]: 2024-11-12 17:44:10.533 [INFO][6015] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Nov 12 17:44:10.592983 containerd[2129]: 2024-11-12 17:44:10.533 [INFO][6015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Nov 12 17:44:10.592983 containerd[2129]: 2024-11-12 17:44:10.569 [INFO][6022] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" HandleID="k8s-pod-network.371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0" Nov 12 17:44:10.592983 containerd[2129]: 2024-11-12 17:44:10.569 [INFO][6022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 17:44:10.592983 containerd[2129]: 2024-11-12 17:44:10.569 [INFO][6022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:44:10.592983 containerd[2129]: 2024-11-12 17:44:10.585 [WARNING][6022] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" HandleID="k8s-pod-network.371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0" Nov 12 17:44:10.592983 containerd[2129]: 2024-11-12 17:44:10.585 [INFO][6022] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" HandleID="k8s-pod-network.371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0" Nov 12 17:44:10.592983 containerd[2129]: 2024-11-12 17:44:10.588 [INFO][6022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:44:10.592983 containerd[2129]: 2024-11-12 17:44:10.590 [INFO][6015] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Nov 12 17:44:10.595591 containerd[2129]: time="2024-11-12T17:44:10.593723413Z" level=info msg="TearDown network for sandbox \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\" successfully" Nov 12 17:44:10.595591 containerd[2129]: time="2024-11-12T17:44:10.593763793Z" level=info msg="StopPodSandbox for \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\" returns successfully" Nov 12 17:44:10.595591 containerd[2129]: time="2024-11-12T17:44:10.594880537Z" level=info msg="RemovePodSandbox for \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\"" Nov 12 17:44:10.595591 containerd[2129]: time="2024-11-12T17:44:10.594926869Z" level=info msg="Forcibly stopping sandbox \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\"" Nov 12 17:44:10.763649 containerd[2129]: 2024-11-12 17:44:10.695 [WARNING][6040] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fe47042d-34e4-43bf-869d-d51013a31508", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"cea21fd3b401773b4ba76439fbd26d6a0e2b21b8ab0c71b4c7dabf7c0ab201e7", Pod:"coredns-76f75df574-7rgfz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieec26932ad2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:44:10.763649 containerd[2129]: 2024-11-12 17:44:10.695 [INFO][6040] cni-plugin/k8s.go 608: 
Cleaning up netns ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Nov 12 17:44:10.763649 containerd[2129]: 2024-11-12 17:44:10.695 [INFO][6040] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" iface="eth0" netns="" Nov 12 17:44:10.763649 containerd[2129]: 2024-11-12 17:44:10.695 [INFO][6040] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Nov 12 17:44:10.763649 containerd[2129]: 2024-11-12 17:44:10.696 [INFO][6040] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Nov 12 17:44:10.763649 containerd[2129]: 2024-11-12 17:44:10.744 [INFO][6050] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" HandleID="k8s-pod-network.371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0" Nov 12 17:44:10.763649 containerd[2129]: 2024-11-12 17:44:10.745 [INFO][6050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:44:10.763649 containerd[2129]: 2024-11-12 17:44:10.745 [INFO][6050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:44:10.763649 containerd[2129]: 2024-11-12 17:44:10.756 [WARNING][6050] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" HandleID="k8s-pod-network.371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0" Nov 12 17:44:10.763649 containerd[2129]: 2024-11-12 17:44:10.756 [INFO][6050] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" HandleID="k8s-pod-network.371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--7rgfz-eth0" Nov 12 17:44:10.763649 containerd[2129]: 2024-11-12 17:44:10.759 [INFO][6050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:44:10.763649 containerd[2129]: 2024-11-12 17:44:10.761 [INFO][6040] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c" Nov 12 17:44:10.763649 containerd[2129]: time="2024-11-12T17:44:10.763375958Z" level=info msg="TearDown network for sandbox \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\" successfully" Nov 12 17:44:10.768617 containerd[2129]: time="2024-11-12T17:44:10.768555074Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 17:44:10.768752 containerd[2129]: time="2024-11-12T17:44:10.768658574Z" level=info msg="RemovePodSandbox \"371a4eb62cea97edda4ebad431b25722f87dbd83f7de83b9f571298d0e4c659c\" returns successfully" Nov 12 17:44:10.769458 containerd[2129]: time="2024-11-12T17:44:10.769420046Z" level=info msg="StopPodSandbox for \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\"" Nov 12 17:44:10.900913 containerd[2129]: 2024-11-12 17:44:10.842 [WARNING][6068] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0", GenerateName:"calico-apiserver-58c67c9d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ad34195-a82e-4064-b419-91cf3b5649a7", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58c67c9d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2", Pod:"calico-apiserver-58c67c9d5-m2vdd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d03a0b6f23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:44:10.900913 containerd[2129]: 2024-11-12 17:44:10.842 [INFO][6068] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Nov 12 17:44:10.900913 containerd[2129]: 2024-11-12 17:44:10.842 [INFO][6068] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" iface="eth0" netns="" Nov 12 17:44:10.900913 containerd[2129]: 2024-11-12 17:44:10.842 [INFO][6068] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Nov 12 17:44:10.900913 containerd[2129]: 2024-11-12 17:44:10.842 [INFO][6068] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Nov 12 17:44:10.900913 containerd[2129]: 2024-11-12 17:44:10.878 [INFO][6074] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" HandleID="k8s-pod-network.ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0" Nov 12 17:44:10.900913 containerd[2129]: 2024-11-12 17:44:10.878 [INFO][6074] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:44:10.900913 containerd[2129]: 2024-11-12 17:44:10.878 [INFO][6074] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:44:10.900913 containerd[2129]: 2024-11-12 17:44:10.893 [WARNING][6074] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" HandleID="k8s-pod-network.ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0" Nov 12 17:44:10.900913 containerd[2129]: 2024-11-12 17:44:10.893 [INFO][6074] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" HandleID="k8s-pod-network.ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0" Nov 12 17:44:10.900913 containerd[2129]: 2024-11-12 17:44:10.896 [INFO][6074] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:44:10.900913 containerd[2129]: 2024-11-12 17:44:10.898 [INFO][6068] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Nov 12 17:44:10.901913 containerd[2129]: time="2024-11-12T17:44:10.900982730Z" level=info msg="TearDown network for sandbox \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\" successfully" Nov 12 17:44:10.901913 containerd[2129]: time="2024-11-12T17:44:10.901021646Z" level=info msg="StopPodSandbox for \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\" returns successfully" Nov 12 17:44:10.902127 containerd[2129]: time="2024-11-12T17:44:10.901829726Z" level=info msg="RemovePodSandbox for \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\"" Nov 12 17:44:10.902196 containerd[2129]: time="2024-11-12T17:44:10.902140634Z" level=info msg="Forcibly stopping sandbox \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\"" Nov 12 17:44:11.026005 containerd[2129]: 2024-11-12 17:44:10.968 [WARNING][6093] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0", GenerateName:"calico-apiserver-58c67c9d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ad34195-a82e-4064-b419-91cf3b5649a7", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58c67c9d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"eb9697c7207724cb2b1a6627330778732e5d32e84e6e7363800c5b0f9f1218f2", Pod:"calico-apiserver-58c67c9d5-m2vdd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d03a0b6f23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:44:11.026005 containerd[2129]: 2024-11-12 17:44:10.968 [INFO][6093] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Nov 12 17:44:11.026005 containerd[2129]: 2024-11-12 17:44:10.968 [INFO][6093] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" iface="eth0" netns="" Nov 12 17:44:11.026005 containerd[2129]: 2024-11-12 17:44:10.968 [INFO][6093] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Nov 12 17:44:11.026005 containerd[2129]: 2024-11-12 17:44:10.968 [INFO][6093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Nov 12 17:44:11.026005 containerd[2129]: 2024-11-12 17:44:11.004 [INFO][6101] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" HandleID="k8s-pod-network.ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0" Nov 12 17:44:11.026005 containerd[2129]: 2024-11-12 17:44:11.004 [INFO][6101] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:44:11.026005 containerd[2129]: 2024-11-12 17:44:11.004 [INFO][6101] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:44:11.026005 containerd[2129]: 2024-11-12 17:44:11.017 [WARNING][6101] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" HandleID="k8s-pod-network.ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0" Nov 12 17:44:11.026005 containerd[2129]: 2024-11-12 17:44:11.017 [INFO][6101] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" HandleID="k8s-pod-network.ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Workload="ip--172--31--27--95-k8s-calico--apiserver--58c67c9d5--m2vdd-eth0" Nov 12 17:44:11.026005 containerd[2129]: 2024-11-12 17:44:11.021 [INFO][6101] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:44:11.026005 containerd[2129]: 2024-11-12 17:44:11.023 [INFO][6093] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52" Nov 12 17:44:11.026005 containerd[2129]: time="2024-11-12T17:44:11.025979531Z" level=info msg="TearDown network for sandbox \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\" successfully" Nov 12 17:44:11.032851 containerd[2129]: time="2024-11-12T17:44:11.032747879Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 17:44:11.032971 containerd[2129]: time="2024-11-12T17:44:11.032941859Z" level=info msg="RemovePodSandbox \"ff445f8ddba861d32efe4913d875b830de7aa3e540447cb516841b6b6f843b52\" returns successfully" Nov 12 17:44:11.033861 containerd[2129]: time="2024-11-12T17:44:11.033814559Z" level=info msg="StopPodSandbox for \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\"" Nov 12 17:44:11.164927 containerd[2129]: 2024-11-12 17:44:11.098 [WARNING][6119] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"bc72a84b-bc38-4114-9563-0dae6b25af79", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044", Pod:"coredns-76f75df574-s5hfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad6b64c2dad", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:44:11.164927 containerd[2129]: 2024-11-12 17:44:11.099 [INFO][6119] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Nov 12 17:44:11.164927 containerd[2129]: 2024-11-12 17:44:11.099 [INFO][6119] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" iface="eth0" netns="" Nov 12 17:44:11.164927 containerd[2129]: 2024-11-12 17:44:11.099 [INFO][6119] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Nov 12 17:44:11.164927 containerd[2129]: 2024-11-12 17:44:11.099 [INFO][6119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Nov 12 17:44:11.164927 containerd[2129]: 2024-11-12 17:44:11.138 [INFO][6125] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" HandleID="k8s-pod-network.d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0" Nov 12 17:44:11.164927 containerd[2129]: 2024-11-12 17:44:11.138 [INFO][6125] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 17:44:11.164927 containerd[2129]: 2024-11-12 17:44:11.139 [INFO][6125] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:44:11.164927 containerd[2129]: 2024-11-12 17:44:11.156 [WARNING][6125] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" HandleID="k8s-pod-network.d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0" Nov 12 17:44:11.164927 containerd[2129]: 2024-11-12 17:44:11.156 [INFO][6125] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" HandleID="k8s-pod-network.d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0" Nov 12 17:44:11.164927 containerd[2129]: 2024-11-12 17:44:11.160 [INFO][6125] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:44:11.164927 containerd[2129]: 2024-11-12 17:44:11.162 [INFO][6119] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Nov 12 17:44:11.164927 containerd[2129]: time="2024-11-12T17:44:11.164903616Z" level=info msg="TearDown network for sandbox \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\" successfully" Nov 12 17:44:11.166285 containerd[2129]: time="2024-11-12T17:44:11.164982372Z" level=info msg="StopPodSandbox for \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\" returns successfully" Nov 12 17:44:11.167372 containerd[2129]: time="2024-11-12T17:44:11.166842552Z" level=info msg="RemovePodSandbox for \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\"" Nov 12 17:44:11.167372 containerd[2129]: time="2024-11-12T17:44:11.166897932Z" level=info msg="Forcibly stopping sandbox \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\"" Nov 12 17:44:11.309315 containerd[2129]: 2024-11-12 17:44:11.234 [WARNING][6144] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"bc72a84b-bc38-4114-9563-0dae6b25af79", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"ea1164492609b3d6532ca80e2c0863649719ef9c178a384bbd6548ec36dc6044", Pod:"coredns-76f75df574-s5hfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad6b64c2dad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:44:11.309315 containerd[2129]: 2024-11-12 17:44:11.235 [INFO][6144] cni-plugin/k8s.go 608: 
Cleaning up netns ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Nov 12 17:44:11.309315 containerd[2129]: 2024-11-12 17:44:11.235 [INFO][6144] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" iface="eth0" netns="" Nov 12 17:44:11.309315 containerd[2129]: 2024-11-12 17:44:11.235 [INFO][6144] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Nov 12 17:44:11.309315 containerd[2129]: 2024-11-12 17:44:11.235 [INFO][6144] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Nov 12 17:44:11.309315 containerd[2129]: 2024-11-12 17:44:11.282 [INFO][6150] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" HandleID="k8s-pod-network.d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0" Nov 12 17:44:11.309315 containerd[2129]: 2024-11-12 17:44:11.282 [INFO][6150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:44:11.309315 containerd[2129]: 2024-11-12 17:44:11.282 [INFO][6150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:44:11.309315 containerd[2129]: 2024-11-12 17:44:11.301 [WARNING][6150] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" HandleID="k8s-pod-network.d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0" Nov 12 17:44:11.309315 containerd[2129]: 2024-11-12 17:44:11.301 [INFO][6150] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" HandleID="k8s-pod-network.d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Workload="ip--172--31--27--95-k8s-coredns--76f75df574--s5hfm-eth0" Nov 12 17:44:11.309315 containerd[2129]: 2024-11-12 17:44:11.304 [INFO][6150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:44:11.309315 containerd[2129]: 2024-11-12 17:44:11.306 [INFO][6144] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d" Nov 12 17:44:11.312511 containerd[2129]: time="2024-11-12T17:44:11.310011492Z" level=info msg="TearDown network for sandbox \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\" successfully" Nov 12 17:44:11.315896 containerd[2129]: time="2024-11-12T17:44:11.315831852Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 17:44:11.316380 containerd[2129]: time="2024-11-12T17:44:11.315933336Z" level=info msg="RemovePodSandbox \"d547140e2eb7e1ae071e0a5e6ec9971d8a06c4cc75a868089940ef3c6087b92d\" returns successfully" Nov 12 17:44:11.317291 containerd[2129]: time="2024-11-12T17:44:11.317120880Z" level=info msg="StopPodSandbox for \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\"" Nov 12 17:44:11.444459 containerd[2129]: 2024-11-12 17:44:11.384 [WARNING][6168] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0", GenerateName:"calico-kube-controllers-f549c5549-", Namespace:"calico-system", SelfLink:"", UID:"54f551c9-643f-46fd-bc59-e46d0d7f91ac", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f549c5549", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c", Pod:"calico-kube-controllers-f549c5549-4bpts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc40397f12d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:44:11.444459 containerd[2129]: 2024-11-12 17:44:11.384 [INFO][6168] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Nov 12 17:44:11.444459 containerd[2129]: 2024-11-12 17:44:11.384 [INFO][6168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" iface="eth0" netns="" Nov 12 17:44:11.444459 containerd[2129]: 2024-11-12 17:44:11.384 [INFO][6168] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Nov 12 17:44:11.444459 containerd[2129]: 2024-11-12 17:44:11.384 [INFO][6168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Nov 12 17:44:11.444459 containerd[2129]: 2024-11-12 17:44:11.421 [INFO][6174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" HandleID="k8s-pod-network.4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Workload="ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0" Nov 12 17:44:11.444459 containerd[2129]: 2024-11-12 17:44:11.421 [INFO][6174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:44:11.444459 containerd[2129]: 2024-11-12 17:44:11.421 [INFO][6174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:44:11.444459 containerd[2129]: 2024-11-12 17:44:11.436 [WARNING][6174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" HandleID="k8s-pod-network.4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Workload="ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0" Nov 12 17:44:11.444459 containerd[2129]: 2024-11-12 17:44:11.436 [INFO][6174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" HandleID="k8s-pod-network.4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Workload="ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0" Nov 12 17:44:11.444459 containerd[2129]: 2024-11-12 17:44:11.439 [INFO][6174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:44:11.444459 containerd[2129]: 2024-11-12 17:44:11.441 [INFO][6168] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Nov 12 17:44:11.444459 containerd[2129]: time="2024-11-12T17:44:11.444290437Z" level=info msg="TearDown network for sandbox \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\" successfully" Nov 12 17:44:11.444459 containerd[2129]: time="2024-11-12T17:44:11.444327733Z" level=info msg="StopPodSandbox for \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\" returns successfully" Nov 12 17:44:11.446354 containerd[2129]: time="2024-11-12T17:44:11.445808761Z" level=info msg="RemovePodSandbox for \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\"" Nov 12 17:44:11.446354 containerd[2129]: time="2024-11-12T17:44:11.445870453Z" level=info msg="Forcibly stopping sandbox \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\"" Nov 12 17:44:11.573395 containerd[2129]: 2024-11-12 17:44:11.517 [WARNING][6192] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0", GenerateName:"calico-kube-controllers-f549c5549-", Namespace:"calico-system", SelfLink:"", UID:"54f551c9-643f-46fd-bc59-e46d0d7f91ac", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f549c5549", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-95", ContainerID:"c6f7e258f12d735075dd1bdfa45d1da6b39a56bc0baa0b0ae17d2373ec87411c", Pod:"calico-kube-controllers-f549c5549-4bpts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc40397f12d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:44:11.573395 containerd[2129]: 2024-11-12 17:44:11.517 [INFO][6192] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Nov 12 17:44:11.573395 containerd[2129]: 2024-11-12 17:44:11.517 [INFO][6192] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" iface="eth0" netns="" Nov 12 17:44:11.573395 containerd[2129]: 2024-11-12 17:44:11.517 [INFO][6192] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Nov 12 17:44:11.573395 containerd[2129]: 2024-11-12 17:44:11.517 [INFO][6192] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Nov 12 17:44:11.573395 containerd[2129]: 2024-11-12 17:44:11.552 [INFO][6198] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" HandleID="k8s-pod-network.4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Workload="ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0" Nov 12 17:44:11.573395 containerd[2129]: 2024-11-12 17:44:11.552 [INFO][6198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:44:11.573395 containerd[2129]: 2024-11-12 17:44:11.552 [INFO][6198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:44:11.573395 containerd[2129]: 2024-11-12 17:44:11.565 [WARNING][6198] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" HandleID="k8s-pod-network.4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Workload="ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0" Nov 12 17:44:11.573395 containerd[2129]: 2024-11-12 17:44:11.565 [INFO][6198] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" HandleID="k8s-pod-network.4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Workload="ip--172--31--27--95-k8s-calico--kube--controllers--f549c5549--4bpts-eth0" Nov 12 17:44:11.573395 containerd[2129]: 2024-11-12 17:44:11.568 [INFO][6198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:44:11.573395 containerd[2129]: 2024-11-12 17:44:11.570 [INFO][6192] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b" Nov 12 17:44:11.575268 containerd[2129]: time="2024-11-12T17:44:11.575044202Z" level=info msg="TearDown network for sandbox \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\" successfully" Nov 12 17:44:11.579660 containerd[2129]: time="2024-11-12T17:44:11.579382646Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 17:44:11.579660 containerd[2129]: time="2024-11-12T17:44:11.579476822Z" level=info msg="RemovePodSandbox \"4f362b2984a7125f190cc978d0eaf48a73380d8da521da090df373256236989b\" returns successfully" Nov 12 17:44:12.511098 systemd[1]: Started sshd@10-172.31.27.95:22-139.178.89.65:34016.service - OpenSSH per-connection server daemon (139.178.89.65:34016). 
Nov 12 17:44:12.697868 sshd[6205]: Accepted publickey for core from 139.178.89.65 port 34016 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:44:12.701089 sshd[6205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:12.708884 systemd-logind[2095]: New session 11 of user core.
Nov 12 17:44:12.715481 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 12 17:44:12.982994 sshd[6205]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:12.990713 systemd[1]: sshd@10-172.31.27.95:22-139.178.89.65:34016.service: Deactivated successfully.
Nov 12 17:44:12.996600 systemd-logind[2095]: Session 11 logged out. Waiting for processes to exit.
Nov 12 17:44:12.997121 systemd[1]: session-11.scope: Deactivated successfully.
Nov 12 17:44:13.001079 systemd-logind[2095]: Removed session 11.
Nov 12 17:44:13.013111 systemd[1]: Started sshd@11-172.31.27.95:22-139.178.89.65:34028.service - OpenSSH per-connection server daemon (139.178.89.65:34028).
Nov 12 17:44:13.192714 sshd[6220]: Accepted publickey for core from 139.178.89.65 port 34028 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:44:13.195427 sshd[6220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:13.202985 systemd-logind[2095]: New session 12 of user core.
Nov 12 17:44:13.213009 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 12 17:44:13.539256 sshd[6220]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:13.550816 systemd[1]: sshd@11-172.31.27.95:22-139.178.89.65:34028.service: Deactivated successfully.
Nov 12 17:44:13.560470 systemd-logind[2095]: Session 12 logged out. Waiting for processes to exit.
Nov 12 17:44:13.569043 systemd[1]: session-12.scope: Deactivated successfully.
Nov 12 17:44:13.581456 systemd[1]: Started sshd@12-172.31.27.95:22-139.178.89.65:34040.service - OpenSSH per-connection server daemon (139.178.89.65:34040).
Nov 12 17:44:13.587704 systemd-logind[2095]: Removed session 12.
Nov 12 17:44:13.762469 sshd[6231]: Accepted publickey for core from 139.178.89.65 port 34040 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:44:13.768767 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:13.776509 systemd-logind[2095]: New session 13 of user core.
Nov 12 17:44:13.782227 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 12 17:44:14.029321 sshd[6231]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:14.036346 systemd[1]: sshd@12-172.31.27.95:22-139.178.89.65:34040.service: Deactivated successfully.
Nov 12 17:44:14.044359 systemd[1]: session-13.scope: Deactivated successfully.
Nov 12 17:44:14.044714 systemd-logind[2095]: Session 13 logged out. Waiting for processes to exit.
Nov 12 17:44:14.047933 systemd-logind[2095]: Removed session 13.
Nov 12 17:44:19.060278 systemd[1]: Started sshd@13-172.31.27.95:22-139.178.89.65:40380.service - OpenSSH per-connection server daemon (139.178.89.65:40380).
Nov 12 17:44:19.238707 sshd[6274]: Accepted publickey for core from 139.178.89.65 port 40380 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:44:19.241582 sshd[6274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:19.250924 systemd-logind[2095]: New session 14 of user core.
Nov 12 17:44:19.254083 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 12 17:44:19.495438 sshd[6274]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:19.502307 systemd[1]: sshd@13-172.31.27.95:22-139.178.89.65:40380.service: Deactivated successfully.
Nov 12 17:44:19.510636 systemd-logind[2095]: Session 14 logged out. Waiting for processes to exit.
Nov 12 17:44:19.510923 systemd[1]: session-14.scope: Deactivated successfully.
Nov 12 17:44:19.516309 systemd-logind[2095]: Removed session 14.
Nov 12 17:44:20.331567 kubelet[3556]: I1112 17:44:20.331252 3556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 17:44:20.369363 kubelet[3556]: I1112 17:44:20.365904 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-9dq4p" podStartSLOduration=38.647537522 podStartE2EDuration="46.365843313s" podCreationTimestamp="2024-11-12 17:43:34 +0000 UTC" firstStartedPulling="2024-11-12 17:44:01.034336249 +0000 UTC m=+51.368566096" lastFinishedPulling="2024-11-12 17:44:08.75264204 +0000 UTC m=+59.086871887" observedRunningTime="2024-11-12 17:44:09.470998523 +0000 UTC m=+59.805228478" watchObservedRunningTime="2024-11-12 17:44:20.365843313 +0000 UTC m=+70.700073160"
Nov 12 17:44:24.527143 systemd[1]: Started sshd@14-172.31.27.95:22-139.178.89.65:40388.service - OpenSSH per-connection server daemon (139.178.89.65:40388).
Nov 12 17:44:24.714616 sshd[6292]: Accepted publickey for core from 139.178.89.65 port 40388 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:44:24.719003 sshd[6292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:24.727774 systemd-logind[2095]: New session 15 of user core.
Nov 12 17:44:24.735038 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 12 17:44:24.995013 sshd[6292]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:25.001196 systemd[1]: sshd@14-172.31.27.95:22-139.178.89.65:40388.service: Deactivated successfully.
Nov 12 17:44:25.002180 systemd-logind[2095]: Session 15 logged out. Waiting for processes to exit.
Nov 12 17:44:25.011889 systemd[1]: session-15.scope: Deactivated successfully.
Nov 12 17:44:25.014204 systemd-logind[2095]: Removed session 15.
Nov 12 17:44:30.025007 systemd[1]: Started sshd@15-172.31.27.95:22-139.178.89.65:60336.service - OpenSSH per-connection server daemon (139.178.89.65:60336).
Nov 12 17:44:30.207548 sshd[6309]: Accepted publickey for core from 139.178.89.65 port 60336 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:44:30.210441 sshd[6309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:30.219434 systemd-logind[2095]: New session 16 of user core.
Nov 12 17:44:30.229163 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 12 17:44:30.485759 sshd[6309]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:30.491919 systemd[1]: sshd@15-172.31.27.95:22-139.178.89.65:60336.service: Deactivated successfully.
Nov 12 17:44:30.499287 systemd-logind[2095]: Session 16 logged out. Waiting for processes to exit.
Nov 12 17:44:30.499670 systemd[1]: session-16.scope: Deactivated successfully.
Nov 12 17:44:30.505886 systemd-logind[2095]: Removed session 16.
Nov 12 17:44:30.606510 kubelet[3556]: I1112 17:44:30.606278 3556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 17:44:35.524691 systemd[1]: Started sshd@16-172.31.27.95:22-139.178.89.65:60344.service - OpenSSH per-connection server daemon (139.178.89.65:60344).
Nov 12 17:44:35.716051 sshd[6331]: Accepted publickey for core from 139.178.89.65 port 60344 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:44:35.720408 sshd[6331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:35.736252 systemd-logind[2095]: New session 17 of user core.
Nov 12 17:44:35.743223 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 12 17:44:36.049456 sshd[6331]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:36.057957 systemd[1]: sshd@16-172.31.27.95:22-139.178.89.65:60344.service: Deactivated successfully.
Nov 12 17:44:36.064735 systemd[1]: session-17.scope: Deactivated successfully.
Nov 12 17:44:36.066246 systemd-logind[2095]: Session 17 logged out. Waiting for processes to exit.
Nov 12 17:44:36.078235 systemd-logind[2095]: Removed session 17.
Nov 12 17:44:36.082252 systemd[1]: Started sshd@17-172.31.27.95:22-139.178.89.65:60360.service - OpenSSH per-connection server daemon (139.178.89.65:60360).
Nov 12 17:44:36.260751 sshd[6345]: Accepted publickey for core from 139.178.89.65 port 60360 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:44:36.263786 sshd[6345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:36.274234 systemd-logind[2095]: New session 18 of user core.
Nov 12 17:44:36.282272 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 12 17:44:36.902223 sshd[6345]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:36.909225 systemd[1]: sshd@17-172.31.27.95:22-139.178.89.65:60360.service: Deactivated successfully.
Nov 12 17:44:36.920960 systemd[1]: session-18.scope: Deactivated successfully.
Nov 12 17:44:36.926795 systemd-logind[2095]: Session 18 logged out. Waiting for processes to exit.
Nov 12 17:44:36.944195 systemd[1]: Started sshd@18-172.31.27.95:22-139.178.89.65:60368.service - OpenSSH per-connection server daemon (139.178.89.65:60368).
Nov 12 17:44:36.949311 systemd-logind[2095]: Removed session 18.
Nov 12 17:44:37.139545 sshd[6382]: Accepted publickey for core from 139.178.89.65 port 60368 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:44:37.140898 sshd[6382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:37.153322 systemd-logind[2095]: New session 19 of user core.
Nov 12 17:44:37.159853 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 12 17:44:40.713177 sshd[6382]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:40.727852 systemd[1]: sshd@18-172.31.27.95:22-139.178.89.65:60368.service: Deactivated successfully.
Nov 12 17:44:40.730737 systemd-logind[2095]: Session 19 logged out. Waiting for processes to exit.
Nov 12 17:44:40.747592 systemd[1]: session-19.scope: Deactivated successfully.
Nov 12 17:44:40.775600 systemd-logind[2095]: Removed session 19.
Nov 12 17:44:40.788044 systemd[1]: Started sshd@19-172.31.27.95:22-139.178.89.65:47814.service - OpenSSH per-connection server daemon (139.178.89.65:47814).
Nov 12 17:44:41.011052 sshd[6401]: Accepted publickey for core from 139.178.89.65 port 47814 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:44:41.014089 sshd[6401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:41.026954 systemd-logind[2095]: New session 20 of user core.
Nov 12 17:44:41.040319 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 12 17:44:41.713272 sshd[6401]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:41.722582 systemd[1]: sshd@19-172.31.27.95:22-139.178.89.65:47814.service: Deactivated successfully.
Nov 12 17:44:41.739580 systemd-logind[2095]: Session 20 logged out. Waiting for processes to exit.
Nov 12 17:44:41.740387 systemd[1]: session-20.scope: Deactivated successfully.
Nov 12 17:44:41.754054 systemd[1]: Started sshd@20-172.31.27.95:22-139.178.89.65:47820.service - OpenSSH per-connection server daemon (139.178.89.65:47820).
Nov 12 17:44:41.758190 systemd-logind[2095]: Removed session 20.
Nov 12 17:44:41.937334 sshd[6436]: Accepted publickey for core from 139.178.89.65 port 47820 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:44:41.940116 sshd[6436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:41.948800 systemd-logind[2095]: New session 21 of user core.
Nov 12 17:44:41.956556 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 12 17:44:42.229305 sshd[6436]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:42.237146 systemd[1]: sshd@20-172.31.27.95:22-139.178.89.65:47820.service: Deactivated successfully.
Nov 12 17:44:42.248121 systemd[1]: session-21.scope: Deactivated successfully.
Nov 12 17:44:42.254841 systemd-logind[2095]: Session 21 logged out. Waiting for processes to exit.
Nov 12 17:44:42.257952 systemd-logind[2095]: Removed session 21.
Nov 12 17:44:47.258094 systemd[1]: Started sshd@21-172.31.27.95:22-139.178.89.65:55222.service - OpenSSH per-connection server daemon (139.178.89.65:55222).
Nov 12 17:44:47.449754 sshd[6469]: Accepted publickey for core from 139.178.89.65 port 55222 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:44:47.453873 sshd[6469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:47.472809 systemd-logind[2095]: New session 22 of user core.
Nov 12 17:44:47.484579 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 12 17:44:47.755851 sshd[6469]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:47.763029 systemd[1]: sshd@21-172.31.27.95:22-139.178.89.65:55222.service: Deactivated successfully.
Nov 12 17:44:47.773045 systemd-logind[2095]: Session 22 logged out. Waiting for processes to exit.
Nov 12 17:44:47.773149 systemd[1]: session-22.scope: Deactivated successfully.
Nov 12 17:44:47.781208 systemd-logind[2095]: Removed session 22.
Nov 12 17:44:52.784003 systemd[1]: Started sshd@22-172.31.27.95:22-139.178.89.65:55236.service - OpenSSH per-connection server daemon (139.178.89.65:55236).
Nov 12 17:44:52.961997 sshd[6486]: Accepted publickey for core from 139.178.89.65 port 55236 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:44:52.964650 sshd[6486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:52.972387 systemd-logind[2095]: New session 23 of user core.
Nov 12 17:44:52.980039 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 12 17:44:53.225953 sshd[6486]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:53.232019 systemd-logind[2095]: Session 23 logged out. Waiting for processes to exit.
Nov 12 17:44:53.232820 systemd[1]: sshd@22-172.31.27.95:22-139.178.89.65:55236.service: Deactivated successfully.
Nov 12 17:44:53.242266 systemd[1]: session-23.scope: Deactivated successfully.
Nov 12 17:44:53.244662 systemd-logind[2095]: Removed session 23.
Nov 12 17:44:58.257001 systemd[1]: Started sshd@23-172.31.27.95:22-139.178.89.65:58258.service - OpenSSH per-connection server daemon (139.178.89.65:58258).
Nov 12 17:44:58.434009 sshd[6503]: Accepted publickey for core from 139.178.89.65 port 58258 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:44:58.437123 sshd[6503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:58.444493 systemd-logind[2095]: New session 24 of user core.
Nov 12 17:44:58.456243 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 12 17:44:58.694864 sshd[6503]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:58.701388 systemd[1]: sshd@23-172.31.27.95:22-139.178.89.65:58258.service: Deactivated successfully.
Nov 12 17:44:58.709612 systemd[1]: session-24.scope: Deactivated successfully.
Nov 12 17:44:58.711374 systemd-logind[2095]: Session 24 logged out. Waiting for processes to exit.
Nov 12 17:44:58.714032 systemd-logind[2095]: Removed session 24.
Nov 12 17:45:03.726038 systemd[1]: Started sshd@24-172.31.27.95:22-139.178.89.65:58270.service - OpenSSH per-connection server daemon (139.178.89.65:58270).
Nov 12 17:45:03.906140 sshd[6517]: Accepted publickey for core from 139.178.89.65 port 58270 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:45:03.909653 sshd[6517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:45:03.917725 systemd-logind[2095]: New session 25 of user core.
Nov 12 17:45:03.925157 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 12 17:45:04.172867 sshd[6517]: pam_unix(sshd:session): session closed for user core
Nov 12 17:45:04.177663 systemd[1]: sshd@24-172.31.27.95:22-139.178.89.65:58270.service: Deactivated successfully.
Nov 12 17:45:04.186929 systemd-logind[2095]: Session 25 logged out. Waiting for processes to exit.
Nov 12 17:45:04.188097 systemd[1]: session-25.scope: Deactivated successfully.
Nov 12 17:45:04.190084 systemd-logind[2095]: Removed session 25.
Nov 12 17:45:09.202001 systemd[1]: Started sshd@25-172.31.27.95:22-139.178.89.65:56610.service - OpenSSH per-connection server daemon (139.178.89.65:56610).
Nov 12 17:45:09.384571 sshd[6551]: Accepted publickey for core from 139.178.89.65 port 56610 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:45:09.387281 sshd[6551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:45:09.395249 systemd-logind[2095]: New session 26 of user core.
Nov 12 17:45:09.401204 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 12 17:45:09.651677 sshd[6551]: pam_unix(sshd:session): session closed for user core
Nov 12 17:45:09.658828 systemd[1]: sshd@25-172.31.27.95:22-139.178.89.65:56610.service: Deactivated successfully.
Nov 12 17:45:09.666537 systemd[1]: session-26.scope: Deactivated successfully.
Nov 12 17:45:09.671075 systemd-logind[2095]: Session 26 logged out. Waiting for processes to exit.
Nov 12 17:45:09.672793 systemd-logind[2095]: Removed session 26.
Nov 12 17:45:14.692271 systemd[1]: Started sshd@26-172.31.27.95:22-139.178.89.65:56626.service - OpenSSH per-connection server daemon (139.178.89.65:56626).
Nov 12 17:45:14.867832 sshd[6586]: Accepted publickey for core from 139.178.89.65 port 56626 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:45:14.870625 sshd[6586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:45:14.878627 systemd-logind[2095]: New session 27 of user core.
Nov 12 17:45:14.888162 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 12 17:45:15.133696 sshd[6586]: pam_unix(sshd:session): session closed for user core
Nov 12 17:45:15.144443 systemd[1]: sshd@26-172.31.27.95:22-139.178.89.65:56626.service: Deactivated successfully.
Nov 12 17:45:15.153190 systemd-logind[2095]: Session 27 logged out. Waiting for processes to exit.
Nov 12 17:45:15.154723 systemd[1]: session-27.scope: Deactivated successfully.
Nov 12 17:45:15.157920 systemd-logind[2095]: Removed session 27.
Nov 12 17:45:20.163132 systemd[1]: Started sshd@27-172.31.27.95:22-139.178.89.65:45006.service - OpenSSH per-connection server daemon (139.178.89.65:45006).
Nov 12 17:45:20.334428 sshd[6606]: Accepted publickey for core from 139.178.89.65 port 45006 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:45:20.337348 sshd[6606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:45:20.360135 systemd-logind[2095]: New session 28 of user core.
Nov 12 17:45:20.363067 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 12 17:45:20.606872 sshd[6606]: pam_unix(sshd:session): session closed for user core
Nov 12 17:45:20.614129 systemd[1]: sshd@27-172.31.27.95:22-139.178.89.65:45006.service: Deactivated successfully.
Nov 12 17:45:20.614609 systemd-logind[2095]: Session 28 logged out. Waiting for processes to exit.
Nov 12 17:45:20.622485 systemd[1]: session-28.scope: Deactivated successfully.
Nov 12 17:45:20.626272 systemd-logind[2095]: Removed session 28.
Nov 12 17:45:34.501105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9180d690a5ad48a26b8035ae9c229ef7c057efe074466b0430f6b9c95a3e1302-rootfs.mount: Deactivated successfully.
Nov 12 17:45:34.505019 containerd[2129]: time="2024-11-12T17:45:34.501732202Z" level=info msg="shim disconnected" id=9180d690a5ad48a26b8035ae9c229ef7c057efe074466b0430f6b9c95a3e1302 namespace=k8s.io
Nov 12 17:45:34.505019 containerd[2129]: time="2024-11-12T17:45:34.501817138Z" level=warning msg="cleaning up after shim disconnected" id=9180d690a5ad48a26b8035ae9c229ef7c057efe074466b0430f6b9c95a3e1302 namespace=k8s.io
Nov 12 17:45:34.505019 containerd[2129]: time="2024-11-12T17:45:34.501840634Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 17:45:34.770260 kubelet[3556]: I1112 17:45:34.769944 3556 scope.go:117] "RemoveContainer" containerID="9180d690a5ad48a26b8035ae9c229ef7c057efe074466b0430f6b9c95a3e1302"
Nov 12 17:45:34.774662 containerd[2129]: time="2024-11-12T17:45:34.774605723Z" level=info msg="CreateContainer within sandbox \"1222554d87b9c11d8e3563eb512f60da0ac570fc873a5c96950592034c18c260\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Nov 12 17:45:34.792256 containerd[2129]: time="2024-11-12T17:45:34.791842559Z" level=info msg="CreateContainer within sandbox \"1222554d87b9c11d8e3563eb512f60da0ac570fc873a5c96950592034c18c260\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"f4e5c436b1203f3e6b80ca6c94af62e4f5b6bcd40f22c2034839ecf7bc2c8a69\""
Nov 12 17:45:34.795719 containerd[2129]: time="2024-11-12T17:45:34.795454451Z" level=info msg="StartContainer for \"f4e5c436b1203f3e6b80ca6c94af62e4f5b6bcd40f22c2034839ecf7bc2c8a69\""
Nov 12 17:45:34.890760 containerd[2129]: time="2024-11-12T17:45:34.890581523Z" level=info msg="StartContainer for \"f4e5c436b1203f3e6b80ca6c94af62e4f5b6bcd40f22c2034839ecf7bc2c8a69\" returns successfully"
Nov 12 17:45:35.231407 containerd[2129]: time="2024-11-12T17:45:35.231254985Z" level=info msg="shim disconnected" id=d6f133b728144de30f580fe584ce7651f0ed2260029132e9b6c6f1512c25a44f namespace=k8s.io
Nov 12 17:45:35.231407 containerd[2129]: time="2024-11-12T17:45:35.231329457Z" level=warning msg="cleaning up after shim disconnected" id=d6f133b728144de30f580fe584ce7651f0ed2260029132e9b6c6f1512c25a44f namespace=k8s.io
Nov 12 17:45:35.231407 containerd[2129]: time="2024-11-12T17:45:35.231349053Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 17:45:35.499080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6f133b728144de30f580fe584ce7651f0ed2260029132e9b6c6f1512c25a44f-rootfs.mount: Deactivated successfully.
Nov 12 17:45:35.775920 kubelet[3556]: I1112 17:45:35.775538 3556 scope.go:117] "RemoveContainer" containerID="d6f133b728144de30f580fe584ce7651f0ed2260029132e9b6c6f1512c25a44f"
Nov 12 17:45:35.782638 containerd[2129]: time="2024-11-12T17:45:35.782130144Z" level=info msg="CreateContainer within sandbox \"225f362e1e3fb9b5f9f2be29168d55a5ee26110923a22db24fa594c2c10d9a06\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Nov 12 17:45:35.799024 containerd[2129]: time="2024-11-12T17:45:35.798914040Z" level=info msg="CreateContainer within sandbox \"225f362e1e3fb9b5f9f2be29168d55a5ee26110923a22db24fa594c2c10d9a06\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ab8480a18dd747637ae3eac7f601c12bd0ddc05aa04aad8227b08b68a24824af\""
Nov 12 17:45:35.802742 containerd[2129]: time="2024-11-12T17:45:35.799741884Z" level=info msg="StartContainer for \"ab8480a18dd747637ae3eac7f601c12bd0ddc05aa04aad8227b08b68a24824af\""
Nov 12 17:45:35.939423 containerd[2129]: time="2024-11-12T17:45:35.939348145Z" level=info msg="StartContainer for \"ab8480a18dd747637ae3eac7f601c12bd0ddc05aa04aad8227b08b68a24824af\" returns successfully"
Nov 12 17:45:40.206713 containerd[2129]: time="2024-11-12T17:45:40.206369114Z" level=info msg="shim disconnected" id=e3522c5d908593e4ff75b835d279b9047755c92b97211b70248273e4259a562a namespace=k8s.io
Nov 12 17:45:40.206713 containerd[2129]: time="2024-11-12T17:45:40.206442458Z" level=warning msg="cleaning up after shim disconnected" id=e3522c5d908593e4ff75b835d279b9047755c92b97211b70248273e4259a562a namespace=k8s.io
Nov 12 17:45:40.206713 containerd[2129]: time="2024-11-12T17:45:40.206462174Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 17:45:40.212266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3522c5d908593e4ff75b835d279b9047755c92b97211b70248273e4259a562a-rootfs.mount: Deactivated successfully.
Nov 12 17:45:40.801256 kubelet[3556]: I1112 17:45:40.801162 3556 scope.go:117] "RemoveContainer" containerID="e3522c5d908593e4ff75b835d279b9047755c92b97211b70248273e4259a562a"
Nov 12 17:45:40.804958 containerd[2129]: time="2024-11-12T17:45:40.804838649Z" level=info msg="CreateContainer within sandbox \"c55232f7db04b405181c34eecd0603955d0de01b7a20512b25a2a15992e91e17\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Nov 12 17:45:40.829311 containerd[2129]: time="2024-11-12T17:45:40.829131653Z" level=info msg="CreateContainer within sandbox \"c55232f7db04b405181c34eecd0603955d0de01b7a20512b25a2a15992e91e17\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b316e675d6d6c697b4cbaadc03492d751aa5bfae656eed36d6d213e7267511aa\""
Nov 12 17:45:40.830207 containerd[2129]: time="2024-11-12T17:45:40.830028101Z" level=info msg="StartContainer for \"b316e675d6d6c697b4cbaadc03492d751aa5bfae656eed36d6d213e7267511aa\""
Nov 12 17:45:40.953833 containerd[2129]: time="2024-11-12T17:45:40.953659170Z" level=info msg="StartContainer for \"b316e675d6d6c697b4cbaadc03492d751aa5bfae656eed36d6d213e7267511aa\" returns successfully"
Nov 12 17:45:43.004262 kubelet[3556]: E1112 17:45:43.004208 3556 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-95?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 12 17:45:53.004585 kubelet[3556]: E1112 17:45:53.004496 3556 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-95?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"